By Jon Peddie
Being in the right place at the right time is one path to success (assuming you’re smart enough to recognize the opportunity). Anticipating a developing market is another, and creating a market is a third. Nvidia has done all three, and more, with AI.
Before large language models (LLMs), transformers, and generative AI exploded onto the scene, Nvidia was already seeding what was then called accelerated compute, or GPU compute, and used CUDA, its C++-like programming language, as a catalyst and gateway to exploiting the parallel-processing power of a GPU. GPUs are complex devices, and getting thousands of threads to behave properly and stay in sync is a tricky process. CUDA took much of the drudgery out of that work, and the payoff was so good that hundreds of developers in large organizations took advantage of it and built up a huge library of proprietary and open programs that ran on Nvidia GPUs.
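To give a sense of what that abstraction looks like in practice, here is a minimal sketch of the kind of CUDA kernel the article alludes to: a vector addition where each GPU thread handles one element. The function and variable names are illustrative, not drawn from any specific Nvidia library, and this is a simplified example rather than production code.

```cuda
#include <cuda_runtime.h>

// Each thread computes one output element. The programmer writes
// scalar-looking code; CUDA schedules the thousands of threads
// across the GPU's cores automatically.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {                 // guard: thread count may exceed n
        c[i] = a[i] + b[i];
    }
}

// Host-side launch (assuming d_a, d_b, d_c are device pointers):
// round up so enough 256-thread blocks are launched to cover n elements.
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```

The drudgery CUDA removes is everything around this kernel: without it, a developer would be hand-managing thread scheduling, synchronization, and memory movement through low-level graphics APIs rather than writing a few lines of C++-style code.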