What, really, is Nvidia’s Moat?

Aakash Gupta
3 min read · Jun 1, 2024


For a company founded in ’93, Nvidia’s ascent to a $2.7T market cap has been FAST. But what really is Nvidia’s moat?

Let’s break it down.

Nvidia has been exponentially growing AI compute.

Part 1 — Software

The story starts all the way back in the early 2000s. That’s when Jensen Huang, Nvidia’s CEO, and his team were out meeting researchers who used their products.

Most researchers were hacking graphics packages to run complex parallel compute tasks. It was not ideal. To say the least.

So, when the Nvidia team met Ian Buck, who had the vision of running general-purpose programming languages on GPUs, they funded his Ph.D. After graduation, Ian came to Nvidia to commercialize the tech.

Two years later, in 2006, Nvidia released CUDA.

Compute Unified Device Architecture.

CUDA made all those parallelization hacks the researchers were doing available to everyone. Over time, CUDA became the default choice for researchers.

CUDA made low-level hardware customization accessible. So developers loved it.
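To make that concrete, here’s a minimal, illustrative CUDA sketch (my own example, not from Nvidia’s docs or this story; the kernel and names are made up): an ordinary C++ program where one annotated function runs in parallel across thousands of GPU threads, with no graphics-API tricks required.

```
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of c = a + b.
// This is the kind of data-parallel work researchers once had
// to disguise as graphics calls.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // ~1 million elements
    const size_t bytes = n * sizeof(float);

    // Unified (managed) memory is visible to both CPU and GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);         // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Each thread handles one array element. The same pattern scales from toy examples like this up to the matrix math underneath large neural networks.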

Nowadays, when startups like MosaicML evaluate the alternatives to CUDA, they inevitably end up choosing CUDA.

The ecosystem around CUDA has grown so robust that its lead is virtually unbeatable. This software layer is at the core of Nvidia’s moat.

Part 2 — Hardware

The other side of Nvidia’s moat is hardware. But it’s not graphics cards for crypto and gaming. The hardware that matters is AI supercomputers.

The story of these supercomputers begins in the late 2000s. As Nvidia was developing CUDA, Jensen asked the team to build a supercomputer to help him design better chips.

The result was a massive supercomputer that weighed 100 pounds and strung together many GPUs with world-class networking for ultra-fast computing.

In the early 2010s, Jensen gave a talk at a conference about this AI supercomputer. Elon Musk got wind of it and said, “I want one.”

So, in 2016, Jensen actually donated one to Elon Musk’s then relatively unknown nonprofit, OpenAI. He hand-delivered it, and there’s photographic proof.

OpenAI quickly learned the supercomputer worked really well. Especially for training large neural networks. That 2016 Pascal architecture delivered an impressive 19 TFLOPS of FP16 operations.

That’s 19 trillion floating point operations per second. It’s a massive amount. But that was just the beginning.

Since then, Jensen and the Nvidia team have been lapping the industry in delivering more TFLOPS, growing them at an exponential rate.

The latest Blackwell architecture delivers a massive 5,000 TFLOPS. That’s a more than 260x increase in AI compute in 8 years. And it sells for more than $75K. But buyers like Meta, OpenAI, Google, and Amazon just can’t get enough, as their internal ASICs are nowhere near Nvidia’s level.
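A quick back-of-the-envelope check, using only the figures above:

5,000 TFLOPS / 19 TFLOPS ≈ 263, and 263^(1/8) ≈ 2.0

In other words, FP16 throughput has roughly doubled every year for eight straight years. That’s what “growing at an exponential rate” looks like in practice.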

As a result, Nvidia’s profits and market cap continue to soar, cementing its position as a leader in the AI hardware and software space.

Ready to go deeper on Nvidia’s moat and story? I break down everything you need to know in the deep dive.


Aakash Gupta

Helping PMs, product leaders, and product aspirants succeed