
Why didn't robots, desktop POPS, and cores galore at Nvidia's GTC artificial intelligence meet impress that much?

Source: Nvidia

Nvidia’s GPU Technology Conference (GTC) wrapped up last week in San Jose, California, and it is arguably the most important artificial intelligence event going right now.

That is how dominant the company’s graphics processing unit (GPU) technology has become for AI, meaning the big systems people associate with the concept: OpenAI’s GPT, Meta’s Llama, Anthropic’s Claude and others.

The company has masterfully gone from conquering computer video gaming to becoming the hardware supplier for cryptocurrency mining; now, there can be no discussion about building AI systems without Nvidia being mentioned. The obvious joke here (and it’s not mine) is that the company should be renamed NvidAI.

From a nerd perspective, it is impressive how co-founder Jensen Huang and the team at Nvidia have doggedly honed and repurposed GPU computing, taking it into completely new areas. 

GPU computing is quite a different take on traditional data processing.

At a high level, modern GPUs have lots of efficient cores that perform specialised tasks in parallel. A general-purpose central processing unit (CPU) usually has tens of cores that run tasks quickly one after another, but a GPU can have many thousands.

The upshot is that GPUs can churn through vast amounts of data in parallel, quickly, which is what you want for hyper-realistic computer games - and for AI. Computers still need CPUs to manage the hardware, but offloading heavy-duty data processing to powerful GPUs is the paradigm shift here.
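
If you want to see what that split looks like in practice, here’s a minimal sketch in Python. It assumes the open-source NumPy and CuPy libraries are installed (CuPy runs array maths on Nvidia GPUs); it’s an illustration of the idea, not anything Nvidia showed at GTC.

```python
# CPU vs GPU division of labour, sketched with NumPy (CPU) and
# CuPy (Nvidia GPU). Illustrative only - not Nvidia's code.
import numpy as np
import cupy as cp

n = 10_000_000
data = np.random.rand(n).astype(np.float32)

# CPU path: a handful of cores work through the array.
cpu_result = np.sqrt(data) * 2.0

# GPU path: copy the data over, then thousands of cores
# apply the same operation to many elements at once.
gpu_data = cp.asarray(data)
gpu_result = cp.sqrt(gpu_data) * 2.0
cp.cuda.Stream.null.synchronize()  # wait for the GPU to finish

# Copy the result back and confirm both paths agree.
assert np.allclose(cpu_result, cp.asnumpy(gpu_result))
```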

Big and powerful new kit

Without exaggerating, some pretty out-there chips were on display at GTC. A new variant of Nvidia’s Blackwell architecture (itself quite fresh, being just a year old) was announced at the event.

The Blackwell Ultra is termed an “AI factory platform” by Nvidia, and it’s destined mainly for cloud providers, costing US$30,000 to US$40,000 per part.

In the server rack Nvidia GB300 NVL72 configuration, you get 72 Blackwell Ultra GPUs and 36 Nvidia Grace CPUs, which use the Arm architecture. It’s aimed at reasoning AI inference, and also pre- and post-training of models. Inferencing, where a trained model uses what it “knows” to predict or draw conclusions from new data, is very computationally intensive.
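
To make that concrete, here’s a minimal inference sketch in Python using the PyTorch library (my choice of framework for illustration; Nvidia didn’t name one). Training adjusts a model’s weights; inference just pushes new data forward through weights that are already fixed.

```python
# Minimal inference sketch with PyTorch. The tiny linear layer is a
# hypothetical stand-in for a large trained model.
import torch

model = torch.nn.Linear(4, 2)  # pretend this was already trained
model.eval()                   # inference mode: no weight updates

new_data = torch.randn(1, 4)   # "new data" the model hasn't seen
with torch.no_grad():          # skip gradient bookkeeping entirely
    prediction = model(new_data)

print(prediction)              # the model's conclusion about the input
```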

Agentic AI, which can act with limited or even no human interaction, is another use case, as is AI for training robots. Yes, robots: Nvidia launched the Generalist Robot 00 Technology, or Isaac GR00T, to build humanoid robots on the company’s Omniverse 3D applications and services platform.

There was a cutesy little Blue robot as well. Ah well. There goes the planned pivot to manual labour after AI knocks you off the intellectual effort board.

Apart from lots of data-processing cores, the new Blackwell chips have heaps of high-bandwidth memory (HBM), up to 288 gigabytes, to fit in large AI models and run them fast.
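
Why 288 GB matters: a model’s weights have to fit in memory to run at full speed. Here’s some back-of-envelope Python; the parameter counts and byte widths are illustrative assumptions, not Nvidia’s figures.

```python
# Rough memory footprint of a model's weights alone (activations,
# caches and overhead come on top). Figures are illustrative.
def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9

print(weights_gb(70, 2))   # 70B parameters at 16-bit: 140 GB - fits in 288 GB
print(weights_gb(70, 1))   # the same model at 8-bit: 70 GB - lots of headroom
print(weights_gb(405, 2))  # 405B at 16-bit: 810 GB - needs several GPUs
```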

There’s a colossal amount of supporting electronics as well. It seems each GB300 chip has an envelope of 1.4 kilowatts thermal design power (TDP); multiply that by the 72 GPUs in an NVL72 rack and you’re past 100 kilowatts per rack before counting anything else, which gives you an idea as to why cloud providers are talking about powering their data centres with nuclear reactors.
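
The arithmetic behind that, as a quick Python check (the per-chip figure is the TDP quoted above; everything else a rack consumes is ignored here):

```python
# Power draw of the GPUs alone in one NVL72 rack.
tdp_per_gpu_kw = 1.4          # Blackwell Ultra TDP, as reported
gpus_per_rack = 72            # GB300 NVL72 configuration

rack_kw = tdp_per_gpu_kw * gpus_per_rack
print(rack_kw)                # ~100.8 kW per rack, before CPUs,
                              # networking and cooling

# A hypothetical data centre with 1,000 such racks:
print(rack_kw * 1000 / 1000)  # ~100.8 megawatts
```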

So much so that Nvidia and the Electric Power Research Institute (EPRI) et al launched the Open Power AI Consortium at GTC.

Nvidia also showed off DGX desktop systems with the Grace Blackwell GB10 and GB300 superchips, which can have up to 784 GB (!) of coherent memory space. They are aimed at developers, data scientists and researchers, and perform floating-point (numbers with decimals) operations at enormous speeds, which again is a good thing for AI acceleration.

The smaller Nvidia DGX Spark system, which also has the Grace Blackwell GB10 superchip and which was called Project DIGITS before, lets you run decent-sized models locally; it is on preorder now. It can manage 1000 TOPS (trillions of operations per second) for fast AI processing, and costs a shade under US$4,000 per compact system. Most desktops and laptops manage 35-50 TOPS at most, but the DGX has the POPS (peta OPS) for AI.
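
The unit conversion, for the sceptical (SI prefixes: tera is 10¹², peta is 10¹⁵, so 1000 TOPS is exactly 1 POPS):

```python
# Sanity-checking the performance claims quoted above.
dgx_spark_tops = 1000      # quoted figure for the DGX Spark
laptop_tops = 50           # top end of the 35-50 TOPS range for PCs

print(dgx_spark_tops / 1000)         # 1.0 - i.e. one peta-OPS
print(dgx_spark_tops / laptop_tops)  # 20.0 - roughly 20x a good laptop
```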

Nvidia DGX. Source: Nvidia

Dell will have its Pro Max DGX range coming later this year, looking stylish like this:

Source: Dell

Add to that hyperfast 800 gigabit/s networking, a strong roadmap promising more high-performance hardware over the next three years, and Nvidia’s revenue last quarter rising 77% to a staggering US$39.3 billion (NZ$68.7 billion), and you’d think investors would be falling over themselves.

But no: Nvidia’s share price sagged before and after GTC instead of shooting skywards.

Quite a bit of that is no doubt due to market fundamentals, like investors worrying that US president Donald Trump’s love of tariffs will dampen demand by pushing up costs. AI infrastructure is expensive, and if businesses pull back on investment and stop hiring, it’s not going to be a pretty scenario.

In the background, there’s also the worry that the Chinese have actually worked out a way to build AI systems more cheaply and with fewer resources than their Western counterparts.

In fact, maybe we should all feel a bit nervous about the shares. Despite its stock price being down, Nvidia’s market capitalisation is still a colossal US$2.88 trillion. That’s more than the gross domestic product of each of Italy, Canada and Brazil. A few percentage points of movement there is a lot of money sloshing around.

Irrespective of share market anxiousness, it’s better to be where Nvidia currently is with AI than to have missed it. Intel, which kicked off the PC revolution, missed not just smartphones and mobility, but AI as well.

The company went on life support and might still be sold, as a whole or in parts. Samsung struggled to get a foothold in the HBM market, and its chief executive has had to apologise again to shareholders for that miss.

Even Apple fumbled with AI, angering loyal fans like high-profile blogger John Gruber of Daring Fireball who wrote that the Siri personal digital assistant is dumb and getting dumber despite promises of the opposite. 

You can’t do that as a tech company in 2025. Apple has acknowledged that its AI efforts are behind schedule, and reorganised the division responsible for the work. The chief of Apple’s mixed reality Vision Pro goggles has now been put in charge of AI to speed things up. And boom, the lawyers are circling, taking legal action against Apple in the US for false advertising.

There's not much margin for error and misses, in other words. AI has become a high-stakes game for everyone involved, and it’s an unpredictable one.

Update: the SI standard naming should be peta, not quadrillion.
