Nvidia Unveils Blackwell Ultra and Rubin AI Chips Alongside New AI Systems

Nvidia just dropped a bombshell at GTC 2025. The tech giant revealed its Blackwell Ultra GPU architecture, and honestly, it’s a beast. With 288GB of HBM3e memory per GPU and 1.5x more AI performance than its predecessor, this thing isn’t messing around. The specs are downright ridiculous – we’re talking 15 petaFLOPS of dense 4-bit floating-point performance and 8 TB/s memory bandwidth. Yeah, you read that right.
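Those two headline figures, 15 petaFLOPS of dense FP4 compute and 8 TB/s of memory bandwidth, imply a very lopsided compute-to-bandwidth ratio. A quick back-of-envelope sketch (using only the numbers quoted above, not measured values) shows what that ratio means in practice:

```python
# Back-of-envelope roofline math for the quoted Blackwell Ultra specs
# (288 GB HBM3e, 8 TB/s bandwidth, 15 petaFLOPS dense FP4).
# All figures come from the article; nothing here is measured.

FP4_PFLOPS = 15      # dense 4-bit compute, petaFLOPS
BANDWIDTH_TBS = 8    # HBM3e bandwidth, TB/s
HBM_GB = 288         # on-package memory, GB

# FLOPs the GPU can issue per byte it can stream from memory.
# A kernel needs at least this arithmetic intensity to be compute-bound
# rather than bandwidth-bound.
flops_per_byte = (FP4_PFLOPS * 1e15) / (BANDWIDTH_TBS * 1e12)

# Time to stream the entire 288 GB once at full bandwidth -- roughly the
# floor on how fast one token can touch every weight held in HBM.
full_sweep_ms = (HBM_GB * 1e9) / (BANDWIDTH_TBS * 1e12) * 1e3

print(f"{flops_per_byte:.0f} FLOPs per byte")        # 1875
print(f"{full_sweep_ms:.0f} ms per full HBM sweep")  # 36
```

In other words, any workload doing fewer than roughly 1,900 FLOPs per byte fetched is bandwidth-bound on this part, which is exactly why Nvidia pushed HBM3e bandwidth as hard as capacity.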

Nvidia’s Blackwell Ultra GPU rewrites the rules with mind-bending specs: 288GB HBM3e memory and 15 petaFLOPS of raw computing muscle.

The new Blackwell Ultra features 12-high HBM3e stacks that deliver unprecedented memory capacity for AI workloads. But Nvidia wasn’t content with just one mic drop. They also teased their next-gen Vera Rubin architecture, named after the astronomer whose work provided key evidence for dark matter, because apparently, they’re going for astronomical levels of performance. Scheduled for late 2026, Vera Rubin promises to double Blackwell’s AI performance. That’s not a typo – double. It’ll pack up to 50 petaFLOPS during inference operations and combine two GPU dies into one package, like some sort of silicon Frankenstein. A custom-designed Vera CPU will power these units, delivering roughly twice the performance of the Grace CPU in today’s Grace Blackwell systems.

The systems built around these chips are equally impressive. The GB300 NVL72 crams 72 GPUs and 36 Grace CPUs into a rack-scale design, while the GB300 SuperPOD takes it to another level with 576 GPUs across 8 racks. Talk about overkill. Networking gets a matching upgrade with Nvidia’s ConnectX-8 SuperNICs for faster data transfer, and an optional battery backup system, priced around $1,500 per configuration, keeps everything running through power hiccups. CEO Jensen Huang emphasized during his keynote that we’ve reached an AI inflection point, marking a significant shift in computational capabilities.

And with up to 20 terabytes of HBM3e memory in the NVL72 configuration, these systems have more GPU memory than most data centers had just a few years ago.
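The aggregate figures above follow directly from the per-GPU specs, and they’re worth sanity-checking. A minimal sketch, assuming the article’s numbers (288 GB per GPU, 72 GPUs per NVL72 rack, 8 racks per SuperPOD):

```python
# Sanity-checking the quoted rack- and pod-scale totals
# from the per-GPU figures in the article.

GPU_MEM_GB = 288        # HBM3e per Blackwell Ultra GPU
GPUS_PER_NVL72 = 72     # GPUs in one GB300 NVL72 rack
RACKS_PER_SUPERPOD = 8  # racks in a GB300 SuperPOD

# 72 x 288 GB = 20,736 GB, i.e. the "up to 20 terabytes" claim.
nvl72_mem_tb = GPUS_PER_NVL72 * GPU_MEM_GB / 1000

# 8 x 72 = 576 GPUs, matching the SuperPOD figure.
superpod_gpus = RACKS_PER_SUPERPOD * GPUS_PER_NVL72

print(f"NVL72 aggregate HBM: {nvl72_mem_tb:.1f} TB")  # 20.7
print(f"SuperPOD GPU count: {superpod_gpus}")         # 576
```

So the "20 TB" number is actually a slight round-down: the math works out to about 20.7 TB of HBM3e per rack.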

Looking further ahead, Nvidia’s planning Rubin Ultra for late 2027, which will somehow squeeze four GPU dies into a single package. Because apparently, two wasn’t enough. These chips are designed to handle everything from reasoning models to physical AI, with 10x the inference throughput on models like DeepSeek-R1.

The impact on AI computing is massive. Major cloud providers are already lining up to get their hands on these systems, and Nvidia’s revenue opportunity for AI factories is expected to multiply by 50x. That’s the kind of math that makes accountants cry tears of joy.

All this processing power comes at a cost, though – each Blackwell Ultra GPU draws 1400W of power. But Nvidia’s redesigned everything for better energy efficiency and serviceability, because they’d prefer not to melt their data centers.
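That 1,400 W per-GPU figure compounds quickly at rack scale. A rough sketch, counting only the GPUs (CPUs, networking, and cooling overhead would push the real totals higher):

```python
# Rough power math from the article's figures: 1,400 W per Blackwell
# Ultra GPU, 72 GPUs per NVL72 rack, 8 racks per SuperPOD.
# GPU draw only -- excludes CPUs, NICs, and cooling.

GPU_WATTS = 1400
GPUS_PER_RACK = 72
RACKS_PER_SUPERPOD = 8

rack_kw = GPU_WATTS * GPUS_PER_RACK / 1000  # ~100.8 kW per rack
superpod_kw = rack_kw * RACKS_PER_SUPERPOD  # ~806 kW per SuperPOD

print(f"{rack_kw:.1f} kW per rack (GPUs alone)")
print(f"{superpod_kw:.0f} kW per 8-rack SuperPOD (GPUs alone)")
```

Over 100 kW of GPU silicon per rack is well beyond what air cooling handles comfortably, which is a big part of why these systems were redesigned around liquid cooling and serviceability.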

The future of AI is here, and it’s hungry for watts.
