Nvidia outlines plans for using light for communication between AI GPUs by 2026 — silicon photonics and co-packaged optics may become mandatory for next-gen AI data centers

The extreme demands of communication across ever-growing clusters of AI GPUs are fueling a move toward using light in the networking layers. Earlier this year, Nvidia outlined that its next-generation rack-scale AI platforms will use silicon photonics interconnects with co-packaged optics (CPO) for higher transfer rates at lower power. At the Hot…
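
The power argument for CPO is simple to sanity-check: interconnect power is energy per bit times aggregate bandwidth, and co-packaging the optical engine with the switch silicon lowers the energy per bit. Below is a minimal back-of-envelope sketch; the pJ/bit figures and the 100 Tb/s bandwidth are illustrative assumptions, not Nvidia's published numbers.

```python
# Illustrative sketch of why co-packaged optics (CPO) matters:
# interconnect power = energy per bit * aggregate bandwidth.
# The pJ/bit values below are ASSUMED ballpark figures for
# illustration only, not published specifications.
PLUGGABLE_PJ_PER_BIT = 15.0  # assumed: conventional pluggable optics
CPO_PJ_PER_BIT = 5.0         # assumed: co-packaged optics

bandwidth_tbps = 100.0  # hypothetical aggregate optical bandwidth, Tb/s

def optics_power_watts(pj_per_bit: float, tbps: float) -> float:
    # watts = (joules per bit) * (bits per second)
    return pj_per_bit * 1e-12 * tbps * 1e12

for name, pj in [("pluggable", PLUGGABLE_PJ_PER_BIT), ("CPO", CPO_PJ_PER_BIT)]:
    watts = optics_power_watts(pj, bandwidth_tbps)
    print(f"{name:9s}: ~{watts:,.0f} W at {bandwidth_tbps:.0f} Tb/s")
```

Under these assumed numbers, the same bandwidth costs roughly 1,500 W with pluggable optics versus 500 W with CPO, which is the kind of savings driving the transition.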

The Ultimate Guide to CPUs, GPUs, NPUs, and TPUs for AI/ML: Performance, Use Cases, and Key Differences

Artificial intelligence and machine learning workloads have fueled the evolution of specialized hardware to accelerate computation far beyond what traditional CPUs can offer. Each processing unit—CPU, GPU, NPU, TPU—plays a distinct role in the AI ecosystem, optimized for certain models, applications, or environments. Here’s a technical, data-driven breakdown of their core differences and best use…
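
As a rough illustration of how this division of labor surfaces in code, here is a minimal device-selection sketch in PyTorch. The fallback order is an assumption chosen for illustration; NPUs and TPUs are typically reached through vendor-specific backends (e.g. torch_xla for TPUs) rather than plain torch.device strings.

```python
# Minimal sketch: pick the best available accelerator for a PyTorch
# workload, falling back to the general-purpose CPU. The ordering here
# is an illustrative assumption, not a universal rule.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():          # NVIDIA (or ROCm-built) GPU
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple-silicon GPU path
        return torch.device("mps")
    return torch.device("cpu")             # general-purpose fallback

device = pick_device()
x = torch.randn(1024, 1024, device=device)
y = x @ x  # a GEMM, the kernel that GPUs/TPUs are built to accelerate
print(f"ran matmul on {y.device}")
```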

Elon Musk says xAI is targeting 50 million ‘H100 equivalent’ AI GPUs in five years — 230k GPUs, including 30k GB200s already reportedly operational for training Grok

Leading AI companies have been bragging about the number of GPUs they use or plan to deploy. Just yesterday, OpenAI announced plans to build infrastructure to power two million GPUs, but now Elon Musk has revealed even more colossal plans: the equivalent of 50 million H100 GPUs to be deployed for AI…
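
The "H100 equivalent" framing normalizes a mixed fleet to a single throughput unit. A minimal sketch of that arithmetic follows; the 5x GB200-to-H100 ratio is an assumption picked purely for illustration, not a figure from the report.

```python
# Back-of-envelope sketch of the "H100 equivalent" bookkeeping.
# The per-GPU ratio below is an ASSUMPTION for illustration: we treat
# one GB200 as roughly 5x one H100 and every other GPU in the reported
# 230k-GPU fleet as 1x.
H100_EQUIV_PER_GB200 = 5.0   # assumed ratio, purely illustrative
gb200_count = 30_000         # from the report
other_gpus = 230_000 - gb200_count

current_equiv = gb200_count * H100_EQUIV_PER_GB200 + other_gpus * 1.0
target_equiv = 50_000_000

print(f"current fleet: ~{current_equiv:,.0f} H100 equivalents")
print(f"target is ~{target_equiv / current_equiv:,.0f}x the current fleet")
```

Under these assumptions the current fleet works out to roughly 350,000 H100 equivalents, so the five-year target would mean growing it by more than two orders of magnitude.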
