Chipmaker Nvidia announced Monday that its Spectrum-X networking technology has helped expand xAI's Colossus supercomputer, now known as the world's largest AI training cluster.
Located in Memphis, Tennessee, Colossus serves as the training site for Grok, xAI's family of large language models that power chatbot features for X Premium subscribers.
Colossus was completed in 122 days and began training its first models 19 days after installation. Tech billionaire Elon Musk's startup xAI plans to double the system's capacity to 200,000 GPUs, according to Nvidia's press release on Monday.
At its core, Colossus is a vast interconnected system of GPUs, each processing enormous datasets. When Grok models are trained, they must analyze huge amounts of text, images, and other data to improve their responses.
Described by Musk as the world's most powerful AI training cluster, Colossus connects 100,000 Nvidia Hopper GPUs using a unified Remote Direct Memory Access (RDMA) network. Nvidia's Hopper GPUs handle complex tasks by splitting workloads across multiple GPUs and processing them in parallel.
The architecture allows data to move directly between nodes, bypassing the operating system and ensuring low latency and steady throughput for large-scale AI training tasks.
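For readers curious what "splitting workloads across multiple GPUs" looks like in practice, here is a minimal sketch of data-parallel training using PyTorch's DistributedDataParallel. The toy model, batch size, and learning rate are illustrative placeholders, not details of xAI's actual training stack; the point is the traffic pattern, where gradients are averaged across GPUs over the network, which is exactly the kind of communication an RDMA fabric like Spectrum-X accelerates.

```python
# Minimal sketch of data-parallel training, the general technique clusters
# like Colossus apply at vastly larger scale. Model and hyperparameters are
# illustrative placeholders, not xAI's actual setup.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK; one process per GPU.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model standing in for a large language model.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for step in range(10):
        # Each GPU sees a different shard of the batch; after backward(),
        # DDP averages gradients across all GPUs (an all-reduce over the
        # interconnect), keeping every model replica in sync.
        batch = torch.randn(32, 1024, device=local_rank)
        loss = ddp_model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()   # triggers the cross-GPU gradient all-reduce
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=8 train.py`, this runs one process per GPU on a single node; at Colossus scale the same pattern spans tens of thousands of GPUs across many nodes, which is why the interconnect's throughput and latency matter so much.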
Traditional Ethernet networks often suffer from congestion and packet loss, limiting throughput to around 60%. Spectrum-X achieves 95% throughput without latency degradation, according to Nvidia.
Spectrum-X allows huge numbers of GPUs to communicate with each other efficiently, where traditional networks would become overwhelmed by the volume of data.
The technology allows Grok to learn quickly and accurately, which is essential for building AI models that respond effectively to human interaction.
Monday's announcement had little impact on Nvidia's stock, which dipped slightly. Shares traded at $141 as of Monday, giving the company a market capitalization of $3.45 trillion.
Edited by Sebastian Sinclair.