Nvidia has improved its low-cost Jetson AI computer—more power for half the price
Benito Santiago
Good news for AI developers and hobbyists: Nvidia has made it much cheaper to build AI-powered robots, drones, smart cameras, and other gadgets that need onboard intelligence. The company's new Jetson Orin Nano Super, announced on Tuesday, packs more processing muscle than its predecessor and costs half as much at $249.
The palm-sized computer offers a 70% performance boost, reaching 67 trillion operations per second for AI tasks. That's a significant leap from previous models, especially for powering things like chatbots, computer vision and robotics applications.
“This is a new Jetson Nano Super. About 70 trillion operations per second, 25 watts and $249,” said Nvidia CEO Jensen Huang in an official video filmed in his kitchen. “It runs everything HGX does; it even runs LLMs.”
Memory bandwidth has received a major upgrade, increasing to 102 gigabytes per second, which is 50% faster than the previous Jetson generation. This upgrade means the device can handle more complex AI models and process data from up to four cameras simultaneously.
The device comes with an Nvidia Ampere architecture GPU and a 6-core ARM processor, which lets it run multiple AI applications simultaneously. That gives developers room to experiment with a variety of capabilities, such as mapping environments, recognizing objects, or giving small robots voice-command brains, without needing heavy processing power.
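For the curious, here's roughly what that looks like in practice: a minimal sketch modeled on NVIDIA's open-source jetson-inference "Hello AI World" examples, which grabs frames from a camera and overlays object detections. The module and model names below are taken from those published examples and may vary by release; treat this as an illustration rather than the definitive setup.

```python
# Sketch based on NVIDIA's jetson-inference "Hello AI World" Python examples.
# Module names (jetson_inference / jetson_utils) and the model name are
# assumptions from those examples and may differ between releases.
import jetson_inference
import jetson_utils

net = jetson_inference.detectNet("ssd-mobilenet-v2", threshold=0.5)  # pretrained detector
camera = jetson_utils.videoSource("csi://0")       # first CSI camera (could also be /dev/video0)
display = jetson_utils.videoOutput("display://0")  # on-board display window

while display.IsStreaming():
    img = camera.Capture()          # grab a frame from the camera
    detections = net.Detect(img)    # run object detection on the GPU
    display.Render(img)             # show the frame with overlaid bounding boxes
    display.SetStatus(f"Detected {len(detections)} objects | {net.GetNetworkFPS():.0f} FPS")
```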
Existing Jetson Orin Nano owners won't be left out in the cold either. Nvidia is releasing software updates that boost the performance of its older boards.
The numbers behind Nvidia's new Jetson Orin Nano Super tell an interesting story. With just 1,024 CUDA cores, it looks modest compared to the RTX 2060's 1,920 cores, the RTX 3060's 3,584, or the RTX 4060's 3,072. But the raw core count doesn't tell the whole story.
While the RTX series GPUs draw between 115 and 170 watts of power, the Jetson sips only 7 to 25 watts. Even at its 25-watt ceiling, that's less than a quarter of the power consumption of the RTX 4060, the most efficient of the bunch.
Memory bandwidth numbers paint a similar picture. The Jetson's 102 GB/s might look modest next to RTX cards' 300+ GB/s, but it's optimized for AI workloads at the edge, where efficient data processing matters more than raw throughput.
That said, the real magic happens in the AI performance. The device clocks in at 67 TOPS (Trillion Operations Per Second) for AI tasks – a number that's difficult to directly compare to RTX cards' TFLOPS, as they measure different types of operations.
But in practical terms, the Jetson can handle tasks like running local AI chatbots, processing multiple camera feeds, and controlling robots, all at once, on a power budget that wouldn't even spin up a gaming GPU's cooling fan. For AI inference, that puts it roughly neck-and-neck with the RTX 2060 at a fraction of the price and power consumption.
The 8GB of shared memory may seem low, but it can be more practical than a standard RTX 2060's 6GB of VRAM for running local AI models like Flux or Stable Diffusion, which can throw an "out of memory" error on those GPUs or offload part of the work to standard system RAM, slowing down inference time, basically the AI's thinking process.
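To make that concrete, here's a minimal sketch of how a memory-conscious image-generation pipeline is typically set up on a small GPU, assuming the Hugging Face diffusers library and a half-precision Stable Diffusion 1.5 checkpoint. The model ID and memory-saving call are illustrative choices, not something specific to the Jetson announcement.

```python
# Minimal sketch, assuming the Hugging Face diffusers library is installed
# and a Stable Diffusion 1.5 checkpoint is available locally or from the Hub.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # illustrative model ID
    torch_dtype=torch.float16,          # half precision to cut memory use
)
pipe.enable_attention_slicing()         # trade a little speed for lower peak memory
pipe = pipe.to("cuda")

image = pipe("a photo of a small robot on a desk").images[0]
image.save("robot.png")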
The Jetson Orin Nano Super supports a variety of small and large language models, including those with up to 8 billion parameters, such as Llama 3.1. Running models of that size, it can generate text at roughly 18-20 tokens per second. That's a bit slow, but still good enough for some local applications, and an improvement over previous-generation Jetson AI hardware.
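As an illustration, a local chatbot of that kind can be driven with a few lines of Python, assuming something like Ollama is serving the 8-billion-parameter Llama 3.1 model on the device. The endpoint and model tag below follow Ollama's documented REST API and are an assumption about the reader's setup, not part of Nvidia's announcement.

```python
# Sketch of querying a local LLM server; assumes Ollama is running on the board
# and the model has been pulled beforehand (e.g. `ollama pull llama3.1:8b`).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",
        "prompt": "In one sentence, what can an edge AI board do for a hobby robot?",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```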
Given its price and features, the Jetson Orin Nano Super is primarily designed for prototyping and small-scale applications. For power users, businesses, or applications that require extensive computing resources, its capabilities may feel limited compared to higher-end systems that cost more and draw more power.
Edited by Andrew Hayward.