Nvidia's new 'robot brain' goes on sale for $3,499(cnbc.com)
90 points by tiahura a day ago | 87 comments
npalli a day ago | parent | next [-]

So the single place that we can buy this is showing no stock (already) and not clear if this will even ship given all the customs and tariffs stuff. I must say after waiting for months on the 'almost ready to ship' DGX Spark (with multiple partners no less), getting strong announce-ware vibes from this already.

https://www.arrow.com/en/products/945-14070-0080-000/nvidia?...

justincormack a day ago | parent | next [-]

My assumption would be that it will suffer the same delays as DGX Spark, as it is a very similar chipset, so maybe December?

altspace 20 hours ago | parent | prev [-]

They are becoming available on Arrow in limited quantities. I just ordered mine; it's arriving this week.

mwambua a day ago | parent | prev | next [-]

My naive first reaction was that a unit like that would consume way too much power to be practical on a robot, but then I remembered how many calories our own brains need vs the rest of our body (Google says 20% of total body needs).

Looks like power consumption for the Thor T5000 is between 30W-140W. The Unitree G1 (https://www.unitree.com/g1) has a 9Ah battery that lasts 2hrs under normal operation. Assuming an operating voltage of 48V (13s battery), that implies the robot's actuator and sensor power usage is ~216W.

Assuming average power usage is somewhere in the middle (85W), a Thor unit would consume 28% of the robot's total power needs. This doesn't account for the fact that the robot would have to carry around the additional weight of the compute unit, though. Can't say if that's good or bad, just interesting to see that it's in the same ballpark.
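The arithmetic above, as a quick sketch (all figures are the comment's assumptions pulled from the Unitree and NVIDIA spec pages, not measured numbers):

```python
# Back-of-envelope power budget for a robot carrying a Thor T5000.
battery_wh = 9 * 48              # 9 Ah pack at ~48 V nominal -> 432 Wh
runtime_h = 2                    # rated runtime under normal operation
robot_w = battery_wh / runtime_h # ~216 W for actuators + sensors

thor_w = (30 + 140) / 2          # midpoint of Thor's 30-140 W envelope -> 85 W
share = thor_w / (robot_w + thor_w)
print(f"robot: {robot_w:.0f} W, Thor: {thor_w:.0f} W, compute share: {share:.0%}")
```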

xattt a day ago | parent | next [-]

Can self-driving cars be framed as robots?

An electric car would have no issue sustaining this level of power; a gas-powered car doubly-so.

AlotOfReading a day ago | parent [-]

Autonomous vehicles are indeed robots, but they have power constraints (that Thor can reasonably fit within). Most industrial robots aren't meaningfully power constrained though.

It was a bit of a culture shock the first time I was involved with industrial robots because of how much power constraints had impacted the design of previous systems I worked on.

worldsayshi a day ago | parent | prev | next [-]

I tried to look up human wattage as a comparison and I'm very surprised that it lands in the same ballpark: around 145W as a daily average and around 440W as an approximate hourly average during exercise.

I thought current gen robots would be an order of magnitude less efficient. Maybe I'm misunderstanding something.

BobbyJo a day ago | parent | next [-]

Electric motors are very energy efficient. I believe they are actually far more efficient on a per-joint movement basis, and the equivalence between us and them is largely due to inefficient locomotion.

Where we excel is energy storage. Far less weight, far higher density.

themafia a day ago | parent | prev | next [-]

I happen to have an envelope handy:

2000 kilocalorie converts to 8.3 megajoules. This should be the amount of energy consumed per day.

8.3 megajoules / 24 hours is 96 watts. This should be the average rate of energy expenditure.

96 watts * 20% is 19 watts. This should be the portion your brain uses out of that average.

19 watts * 24 hours is roughly 464 watt-hours. This should be the amount of energy your brain uses in a day.
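The same envelope, re-run in code (2000 kcal/day and the 20% brain share are the assumptions above):

```python
# Daily food energy -> average metabolic power -> the brain's share.
KCAL_TO_J = 4184
daily_j = 2000 * KCAL_TO_J          # ~8.37 MJ per day
avg_w = daily_j / (24 * 3600)       # ~97 W average metabolic rate
brain_w = avg_w * 0.20              # ~19 W for the brain
brain_wh_day = brain_w * 24         # ~465 Wh/day
print(f"{avg_w:.0f} W average, {brain_w:.0f} W brain, {brain_wh_day:.0f} Wh/day")
```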

This is why I've never found "AI" to be particularly competitive with human beings. The level of energy efficiency that our brains operate at is amazing. Our electrical and computer engineering is several orders of magnitude out from the achievements of nature and biology.

ZiiS a day ago | parent [-]

Calculate how much energy needs to be input into agriculture and transport to provide that wattage.

themafia a day ago | parent [-]

To be fair we'd have to consider how much of this same secondary energy would be required to build, operate and maintain the power grid. The grid itself is not 100% efficient either so we'd need to calculate how much power is directly wasted every single day just in inserting and extracting power from those overhead lines.

That's way off the envelope though.

lm28469 a day ago | parent | prev | next [-]

We do a whole lot of things a robot doesn't have to do, like filtering blood, digesting, keeping warm.

a day ago | parent | next [-]
[deleted]
worldsayshi a day ago | parent | prev [-]

Body maintenance.

LtdJorge a day ago | parent | prev [-]

Every hardware piece of such a robot can do a few things. Our body parts do orders of magnitude more, including growing and regeneration.

amelius 13 hours ago | parent | prev | next [-]

A robot working at an assembly line can be powered with a cable.

riku_iki a day ago | parent | prev | next [-]

> too much power to be practical on a robot

robot could be useful even when permanently plugged to the grid.

tonyarkles a day ago | parent [-]

From a UAV perspective, even at 140W it's not too bad. For a multi-rotor, that's about the same energy needed to lift around 750g-1kg of payload.
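As a rough sketch of that framing: hover power per kilogram for small multirotors is commonly quoted in the 120-200 W/kg range (an assumption that varies widely with prop size and disk loading), which puts 140 W of compute at roughly the lift cost of 0.75-1 kg:

```python
# How much payload lift does 140 W of compute "cost" on a multirotor?
# The W/kg figures below are illustrative assumptions, not measurements.
thor_w = 140
for w_per_kg in (140, 185):
    print(f"at {w_per_kg} W/kg, 140 W of compute ~ carrying {thor_w / w_per_kg:.2f} kg")
```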

bitwize a day ago | parent | prev [-]

The efficacy to weight ratio of meat vs. rocks and metal is freakin' absurd. We don't know how to build a robot that's as strong and damage-resistant as a human body and weighs only as much as one. Similarly we don't know how to build something as energy-efficient as a human brain that thinks anywhere near as well. Artificial superintelligence may well be a thing in the coming decades, but it will be profoundly energy-greedy; I fear the first thing it will resolve to do is secure fuel for itself by stealing our energy supplies like out of Superman III.

kjhughes a day ago | parent | prev | next [-]

Here's NVIDIA's blog post on this:

NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI

https://blogs.nvidia.com/blog/jetson-thor-physical-ai-edge/

nickfromseattle a day ago | parent | prev | next [-]

What are the variables that prefer local GPUs vs cloud inference? Is connectivity the dividing line or are there other variables that influence the choice?

Anduril submersibles probably need local processing, but does my laundry/dishes robot need local processing? Or machines in factories? Or delivery drones?

michaelt a day ago | parent | next [-]

Any sort of continuous video processing, especially low-latency.

Imagine you were tracking items on video at a self-service checkout. Sure, you could compress the video down to 15 Mbps or so and send it to the cloud. But now, a store with 20 self-checkouts needs 300 Mbps of upload bandwidth. That's one more problem making it harder for Wal-Mart to buy and roll out your product.

Also, if you know you need an NVIDIA L4 dedicated to you 24/7 for a year, a g6.xlarge will cost $7,000/year on-demand or $4,300/year reserved [1] while you can buy the card for $2,500.

Of course for many other use cases the cloud is a fine choice. If you only need a fraction of a GPU, or you only need a monster GPU a tiny fraction of the time, or you need an enormous LLM that demands water cooling and tolerates latency easily, the cloud can be a fine choice.

[1] https://instances.vantage.sh/aws/ec2/g6.xlarge?currency=USD&...
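Both of those numbers sketch out quickly (the prices are the comment's figures, not current quotes):

```python
# Upload bandwidth for N compressed camera streams, and GPU buy-vs-rent.
checkouts = 20
mbps_per_stream = 15
print(f"upload needed: {checkouts * mbps_per_stream} Mbps")

l4_purchase = 2500           # approximate L4 card price (USD)
g6_reserved_per_year = 4300  # reserved g6.xlarge, per the comment
breakeven_years = l4_purchase / g6_reserved_per_year
print(f"card pays for itself in ~{breakeven_years:.1f} years of reserved pricing")
```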

traverseda a day ago | parent | prev | next [-]

Anything latency sensitive. Anything bandwidth constrained.

Simple example, Security cameras that only use bandwidth when they've detected something. The cost of live streaming 20 cameras over 5g is very high. The cost of sending text messages with still images when you see a person is reasonable.

pyrale a day ago | parent | prev | next [-]

Why the hell would a dishwasher need to be connected, or smart for that matter?

I just want clean dishes/clothes, not to be upsold into some stupid shit that fails when it can’t ping google.com or gets bricked when the company closes.

I would pay premium for certified mindless products.

sidewndr46 a day ago | parent | prev | next [-]

Anecdotally, I don't have any direct physical evidence or written evidence to support this. But I talked to someone in the industry over a decade ago when "run it on a GPU" was just heating up. It's drones. Not DJI ones, military ones with surveillance gear and weapons.

bigfishrunning a day ago | parent | prev | next [-]

Mining, remote construction, remote power station inspection, battlefields. there are many many places where a stable network connection can't be taken for granted.

newsclues a day ago | parent | prev | next [-]

I want local processing for my local data. That includes my photos, documents and surveillance camera feeds.

ls612 a day ago | parent | prev | next [-]

If I had to guess there is significant interest in this product from a certain Eastern European nation. I don’t think they are intending to use it for “robotics” though.

exe34 a day ago | parent | prev [-]

it depends if the plates were expensive.

pmdr a day ago | parent | prev | next [-]

> CEO Jensen Huang has said robotics is the company’s largest growth opportunity outside of artificial intelligence

> The Jetson Thor chips are equipped with 128GB of memory, which is essential for big AI models.

Just put it into a robot and run some unhinged model on it, that should be fun.

bigfishrunning a day ago | parent | next [-]

The models that run on robots do things like "where is the road" or "is this package damaged"; people will run LLMs on this thing, but that's not its primary bread-and-butter.

ACCount37 a day ago | parent [-]

The future of advanced robotics likely requires LLM-scale models. With more bias towards vision and locomotion than the usual LLM, of course.

pradn a day ago | parent | prev | next [-]

There's already this hilarious bot. It's able to use people's outfits to woo them, or insult them. It's pretty good!

https://www.instagram.com/rizzbot_official/

ks2048 a day ago | parent | prev | next [-]

> CEO Jensen Huang has said robotics is the company’s largest growth opportunity outside of artificial intelligence

Does "robotics outside of AI" imply they want to get into making actual robots (beyond the GPU "brains")?

echelon a day ago | parent | prev [-]

AMD should jump on this immediately.

Edge compute has not yet been won. There is no ecosystem for CUDA for it yet.

Someone else but Nvidia please pay attention to this market.

Robots can't deal with the latency of calls back to the data center. Vision, navigation, 6DOF, articulation all must happen in real time.

This will absolutely be a huge market in time. Robots, autonomous cars, any sort of real time, on-prem, hardware type application.

varelse a day ago | parent [-]

[dead]

shekhar101 a day ago | parent | prev | next [-]

I was reading the Xiaomi YU7 marketing page[0] yesterday and the NVIDIA AGX Thor stood out (it says: NVIDIA DRIVE AGX Thor). I was wondering what it was and this showed up! Looks like it (or a Drive variant of it) is already being used in newer cars for self-driving and such. [0] https://www.mi.com/global/discover/article?id=5174

jauntywundrkind a day ago | parent | prev | next [-]

Wow: notably a more advanced CPU than DGX GB200! 14 Neoverse V3AE cores, whereas Grace Hopper is 72x Neoverse V2. Comparing versus big GB100: 2560/96 CUDA/Tensor cores here vs big Blackwell's 18432/576 cores.

> Compared to NVIDIA Jetson AGX Orin, it provides up to 7.5x higher AI compute and 3.5x better energy efficiency.

I could really use a table of all the various options Nvidia has! Jetson AGX Orin (2023) seems to start at ~$1700 for a 32GB system, with 204GB/s bandwidth, 1792 Ampere, 56 Tensor, & 8 A78AE ARM Cores, 200 TOPS "AI Performance", 15-45W. Slightly bigger model of 2048/64/12 cores/275 TOPS, 15-60W available. https://en.wikipedia.org/wiki/Nvidia_Jetson#Performance

Now Jetson T5000 is 2070 TFLOPS (but FP4 - Sparse! Still ~double-ish). 2560 Core Blackwell, 96 Tensor cores, 14 Neoverse V3AE cores. 273GB/s 128GB. 4x25Gbe is a neat new addition. 40-130W. There's also a lower spec T4000.

Seems like a pretty in line leap at 2x the price!

Looks like a physically pretty big unit. Big enough to scratch my head in the intro video of robots opening up the package & wonder: where are they going to fit their new brain? But man, the breakdown diagram: it's- unsurprisingly- half heatsink.

adrian_b 15 hours ago | parent [-]

It should be noted that Neoverse V3AE and Neoverse V3 are the automotive/server versions of the Cortex-X4 core, which is well known from many smartphones (and which is similar in performance to the Skymont E-cores of the Intel Lunar Lake, Arrow Lake S and Arrow Lake H CPUs).

While the Cortex-X925, the successor of Cortex-X4, has better absolute performance, it has much worse performance per die area. Therefore, for a CPU where the best multi-threaded performance is desired, Neoverse V3AE/Neoverse V3/Cortex-X4 remains the best CPU core designed by the Arm company.

This year's Arm core announcements have been delayed and it is not clear how the future Cortex-A930 and Cortex-X930 will compare with the currently existing Cortex-X4, Cortex-A725 and Cortex-X925.

ilaksh a day ago | parent | prev | next [-]

GMTec AMD Ryzen™ AI Max+ 395 --EVO-X2 AI Mini PC seems pretty similar and only $2000.

Would be interested to see head to head benchmarks including power usage between those mini PCs and the Nvidia Thor.

beefnugs 17 hours ago | parent [-]

It's not even close, dude. The Nvidia stuff is like 2000 TOPS vs the 50 you get from the AI Max+ 395.

adrian_b 15 hours ago | parent | next [-]

True, but this advantage is strictly for AI inference and only when using the very low resolution FP4 and sparse matrices.

When using bigger number formats and/or dense matrices the advantage of Thor diminishes considerably.

Also, the 50 TOPS is only from the low-power NPU. When distributing the computation over the GPU and CPU as well, you get much more. So for a balanced comparison one has to divide the Thor value by 4 or more and multiply the Ryzen value by a factor that might be around 3 or even more.

The Ryzen CPU is significantly better, and the GPU has about the same size but a much higher clock frequency, so it should also be faster, so for anything except AI inference a Ryzen Max at half price will offer much more bang for the buck.

17 hours ago | parent | prev [-]
[deleted]
wmf a day ago | parent | prev | next [-]

Orin was pretty expensive at $2,000; now Thor is significantly more.

mikepurvis a day ago | parent | next [-]

And now everyone's $2000 Orins will be stuck forever on Ubuntu 24.04 just like the Xaviers were abandoned on 20.04 and the TX1/2 on 18.04.

Nothing like explaining to your ML engineers that they can only use Python 3.6 on an EOL operating system because you deployed a bunch of hardware shortly before the vendor released a new shiny thing and abruptly lost interest in supporting everything that came before.

And yes, TX2 was launched in 2017, but Nvidia continued shipping them until the end of 2024, so it's absurd they never got updated software: https://forums.developer.nvidia.com/t/jetson-tx2-lifecycle-e...

audiofish a day ago | parent [-]

Same experience here, plus serial port drivers that don't work, bootloader bugs causing bricked machines in the field. This on a platform nearly a decade old! The hardware is great but the software quality is abysmal, when compared to other industrial SoC manufacturers.

tonyarkles a day ago | parent | next [-]

Ahhhh I see there's someone else who has experienced the serial port driver bugs :). I was responsible for helping them figure out and fix the one related to DMA buffers but still encounter the "sometimes it just stops sending data" one often enough.

mikepurvis a day ago | parent | prev [-]

I think what's most galling about it is that Nvidia gets away with behaving like this because even a decade later they're still basically the only game in town if you want a low power embedded GPU solution for edge AI stuff.

AMD has managed to blunder multiple opportunities to launch something into this space and earn the trust of developers. And no, NUC form factor APU machines are not the answer— both for power/heat concerns and the software integration story being an incomplete patchwork.

AlotOfReading a day ago | parent | prev | next [-]

Thor is a pretty big jump in power and the current prices are a bargain compared to what else is out there if you need the capabilities. I wish there was a competitive alternative, because Nvidia is horrible to work with.

CamperBob2 a day ago | parent | prev [-]

128 GB for $3,499 doesn't sound bad at all.

adrian_b 15 hours ago | parent [-]

You can get the same memory (including approximately the same bandwidth) in a Strix Halo system at half this price.

Therefore it sounds quite bad. Like Orin before it, Thor is severely overpriced.

It is worthwhile only for those who absolutely need some of its features that are not available elsewhere, like automotive certification or no need of additional boards when interfacing with a great number of video cameras.

probablydan a day ago | parent | prev | next [-]

Can these be used for local inference on large models? I'm assuming the 128G of memory is like system memory, not like GPU VRAM.

a day ago | parent | next [-]
[deleted]
sgillen a day ago | parent | prev | next [-]

It has a unified memory architecture, so the 128G is shared directly between CPU and GPU. Though it's slower than dGPU VRAM.

bigyabai a day ago | parent | prev | next [-]

Yes, but it is substantially cheaper and usually faster to buy a Jetson Orin chip or build an x86 homelab.

CamperBob2 19 hours ago | parent | prev [-]

Memory bandwidth is less than 300 GB/sec, looking at the data sheet. So it won't really be any faster at local inference than a Mac Pro or a decent x86 box.

It appears to be an embedded DGX Spark, at the end of the day.
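A rough way to see why bandwidth dominates: memory-bound LLM decoding is bounded by bandwidth divided by the bytes read per generated token (roughly the model's size in memory). The ~273 GB/s figure is from the data sheet mentioned above; the model sizes are illustrative assumptions:

```python
# Upper-bound decode speed for memory-bound LLM inference:
# tokens/s ~ memory bandwidth / bytes read per token (~ model size in RAM).
bandwidth_gbs = 273            # Thor's quoted memory bandwidth
for model_gb in (8, 35, 70):   # illustrative quantized model footprints
    print(f"{model_gb} GB model: ~{bandwidth_gbs / model_gb:.0f} tok/s upper bound")
```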

jsight a day ago | parent | prev | next [-]

This sounds very similar to the dgx spark, which still hasn't shipped afaik.

adrian_b 15 hours ago | parent [-]

The CPU cores are very different (big Cortex-X925 + small Cortex-A725 vs. medium-size Cortex-X4 = Neoverse V3AE).

The CPU of DGX Spark has better single-threaded performance, while that of Thor has better multi-threaded performance per die area and per power consumption.

neom a day ago | parent | prev | next [-]

AGX Thor + TensorRT + SDXL-Turbo (or SD 1.5 LCM) + ControlNet (depth/canny) + ROS 2 + Isaac ROS + CUDA zero-copy camera feeds = fun!!

diggan a day ago | parent | next [-]

Or "VR Chat + mostly human art and control but digital representation" ends up being more fun, engaging, cheap and/or humanizing. Wonder what the most accurate full-body tracking + VR headset one could put together today for that would cost? Feels like it could be cheaper than just the "hardware brain" part of that.

a_t48 a day ago | parent | prev [-]

You don’t need Thor nor ROS for this, but it can certainly help.

shrubble a day ago | parent | prev | next [-]

The Strix 395+ or whatever it is called is $2k with 128GB, but I think less performance.

asadm a day ago | parent | prev | next [-]

Has anyone deployed jetson or similar in production? whats the BOM like at scale?

vjk800 a day ago | parent | prev | next [-]

What exactly does this chip do?

wmf a day ago | parent [-]

It has ARM CPU cores and an Nvidia GPU so it can do whatever you want but it's optimized for AI video analysis. Great for factory robots or self-driving cars.

torginus a day ago | parent | prev | next [-]

So this is the thing that's going to go into the next generation of Russian and Ukrainian drones.

sabareesh a day ago | parent | prev | next [-]

Looks very similar to DGX spark

justincormack a day ago | parent [-]

It is, some small differences. Also rumoured this will become a laptop chipset in future, although probably at lower power.

affenape a day ago | parent | prev | next [-]

Can it run doom? Can it make doom come true?

croes a day ago | parent | prev | next [-]

The shovel sellers new shovels

MaxPock a day ago | parent | prev | next [-]

[flagged]

pavlov a day ago | parent | next [-]

No, at least the Muskian end of the MAGA spectrum is very much into humanoid robots. I'm afraid it's because they imagine themselves at the head of a giant robot slave army.

Maybe more practically they see robots taking over the jobs that immigrants now do in America.

aianus a day ago | parent | prev | next [-]

What makes you think MAGA hates those things? Texas and Florida are right behind California for adoption of EVs and green energy.

nsp a day ago | parent | next [-]

Trump administration halts work on an almost-finished wind farm: https://www.npr.org/2025/08/23/nx-s1-5513919/trump-stops-off...

EV subsidies ending in a month or two: https://www.kiplinger.com/taxes/ev-tax-credit

Tons of examples

lukeschlather a day ago | parent | prev [-]

Trump is using environmental law to halt green energy projects. (Not just cutting unnecessary subsidies, declaring projects that are already under construction illegal.)

ethagknight a day ago | parent | prev [-]

Are there massive subsidies for humanoid robots?

MaxPock a day ago | parent [-]

I don't think subsidies are the issue. Farmers get billions in subsidies every year, and they don't get that much hate from the MAGA world.

lousken a day ago | parent | prev | next [-]

cost needs to be reduced by 90% to be viable

AlotOfReading a day ago | parent [-]

Serious question, what comparable hardware can you buy for 10% of the cost?

lousken a day ago | parent [-]

I meant for a dev kit it's fine, but it's not viable for anything beyond that. It shouldn't cost 100x an RPi if you're gonna use it as part of a robot.

hadlock a day ago | parent | next [-]

Presumably prices will come down as this market segment matures; it's not unreasonable to assume performance will double and the price will reduce by half within a decade. A $2000 brain in a $20,000 robot is 10% of the total cost but at that price point it's not prohibitively expensive for the market they're catering to. The unitree G1 can be had for as little as $16,000 usd allegedly but capable models can be north of $40,000.

If you're buying a durable good like a warehouse robot or household chores robot that costs as much as a car this doesn't seem like that high of a starting point for the market segment to me.

AlotOfReading a day ago | parent | prev [-]

Pretty much everyone in my part of the industry is either working with thor-family chips already or actively investigating whether they should switch to them, with very few exceptions. The pricing seems completely viable based on that alone.

Anyone who can use an RPi (or one of many other SoCs in that class) should absolutely consider them, but that's not the market this is competing in. RPis are more comparable to the Jetson nano line, which had sub-$100 dev kits. Slightly above that are the Orin-based tegras like the SoC in the switch 2, which are still clearly viable.

throwawayoldie a day ago | parent | prev [-]

If I were Jensen Huang, the first thing I'd do...well, the _first_ thing I'd do is ditch the silly leather jacket and dress like an adult. But the first thing I'd do with Nvidia is make sure the company's product line is well diversified for the coming AI winter.

cheema33 a day ago | parent | next [-]

> the _first_ thing I'd do is ditch the silly leather jacket and dress like an adult

Given what he has accomplished, he has more than earned the right to wear a leather jacket if he wants to. People didn't complain about Steve Jobs wearing a turtleneck for the same reason. When you have accomplished as much as these guys, then you can dish out fashion advice and maybe someone will listen.

card_zero a day ago | parent | next [-]

From his biography, the turtleneck was kind of an accident:

> “So I called Issey [Miyake] and asked him to design a vest for Apple,” Jobs recalled. “I came back with some samples and told everyone it would be great if we would all wear these vests. Oh man, did I get booed off the stage. Everybody hated the idea.”

> “So I asked Issey to make me some of his black turtlenecks that I liked, and he made me like a hundred of them.” Jobs noticed my surprise when he told this story, so he gestured to them stacked up in the closet. “That’s what I wear,” he said. “I have enough to last for the rest of my life.”

I wonder if Issey thought the turtlenecks might be the Apple uniform, and that's why he made lots.

a day ago | parent | prev [-]
[deleted]
metalliqaz a day ago | parent | prev [-]

First thing I'd do is keep making more expensive shovels until gold miners stop buying them.