SlightlyLeftPad 8 days ago

Any EEs who can comment on at what point we just flip the architecture over, so the GPU PCB is the motherboard and the CPU/memory lives on a PCIe card? It seems like that would also have some power-delivery advantages.

kvemkon 4 days ago | parent | next [-]

> at what point do we just flip the architecture over so the GPU pcb is the motherboard and the cpu/memory

Actually the Raspberry Pi (which appeared in 2012) was based on a SoC with a big, powerful GPU and a small, weak supporting CPU. The board booted the GPU first.

verall 4 days ago | parent | prev | next [-]

If you look at any of the Nvidia DGX boards it's already pretty close.

PCIe is a standard/commodity so that multiple vendors can compete and customers can save money. But at PCIe 8.0 speeds I'm not sure how many vendors will really be supplying parts; there are already only a few doing SerDes that fast...

y1n0 4 days ago | parent | next [-]

There are companies that specialize in memory controller IP that everyone else uses, including large semi companies like Intel.

The IP companies are the first to support new standards and make their money selling to Intel etc., allowing Intel or whomever to take their time building higher-performance IP.

bgnn 4 days ago | parent | prev | next [-]

These days you can buy any standard as soft IP from Synopsys or Cadence. They take their previous SerDes and modify it to meet the new standard. They have thousands of employees across the globe doing just that.

Melatonic 4 days ago | parent | prev | next [-]

Isn't it also about latency with DGX boards, vs. PCIe? And you can only fit so much RAM on a board that will realistically plug into a slot.

snerbles 3 days ago | parent [-]

Most current DGX server assemblies are stacked and compression-fit, much higher density and more amenable to liquid cooling.

https://www.servethehome.com/micron-socamm-memory-powers-nex...

eggsome 4 days ago | parent | prev [-]

Has the DGX actually shipped anywhere yet?

verall 4 days ago | parent [-]

Do you mean the new one? The older ones have been around for so long you can buy off-leases of them: https://www.etb-tech.com/nvidia-dgx-1-ai-gpu-server-2-x-e5-2...

vincheezel 8 days ago | parent | prev | next [-]

Good to see I’m not the only person that’s been thinking about this. Wedging gargantuan GPUs onto boards and into cases, sometimes needing support struts even, and pumping hundreds of watts through a power cable makes little sense to me. The CPU, RAM, these should be modules or cards on the GPU. Imagine that! CPU cards might be back..

ksec 8 days ago | parent | next [-]

It's not like CPUs aren't getting higher wattage as well. Both AMD and Intel have roadmaps for 800W CPUs.

At 50-100W for IO, that only leaves about 11W per core on a 64-core CPU.
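The arithmetic can be sketched like this (a back-of-envelope, taking the 800W package and 50-100W IO figures from the comment as assumptions, not vendor specs):

```python
# Per-core power budget for a hypothetical 800W, 64-core CPU.
# The 800W package figure and 50-100W IO allowance are the comment's
# assumptions, not published specs.
package_watts = 800
io_watts = 100            # taking the high end of the 50-100W IO estimate
cores = 64

per_core = (package_watts - io_watts) / cores
print(f"~{per_core:.1f} W per core")  # ~10.9 W per core
```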

linotype 4 days ago | parent [-]

An 800-watt CPU with a 600-watt GPU... at a certain point people are going to need different wiring for their outlets, right?

0manrho 4 days ago | parent | next [-]

This is a legitimate problem in datacenters. They're getting to the point where a single 40(ish)OU/RU rack can pull a megawatt in some hyperdense cases. The talk of GPU/AI datacenters consuming inordinate amounts of energy isn't just because the DCs are yuge (although some are), but because the power draw per rack unit of space is going through the roof as well.

On the consumer side of things, where the CPUs are branded Ryzen or Core instead of Epyc or Xeon, a significant chunk of that power consumption comes from the boosting behavior they implement to pseudo-artificially[0] inflate their performance numbers. You can save a lot of energy (easily 10%, often closer to 30%, depending on the exact build/generation) by doing a very mild undervolt and limiting boosting behavior while keeping the same base clocks. Intel 11th- through 14th-gen CPUs are especially guilty of this, as are most Threadripper CPUs. You can often trade single-digit or even negligible performance losses (depending on what you're using it for and how far you undervolt/underclock/restrict boosting) for double-digit reductions in power usage. The same phenomenon holds for GPUs across the enterprise/consumer divide, though not quite to the same extent in most cases.

Point being, yeah, it's a problem in data centers, but honestly there's a lot of headroom still even if you only have your common American 15A@120VAC outlets available before you need to call your electrician and upgrade your panel and/or install 240VAC outlets or what have you.

0: I say pseudo-artificial because the performance advantages are real, but unless you're doing some intensive/extreme cooling they aren't sustainable or indicative of nominal performance, just a brief bit of extra headroom before your cooling solution heat-soaks and the CPU/GPU throttles itself back down. But it lets them put "bigger number means better" on the box for marketing.

Panzer04 4 days ago | parent | next [-]

It's not just about better numbers. Getting high clocks for a short period helps in a lot of use cases, say random things like a search. If I'm looking for some specific phrase in my codebase in VS Code, everything spins up for the second or two it takes to process that.

Boosting from 4 to 5 or 5.5 GHz for that brief period shaves a fraction of a second; repeat that for every similar operation and it adds up.

0manrho a day ago | parent [-]

Yes, I figured that much would be obvious to this crowd. Thus the "pseudo" part.

The point isn't that there's no benefit; it's that past a certain point you pay exponentially more energy per extra 0.1 GHz. Furthermore, AMD and Intel were exceptionally aggressive about it in the generations I outlined (for AMD, the 7000-series Ryzens specifically), leading to instability issues on both platforms, either because the spec itself was too aggressive or because AIB partners improperly implemented it: the headroom that typically exists at factory stock to push clocks/voltages further was no longer there in some silicon. (Some of it comes down to silicon lottery and manufacturing defects/mistakes, Intel's oxidation issues for example, but we're already getting into the weeds on this.)

And to clarify: I'm talking specifically about Intel Turbo Boost and AMD's PBO, where clocks are boosted well over base, as opposed to the general dynamic clocking behavior where clocks drop well below base when the CPU isn't under (heavy) load.

latchkey 2 days ago | parent | prev | next [-]

> They're getting to the point where a single 40(ish)OU/RU rack can pull a megawatt in some hyperdense cases.

Switch is designing for 2MW racks now.

spacedcowboy 4 days ago | parent | prev | next [-]

unless it’s an Apple data center, populated by the server version of the latest ultra chips…

0manrho 3 days ago | parent | next [-]

What makes you think that?

They're small and efficient, which means large numbers of them can be packed into small spaces, resulting in a similarly large power draw per volume of equipment in the DC. This is especially true with Apple's "UltraFusion" tech, which they're developing as a quasi-analog to Nvidia's Grace (Hopper) superchips.

spacedcowboy 3 days ago | parent [-]

Because I worked on them, before retiring. Yes they’re packed in; no they still don’t draw the same levels of power.

0manrho a day ago | parent [-]

Didn't say they draw the same; I openly acknowledge they're more efficient. I said power use per rack unit is trending up. This is true of Apple DCs as well, especially with their new larger/fused-chip initiatives. It's a universal industry trend, especially with AI compute, and Apple is not immune.

spacedcowboy a day ago | parent [-]

Let me rephrase to: No, they (collectively) don’t draw the same levels of power. I know what amperage is drawn by each rack. It’s nowhere near as much as was drawn by the older intel-based racks.

And yes, they’re packed densely.

deafpolygon 3 days ago | parent | prev [-]

at that point, they're powered by a bicycle.

ciupicri 3 days ago | parent | prev [-]

How safe is undervolting? Can it cause stability issues?

0manrho a day ago | parent [-]

Far safer than overvolting.

Changing settings can lead to stability issues no matter which way you push things, frankly. If you don't know what you're doing or aren't comfortable with it, it's probably not worth it.

jchw 4 days ago | parent | prev | next [-]

At least with U.S. wiring we have 15 amps at 120 volts. For continuous power draw you want to derate to 80% of that, so let's say you have 1440 watts of AC power you can safely draw continuously. Power supplies built on silicon MOSFETs seem to peak at around 90% efficiency, but you could consider something like the Corsair AX1600i, which uses gallium nitride transistors and can supposedly handle up to 1600 watts at 94% efficiency.

Apparently we still have room, as long as you don't run anything else on the same circuit. :)
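The math above, spelled out (a sketch; the 80% continuous-load rule is standard US practice, and the 94% figure is the comment's claim for the GaN-based PSU):

```python
# 80% continuous-load derating on a US 15A/120V branch circuit, then PSU
# efficiency to estimate deliverable DC watts. The 94% efficiency number
# is the comment's claim for the Corsair AX1600i, not a measured spec.
volts = 120
amps = 15
continuous_ac = volts * amps * 0.8   # 80% rule -> 1440 W sustained AC
gan_efficiency = 0.94
dc_watts = continuous_ac * gan_efficiency

print(continuous_ac)     # 1440.0
print(round(dc_watts))   # 1354
```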

atonse 4 days ago | parent | next [-]

You can always have an electrician install a larger breaker for a particular circuit. I did that with my "server" area in my study, which was overkill cuz I barely pull 100w on it. But it cost nearly zero extra since he was doing a bunch of other things around the house anyway.

viraptor 4 days ago | parent | next [-]

> You can always have an electrician install ...

If you own the house, sure. Many people don't.

jacquesm 4 days ago | parent | prev | next [-]

You need to increase the wire diameter as well if you go that route. Running larger breakers on 10A or 15A wiring is a recipe for bad stuff.

mrweasel 4 days ago | parent | prev | next [-]

In older houses, made from brick and concrete, that can be tricky to do. The only reason I can have my computer on a separate circuit is that we could repurpose the old three-phase wiring from a sauna we ripped out. If that hadn't been the case, getting the wires to the fuse board would have been tricky at best.

New homes are probably worse than old homes, though. The wires are just chucked into the space between the outer and inner walls; there's basically no chance of replacing them or pulling new ones. Old houses at least frequently have conduit in which the wires run.

davrosthedalek 4 days ago | parent | prev | next [-]

Larger breaker and thicker wires!

atonse 4 days ago | parent [-]

I thought you only needed thicker wires for higher amps? Should go without saying, but I am not a certified electrician :-)

I only have a PhD from YouTube (Electroboom)

jchw 4 days ago | parent [-]

The voltage is always going to be the same because the voltage is determined by the transformers leading to your service panel. The breakers break when you hit a certain amperage for a certain amount of time, so by installing a bigger breaker, you allow more amperage.

If you actually had an electrician do it, I doubt they would've installed a breaker if they thought the wiring wasn't sufficient. Truth is that you can indeed get away with a 20A circuit on 14 AWG wire if the run is short enough, though 12 AWG is recommended. The reason for this is voltage drop; the thinner gauge wire has more resistance, which causes more heat and voltage drop across the wire over the length of it, which can cause a fire if it gets sufficiently hot. I'm not sure how much risk you would put yourself in if you were out-of-spec a bit, but I wouldn't chance it personally.
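The voltage-drop argument can be made concrete (a sketch with a made-up 50 ft run; the ohms-per-1000-ft figures are commonly published values for solid copper at room temperature, not from the comment):

```python
# Rough voltage-drop comparison for a 20A load on 14 AWG vs 12 AWG copper.
# Resistance figures are typical published values; the run length is a
# hypothetical example.
R_PER_1000FT = {"14 AWG": 2.525, "12 AWG": 1.588}  # ohms per 1000 ft

def voltage_drop(awg: str, one_way_ft: float, amps: float) -> float:
    """Drop over the full out-and-back loop (2x the one-way run)."""
    resistance = R_PER_1000FT[awg] / 1000.0 * (2 * one_way_ft)
    return resistance * amps

for awg in R_PER_1000FT:
    vd = voltage_drop(awg, one_way_ft=50, amps=20)
    print(f"{awg}: {vd:.2f} V drop, {vd / 120:.1%} of 120V")
```

On these numbers the 14 AWG run drops about 5 V (over 4%), while 12 AWG stays under 3% — which is the usual rule-of-thumb guideline for branch circuits.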

bangaladore 4 days ago | parent [-]

Could you not just run a 240-volt outlet on existing wiring built for 110V? Just send L1 and L2 on the existing hot/neutral?

bri3d 4 days ago | parent [-]

You can, 240V on normal 12/2 Romex is fine. The neutral needs to be "re-labeled" with tape at all junctions to signify that it's hot, and then this practice is (generally) even code compliant.

However! This strategy only works if the outlet was the only one on the circuit, and _that_ isn't particularly common.

jchw 4 days ago | parent [-]

Although this exists, as a layperson I've rarely seen it. There is the NEMA 6-15R receptacle type, but I have literally none of those in my entire house, and I've really never seen them. Apparently they're sometimes used for air conditioners. Aside from the very common 5-15R, I see 5-20R (especially in businesses/hospitals), and 14-30R/14-50R for ranges and dryers. (I have one for my range, but here in the Midwest electric dryers and ranges aren't as common, so you don't always come across these; we have natural gas run to most properties.) So basically, I just really don't see a whole lot of NEMA 6 receptacles. The NEMA 14 receptacles, though, require both hots and the neutral, so in a typical U.S. service panel they need a special breaker that takes up two slots — definitely not as simple a retrofit.

(Another outlet type I've seen: a NEMA 7 277V receptacle. I think you get that from one phase of a 480V three-phase system, which I understand is run to many businesses.)

bryanlarsen 4 days ago | parent | next [-]

If you drive an electric car in a rural area you might want to carry around 6-30 and 6-50 adapters because most farms have welders plugged into those and that can give you a quick charge. And also TT-30 and 14-50 adapters to plug in at campgrounds.

wat10000 3 days ago | parent | prev | next [-]

NEMA 6 is limiting because there’s no neutral, so everything in the device has to run on 240V. Your oven and dryer want 120V to run lights and electronics, so they use a 14 (or 10 for older installs) which lets them get 120V between a hot and the neutral.

Oddly, 14-50 has become the most common receptacle for non-hardwired EV charging, which is rather wasteful since EV charging doesn’t need the neutral at all. 6-50 would make more sense there.

bryanlarsen 3 days ago | parent [-]

Reasons why it's nice to have a 14-50 plug in your garage rather than a 6-50:

1: when an uncle stops by for a visit with his RV he can plug in.

2: the other outlets in your garage are likely on a shared circuit. The 14-50 is dedicated, so with a 14-50 to 5-15 adapter you can more safely plug in a high wattage appliance, like a space heater.

wat10000 3 days ago | parent [-]

1 is why we ended up with 14-50 as the standard, too. Before there was much charging infrastructure, RV parks were a good place to get a semi-fast charge, and that meant a charger with a 14-50 plug.

2 is something I never thought of, I’ll have to keep that in mind.

bri3d 4 days ago | parent | prev | next [-]

NEMA 6s are extremely common in barns and garages for welders. 6-50 is more common for bigger welders but I’ve also seen 6-20s on repurposed 12/2 Romex as the parent post was discussing used for cheap EV retrofits, compressors, and welders.

esseph 4 days ago | parent | prev [-]

5-20R/6-20R is also somewhat commonly used by larger consumer UPS for your computer, router, etc.

glitchc 4 days ago | parent | prev [-]

Without upgrading the wiring to a thicker gauge? That's not code compliant and is likely to cause a fire.

atonse 3 days ago | parent [-]

Sorry, just to specify: it was more like a 20 amp, I think (I will verify); it wasn't like I was going way higher.

I don't remember whether he ran another wire though. It was 5 years ago. Maybe I should not be spreading this anecdote without complete info.

He was a legit electrician that I've worked with for years, specifically because he doesn't cut corners. So I'm sure he did The Right Thing™.

glitchc 3 days ago | parent | next [-]

If this is north america we're talking about, then 14 gauge is the standard for 120V 15A household circuits. By code, 20A requires 12 gauge. You'll notice the difference right away, it's noticeably harder to bend. Normally a house or condo will only have 15A wires running to circuits in the room. It's definitely not a standard upgrade, the 12 gauge wire costs a lot more per foot, no builder will do it unless the owner forks over extra dough.

Unless you performed the upgrade yourself or know for a fact that the wiring was upgraded to 12 gauge, it's very risky to just upgrade the breaker. That's how house fires start. It's worth it to check. If you know which breaker it is, you can see the gauge coming out. It's usually written on the wire.

jchw 3 days ago | parent [-]

I was actually under the impression that it's allowed depending on the length of the conductor, but it seems you are right. NEC Table 310.15(B)(16) shows the maximum allowed ampacity of 14 AWG conductors as 20 amperes, BUT... there is a footnote that states the following:

> * Unless otherwise specifically permitted elsewhere in this Code, the overcurrent protection for conductor types marked with an asterisk shall not exceed 15 amperes for No. 14 copper, 20 amperes for No. 12 copper, and 30 amperes for No. 10 copper, after any correction factors for ambient temperature and number of conductors have been applied.

I could've sworn there were actually some cases where it was allowed, but apparently not, or if there is, I'm not finding it. Seems like for 14 AWG cable the breaker can only be up to 15 amperes.
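The limits quoted in that footnote (usually cited as NEC 240.4(D) for small copper conductors) amount to a tiny lookup table — a sanity-check sketch, not an electrical-code tool:

```python
# Small-conductor overcurrent limits from the NEC footnote quoted above:
# max breaker size per copper conductor gauge. Illustrative helper only.
MAX_BREAKER_AMPS = {"14 AWG": 15, "12 AWG": 20, "10 AWG": 30}

def breaker_ok(awg: str, breaker_amps: int) -> bool:
    return breaker_amps <= MAX_BREAKER_AMPS[awg]

print(breaker_ok("14 AWG", 20))  # False: 20A breaker on 14 gauge fails the rule
print(breaker_ok("12 AWG", 20))  # True
```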

jchw 3 days ago | parent | prev [-]

There is a chance he did not run new wires if he was able to ascertain that the wire gauge was sufficient to carry 20 amps over the length of the cable. This is a totally valid upgrade though it does obviously require you to be pretty sure you know the length of the entire circuit. If it was Southwire Romex, you can usually tell just by looking at the color of the sheathing on the cable (usually visible in the wallboxes.)

cosmic_cheese 4 days ago | parent | prev [-]

Where things get hairy are old houses with wiring that’s somewhere between shaky and a housefire waiting to happen, which are numerous.

jchw 4 days ago | parent | next [-]

As an old house owner, I can attest to that for sure. In fairness though, I suspect most of the atrocities occur in wall and work boxes, as long as your house is new enough to at least have NM sheathed wiring instead of ancient weird stuff like knob and tube. That's still bad but it's a solvable problem.

I've definitely seen my share of scary things. I have a lighting circuit that is incomprehensibly wired and seems to kill LED bulbs randomly during a power outage; I have zero clue what is going on with that one. Also, oftentimes when opening up wall boxes I'll find backstabs that were not properly inserted, or wire nuts that are just covering hand-twisted wires and not actually threaded on at all (and not even the right size in some cases...). Needless to say, I should really get an electrician in here, but at least with a thermal camera you can look for signs of serious problems.

kube-system 4 days ago | parent | prev [-]

Yeah, but it ain't nothing that microwaves, space heaters, and hair dryers haven't already given a run for their money.

jchw 4 days ago | parent [-]

Hair dryers and microwaves only run for a few minutes, so even if you do have too much resistance this probably won't immediately reveal a problem. A space heater might, but most space heaters I've come across actually seem to draw not much over 1,000 watts.

And even then, even if you do run something 24/7 at max wattage, it's definitely not guaranteed to start a fire even if the wiring is bad. Like, as long as it's not egregiously bad, I'd expect that there's enough margin to cover up less severe issues in most cases. I'm guessing the most danger would come when it's particularly hot outside (especially since then you'll probably have a lot of heat exchangers running.)

chronogram 4 days ago | parent | prev | next [-]

That's still not much for wiring in most countries. A small IKEA consumer oven is 230V × 16A = 3680W by itself. Those GPUs and CPUs only consume that much at max load anyway. And those CPUs are uninteresting for consumers; you only need a few watts for a single good core, like a Mac Mini has.

rbanffy 4 days ago | parent | next [-]

> And those CPUs are uninteresting for consumers, you only need a few Watts for a single good core, like a Mac Mini has.

Speak for yourself. I’d love to have that much computer at my disposal. Not sure what I’d do with it. Probably open Slack and Teams at the same time.

ThunderSizzle 4 days ago | parent [-]

> Probably open Slack and Teams at the same time.

Too bad it feels like both might as well be single threaded applications somehow

rbanffy 3 days ago | parent [-]

I could use KVM and open a bunch of instances of each.

dv_dt 4 days ago | parent | prev | next [-]

So Europe ends up with an incidental/accidental advantage in the AI race?

atonse 4 days ago | parent | next [-]

All American households get mains power at 240v (I'm missing some nuance here about poles and phases, so the electrical people can correct my terminology).

It's often used for things like ACs, Clothes Dryers, Stoves, EV Chargers.

So it's pretty simple for a certified electrician to just make a 240v outlet if needed. It's just not the default that comes out of a wall.

kube-system 4 days ago | parent | next [-]

To get technical -- US homes get two 120V legs that are 180 degrees out of phase with each other, plus a neutral. Using either leg and the neutral gives you 120V. Using the two out-of-phase legs together gives you a difference of 240V.

https://appliantology.org/uploads/monthly_2016_06/large.5758...
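The phasor arithmetic behind that is a two-liner (a sketch of the split-phase idea, treating each leg as a complex voltage referenced to neutral):

```python
# Split-phase arithmetic: two 120V legs, 180 degrees apart, measured
# against a shared neutral.
import cmath
import math

leg_a = 120 * cmath.exp(1j * 0)        # 120V at 0 degrees
leg_b = 120 * cmath.exp(1j * math.pi)  # 120V at 180 degrees

print(round(abs(leg_a)))           # leg to neutral: 120
print(round(abs(leg_a - leg_b)))   # leg to leg: 240
```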

ender341341 4 days ago | parent [-]

Even more technically, we don't have two phases; we have one phase that's split in half. I hate it because it makes things confusing.

Two phase power is not the same as split phase (There's basically only weird older installations of 2 phase in use anymore).

kube-system 4 days ago | parent [-]

Yeah that's right. The grid is three phases (as it is basically everywhere in the world), and the transformer at the pole splits one of those in half. Although, what are technically half-phases are usually just called "phases" when they're inside of a home.

voxadam 4 days ago | parent | prev | next [-]

Relevant video from Technology Connections:

"The US electrical system is not 120V" https://youtu.be/jMmUoZh3Hq4

atonse 4 days ago | parent [-]

That's such a great video, like most of his stuff.

ender341341 4 days ago | parent | prev | next [-]

> So it's pretty simple for a certified electrician to just make a 240v outlet if needed. It's just not the default that comes out of a wall.

It'd be an all-new wire run (120V is split at the panel; we aren't running 240V all over the house), and currently electricians are at a premium, so it'd likely end up costing a thousand-plus if you're using an electrician, more if there's no clear access from an attic/basement/crawlspace.

Though I think it's unlikely we'll see an actual need for it at home; I imagine an 800W CPU is going to be a server-class part and rare-ish in home environments.

com2kid 4 days ago | parent | next [-]

> and currently electricians are at a premium so it'd likely end up costing a thousand+

I got a quote for over 2 thousand to run a 240v line literally 9 feet from my electrical panel across my garage to put an EV charger in.

Opening up an actual wall and running it to another room? I can only imagine the insane quotes that'd get.

Marsymars 4 days ago | parent | next [-]

I kinda suspect there's a premium once you mention "EV charger", since you're signalling that you're affluent enough to afford an EV and have committed to spending the money required to get EV charging working at home, etc. (Kinda like getting a quote for anything wedding related.)

I’m getting some wiring run about the same distance (to my attic, fished up a wall, with moderately poor access) for non-EV purposes next week and the quote was a few hundred dollars.

qotgalaxy 4 days ago | parent [-]

[dead]

tguvot 4 days ago | parent | prev [-]

The trick is to request a 240v outlet "for a welder". It brings the price down to 400 or so.

Running to another room will usually be done (at least in the USA) through the attic or crawlspace. I got it done a few months ago to get a dedicated 20A circuit (for my rack) in my work room; cost was around 300-400 as well.

com2kid 4 days ago | parent [-]

Labor charges alone are going to be higher than that in Seattle. Just having someone come out on a call is going to be 150-200. If it's an independent electrician who owns their own business, maybe 100-150/hr; if they're part of a larger company, I'd expect even more.

Honestly I wouldn't expect to pay less than $1000 for the job without any markups.

tguvot 4 days ago | parent [-]

I live in the Bay Area. I have some doubts that Seattle is going to be more expensive.

com2kid 4 days ago | parent [-]

Handyman prices around here are $65 to $100/hr, and there's a huge wait list for the good ones.

I've gotten multiple quotes on running the 240v line, the labor breakdown was always over $400 alone. Just having someone show up to do a job is going to be almost $200 before any work is done.

When I got quotes from unlicensed people, those came in around $1000 even.

tguvot 3 days ago | parent [-]

In Bay Area subreddits there are multiple posts about EV charger vs. welder outlet and how it drops the price from 2000 to 500 or so (depending on complexity).

Another thing that's good long-term is to find a local electrician (plumber, etc.) who doesn't charge for service calls and has reasonable pricing.

No idea about handyman pricing; never used one. For electrical/water/roofing I prefer somebody who is licensed/insured/bonded/etc.

vel0city 4 days ago | parent | prev [-]

I don't think many people would want some 2kW+ system sitting on their desk at home anyways. That's quite a space heater to sit next to.

Tor3 4 days ago | parent | next [-]

I should look at the label (or check with a meter..), but when I run my SGI Octane with its additional XIO SCSI board in active use, the little "office" room gets very hot indeed.

bonzini 4 days ago | parent | prev [-]

Also the noise from the fans.

the8472 4 days ago | parent | prev | next [-]

If we're counting all the phases then european homes get 400V 3-phase, not 240V split-phase. Not that typical residential connections matter to highend servers.

bonzini 4 days ago | parent [-]

It depends on the country, in many places independent houses get a single 230V phase only.

dv_dt 4 days ago | parent | prev [-]

Well, yes, it's possible, but it's often $500-1000 to run a new 240v outlet, and that's to a garage for an EV charger. If you want an outlet inside the house, I don't know how much wall people want to tear up, plus the extra time and cost.

atonse 4 days ago | parent [-]

Sure yeah, I was just clarifying that if the issue is 240v, etc, US houses have the feed coming in. Infrastructure-wise it's not an issue at all.

kube-system 4 days ago | parent | prev | next [-]

Consumers with desktop computers are not winning any AI race anywhere.

buckle8017 4 days ago | parent | prev [-]

In residential power delivery? yes

In power cost? no

In literally any other way? Also no.

carlhjerpe 4 days ago | parent | prev [-]

In the Nordics we're on 10A for standard wall outlets, so we're stuck at 2300W without rewiring (or verifying the wiring) to 2.5mm².

We rarely use 16A but it exists. All buildings are connected to three phases so we can get the real juice when needed (apartments are often single phase).

I'm confident personal computers won't reach 2300W anytime soon though

bonzini 4 days ago | parent | next [-]

In Italy we also have 10A and 16A (single phase). In practice however almost all wires running in the walls are 2.5 mm^2, so that you can use them for either one 16A plug or two adjacent 10A plugs.

Tor3 4 days ago | parent | prev | next [-]

In the Nordics (I'm assuming you mean Nordic countries) 10A is _not_ standard. Used to be, some forty years ago. Since then 16A is standard. My house has a few 10A leftovers from when the house was built, and after the change to TN which happened a couple of decades ago, and with new "modern" breakers, a single microwave oven on a 10A circuit is enough to trip the breaker (when the microwave pulses). Had to get the breakers changed to slow ones, but even those can get tripped by a microwave oven if there's something else (say, a kettle) on the same circuit.

16A is fine, for most things. 10A used to be kind of ok, with the old IT net and old-style fuses. Nowadays anything under 16A is useless for actual appliances. For the rest it's either 25A and a different plug, or 400V.

carlhjerpe 4 days ago | parent [-]

Let's rephrase: 10A is the effective standard that's been in use for a long long time, if you walk into a building you can assume it has 10A breakers.

On new installations you can choose 10A or 16A so if you're forward thinking you'd go 16 since it gives you another 1300 watts to play with.

nordcikmgsdf 3 days ago | parent | prev [-]

[dead]

tracker1 4 days ago | parent | prev | next [-]

There already are different outlets for these higher power draw beasts in data centers. The amount of energy used in a 4u "AI" box is what an entire rack used to draw. Data centers themselves are having to rework/rewire areas in order to support these higher power systems.

jacquesm 4 days ago | parent | prev | next [-]

You can up the voltage to 240 and re-use the wiring (with some minor mods to the ends), for double the power. Insulation class should be sufficient. That makes good sense anyway. You may still have an issue if the powersupply can't handle 240/60 but for most of the ones that I've used that would have worked. Better check with the manufacturer to be sure though. It's a lot easier and faster than rewiring.

t0mas88 4 days ago | parent | prev | next [-]

A simple countertop kettle is 2000W, so a 1500W PC sounds like no big deal.

kube-system 4 days ago | parent | next [-]

Kettles in the US are usually 1500W, as the smallest branch circuits in US homes support 15A at 120V and the general rule for continuous loads is to be 80% of the maximum.

t0mas88 a day ago | parent | next [-]

Ah, 16A at 230V (3680W) is a normal circuit here. Most appliances work with that; the common exceptions are electric cooking (using two circuits or 380V two-phase) and EV charging.

linotype 4 days ago | parent | prev [-]

True but kettles rarely run for very long.

kube-system 4 days ago | parent | next [-]

But computers do, which was why I included that context. You don't really want to build consumer PC >1500W in the US or you'd need to start changing the plug to patterns that require larger branch circuits.

CyberDildonics 4 days ago | parent | prev [-]

Kettles and microwaves are usually 1100 watts and lower, but space heaters and car chargers can be 1500 watts and run for long periods of time.

Tor3 4 days ago | parent | next [-]

Microwave ovens have a different issue, which I found when I upgraded my breaker board to a modern one in my house. The startup pulse gives a type of load which trips a standard A-type 10A breaker (230V). Had to get those changed to a "slow" type, but even that will trip every blue moon, and if there's something else significant on the same circuit the microwave oven will trip even so, every two weeks or so (for the record, I have several different types of microwave ovens around the house, and this happens everywhere there's a 10A circuit).

The newer circuits in the house are all 16A, but the old ones (very old) are 10A. A real pain, with new TN nets and modern breakers.

wtallis 4 days ago | parent | prev [-]

Microwave ovens top out around 1100-1250W output from a ~1500W input from the wall. Apparently there's a fair bit of energy lost in the power supply and magnetron that doesn't make it into the box where the food is.

Narishma 3 days ago | parent | prev [-]

You don't keep the kettle constantly running, unlike a PC.

triknomeister 4 days ago | parent | prev | next [-]

And cooling. Look here: https://www.fz-juelich.de/en/news/archive/press-release/2025...

Especially a special PDU: https://www.fz-juelich.de/en/newsroom-jupiter/images-isc-202...

And cooling: https://www.fz-juelich.de/en/newsroom-jupiter/images-isc-202...

nehalem501 4 days ago | parent | prev | next [-]

It is mostly an issue in countries with 120V mains (I know 240V outlets exist in the US, though). In France, for example, standard outlets are required to deliver at least 16A; at the 230V used here, that's 3680W of power, which is more than enough.

orra 4 days ago | parent | prev | next [-]

Laughs in 230V (sorry).

AnthonBerg 4 days ago | parent [-]

ʰᵉₕₑheʰᵉₕₑhe in 400V

esseph 4 days ago | parent | prev [-]

Yes and this is something I've been thinking about for awhile.

A computer is becoming a home appliance in the sense that it will need 20A wiring and plugs soon, though it should really move to 220/240V anyway (just change the jumper on a standard power supply).

derefr 4 days ago | parent | prev | next [-]

But all of the most ridiculous hyperscale deployments, where bandwidth and latency matter most, have multiple GPUs per CPU. The CPU is responsible for splitting/packing/scheduling models and inference workloads across its own direct-attached GPUs, presenting the network with the abstraction of a single GPU with more (NUMA) VRAM than any single physical GPU could have.

How do you do that, if each GPU expects to be its own backplane? One CPU daughterboard per GPU, and then the CPU daughterboards get SLIed together into one big CPU using NVLink? :P

wmf 4 days ago | parent [-]

GPU as motherboard really only makes sense for gaming PCs. Even there SXM might be easier.

db48x 4 days ago | parent | prev | next [-]

No, for a gaming computer what we need is the motherboard and gpu to be side by side. That way the heat sinks for the CPU and GPU have similar amounts of space available.

For other use cases like GPU servers it is better to have many GPUs for every CPU, so plugging a CPU card into the GPU doesn’t make much sense there either.

sitkack 4 days ago | parent | prev | next [-]

https://en.wikipedia.org/wiki/Compute_Express_Link

mensetmanusman 4 days ago | parent | prev | next [-]

It’s always going to be a back and forth on how you attach stuff.

Maybe the GPU becomes the motherboard and the CPU plugs into it.

avgeek23 7 days ago | parent | prev [-]

And the memory should be an onboard module on the CPU card. Intel/AMD should replicate what Apple did with unified memory on the same ring bus: lower latency, higher throughput.

It would push performance further. Although companies like Intel would bleed the consumer dry: a certain i5-whatever CPU with 16GB of onboard memory could be insanely priced compared to what you'd pay for add-on memory.

0x457 4 days ago | parent [-]

That would pretty much push both Intel and AMD into market segmentation by CPU core + memory combination. I absolutely do not want that.

0manrho 4 days ago | parent | prev | next [-]

We're already there. That's what a lot of people are using DPUs for.

An example: this is storage instead of GPUs, but since the SSDs were PCIe NVMe, it's nearly the same concept: https://www.servethehome.com/zfs-without-a-server-using-the-...

undersuit 4 days ago | parent [-]

To continue the ServeTheHome links, https://www.servethehome.com/microchip-adaptec-smartraid-430...

PCI-e Networks and CXL are the future of many platforms... like ISA backplanes.

0manrho 4 days ago | parent [-]

Yep, I have a lot of experience with CXL devices, networked PCIe/NVMe (over Eth/IB) fabrics, and deploying "headless"/"micro-head" compute units, which are essentially just a pair of DPUs on a PCIe multiplexer (basically a bunch of PCIe slots tied to a PCIe switch or two).

That said, my experience in this field is more with storage than GPU compute, though I have done some limited hacking in the GPGPU space with that tech as well. Really fascinating stuff (and often hard to keep up with: making sure every part in the chain supports the features you want to leverage, not to mention going down the PCIe root topology rabbit hole and dealing with latency/trace-length/SNR issues with retimers vs. muxers vs. etc.).

It's still a nascent field that's very expensive to play in, but I agree it's the future of at least part of the data infrastructure field.

Really looking forward to finally getting my hands on CXL3.x stuff (outside of a demo environment.)

bgnn 4 days ago | parent | prev | next [-]

EE here. There's no reason not to deliver power directly to the GPU using cables. I'm not sure it's solving anything.

But you are right, there's no hierarchy in these systems anymore. Why do we even call something a motherboard? It's just a bunch of interconnected chips.

pshirshov 4 days ago | parent | prev | next [-]

Can I just have a backplane? Pretty please?

theandrewbailey 4 days ago | parent | next [-]

I've wondered why there hasn't been a desktop with a CPU+RAM card that slots into a PCIe x32 slot (if such a thing could exist), or maybe dual x16 slots, with the motherboard as a dumb backplane that only connects the other slots and distributes power, and could probably be much smaller.

namibj 3 days ago | parent | next [-]

Those exist; they're used for risers (the server equivalent of "vertical-mount GPU brackets for dual GPUs", where they make the cards lie flat again).

KeplerBoy 3 days ago | parent | prev | next [-]

PCIe x32 actually exists, at least in the specification. I have never seen a picture of a part using it.

iszomer 3 days ago | parent | prev [-]

Retimers.

colejohnson66 4 days ago | parent | prev | next [-]

Sockets (and especially backplanes) are absolutely atrocious for signal integrity.

pshirshov 4 days ago | parent [-]

I guess if it's possible to have 30cm PCIe 5 riser cables, it should be possible to have a backplane with traces of similar length.

namibj 3 days ago | parent [-]

Cables are much better, sadly. So much so that they've started using cables to jump across the server mainboard in places.

vFunct 4 days ago | parent | prev | next [-]

VMEBus for the win! (now VPX...)

pezezin 4 days ago | parent [-]

The hot stuff nowadays is µTCA: https://www.picmg.org/openstandards/microtca/

crimony 3 days ago | parent [-]

If I remember correctly, the military/aerospace folks shy away from this spec because the connector with the pins is on the backplane, with the sockets on the cards.

So if you incorrectly insert a card and bend a pin you're in trouble.

VPX has the sockets on the backplane so avoids this issue, if you bend pins you just grab another card from spares.

This may have changed since I last looked at it.

The telecoms industry definitely seems to favour TCA though.

pezezin 3 days ago | parent [-]

I don't know, I work in particle physics and here µTCA is all the rage nowadays.

guerrilla 3 days ago | parent | prev [-]

Yes, for fuck's sake, this is the only way forward. It gives us the ultimate freedom to do whatever we want in the future. Just make everything a card on the bus and quit with all this hierarchy nonsense.

dylan604 4 days ago | parent | prev | next [-]

Wouldn't that mean a complete mobo replacement to upgrade the GPU? GPU upgrades seem much more rapid and substantial compared to CPU/RAM upgrades. Each upgrade would now mean pulling out the CPU/RAM and other cards vs. just replacing the GPU.

p1esk 4 days ago | parent | next [-]

GPUs completely dominate the cost of a server, so a GPU upgrade typically means new servers.

BobbyTables2 4 days ago | parent [-]

Agree - newer GPU likely will need faster PCIe speeds too.

Kinda like RAM - almost useless in terms of “upgrade” if one waits a few years. (Seems like DDR4 didn’t last long!)

chrismorgan 4 days ago | parent | prev [-]

> GPU upgrades seem much more rapid and substantial compared to CPU/RAM.

I feel like for the last five years I've been hearing about people selling five-to-ten-year-old GPUs for almost as many dollars as they bought them for, and about people choosing to stay on 10-series NVIDIA cards (2016) because the similarly priced RTX 30-, 40-, or 50-series was actually worse, since the effort and expense had gone into parts of the chips no one actually used. Dunno, I don't dGPU.

MurkyLabs 4 days ago | parent | prev | next [-]

Yes, I agree, let's bring back the SECC-style CPUs from the Pentium era. I've still got my Pentium II (with MMX technology).

Dylan16807 4 days ago | parent | prev | next [-]

And limit yourself to only one GPU?

Also CPUs are able to make use of more space for memory, both horizontally and vertically.

I don't really see the power delivery advantages; either way you're running a bunch of EPS12V or similar cables around.

mcdeltat 3 days ago | parent | prev | next [-]

Personally I hope this point comes after we realise we don't need 1kW GPUs doing a whole lot of not much useful

burnt-resistor 4 days ago | parent | prev | next [-]

Figure out how much RAM, L1-3|4 cache, integer, vector, graphics, and AI horsepower is needed for a use-case ahead-of-time and cram them all into one huge socket with intensive power rails and cooling. The internal RAM bus doesn't have to be DDRn/X either. An integrated northbridge would deliver PCIe, etc.

iszomer 3 days ago | parent | prev | next [-]

I wonder how many additional layers the PCB would require to achieve this, and how dramatically it would affect the TDP; the GPUs aren't the only components with heat tolerance and capacitance.

j16sdiz 3 days ago | parent | prev | next [-]

It is not an EE problem. It is an ecosystem problem. You need a whole catalog of compatible hardware for this.

coherentpony 4 days ago | parent | prev | next [-]

The concept exists now. You can "reverse offload" work to the CPU.

Razengan 4 days ago | parent | prev | next [-]

Isn't that what has kinda sorta basically happened with Apple Silicon?

trenchpilgrim 4 days ago | parent | next [-]

And AMD Strix Halo.

MBCook 4 days ago | parent | prev [-]

GPU + CPU on the same die, RAM on the same package.

A total computer all-in-one. Just no interface to the world without the motherboard.

leoapagano 4 days ago | parent | prev | next [-]

One possible advantage of this approach that no one here has mentioned yet: it would let us put RAM on the CPU die (taking advantage of the greater memory bandwidth) while still keeping the RAM upgradable.

themafia 4 days ago | parent | next [-]

I think you'd want to go the other way.

GPU RAM is high speed and power hungry, so there tends to not be very much of it on the GPU card. Part of the reason we keep increasing the bandwidth is so the CPU can touch that GPU RAM at the highest speeds.

It makes me wonder though if a NUMA model for the GPU is a better idea. Add more lower-power, lower-speed RAM onto the GPU card. Then let the CPU preload as much data as possible onto the card. Then instead of transferring textures through the CPU, onto the PCIe bus, and into the GPU, why not just send a DMA request to the GPU and ask it to move the data from its low-speed memory to its high-speed memory?

It's a whole new architecture but it seems to get at the actual problems we have in the space.
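A toy sketch of the idea, in plain Python with made-up names (real hardware would do this with on-card DMA engines, not dicts): the CPU pays the bus cost once per asset, and everything after that is a local copy on the card.

```python
# Toy model of a two-tier GPU memory card: a large, slow on-card pool that the
# CPU preloads once over the bus, and a small, fast pool fed by on-card copies.
class TieredGpu:
    def __init__(self):
        self.slow_pool = {}    # big, low-power on-card RAM
        self.fast_pool = {}    # small, high-bandwidth VRAM
        self.bus_transfers = 0
        self.local_copies = 0

    def preload(self, name, data):
        """CPU pushes an asset over the PCIe bus once, into the slow pool."""
        self.slow_pool[name] = data
        self.bus_transfers += 1

    def stage(self, name):
        """On-card 'DMA': move an asset slow -> fast without touching the bus."""
        self.fast_pool[name] = self.slow_pool[name]
        self.local_copies += 1

gpu = TieredGpu()
for tex in ["rock", "grass", "water"]:
    gpu.preload(tex, b"...texture bytes...")

# The frame loop only stages what each frame needs; the bus stays idle.
for frame_textures in (["rock"], ["rock", "grass"], ["water"]):
    for tex in frame_textures:
        gpu.stage(tex)

print(gpu.bus_transfers, gpu.local_copies)  # 3 4
```

Three bus transfers up front, then four cheap on-card copies, no matter how often assets churn between tiers afterwards.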

kokada 4 days ago | parent [-]

Isn't what you described DirectStorage?

themafia 4 days ago | parent [-]

You're still running through the PCIe slot and its bandwidth limit. I'm suggesting you bypass even that and put more memory directly on the card.

KeplerBoy 3 days ago | parent [-]

So an additional layer slower and larger than global GPU memory?

I believe that's kind of what Bolt Graphics is doing with the DIMM slots next to the soldered-on LPDDR5: https://bolt.graphics/how-it-works/

MBCook 4 days ago | parent | prev [-]

Couldn’t we do that today if we wanted to?

What’s keeping Intel/AMD from putting memory on package like Apple does other than cost and possibly consumer demand?

iszomer 3 days ago | parent [-]

Supply + demand, the manufacturing-capacity rabbit hole.

LeoPanthera 4 days ago | parent | prev [-]

Bring back the S-100 bus and put literally everything on a card. Your motherboard is just a dumb bus backplane.

MBCook 4 days ago | parent [-]

We were moving that way, sorta, with Slot 1 and Slot A.

Then that became unnecessary when L2 cache went on-die.