416 points by ambigious7777 29 days ago | 44 comments
kersplody 28 days ago
12vhpwr has almost no safety margin. Any minor problem with it rapidly becomes major. 600W is scary, with reports of 800W spikes.

12V2x6 is particularly problematic because any imbalance, such as a bad connection on a single pin, will quickly push things over spec. For example, at 600W, 8.3A is carried on each pin in the connector. Molex Micro-Fit 3.0 connectors are typically rated to 8.5A -- that's almost no margin. If a single connection is bad, the current per remaining pin goes to 10A and we are over spec. And that's if things are mated correctly: 8.5A-10A over a partially mated pin will rapidly heat up to the point of melting solder. Hell, the 16 gauge wire typically used is pushing it for 12V/8.5A/100W -- that's rated to 10A. Really would like to see more safety margin with 14 gauge wire.
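A quick sanity check of those numbers (a minimal Python sketch; the 8.5A rating is the figure quoted above, not an official spec value):

    POWER_W = 600
    VOLTAGE = 12.0
    PINS = 6            # six 12V supply pins in the connector
    PIN_RATING_A = 8.5  # typical Micro-Fit 3.0 rating quoted above

    total_a = POWER_W / VOLTAGE         # 50.0 A
    per_pin = total_a / PINS            # 8.33 A
    one_bad_pin = total_a / (PINS - 1)  # 10.0 A if one connection drops out

    print(f"per pin: {per_pin:.2f} A ({per_pin / PIN_RATING_A:.0%} of rating)")
    print(f"one bad pin: {one_bad_pin:.2f} A ({one_bad_pin / PIN_RATING_A:.0%} of rating)")
    # per pin: 8.33 A (98% of rating); one bad pin: 10.00 A (118% of rating)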

In short, 12V2x6 has very little safety margin. Treat it with respect if you care for your hardware.

ckozlowski 28 days ago
Great summary. Buildzoid over on YouTube came to a similar conclusion back during the 4xxx series issues[1], and it looks like he's released a similar video today[2]. It's worth a watch, as he gets well into the electrical side of things.

It's been interesting to realize that we've probably been dealing with poor connections on the older Molex connectors for years, but because of the ample margins, it was never an issue. Now, with the high power spec, the underlying issues with the connectors in general are a problem. While the use of sense pins sorta helps, I think the overall mechanism used to make an electrical connection - which hasn't changed much in 30+ years - is probably due for a complete rethink. That will make connectors more expensive, no doubt, but much of the ATX spec and surrounding ecosystem was never designed for "expansion" cards pushing 600-800W.

[1] - 12VHPWR failures (2023) https://youtu.be/yvSetyi9vj8?t=1479 [2] - Current issues: https://www.youtube.com/watch?v=kb5YzMoVQyw

exmadscientist 28 days ago
> I think the overall mechanism used to make an electrical connection - which hasn't changed much in 30+ years - is probably due for a complete rethink.

There are tons of high-power connectors out there, and they look and work pretty much the same as the current ones (to the untrained eye). They are just more expensive.

Though at 40A+ you tend to see more "banana" type connectors, with a cylindrical piece that has slits cut in it to deform. Those can handle tons of current.

ckozlowski 28 days ago
> There are tons of high-power connectors out there, and they look and work pretty much the same as the current ones (to the untrained eye). They are just more expensive.

That's fair, so maybe not a complete rethink then. But definitely a higher standard of quality. Right now, my experience with any of those Molex-type connectors (be it a 4-pin HDD connector, 8-pin EATX 12V, or PCI-e somethingorother) is that they rely on the pin properly aligning with the holder on the other side, and if those aren't lined up, the pin can simply end up pushing the holder and its wire back instead of seating correctly. There's plenty of give and play in those cables, and it's hard to tell at a glance if all of them have seated correctly or if a holder has been pushed backwards in its socket. I can imagine a higher quality connector with tighter tolerances and stiffer materials would lessen the likelihood of this happening, but no doubt with higher costs to PSUs and cards.

I suspect manufacturers are sensitive to price increases there, but I have to imagine tacking on even a few dollars to an already exorbitantly priced card that might melt otherwise is a good value? I guess we'll see.

smallmancontrov 28 days ago
Those molex clones were out of spec.

Both male and female terminals are supposed to be retained in the plastic housing by little wings (locking tangs) that are very strong. The metal bits can wiggle in the plastic housing (feature not a bug -- something has to absorb the tolerances) but not retract, not without an extractor tool or an extreme amount of force sufficient to tear apart or fold the metal contact. Anyone who has tried to extract one of the terminals without the correct extractor tool can attest to just how much force this is. It's a lot, and the specs are also such that you should never get a metal pin tip meeting a metal edge if the plastic bits are engaged.

Of course, shitty out-of-spec molex clones abound. I have no doubt you saw what you saw, I'm coming to the defense of the specified design, which is ingenious and works extremely well at extremely low cost and loose tolerances when implemented correctly.

ssl-3 28 days ago
The specs, as they exist in the PC space, tend to specify things like very specific Molex(tm) or Amphenol(tm) parts.

Anything else -- even a very precise and astutely-manufactured clone (including a Molex clone of an Amphenol part, or vice-versa) -- is out of spec.

(Which means that we're all running out-of-spec parts somewhere. I promise it.)

opello 28 days ago
This [1] is also a good deep dive into the space covering the spec, limits, and materials details. For example:

> The specification for the connector and its terminals to support 450 to 600W is very precise. You are only within spec if you use glass fiber filled thermoplastic rated for 70°C temperatures and meets UL94V-0 flammability requirements. The terminals used can only be brass, never phosphor bronze, and the wire gauge must be 16g (except for the side band wires, of course).

[1] http://jongerow.com/12VHPWR/

amluto 28 days ago
And yet plenty of things around the house use far more than 800W and work fine. The secret is to use a more reasonable voltage.

30V or 36V or even 48V would leave a decent margin for touch safety and have dramatically lower current and even more dramatically lower resistive loss.

tcdent 28 days ago
This is the most informative assessment in this thread.

You'd expect the capacity to be 125% of the load, as is common in other electrical systems.

Ratings for connectors and conductors come with a temperature spec as well, indicating the intended operating temperature at a given load. I'm sure, with this spec being near the limit of the components already, that the operating temperatures near full load are not far from the limit, either.

Couple that with materials that may not have even met that spec from the manufacturer and this is what you get. Cheaper ABS plastic on the molex instead of Nylon, PVC insulation on the wire instead of silicone, and you just know the amount of metal in the pins is the bare minimum, too.

zamalek 28 days ago
"3rD party connectors" is being waved around by armchair critics. The connectors on the receiving end of all of this aren't some cheap knock-off, they are from a reputable manufacturer and probably exceed the baseline.
Ygg2 28 days ago
No sane individual is going to buy a 5090 for $2000-100000 and hook it up to a $15 power supply.
rincebrain 28 days ago
Correct, but it turns out, a fair fraction of the people who day 1 purchase a GPU for that much may not necessarily be described as wholly sane.
m463 28 days ago
your comment makes me wonder...

Is it a computer science type person, who might be unaware of electrical engineering...

Or someone who takes a perfectly good car and adds ridiculous rims that rub?

Ygg2 28 days ago
They aren't insane, just not thrifty with their money.
lmm 28 days ago
Doesn't mean they prioritise everything the same way. Plenty of people buy an expensive car and put cheap tires on it or what have you.
asddubs 28 days ago
Yeah, I can easily see someone buying an expensive GPU but a cheap power supply because they ostensibly all do the same thing
rincebrain 28 days ago
They're not necessarily insane, I agree.

But I've also known some people who did that when they did not have the disposable income to do it.

Decabytes 28 days ago
Let's not forget that the 90-series cards in each generation won't be top end forever. Soon they will just be used cards, like all other technology. And someone might be building their first computer, get a good deal on eBay on a 5090 that's 5 or 6 generations old, and cobble it together with some other old parts, and maybe a weak PSU or an older 12VHPWR cable
clhodapp 28 days ago
The cable must necessarily be third-party from the perspective of the GPU or from the perspective of the power supply.

If it's built to the expected tolerance, it should work.

Dalewyn 28 days ago
Except when the "expected tolerance" is unreasonable.

Even if the connectors and wires are to spec, the design leaves next to no margin for play. You need that margin to account for reality: Handling by casual end-users rather than trained professionals, the ambient temperature of the average room or office, dirt and grime that might get lodged and go unnoticed, wonky supply/draw of power, and more.

Running 8.3A through connections rated for 8.5A is "expected tolerance", it's also fucking stupid in no uncertain terms.

clhodapp 28 days ago
There are at least two issues here:

1) The designed safety margin is unacceptably low. It should be set such that any cable that complies with the expected safety tolerance for carrying current is safe to use.

2) The late-model Nvidia cards in particular have no feedback system to detect unbalanced current on the 12V wires that make up the connector, and no circuitry to rebalance the current even if they could detect it. That is, they forgo any digital control and depend on the physical properties of the conductors to be perfectly balanced.
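For illustration only, per-wire sensing plus a throttle could be as simple as the sketch below (hypothetical firmware logic in Python; the readings, limit, and threshold are all made up, and this is emphatically not Nvidia's actual design):

    PIN_LIMIT_A = 9.5      # per-pin connector rating from the spec
    IMBALANCE_RATIO = 1.5  # arbitrary threshold: 50% above the mean

    def wires_look_healthy(currents_a):
        """One shunt reading per 12V wire; False means throttle and warn."""
        mean = sum(currents_a) / len(currents_a)
        return all(a <= PIN_LIMIT_A and a <= mean * IMBALANCE_RATIO
                   for a in currents_a)

    print(wires_look_healthy([8.3] * 6))                         # True
    print(wires_look_healthy([22.0, 11.0, 8.0, 5.0, 2.0, 2.0]))  # False (the der8auer case)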

Overall, Nvidia failed to learn from the melting connector issues in the RTX 4000 series and doubled down by increasing the power draw while further cost-cutting the safety circuitry.

See:

    * High-level demonstration: https://youtu.be/Ndmoi1s0ZaY?si=bkv12pXG4K5T72YN
  
    * Low-level explanation: https://youtu.be/kb5YzMoVQyw?si=Bl5aowND4uXoI8s6
caycep 27 days ago
Ok, I have to say this, but the narrator sounds a lot like Jim Henson/Kermit the Frog... (my understanding is that it's a Maryland/Mississippi accent combo)
sozforex 28 days ago
~23A was measured going through one of the 6 wires/pins in this test: https://youtu.be/Ndmoi1s0ZaY?t=927

Standard requires these connectors to handle 9.5A per pin (9.5 A × 12 V × 6 pin = 684 W).

A detailed explanation of this mess: https://www.youtube.com/watch?v=kb5YzMoVQyw

leeter 28 days ago
I'm curious - if there are any high-level electrical engineers reading this, please respond.

I wonder if that vertical (as far as the PCB goes) power connector will ensure that this sort of imbalance always occurs. While we like to pretend that current is even in any given current plane, that's not what happens. The impedance of the wires and copper is not perfectly ideal. This is why these connectors have an equal number of grounds, so they have an ideal shortest path and a balanced return current path. So I'm curious if it's just electrically impossible to have a vertical connector like that (one that shorts all the 12V pins together instead of current balancing them) and have it balance current across the pins. The pins closest to the board should in theory carry the greatest currents, as they are the shortest path electrically. Based on the pictures, that appears to be the case. It appears that the pins under the most stress are likely those with the lowest impedance.

Assuming my SWAG above is correct... I'm curious if this is affected by the per-pin impedance on the PSU too, where certain folks who are just unlucky get a situation where some pins in the connector have a significantly lower impedance than the rest.

If my second SWAG is plausible, my third and really bad SWAG is that removing the two ground pins nearest the PCB could actually "balance" the current better by forcing the current to use a slightly longer path for the power pins. But, my guess is this will just cause EMI issues. So please don't test this unless you're an EE and know what you're doing.

This is pure speculation on top of what Buildzoid, the posts above this have said, and what I've learned from Robert Feranec's videos. I'm in no way an electrical engineer, just a humble hobbyist and person that loves to learn.

exmadscientist 28 days ago
Paralleling wires is stable because the TCR (temperature coefficient of resistance) of copper is positive. When one connection carries too much current compared to its peers, it will heat up. This will increase its resistance, causing it to accordingly carry less of the current. So the system is self-balancing.
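A toy model of that effect (illustrative Python; the thermal constant and resistances are assumptions, not real wire physics):

    TCR = 0.0039             # copper's temperature coefficient, per deg C
    K_HEAT = 50.0            # assumed: deg C of rise per watt dissipated
    R_COLD = [0.009, 0.010]  # ohms; path 0 is the better (lower-R) contact
    TOTAL_A = 20.0

    r = list(R_COLD)
    for _ in range(50):  # iterate current split and heating to equilibrium
        g = [1.0 / x for x in r]
        i = [TOTAL_A * gi / sum(g) for gi in g]  # current-divider rule
        watts = [ii * ii * ri for ii, ri in zip(i, r)]
        r = [rc * (1 + TCR * K_HEAT * w) for rc, w in zip(R_COLD, watts)]

    print([round(x, 2) for x in i])  # closer together than the cold 10.53/9.47 split

Note the balancing is mild: with these numbers the split only moves from roughly 10.5/9.5 to about 10.4/9.6.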

Do not remove ground wires. That is stupid. You'll just be raising the current in the remaining wires. EMI should not be a major concern as we are talking about DC power delivery here (also why I'm saying "resistance" instead of "impedance") and so the potential for trouble by changing the number of conductors making a connection is limited. Yes, anything could happen, but that's just the nature of EMC problems.

leeter 28 days ago
Yeah, I realized that was the worst way to go about testing that anyway right after I went to bed last night. If (big stress on if) that was the issue, a ferrite bead would be a better way to test it. Based on what you're saying, my SWAGs were wildly off. I'd still like to see the sims of it, however, to see if they provide any illumination on the issue. What makes me think something weird is going on is that it's two out of six wires heating up to absurd degrees. Of the other four, two are carrying normal currents and the last two (based on Roman's video) are carrying practically nothing. Buildzoid makes the convincing argument that Nvidia engineers were clearly aware something like this could happen on the 3090. But then they didn't carry that over to the 4090/5090.
toast0 28 days ago
> This is why these connectors have equal number of grounds, so they have an ideal shortest path and balanced return current path.

These connectors have an equal number of grounds and 12v because the same current flows on both sides, and the required current justifies at least 6 wires at the specified current.

PCIe 8-pin power is a bit weird, because it's 3 12V and essentially 5 grounds; but that's because it's PCIe 6-pin plus a promise that the power supply makers know what they're doing... The extra 2 grounds signal that the PSU designers are aware of the higher current limit, even though the wiring specifications are the same.

tcdent 28 days ago
Show me the Molex logo molded on that connector end and I'll believe you.

It's all off-brand slop from companies that learned marketing by emulating western brands. Kids on the internet lap up circuitous threads about one brand being better than the other based on volume.

"MODDIY" is a reputable brand? C'mon.

zamalek 28 days ago
> Any imbalance

I watched der8auer's analysis this morning, and you've seemingly hit the nail on the head. Even on his test bench it looks like only two of the wires are carrying all of the power (instead of all of them, I think 4 would be nominal?) - using a thermal camera as a measuring tool. The melted specimen also has a melted wire.

Maybe 24V or 48V should be considered, and thicker (lower-gauge) wires - yes.

mjevans 28 days ago
It would be _lovely_ if instead of the 12V only spec we went to 48V for internal distribution. Though that would require an ecosystem shift. USB-PD 2.0~3.0 would also be better supported https://en.wikipedia.org/wiki/USB_hardware#USB_Power_Deliver...

As others no doubt mention, power loss (watts) = I (amps) * V (volts; the delta/drop along the wire).

dV = I*R ==> P_loss = dV * I = I^2 * R -- That is, other things being equal, amps squared is the dominant factor in how much power loss occurs over a cable. In the low voltage realm most insulators are effectively the same, and there's very little change in resistance relative to the voltages involved, so it's close enough to ignore.

600W @ 12V? 50A ==> 1200 * R while at 48V ~12.5A ==> 156.25 * R

A 48V system would have only ~13% the resistive losses over the cables (more importantly, at the connections!); though offhand I've heard DC to DC converters are more efficient in the range of a 1/10th step-down. I'm unsure if ~1/25th would incur more losses there, nor how well common PC PCB processes handle 48V layers.

https://en.wikipedia.org/wiki/Low_voltage#United_States

""" In electrical power distribution, the US National Electrical Code (NEC), NFPA 70, article 725 (2005), defines low distribution system voltage (LDSV) as up to 49 V.

The NFPA standard 79 article 6.4.1.1[4] defines distribution protected extra-low voltage (PELV) as nominal voltage of 30 Vrms or 60 V DC ripple-free for dry locations, and 6 Vrms or 15 V DC in all other cases.

Standard NFPA 70E, Article 130, 2021 Edition,[5] omits energized electrical conductors and circuit parts operating at less than 50 V from its safety requirements of work involving electrical hazards when an electrically safe work condition cannot be established.

UL standard 508A, article 43 (table 43.1) defines 0 to 20 V peak / 5 A or 20.1 to 42.4 V peak / 100 VA as low-voltage limited energy (LVLE) circuits. """

The UK is similar, and the English Wikipedia article doesn't cite any other country's codes, though the International standard generally talks at the power grid distribution level.

Denvercoder9 28 days ago
> A 48V system would have only ~13% the resistive losses over the cables (more importantly, at the connections!)

It's one-sixteenth (6.25%) actually. You correctly note that resistive losses scale with the square of the current (and current goes with reciprocal voltage), so at 4 times the voltage, you have 1/4th the current and (1/4)^2 = 1/16th the resistive losses.
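The scaling in a couple of lines of Python, for a fixed 600W budget:

    # resistive loss ~ I^2 * R, and I = P / V, so loss relative to 12V goes as (12/V)^2
    for volts in (12, 24, 48):
        print(f"{volts:>2} V: {600 / volts:5.2f} A, relative loss {(12 / volts) ** 2:.4f}")
    # 12 V: 50.00 A, relative loss 1.0000
    # 24 V: 25.00 A, relative loss 0.2500
    # 48 V: 12.50 A, relative loss 0.0625  (one sixteenth)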

myself248 28 days ago
I've been beating the 48v drum for years. Any inefficiency in the 48-to-1 conversion should be mostly offset by higher efficiency in the 240-or-120-to-48 conversion, I suspect it's a wash.

Every PoE device handles 48V without issue on normal PCB processes, so I don't expect that to be a big deal either. They _also_ have a big gap for galvanic isolation, but that wouldn't be necessary here.

boznz 28 days ago
PoE/PoE+ tops out around 15-30W per port. 600W would mean another standard
deelowe 28 days ago
48v is basically already the standard for hyperscalers with power shelf rack designs.
magicalhippo 28 days ago
From what I can gather, one challenge with 24V and higher is that switched-mode converters, such as the buck converters used in the power stage, get a lot more inefficient when operating at high ratios.

You can see this effect in figure 6 in this[1] application note, where it's >90% efficient at ratios down to 10:2.5, but then drops to ~78% at a ratio of 10:1.

So if one goes for higher voltage, perhaps 48V would be ideal, and then just accept that the GPU needs two-stage power conversion: one stage from 48V to 12V and the other as today.

The upside is that this would more easily allow for different ratios than today, for example 48V to 8V, then 8V to 1.2V, so that each stage has roughly the same ratio.
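As a rough illustration of why two stages can still win (the efficiencies below are assumptions in the spirit of the app-note figure, not measured values):

    single_stage = 0.78  # one deep step-down, per the ~78% at 10:1 quoted above (assumed for 48V -> 1.2V)
    stage_a = 0.92       # 48V -> 8V (6:1, assumed)
    stage_b = 0.90       # 8V -> 1.2V (6.7:1, assumed)

    print(f"single stage: {single_stage:.0%}, two stage: {stage_a * stage_b:.0%}")
    # single stage: 78%, two stage: 83% -- two moderate ratios can beat one deep one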

[1]: https://fscdn.rohm.com/en/products/databook/applinote/ic/pow... (page 14)

crote 28 days ago
> I think 4 would be nominal?

6 or 12, depending on how you count. There are 6 12V supply wires, and 6 GND return wires. All of them should be carrying roughly the same current - just with the GND wires in the opposite direction from the 12V ones.

snuxoll 28 days ago
> Really would like to see more safety margin with 14 gauge wire.

The wire itself really isn't the issue; the NEC in the US is notoriously cautious, and 15A continuous is allowed on 14AWG conductors. Poor connectors that do not ensure good physical contact are the real problem here, and I really fail to understand the horrid design of the 12VHPWR connector. We went decades with the traditional PCIe 6-pin and 8-pin power connectors with relatively few issues, and 12VHPWR does what over them? Save a little bulk?

exmadscientist 28 days ago
This can't be Micro-Fit 3.0, those are only sized to accept up to 18AWG. At least, with hand crimp tooling, and that's dicey enough that I'd be amazed if Molex allowed anything larger any other way. The hand crimper for 18AWG is separate from the other tools in the series, very expensive, and a little bit quirky. Even 18AWG is pushing it with these terminals.

This has to be some other series.

Xelbair 28 days ago
There were some tests showing the current draw being imbalanced across the pairs:

two wires carried 22A each - the PSU connector near them heating up to 150C

the rest 2-3A. der8auer has a video on that on YouTube.

1st party cables too.

The connector is also a disaster, design-wise.

choilive 24 days ago
I would bet a lot of money that more than one engineer at nvda flagged this as a potential issue. If you were going to run this close to the safety margin, I would at minimum add current sensing on each pin.
amelius 28 days ago
Yes, and by the way I also think typical GPU cables are way too stiff for such a small and fragile connector.
opello 28 days ago
Shouldn't there be a bit better margin if we subtract the 75W from the slot itself? Down to ~7.3A/pin in the 600W example.
HexPhantom 27 days ago
At what point do we stop blaming user error and start admitting the design itself is the problem?
caycep 27 days ago
RTX 6000 Ada Gen had EPS12V 8 pin or something?
Dalewyn 28 days ago
What is the reason the spec keeps specifying next to no headroom? Clearly that was the fundamental problem with 12VHPWR, and it's being repeated with 12V2x6.

Any engineer worth his salt knows that you should leave plenty of headroom in your designs; you are not supposed to stress your components to (almost) their maximum specifications under nominal use.

bcrl 26 days ago
I found it hilarious when a friend went to use a Tesla Supercharger with his F150 Lightning. As the cable is only long enough to reach the charge port on the corner of a Tesla, he had to block 2 parking spaces and almost 3 chargers to use it. Oops... I hope all the money "saved" on copper was worth it.
nvarsj 28 days ago
I would love to get some insight from Nvidia engineers on what happened here.

The 30 series and before were overbuilt in their power delivery systems. How did we get from that to the 1-shunt-resistor, incredibly dangerous fire-hazard design of the 50 series? [1] Even after the obvious problems with the 40 series, Nvidia actually doubled down and made it _worse_!

The level of incompetence here is actually astounding to me. Nvidia has some of the top, most well paid EE people in the industry. How the heck did this happen?

1: https://www.youtube.com/watch?v=kb5YzMoVQyw

MBCook 28 days ago
Maybe having cards drawing >500/600 watts is just a bad idea.

Add in a CPU and such and we're quickly approaching the maximum amount of power that can be drawn continuously from a standard 15-amp circuit (12 amps under the 80% rule).

shawnz 28 days ago
What's the proposal here, just convince everyone to stop demanding more powerful hardware?
MBCook 28 days ago
Stop and try again?

Either you're not going to get a choice, or you'll need a 240V outlet installed and possibly an expanded electrical panel.

Nvidia has been all about “as fast as possible, no compromise” for a very long time now. So was Intel, until the Pentium 4 forced a big reset.

At a certain point this stuff is just totally untenable. "Throw more amps at it" stops being workable.

Apple seems to be able to get a decent fraction of the performance using drastically less power. I have the impression Intel is doing the same, though I’m less sure on that one.

Given how parallel graphics problems are, maybe it's time to give up on 100 cores that go uber fast and behave like a space heater, and move to 250 cores that go quite fast but use 1/4 of the power each.

cameronh90 28 days ago
If GPUs requiring 2kW PSUs do become reality, at least us Europeans would have something new to flaunt over the US, along with our fast boil electric kettles.
ssl-3 28 days ago
Here in the States, I've already banished non-Metric "Freedom units" from my workshop.

I can do the same with 120v.

(And indeed: Back in the mining days, my somewhat diminutive rig was running from a dedicated 240v circuit. It made my already-efficient power supplies a tiny bit more efficient, and it saved a bit on copper.

I've already got plans for 240v outlets in the kitchen.)

shawnz 28 days ago
> I've already got plans for 240v outlets in the kitchen.

Likewise, I just need to find a kettle with a NEMA 6-15 plug

ssl-3 27 days ago
I'm OK with installing my own plug into an appliance, like the Brits used to commonly do for approximately everything.

But (and I'm literally splitting hairs here) I'd like to find a 6-15P that has a captive fuse (like US Christmas lights have), first:

The 13A maximum that a British appliance expects is lower than the 15- or 20-amp circuit breakers that are listed for use in the States.

Unfortunately, it doesn't seem to exist.

lsaferite 28 days ago
That was a good chuckle, thanks! The image of that honkin' big plug on a kettle is great!
ssl-3 27 days ago
Maybe you're thinking of NEMA L6-15 or 14-30 or something?

6-15P isn't big or honkin'. It is about the same size as a regular US plug, and differs mostly in that the contacts are just rotated 90 degrees.

If anything, it's a bunch smaller than what much of the rest of the world plugs their own kettles in with, like a British type G or a Schuko.

shawnz 28 days ago
Plenty of people are working hard to tackle the performance-per-watt problem while others simultaneously tackle the absolute performance problem. There's no one single metric you can focus exclusively on that's going to satisfy everyone's use cases. Obviously not everyone needs a high-end enthusiast product like the 5090 and plenty of people are going to be seeking out a middle-of-the-road product that focuses on performance-per-watt instead, which will be accommodated by the lower-tier products in the lineup as well as their competitors' products.

> Given how parallel graphics problems are, maybe it’s time to give up on 100 cores that go uber fast as behave like a space heater and move to 250 cores that go quite fast but use 1/4 of the power each.

It wasn't too long ago that people were saying the same things about single-core and dual-core NetBurst CPUs and how we need to start considering dual-core and quad-core for consumer CPUs. And now we have consumer CPUs with 64 cores and more, so aren't we moving in the direction you want already? Performance-per-watt is improving, parallelism is improving, and also absolute performance is improving too in addition to those other metrics.

bufferoverflow 28 days ago
Don't be ridiculous. Nobody is going to stop anything. If the only way to scale up is to build more and more power-hungry hardware, that's where the market will go. Even if some GPU of the future consumes 10kW, it's still just $1.5 per hour. Which is a lot cheaper than most other forms of entertainment.
ants_a 28 days ago
Consumers already have the choice of not buying the most powerful card.

Increasing die size to run cores in a more power efficient regime is not going to work, because a) the chips are already as big as can be made, and b) competition will still push companies to run the 250 cores uber fast and figure out some way to push enough power to it.

As long as there is customer demand for this, these things will get built. Given the amount of bad press these melted connectors create, possibly with better engineered power delivery systems.

zymhan 28 days ago
> I have the impression Intel is doing the same, though I’m less sure on that one.

That may be true on the low-power end of their lineup, but certainly not for the high end i9 chips. Those will increase the power budget for minuscule performance gains.

MBCook 28 days ago
Sorry, I meant their GPUs. I should have clarified that.
amluto 28 days ago
Math, please. An 800W load is fine on a 120V 15A circuit with plenty of room to spare. The problem is the 12V connection from the PSU to the card.
MBCook 27 days ago
The National Electrical Code says not to draw more than 80% of the maximum load a circuit can handle continuously. So assuming you plan to game for a reasonable amount of time, we're limited to 12 amps.

That’s 1440 watts. And power supplies for PCs these days seem to be 80-95% efficient for good ones. Let’s say you’ve got a 90%er.

That’s 1300 watts. A top end CPU is 120 watts. Looks like 70-130 watts for a motherboard. Call it 100.

We’re at 1080 watts. Four sticks of DDR5 is another 60 or so. Let’s add 20 for two M.2 drives.

Wait, didn’t we have a 600 watt GPU? Down to 400 left. Good thing dual GPU gaming is dead, we can’t afford a second in our killer gamer system. Let’s add 20 watts for fans so everything doesn’t melt.

So 380 watts left out of 1440. That’s just 25%. But no spinning hard drives or SSDs, no USB PD, no PCI-e cards at all but our one graphics card.

Wait we need a monitor on the same circuit. Looks like that’s 50 watts for a high end LG, 330 now. You did only want one monitor right?

Is it really that hard to imagine hitting that limit soon?
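Tallying that budget in a few lines (all watt figures are the estimates above, not measurements):

    outlet_w = 120 * 15 * 0.80          # NEC 80% continuous rule -> 1440 W
    psu_out_w = (outlet_w - 50) * 0.90  # minus a 50 W monitor, 90%-efficient PSU

    draws_w = {"GPU": 600, "CPU": 120, "motherboard": 100,
               "RAM": 60, "M.2 drives": 20, "fans": 20}
    print(f"deliverable ~{psu_out_w:.0f} W, drawn {sum(draws_w.values())} W, "
          f"headroom ~{psu_out_w - sum(draws_w.values()):.0f} W")
    # deliverable ~1251 W, drawn 920 W, headroom ~331 W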

bongodongobob 28 days ago
Idk but if computers are drawing enough power to melt themselves just to play ray traced Minecraft or write boilerplate Python code, perhaps we've gone wrong somewhere.
ChrisRR 28 days ago
> write boilerplate Python code

This is my biggest issue. I'm still impressed that we need GHz and GBs to run simple apps that were achievable with MHz and MBs 20+ years ago

fennecfoxy 21 days ago
Because we only need hours to write such things, rather than months or years. The market wants rapid development, not necessarily developers. I usually just see the old-heads implying younger devs can't hack it.
bongodongobob 28 days ago
I meant local AI.
ChrisRR 28 days ago
Seems like the Pentium 4 issue. Realise that your current approach isn't sustainable long term and change your approach
satvikpendem 27 days ago
Perhaps that will incentivize people to innovate and invent better and more optimized software instead.
rangestransform 28 days ago
Not sure if this is related but a friend there said a lot of people retired
hipsterstal1n 28 days ago
My boss has a few neighbors who are Nvidia and former Nvidia employees, and they're all driving shiny, fancy sports cars and living the high life thanks to the stock they got. It doesn't surprise me that they're having churn and/or retirements cutting into their workforce. Probably not enough to be substantial, but enough to count for something.
Joel_Mckay 28 days ago
In general, poorly designed cables will use steel instead of tin plated copper conductors. The differences become more relevant at higher power levels, as energy losses within the connectors will be higher. Thus, the temperatures rise above the poorly specified unbranded plastic insulator limits, and a melted mess eventually trips the power supply to turn off.

I would assume a company like NVIDIA wouldn't make a power limit specification mistake, but cheap ATX power-supplies are cheap in every sense of the word.

Have a great day =3

kllrnohj 28 days ago
der8auer did an analysis of the melted connector. It was nickel & gold, not steel.

Buildzoid's analysis showed Nvidia removed all ability to load balance the wires, which they used to have with the 30xx connector that introduced it, and they had even fancier solutions for the multi 8/6-pin connectors before that.

The specification also has an incredibly low safety factor, much much lower than anything that came before.

They seem to be penny pinching on shunt resistors on a $2000 GPU. Their engineering seems suspect at this point, not worthy of a benefit of the doubt.

Joel_Mckay 28 days ago
Nickel & gold plating is common, but if it were solid (rare because that would be expensive and stupid)... you would be looking at 3 to >5 times worse performance than the same sized plated copper plug with a beryllium copper leaf spring.

Have a gloriously wonderful day... I could be wrong, but it has been awhile =3

fennecfoxy 21 days ago
Engineering, or executive management?
Joel_Mckay 20 days ago
The fact remains a custom after-market cable is not an NVIDIA design problem. If you didn't use the cable in the OEM box, then ethically one should eat the loss if something broke.

I considered publishing a mod tutorial for amateurs (including why crimped-copper is so much better than soldered-steel), and changed my mind given GamersNexus is already looking into the issues. Have a nice day =3

fennecfoxy 19 days ago
Oh, you're the uwu guy, I've seen you around.

I mean that's very true, but at the same time it would be remiss not to chastise Nvidia for selecting a standard that teeters on the edge of design constraints even while using the provided cable. Especially when removing any sort of balancing feature to even things out.

It's the same with the USB standard and various USB cables' current/data carrying capabilities which are often completely transparent to consumers; just totally bad design.

Others have suggested a new standard that supports 48V or whatever out of the PSU. It does seem a little ridiculous to bring this in for the occasional card that hits a new high, but at the same time, why _shouldn't_ our PSUs follow a standard like USB PD, where a voltage/current budget is negotiated?

That said, I'd hope that they include a speck of silicon in the cables as well that advertises the cable's capabilities to the controller on each side - no capabilities present, then you only get 5V/1A, bub.
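Purely hypothetical sketch of what such a marker could look like (Python; loosely modeled on how USB-PD e-marked cables advertise capabilities, with every name and number made up):

    from dataclasses import dataclass

    @dataclass
    class CableMarker:
        max_volts: float
        max_amps: float

    UNMARKED = CableMarker(5.0, 1.0)  # no marker chip -> 5V/1A, bub

    def grant_budget(requested_w, marker=None):
        cap = marker or UNMARKED
        return min(requested_w, cap.max_volts * cap.max_amps)

    print(grant_budget(600, CableMarker(48.0, 15.0)))  # 600.0 -> full budget granted
    print(grant_budget(600))                           # 5.0 -> unmarked cable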

What I would hope that this would mean is that we'd no longer have the various different cables, just a single type varying by current/power carrying capabilities.

Joel_Mckay 18 days ago
48V supplies are not practical for certified end-user products due to national electrical codes, which limit the risk of electrocution on UL/CE/IC certified power supplies. Note, commercial telecommunications equipment standards have a very different design market goal, so they do use this DC standard already.

The current NEC code (pun intended) effectively limits most supplies to under about 32VDC in most jurisdictions for amateurs (equipment does not need to be retested in a lab, or signed off by a municipal ticketed EE). Thus, a standard 24VDC (28VDC actual) equipment rail makes more sense in terms of design economics for the CPU/GPU buck-converters, and is indeed already very common to see in industrial/military rated hardware.

"speck of silicon in cables as well", Maybe... note USBC uses resister values to identify the plug initial behavior... but practically there comes a point where IT workers/hobbyists will have to know first-year ohms law again. At bare minimum people still need to calculate when the wall outlet power-limits are exceeded to avoid posing a fire-hazard in older homes.

I will spare you the xkcd cartoon about standards (and rants about ESD in USBC), but a power-requirement sticker would likely be just as effective for novices.

We all have mixed opinions on NVIDIAs silly GPU design kludges, but the industry has reached a sort of stagnation years ago. Cheers =3

ulfbert_inc 29 days ago
Der8auer and Buildzoid on YouTube made nice, informative videos on the subject, and no, it is not "simply a user error". So glad I went with the 7900 XTX - should be all set for a couple of years.
enragedcacti 29 days ago
Summary of the Buildzoid video courtesy of redditors in r/hardware:

> TL;DW: The 3090 had 3 shunt resistors set up in a way which distributed the power load evenly among the 6 power-bearing conductors. That's why there were no reports of melted 3090s. The 4090/5090 modified the engineering for whatever reason, perhaps to save on manufacturing costs, and the shunt resistors no longer distribute the power load. Therefore, it's possible for 1 conductor to bear way more power than the rest, and that's how it melts.

> The only reason why the problem was considered "fixed" (not really, it wasn't) on the 4090 is that apparently in order to skew the load so much to generate enough heat to melt the connector you'd need for the plug to not be properly seated. However with 600W, as seen on der8auer video, all it takes is one single cable or two making a bit better contact than the rest to take up all the load and, as measured by him, reach 23A.

https://old.reddit.com/r/hardware/comments/1imyzgq/how_nvidi...

c2h5oh 29 days ago
- Most 12VHPWR connectors are rated to 9.5-10A per pin. 600W / 12V / 6 pin pairs = 8.33A. The spec requires a 10% safety factor -> 9.17A.

- 12VHPWR connectors are compatible with 18ga or at best 16ga cables. For 90C-rated single-core copper wires, I've seen max allowed amperages of at most 14A for 18ga and 18A for 16ga - less in most sources. Near the connectors, those wires are so close together they can't be considered single-core for purposes of heat dissipation.
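Those two numbers leave almost nothing on the table (quick Python check using the figures above):

    rating_a = 9.5         # low end of the per-pin rating quoted
    load_a = 600 / 12 / 6  # 8.33 A nominal per pin
    with_safety = load_a * 1.10
    print(f"nominal {load_a:.2f} A, +10% factor = {with_safety:.2f} A, "
          f"rating {rating_a} A -> {rating_a - with_safety:.2f} A to spare")
    # nominal 8.33 A, +10% factor = 9.17 A, rating 9.5 A -> 0.33 A to spare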

leeter 29 days ago
Honestly with 50A of current we should be using connectors that screw into place firmly and have a single wipe or a single solid conductor pin style. Multi-pin connectors will always inherently have issues with imbalance of power delivery. With extremely slim engineering margins this is basically asking for disaster. I stand by what I've said elsewhere: If I was an insurance company I'd issue a notice that fires caused by this connector will not be covered by any issued policy as it does not satisfy reasonable engineering margins.

edit: replaced power with current... we're talking amps not watts

Panzer04 28 days ago
I was wondering if this was a thing - in RC quads and the like we use these massive bullet connectors (XT30/60/90 and similar) which often have lower resistances than the wires themselves.

Yeah, they take soldering for the wire/connector interface, but presumably there are connectors similarly designed with crimp terminals, or it's just something the mfgs will have to deal with.

CarVac 29 days ago
Derbauer: https://www.youtube.com/watch?v=Ndmoi1s0ZaY

Buildzoid: https://www.youtube.com/watch?v=kb5YzMoVQyw

I went 7900 GRE, not even considering Nvidia, because I simply do not trust that connector.

whalesalad 29 days ago
Enjoying my 7900XTX as well. I really don't understand why nvidia had to pivot to this obscure power connector. It's not like this is a mobile device where that interface is very important - you plug the card in once and forget about it.
lmm 28 days ago
The new cards need unprecedented amounts of power, the old connectors can't deliver enough.
slightwinder 29 days ago
> So glad I went with 7900 XTX - should be all set for a couple of years.

Really depends on the use case. For gaming, normal office, smaller AI/ML or video-work, yeah, it's fine. But if you want the RTX 5090 for the VRAM, then the 24GB of the 7900 XTX won't be enough.

wing-_-nuts 29 days ago
Honestly, the smart play in that case is to buy 2 3090's and connect them with nvlink. Or...and hear me out, at this point you could probably just invest your workstation build budget and use the dividends to pay for runpod instances when you actually want to spin up and do things.

I'm sure there are some use cases for 32gb of vram but most of the cutting edge models that people are using day to day on local hardware fit in 12 or even 8gb of vram. It's been a while since I've seen anything bigger than 24gb but less than 70gb.

diggan 29 days ago
> most of the cutting edge models that people are using day to day on local hardware fit in 12 or even 8gb of vram.

I'm not sure what your idea of "day to day" use cases is, but models that fit in 12GB of VRAM tend to be good for autocomplete and not much more. I can't even get those models to choose the right tool at the right time, much less be moderately useful. Qwen2.5-32B seems to be the lower boundary of a useful local model; it'll at least use tools correctly. But then for "novel" (for me) coding, basically anything below o1 is more counter-productive than productive.

wing-_-nuts 28 days ago
Yes I was gonna mention that Qwen model from the deepseek folks as maybe an exception
pshirshov 27 days ago
There is tremendous quality difference between 14b and 32b versions.
dualboot 28 days ago
So funny because the only reason the 5090 has 32GB of RAM is because AMD put 24GB on the card they released against the 4080.
pshirshov 27 days ago
Radeon Pro W7900
SketchySeaBeast 29 days ago
Yeah, I expect my next card will be AMD. I'm happy with my 3080 for now, but the cards have nearly doubled in price in two generations and I'm not going to support that. I can't abide the prices nor the insane power draw. I'm OK with not having DLSS.
asmor 29 days ago
It'll probably be fine for years, longer if you can stand looking at AI-generated, upscaled frames. Uplift in GPU power is so expensive, we might as well be back in the reign of the 1080. The only thing that'll move the needle will be a new console generation.
fsloth 29 days ago
And 1080 is totally fine at 1920x1080 60fps even with recent games (I see my son enjoying Elden Ring, Helldivers 2 etc with that setup).
gigaflop 29 days ago
My 1080 has been running with the same configuration for years. The only thing I consider a downside is the lack of power for exploring AI locally, and AI isn't worth buying a $1234 video card for myself.
hibikir 29 days ago
I agree with the prices, but you say you have a 3080, which has a higher max power draw than the 5080 or 4080. Power requirements actually went down
SketchySeaBeast 29 days ago
It's true, but to be fair, by the hardware the 5080 is really more of a 70-series card in previous gens. I was just thinking of the insane top end of the 90 series.
parineum 28 days ago
> 5080 is really more of a 70 series

Oft repeated, never explained.

The numbers don't really mean anything and never have. 5080 is faster than the 5070 is faster than the 5060. That's what the number means. The performance gap between tiers isn't and has never been constant.

xquce 28 days ago
Often explained, always ignored

https://youtu.be/J72Gfh5mfTk

Everyone who cares to know about generational improvements has indeed compared performance between tiers. No, they have never been constant, but the "best" generations clearly had better-segmented tiers compared to prior ones. Adding "fake" frames to say the 5070 has the performance of the 4090, like Jensen did, tells you even Nvidia does this comparison.

parineum 24 days ago
> Adding "fake" frames to say the 5070 has the performance of the 4090 like Jensen, tells you even Nvidia do this comparison.

You're missing the point. The only thing that makes a 5080 a 5080 is that's what Nvidia named it. Of course comparisons are going to be made, but it's meaningless to expect the numbers to correlate to some specific performance gain over the lower tier or previous generation.

The lineup is different every year. The metric that matters is performance per dollar, not the name of the product.

The 80 tier isn't great value this generation. That doesn't make it really a 5070.

SketchySeaBeast 28 days ago
xquce linked a good video explaining why it's a 70 series card, but you don't remember this controversy?

https://en.wikipedia.org/wiki/GeForce_RTX_40_series#RTX_4080...

The only thing that nvidia learned from that was to only offer the smaller version.

Hamuko 29 days ago
I had two AMD 7900 XTX and they both overheated like mad. Instant 110°C and throttle. Opted for a refund instead of a third GPU.
shmerl 28 days ago
Sapphire Nitro has a really nice heatsink for 7900 XTX. It works well even under full load.
seanw444 29 days ago
Probably reference cards, yeah? I think the common advice is to not buy the reference cards. They rarely cool enough. I made that mistake with the RX 5700 XT, and will never again.
mrguyorama 29 days ago
My 5700XT overheated constantly. It wasn't a reference card. The problem was that the default fan curve in the driver maxed out at 25%!

AMD is REALLY bad at software.

Hamuko 29 days ago
Yeah, they were reference, direct from AMD's own store.
bananadonkey 29 days ago
Those had an issue with their heatpipe design which affected cooling performance depending on their orientation. I made sure to buy an AIB model that didn't suffer the same issue, just in case I want to put the card somewhere funky like a server rack.

https://www.tomshardware.com/news/defective-vapor-chamber-ma...

asmor 29 days ago
It's too bad AMD will stop even aiming for that market. But also, I bought a Sapphire 7900 XTX knowing it'd be in my machine for at least half a decade.
dralley 29 days ago
People are acting like this is some long-term position. There's no evidence of that. AMD didn't give up on the high end permanently after the RX 480 / 580 generation.

What AMD does need right now is:

* Don't cannibalize advanced packaging (which big RDNA4 required) from high-margin and high-growth AI chips

* Focus on software features like upscaling tech (which is a big multiplier and allows midrange GPUs to punch far above their weight) and compute drivers (which they badly need to improve to have a real shot at taking AI chip marketshare)

* Focus on a few SKUs and execute as well as possible to build mindshare and a reputation for quality

"Big" consumer GPUs are increasingly pointless. The better upscaling gets, the less raw power you need, and 4K gaming is already passable (1440p gaming very good) on the current gen with no obvious market for going beyond that. Both Intel and Nvidia are independently suffering from this masturbatory obsession with "moar power" causing downstream issues. I'm glad AMD didn't go down that road personally.

If "midrange" RDNA4 is around the same strength as "high-end" RDNA3, but $300 cheaper and with much better ray tracing and an upscaling solution at least on par with DLSS 3, then that's a solid card that should sell well. Especially given how dumb the RTX 5080 looks from a value perspective.

emsixteen 29 days ago
This same thing happening on the 40 series cards was good enough vindication for me not 'upgrading' to that at the time. I'd rather not burn my house down with my beloved inside.

Can't believe the same is happening again.

the__alchemist 29 days ago
I can relate to this perspective! It's important to step out of the system and recognize priorities like this; balancing risk and reward.

I think, in this particular case, perhaps the risk is not as high as you state? You could imagine a scenario where these connectors could lead to a fire, but I think this is probably a low risk compared to operation of other high-power household appliances.

Bad connector design that's easy to over-current? Yes. Notable house-fire risk? Probably not.

cranium 29 days ago
There is a real problem with the connector design somewhere: der8auer tested with his own RTX 5090 FE and saw two of the cable strands reach concerning temperatures (>150°C).

Video timestamp: https://youtu.be/Ndmoi1s0ZaY?t=883

gambiting 29 days ago
I've tested my own 5090 FE the same way he did (Furmark for at least 5 minutes at max load, but I actually ran it for 30 minutes just to be mega sure), and with an infrared thermometer the connector is at 45C on the GPU side and 32C on the PSU side. I have no idea what's happening in his video, but something is massively wrong, and I don't understand why he didn't just test it with another PSU/cable.
perching_aix 28 days ago
More peeking and prodding would definitely have been welcome, but it's still a useful demonstration that the issues around the card's power balancing are not theoretical, and can definitely be the culprit behind the reports.
minzi 29 days ago
Are you using the splitter that nvidia provided or a 600w cable? Also, what PSU?

I've been using mine remotely, so trying to figure out how much I should panic. I'm running off the SF1000 and the cable it came with. Will be a few weeks before I can measure temperatures.

gambiting 29 days ago
The new Corsair RM1000x(ATX 3.1 model), with the included 12V-2x6 cable(so just one connector at the PSU and one at the GPU, no adapter).
minzi 29 days ago
Good to know. I guess I'll just hold out hope that things are ok and avoid heavy work loads until I can measure things properly.
dcrazy 29 days ago
Is NVIDIA breaching any consumer safety laws by pumping twice the rated current through a 24ish gauge wire? Perhaps by violating their UL certification?
RobotToaster 29 days ago
CE marking in Europe could be an issue. There's potential for a fine or forced recall.
michaelt 29 days ago
Aren't 12VHPWR cables like https://www.overclockers.co.uk/seasonic-pcie-5.0-12vhpwr-psu... 16AWG ?

Sure, there are problems with the connector. But 600W split over a 12-pin connector is 8.3A per wire, and a 16AWG / 1.5mm² wire should handle that no problem.

zamadatix 29 days ago
You're correct about 16AWG however "But 600W split over a 12-pin connector is 8.3A per wire" is only what _should_ ideally be occurring, not what Roman aka Der8auer _observed_ to occur. Even with his own 5090, cable, PSU, and test setup:

> Roman witnessed a hotspot of almost 130 degrees Celsius, spiking to over 150 degrees Celsius after just four minutes. With the help of a current clamp, one 12V wire was carrying over 22 Amperes of current, equivalent to 264W of power.

mirthflat83 29 days ago
Made some 12v-2x6 custom cables for fun and 99% sure the melting problems are from the microfit female connectors themselves. A lot of resistance going through the neck
mminer237 29 days ago
UL is a private company. There are no laws requiring it or penalizing violations. I would think the only legal consequences would be through civil product liability/breach of warranty claims. Plus, like, losing the certification would mean most stores would no longer stock it.
dcrazy 29 days ago
Many products sold in the United States must be tested in a CPSC-certified lab for conformity, of which UL is the best known. But consumer electronics don’t seem to be among that set, unless they are roped in somehow (maybe for hazardous substances?).
singleshot_ 28 days ago
Seems like if you filled your house with non-UL-compliant stuff and your house burned down, the first fact would be material to your insurance carrier (you know, the Underwriter to which the UL name refers…)
sidewndr46 28 days ago
You might want to do some research on what you can buy and legally plug into your own home. It's more or less you get UL listing, or the product isn't available
reaperman 29 days ago
dylan604 29 days ago
For how much longer will that .gov website be operating?
mpreda 29 days ago
Step the GPU voltage up to 48V. (anyway you make a new connector that's not compatible with existing PSUs. Why not actually fix a problem at the same time, once and for all! [48V should be enough for anybody, right?])
jmrm 28 days ago
Not a bad idea IMHO. There are already computers (servers mostly, but also integrated models) that only have 12V power connections, where the mainboard does the step-down voltage conversion, and IIRC some companies wanted to do the same for regular desktops.

I would be totally happy if the next gen of computers had 12V outputs to the mainboard and CPU and 48V to the GPU and other power-hungry components. This would make the PCBs of those cards a bit bigger, but on the other hand they would have lower power losses and less risk of overheated connectors.

B1FF_PSUVM 29 days ago
> Step the GPU voltage up to 48V.

Meh. Might as well ask for its own AC cable and be done with it.

mpreda 29 days ago
But I want fewer cables, not more.
B1FF_PSUVM 27 days ago
Primary AC cable goes to GPU, and it courteously supplies a short cable to feed the main PSU that only draws half the power ;-)

Tail wags the dog, modern style ...

Kirby64 29 days ago
Until the US changes their AC power connectors, we just don’t have a use case for it frankly. When the entire system is going to always top out at 1200W or so (so you have an extra few hundred watts for monitors and such), we’re pretty limited to maximum amperage.
ielillo 29 days ago
The USA has 240 volt plugs. They are only used for high power appliances such as AC or ovens. If you want, you could add a plug for your high powered space heater AKA gaming PC.
Kirby64 29 days ago
I’m aware we have 240V outlets. They are just not used in a place where you would put a PC. Until there is a shift in need (I.e., every normal user would need more than a 120V plug could handle), you won’t ever see 240V outlets in offices. I suspect it will never happen.

In server areas and extremely specialized stuff? Yea, sure. But we’re talking desktop PCs here.

tass 29 days ago
There are also 20 amp circuits, which are common.

Many houses run circuits that are rated for 20 amps even if they don't have the right outlet for it so this is an inexpensive upgrade for most.

anankaie 29 days ago
I did not realize the outlet impacts the amperage… Is it a rating issue, or is there an actual part there that will trip?
ProllyInfamous 28 days ago
Retired electrician's comments:

Most of the US utilizes the NEC for installation compliance. Per NEC, 15A-style outlets are "to code" on 20A circuits unless it's a single-receptacle ("dedicated") circuit — in which case a 20A-style receptacle MUST be installed.

For any electric appliance (including computers) which operates for 3hrs+ ("continuously"), the circuit rating is reduced to 80% capacity (e.g. only a 16A load is allowed continuously on a "20A circuit" == only 1920W computers allowed on a 20A circuit, 1440W on a 15A circuit at its 12A continuous limit).

Pro tip: check your own PSU, but practically all modern computers can handle AC input 100-240V (all you need is the correct IEC power cord for a 240 US plug).

I have fixed enough melted devices in my career to always twice-torque each&every connection I make. For temporary extension cords/plugs, "twist lock" ends are worth all the extra dollars.

Protips: use Eeez-Ox (a conductive paste which inhibits corrosion) for high-load applications (non-data, only). My own gamerig's AMD GPU has it (sparingly applied) within its dual 8-pin connectors. I supply the 8-pin connectors from a single pair of 8awg copper, which is directly soldered within the PSU's PCB power-take-offs... so only a few inches of 16awg for voltage drop (into the GPU), which reduces the amperage required (but is also unnecessary overkill).

lsaferite 28 days ago
> For any electric appliance (including computers) which operates for 3hrs+ ("continuously"), the circuit rating is reduced to 80% capacity

That's a new one for me. Do you have a reference for that? I'd love to read more.

ProllyInfamous 28 days ago
NEC§210.20(A) a/k/a "80% Load Rule"

As trade practice, certain applications are ALWAYS deemed "continuous," e.g. water heaters, computers, space heaters, general lighting.

happyopossum 29 days ago
It’s the whole chain - 20a outlets typically require 12ga wire instead of 14ga, a 20 amp breaker, and yes - the outlet is different. The 20a outlets add a horizontal opening to one of the 2 vertical slots, making a sideways T shape. Devices that require 20 amps will have one of those horizontal prongs to ensure you don’t plug them in to a 15 amp outlet.
colejohnson66 29 days ago
The outlet itself doesn't care, but the shape of the receptacle is supposed to restrict insertion of a 20 amp device into a 15 amp socket. You can stick a 15 amp device into a 20 amp socket, but not vice versa. The electrician should be installing 20 amp sockets if the cabling can support it, but many don't.

It's the difference between NEMA 5-15 and 5-20: https://en.wikipedia.org/wiki/NEMA_connector#NEMA_5

tass 28 days ago
> The electrician should be installing 20 amp sockets if the cabling can support it, but many don't.

I think this is mainly because the 20 amp outlets are kind of ugly, and the fact that barely anything actually uses a 20 amp plug.

In my house, almost every circuit has a 20 amp breaker and 12 ga (yellow) romex, but only a couple of outlets are 5-20.

snuxoll 28 days ago
20a outlets are also expensive compared to 15A ones. When redoing the electrical in the house I inherited I put a dedicated circuit in the living room for the TV/game consoles/etc since my daughter's gaming rig would be going in the living room as well; since I hadn't decided to put the recessed panel for the TV wiring in at the time it was the only outlet on the circuit and code dictates that a 20A circuit with a single outlet must use a 20A receptacle.

I can buy a 10-pack of nice 15A Leviton Decora Edge outlets (that use lever lock connectors instead of screw terminals) for $26 ($2.60 each), but basic 20A tamper resistant outlets (which newer editions of the NEC require anywhere a small child can access them) are $6+ a pop.

When nothing outside of my electrical room (where the servers live) has need of a 20A receptacle it's kind of pointless to spend the extra money on them, but the extra couple bucks on 12 gauge copper is always wise.

ssl-3 28 days ago
It has long been my understanding that a single regular, bog-standard 15A duplex (ie, dual outlet) receptacle meets the multi-receptacle requirement of a 20-amp branch circuit.

If my understanding is correct, then you overkilled it and you could have saved a few dollars, at least one well-intentioned rant, and still have been compliant with NEC.

A literally-single 15A outlet like a Leviton 16251-W would not pass muster, while one dual-outlet example of the 15-amp lever-lock devices you mention would.

snuxoll 28 days ago
Well, the rant still applies either way - the 20A outlets are 2x the cost; that's the big reason why they aren't routinely installed. I was thinking when I embarked on the project "oh, I'll just put 20A ones everywhere because why not?" and immediately decided not to when I looked at the prices for the packs of outlets....
ssl-3 27 days ago
Oh, for sure.

I had the luxury (or curse, depending) once of owning a home that needed all of the wiring replaced.

Being the kind of person that I am, I overbuilt things as I felt was appropriate. As part of that, I certainly wanted to install 20 amp outlets (even though I've never held a 20 amp plug in my hand).

The cost of that, vs good spec-grade 15A duplex outlets, was insane.

I know that the only difference is using one T-shaped contact instead of a straight one, plus some different molds for the plastics. The line producing T-shaped contacts already exists, and so do the molds. Every 15A outlet sold today can transfer 20A safely.

It should be pennies difference in cost, and it was instead whole dollars.

Sucks.

(I'm reasonably certain that we are going to be broadly stuck with this low-current, low-voltage business until something very different comes along, and that any of this is unlikely to change in my lifetime.)

malfist 28 days ago
NEMA 5-20 is only required for commercial. You can use NEMA 5-15 on a 20A circuit for residential in the US.
colejohnson66 28 days ago
When I said "should", I didn't mean code required it, but that slapping a 15 amp cover over a 20 amp capable circuit is kindof stupid.
db48x 28 days ago
If the cheaper 14 gauge wire was used in the walls then only the 15 amp socket is appropriate, otherwise you might plug in a 20 amp device and melt the wires or burn down the house.
snuxoll 28 days ago
Not really. 15A receptacles are required by code to be able to handle 20A of current at their terminals, so the larger wire and breaker allows for more loads to be run instead of one big load to draw it all (and if you need that, then a 20A circuit with a single receptacle does actually require it to be a 20A receptacle).
jml7c5 28 days ago
>The electrician should be installing 20 amp sockets if the cabling can support it, but many don't.

Electricians will almost always install 20 amp sockets where they can, but they avoid running 12 gauge wire because it costs about 50% more.

manwe150 29 days ago
The shape of the outlet is different for different current allowances (the spades are wider or rotated). It is supposed to allow an electrician to indicate that the whole circuit is rated to handle the higher expected load, and that there aren’t other outlets on the same circuit which might also try to use the whole current available. Basically a UI problem trying to encourage robust designs for use by non-experts
engineer_22 29 days ago
I envy my European friends' 240v electric kettles
sangnoir 29 days ago
British kettles draw so much power, the electric utility had to consider the additional power draw on the grid from synchronized tea-making during the commercial breaks of a popular soap opera, back when broadcast TV was king.
IsTom 29 days ago
And to drive that point home, we get induction stoves that run on three-phase 400V.
engineer_22 29 days ago
D':
ryao 28 days ago
It is odd that having 240VAC outlets installed in kitchens for kettles has not caught on. A NEMA 6-15R would allow for parity.
diggan 29 days ago
> I envy my European friends' 240v electric kettles

... do you not have electric kettles in the US? Foolishly, I thought this was a standard kitchen appliance all over the world, I've even seen it in smaller cities in Peru.

ssl-3 28 days ago
We do. Like others have said, they're unusual here -- in part, because culturally we drink a lot more coffee than we do tea, but also because heating a large-ish volume of water with a normal American outlet takes a while.

My solution to drinking a cup of tea is also unusual: I have a no-longer-in-production Sunbeam Hotshot and I use it to heat only as much water as I need right now.

It raises one tea-cup worth of water from whatever temperature it is that comes out of the cold tap to boiling in about 40 seconds.

I just dump a cup of water in, push one button to heat it up, wait until it boils, and then push another button to dispense that hot liquid into the cup that I'm using for tea.

It then turns itself off until next time.

Kirby64 29 days ago
We do, but they’re limited to much lower wattage due to the outlet limits. A typical US kettle is 1100-1400W, and takes maybe 1-2 minutes to boil. Kettles in the UK are typically 2.5-3kW.
ryao 28 days ago
malfist 28 days ago
And they're probably not real. Take a look at any of the clones of the Dyson hair dryer and check their proclaimed RPMs; many of them would have the tip of the fan blade spinning at several times the speed of sound if they actually hit their limit.

There are aquarium heaters on Amazon that say they're 10kW or more and plug into a 120V outlet.

I bought a magnet that is supposed to hold "150 pounds", but pulls off the ceiling (in its strongest position) with just 10-15 pounds.

Amazon specs are fake.

stoobs 28 days ago
The Ninja one and probably the Breville ones will be accurate - I have the same Ninja kettle here in the UK and it's 3000 watts vs the US version's 1500 watts.
Kirby64 27 days ago
What's your point? There are likely a few 1800W kettles that exist too. They are not common, and you definitely will never see anything above 1800W (outside of maybe some specialized commercial product designed for a 20A outlet) in the US. A 'normal' average kettle is going to be ~1200W here. Kettles start at 2kW and go up from there in the UK.
bongodongobob 28 days ago
We don't have a need for an exclusive water boiling device as we aren't obsessed with tea. Most people drink coffee and the heating element is already built into the coffee maker. Idk what else you'd use it for. The stove works fine for boiling water without needing a separate appliance.
lazide 28 days ago
They’re far faster than a stovetop most of the time, have auto shutoff so you can wander away for a minute or two or not pay attention to it and not boil off a lot of water, etc.

Many parts of the world, it’s a good idea to boil water before using it for drinking, brushing teeth, etc.

So it’s actually really convenient for many use cases besides tea or coffee.

bongodongobob 28 days ago
That's fine, but there are no use cases in the US. If we are boiling water, it's for food, and we're doing it on the stove, just like the way everyone else cooks. In the US, there is no use case for a specialized water boiling appliance.
Kirby64 27 days ago
We can agree they're less common than in the UK, but saying they have "no use case" is a gross exaggeration. Pretty much everyone I know owns a kettle, given how cheap they are (~$15-20 for an OK one). Great for hot chocolate, coffee, tea, pre-boiling water for a pot on the stove (to speed it up vs heating up your entire house), and numerous other reasons.
ryao 28 days ago
I have been known to heat water in a microwave after doing the calculations needed to know roughly when the water would reach boiling. It is rather quick once the microwave timer starts going.
dragonwriter 27 days ago
There are plenty of use cases in America, and electric kettles are not uncommon here, even if they are less common than in the UK.
lazide 28 days ago
I use it to make oatmeal in the morning. Fastest and easiest option.
db48x 28 days ago
They’re readily available, but most households don't actually have one. Many more households have a coffee maker, which is essentially the same thing but specialized for dripping the boiling water through coffee grounds. Anyone else who needs boiling water just puts a pot on the stove, or possibly an old-fashioned metal tea kettle.

We could even use 240v electric tea kettles here in the US if we wanted to; most kitchens with an electric range and oven have 240v (or 208v) outlets to plug them into. But those outlets are usually inconveniently located for counter-top appliances. It wouldn't cost much to add another above the counter, but it is rarely done in practice. Of course, in many parts of the US natural gas heating is cheaper than electric, so the houses there are built for gas ranges and ovens instead.

flyingpenguin 29 days ago
The problem there would be your breaker. I am not an electrician, but I can tell you that when I tried adding a heated MAU to my house, I had to switch to a 120v washer/dryer because my electric panel did not have space for another 208v line.

(Note: my building is actually 3-phase 208 volt, not 240 volt, so I don't have 240 volt plugs but 208 volt plugs)

snuxoll 28 days ago
Not a criticism, but a question. Did you consider adding a subpanel? If you're running a new circuit I assume there was already some drywall patching to be done, seems like it would have been more cost effective and removed future headaches to just give yourself more breaker space.
lazide 28 days ago
At least in the US, a sub panel is an easy grand or two even if it’s right next to the main panel.
snuxoll 28 days ago
A lot is going to depend on labor rates for your local electricians, but that costing more than $500 where I am would be outrageous. I do my own electrical, but even I paid a licensed electrician to come handle installing a new panel, since I did not have an outdoor service disconnect and didn't feel like fighting with the utility company over de-energizing and re-energizing my service. Ended up needing a lot more done, but the whole thing cost me $2500 to get a new service drop, outdoor meter main, and wiring run to the old panel (in the bedroom on the other side of the main) and the new panel (in the old furnace closet that's now my electrical / network room).

But really, doing a subpanel yourself to expand breaker capacity is a really simple project - most people if so inclined could do it themselves. Anywhere from $100-200 for the panel itself depending on how many spaces you feel like adding, up to $80 for a large enough breaker to feed it, and some tens of dollars for SER cable.

lazide 27 days ago
Agreed - I’ve ended up installing 4 (inspected) ones over the years myself, and one I paid an electrician for (they also had to upgrade the main service feed).

IMO, what usually drives up the price is the ancillary stuff - opening up a wall (and re-finishing it) because there isn’t enough physical space, or adding extra main panel capacity/service capacity because the main feed is insufficient, or having to run heavier than expected wiring because the only available space gets really hot (poorly ventilated attic space), or having to run surface conduit due to a specific challenge with framing.

Then add in labor (where I was in a high cost of labor area), and it can get expensive quick.

An actual surface mount subpanel and appropriate wires/breakers is usually only a couple hundred bucks total like you note.

revnode 29 days ago
We also have 20 amp wiring, 20 amp breakers, 20 amp sockets, and plugs too. A lot easier than going 240 volt. That will give you 2400 watts max.
dcrazy 29 days ago
Most residential wiring is 15A except for bathrooms.
Sohcahtoa82 29 days ago
And kitchens.

Otherwise, running your microwave, toaster, and coffee maker at the same time would likely trip the breaker.

And obviously, the stove/oven is on its own circuit unless it's gas.

ryao 28 days ago
My kitchen has at least 2 120VAC circuits, which seems to avoid this.
Kirby64 27 days ago
NEC code requires 2 separate 20 amp circuits in a kitchen. Usually they split the circuits across a countertop or window. It's been code since... as far as I can tell, pretty much forever. Unless you're living in a house that is either non-compliant or built before the NEC was required, it should have this kitchen arrangement.
bongodongobob 28 days ago
Most kitchens don't run on only one circuit for the obvious reasons you've described.
burnerthrow008 29 days ago
Don’t forget garages, too.
jml7c5 28 days ago
Yes, but most people don't want to pay the thousands of dollars to get an electrician to do a rewire.
mpreda 29 days ago
Now I must use lots of rather thick cables in my desktop (because I run GPUs).

Imagine that the GPU would instead suck up all the power it needs through the PCIe connector, without all those pesky cables. (Right now PCIe can provide 75W at 12V, i.e. 6.25A; that same current would provide 300W at 48V.)

masklinn 29 days ago
The PCIe slot would not be sufficient even if the power architecture moved to 48V: the 12VHPWR are getting 600W pushed through them.
anticensor 28 days ago
Card-edge connectors are wattage-bound; they would still be limited to somewhere around 100W even if upvolted to 48V.
RetpolineDrama 29 days ago
I pulled a fresh 20A (120V) circuit just for my 5090 build.
Kirby64 29 days ago
What power supply do you have that even has a 20A inlet? 20 amp breakers are common for outlets (especially in newer builds) but the outlets are still 15A outlets. And there is essentially no desktop power supply that exists that would exceed a 15A outlet currently.
aaronmdjones 28 days ago
> What power supply do you have that even has a 20A inlet?

ATX PSUs usually have IEC 60320 C14 inlets. The IEC 60320 standard itself states that this inlet is only good for up to 10 Amps.

UL is happy to ignore them and say that 15 Amps is okay. It wouldn't surprise me if someone else were happy to ignore that and say that 20 Amps is okay.

Even still, swapping a C14 inlet for a C20 inlet (IEC max 16 Amps, UL max 20 Amps) would be a relatively easy thing to do (EDIT: on a PSU that is already designed to take more than 15 Amps, obviously). Probably a warranty-voiding action though.

lazide 28 days ago
Notably, 20 amps @ 120v is 2400 watts.
revnode 28 days ago
https://www.amazon.com/dp/B09PJYMK77/zcF9kZXRhaWxfdGhlbWF0aW...

I'm sure there are power supplies for servers that go above 1600 watts too. If you really want to, you can ... but you really shouldn't.

lyu07282 29 days ago
It's strange how Nvidia just doubled down on a flawed design for no apparent reason. It doesn't even gain anything: the adapter is so short that you still have the same mess of cables in the front of the case as before.
formerly_proven 29 days ago
This connector somehow has its own Wikipedia page and most of it is about how bad it is. Look at the table at the end: https://en.wikipedia.org/wiki/16-pin_12VHPWR_connector#Relia...

The typical way to use these is also inherently flawed. On the nVidia FE cards, they use a vertical connector which has a bus bar connecting all pins directly in the connector. Meanwhile, the adapter has a similar bus bar onto which all the incoming 12V wires are soldered. This means you have six pins per potential connecting two bus bars. Guess how this ensures relatively even current distribution? It doesn't, at all. It relies completely on the contact resistance between pins happening to match.

Contrast this with the old 8-pin design, where each pin would have its own 2-3 ft wire to the PSU, which adds resistance in series with each pin. That in turn reduces the influence of contact resistance on current distribution. And all cards had separate shunts for metering and actively balancing current across the multiple 8-pin connectors used.

The 12VHPWR cards don't do this and the FE cards can't do this for design reasons. They all have a single 12 V plane. Only one ultra-expensive custom ASUS layout is known to have per-pin current metering and shunts (but it still has a single 12 V plane, so it can't actively balance current), and it's not known whether it is even set up to shut down when it detects a gross imbalance indicating connector failure.
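
A toy model shows how much those per-pin wires help (all resistance values are invented for illustration):

    # 50A total (600W at 12V) split across 6 parallel pins; current divides
    # in proportion to each path's conductance. Ohm values are made up.
    def share(path_res, total_amps=50.0):
        g = [1.0 / r for r in path_res]
        return [total_amps * gi / sum(g) for gi in g]

    contact = [0.001] + [0.010] * 5  # one "too good" contact, five mediocre ones
    print([round(a, 1) for a in share(contact)])                       # bus bar to bus bar
    print([round(a, 1) for a in share([r + 0.020 for r in contact])])  # plus per-pin wire
    # First line: one pin hogs ~33A. Second line: worst pin ~11A -- the series
    # wire resistance swamps the contact mismatch and evens out the sharing.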

johnwalkr 28 days ago
Was the old design actively balanced? I think the current of each pin was only monitored.
onli 29 days ago
I was under the impression it saves them money. Is that correct?

It is also a power play. By introducing a PSU connector AMD and Intel do not use, they abuse their market power to limit interoperability.

Plus probably some internal arrogance about not admitting failures.

incrudible 29 days ago
> By potentially introducing a PSU connector AMD and Intel do not use they abuse their market power to limit interoperability.

They are free to use them, they just don't because it is a stupid connector. The cards that need 600W are gonna need an enormous amount of cooling and therefore a lot of space anyway; no point in making the connector small.

Yes, NVIDIA created an amazingly small 5090 FE, but none of the board partners have followed suit, so most customers will see no benefit at all.

numpad0 29 days ago
That's the majority understanding, but I suspect it was a simple "update" to the "same" connector - the old one was a product called Molex Mini-Fit, and the new one is their newer Micro-Fit connector.
bayindirh 29 days ago
> Plus probably some internal arrogance about not admitting failures.

Arrogance is good. Accelerates the "forced correction" (aka cluebat) process. NVIDIA needs that badly.

formerly_proven 29 days ago
I doubt engineering a new connector (I think it's new? Unlike the Mini-Fit Jr which has been around for like 40-50 years) and standing up a supply chain for it could offset the potentially slightly lower BOM cost of using one specialty connector instead of three MiniFit Jr 8-pins. However, three of those would not have been enough for the 4090, nevermind the 5090.
lyu07282 29 days ago
> three of those would not have been enough for the 4090, nevermind the 5090.

Oh, you are right: these PCIe power connectors can only carry 150W each, so you would need 4 of them for a 4090/5090. I guess it actually makes sense then to create a new standard for it; hopefully they can make a newer revision of that connector that makes it safer.

In theory with the new standard you can have a single cable from the PSU to the GPU instead of 4, which would be a huge improvement. Except if you use those and then your PC catches fire, you will be blamed by the community for it. People on the reddit thread [1] were arguing that it was his own fault for using a "third party" connector.

[1] https://www.reddit.com/r/nvidia/comments/1ilhfk0/rtx_5090fe_...

slavik81 29 days ago
EPS is practically identical to PCIe, just keyed slightly differently, and it can handle 300W. It's used for the CPU power connector and on some data centre GPUs. I've never been clear on why it didn't take over from the PCIe standard when more power was needed.
numpad0 29 days ago
The old Mini-Fit takes 10A/pin, or theoretically 480W for an 8-pin. Existing PSUs would not be rated for that much current per PCIe harness, so the connector compatibility has to be intentionally broken for idiot-proofing purposes, but connector-wise, up to 960W (before safety margins) could technically be supplied just fine with 2x PCIe 8-pin.
rcarmo 29 days ago
It saves them money on a four-digit MSRP. I think they could afford to be less thrifty.
beeflet 28 days ago
>By potentially introducing a PSU connector AMD and Intel do not use they abuse their market power to limit interoperability.

I suppose but this could be overcome by AMD/Intel shipping an adapter cable

fredoralive 29 days ago
The connector is a PCI spec, it's not an Nvidia thing, it's just they introduced devices using it first.
onli 29 days ago
I don't think that's correct. Nvidia used that connector first and then a similar PCI spec came out. Compatibility is limited. See https://www.hwcooling.net/en/nvidia-12pin-and-pcie-5-0-gpu-p... from back then.
fredoralive 29 days ago
I'd forgotten about the weird 30 series case, but the 40/50 series ones are the PCI spec connector.
kllrnohj 29 days ago
Being a PCI spec connector doesn't mean it isn't an Nvidia thing. It seems pretty likely at this point that Nvidia forced this through, seeing as there's zero other users of this connector. Convincing PCI spec consortium to rubber stamp it probably wasn't very hard for Nvidia to do.
NekkoDroid 29 days ago
Can't have 4 connectors going into 1 video card, that would look ridiculous :/

- Nvidia

BigJono 29 days ago
This shit is so fucking dumb. Sorry for the unhinged rant, but it's ridiculous how bad every single connector involved with building a PC is in 2025.

I'm just a software guy, so maybe some hardware engineer can chime in (and I'd love to find out exactly what I'm missing and why it might be harder than it seems), but why on earth can everything not just be easily accessible and click nicely into place?

I'm paying multiple hundred dollars for most of these parts, and multiple thousands for some now that GPUs just get more and more expensive by the year, and the connector quality just gets worse and worse. How much more per unit can proper connectors possibly cost?

I still have to sit there stressing out because I have no idea if the PSU<->Mobo power connector is seated properly, I have no idea if the GPU 12VHPWR cable is seated properly, I'm tearing skin off my fingers trying to get the PSU side of the power cables in because they're all seated so closely together, have a microscopic amount of plastic to grip onto making it impossible to get any leverage, and need so much force to seat properly, again with no fucking click. I have no idea if any of the front panel pins are seated properly, I can't even reach half of them even in a full ATX case, fuck me if I want anything smaller, and no matter what order you assemble everything in, something is going to block off access to something else.

I'm sure if you work in a PC shop and deal with this 15 times a day you'll have strategies for dealing with it all, but most of us build a PC once every 3 years if that. It feels like as an average user you have zero chance to build any intuition about how any of it works, and it's infuriating that the hardware engineers seem to put in fuck all effort to help their customers assemble their expensive parts without breaking them, or in this case, having them catch fire because something is off by a millimetre.

This space feels ripe for a radical re-design.

mlsu 29 days ago
Connectors are actually extremely difficult to make.

- you have to ensure that the metal connectors take shape and bond to the wire properly. This is done by crimping. Look up how much a good crimping tool costs for a rough approximation of how difficult it can be to get this right.

- one plastic bit has to mate with another plastic bit, mechanically. This needs to be easy enough for 99.99% of users to do easily, yet it needs to be 99.99% reliable, so that the two bits will not become separated, even partially. Even under thermal expansion.

- the electrical contacts inside must be mechanically mated over a large surface area so that current can pass from one connector to another.

- it must be intuitive for people to use. Ideally user pushes it and it clicks right in. No weird angles either, it could be behind a mechanical component that's tough to reach. Also, user has to be able to un-mate the connector from the same position. It should be tough for a user to accidentally plug in an ill suited connector into the wrong slot.

- has to cost peanuts. Nobody will pay $3 for a connector. Nobody will even want to pay $1 for a connector. BOM cost is 15-20% of finished goods cost. Will the end user pay $8, $10, $12 for a good connector? No.

- repeatable to manufacture (on the board and on the cable) at high quality. User might take apart their PC a dozen times, to fix things, clean, etc for the lifetime of the component. So the quality bar is actually very high. Nothing can come loose or break off, not even internal parts.

- physically compact. PCB space is at an extreme premium.

- your connector design has to live across many product cycles, since people are going to be connecting old parts to new boards and they'll be upset if they can't do this. So this increases risk by a lot as redesigning a connector means breaking compatibility for existing users.

Connectors are actually a very very deep and interesting well.

I'm not surprised at all that they are running into issues here, these cards are pulling 500+ watts. That is a LOT of current.

I think next gen we will begin seeing 24V power supplies to deal with this.

pja 29 days ago
> I think next gen we will begin seeing 24V power supplies to deal with this.

May as well go the whole hog & jump to 48V.

(50V is as high as you can go whilst still being inside the “low voltage” electrical safety regime in most countries IIRC.)

eqvinox 29 days ago
General SELV limit is 60V; that's why PoE is 54-56V at the source (it's calculated with roughly 10% tolerance so it can be built cheaply).
Schiendelman 29 days ago
Then the graphics card would have to have a transformer on it to step down to the voltage that the chips can handle.
dcan 29 days ago
They already do - most of the components buck the 12V down to the 1.3ish volts that the GPU core needs
xxs 29 days ago
They are not transformers, though. The coils/chokes are not galvanically isolated, which makes them (more) efficient. Stepping down from 48V to 0.8V (with massive transient spikes) is generally way harder than doing it from 12V. So they may end up with multi-step converters, but that would mean more PCB with more passives.
eqvinox 29 days ago
3.3V from 48V is a standard application for PoE. (12V intermediate is more common though.) The duty cycle does get a bit extreme. But yes, most step-down controllers can't cover both a 0.8V output voltage and a 48-60V input voltage. (TI Webench gives me one - and only one - suggested circuit, using an LM5185. With an atrocious efficiency estimate.)

You'd probably use an intermediate 12V rail especially since that means you just reuse the existing 0.8V regulator designs.
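
For a rough sense of why, the ideal buck duty cycle is D ≈ Vout/Vin (losses ignored):

    # Ideal buck converter duty cycle for the input/output combinations discussed
    for vin in (12, 48):
        for vout in (3.3, 0.8):
            print(f"{vin}V -> {vout}V: D = {vout / vin:.1%}")
    # 48V -> 0.8V means a ~1.7% duty cycle: the "a bit extreme" part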

xxs 28 days ago
Aside from the step-down itself, the transients can be quite crazy, which might make the power consumption higher (due to load-line calibration). 48V FETs would have much worse RDS(on) compared to lower-voltage-spec'd ones. So it makes sense that no single smart power stage has such transistors (presently).

There are other issues, too. 48V would fry the GPU for sure; 12V often does not, even with a single power stage failure.

In the end we are talking about a stupid design (seriously: 6 conductors in parallel, no balancing, no positive preload, lag connectors, no crimping, no solder), and the attempted fix is a much more sophisticated PCB design and passives.

Schiendelman 29 days ago
So then it would need to be significantly larger.
lazide 29 days ago
Likely smaller actually.
SV_BubbleTime 29 days ago
This isn’t how it works.

Your SMPS needs sub-2V output, cool. That means it only needs to accept small portions of the incoming.

But, if the incoming is 48V, it needs 48V tolerant parts. All your caps, inductor (optional typically), diodes, the SMPS itself.

Maybe there isn't a size difference between a 50V 0603 capacitor and a 10V 0603 capacitor, but there is a cost difference. And it definitely doesn't get smaller just because.

Your traces at 48V likely need more space/separation or routing considerations than they would at 24V, but this should be a quickly resolved problem, as your SMPS is likely right next to your connector.

lazide 29 days ago
Yes. And it also doesn't need to handle 40+ amps on input, with associated large bus bars, large input wires, etc.

Extra insulation is likely only a mm or two, those other components are big and heavy, and have to be.

It’s the same reason inverters have been moving away from 12v to 48v. Larger currents require physically larger and heavier parts in a difficult to manage way. Larger voltages don’t start being problematic until either > 48v or >1000v (depending on the power band).

deelowe 28 days ago
No one uses transformers anymore. VRRs are basically mini PCs now. They run firmware and report telemetry and are crazy efficient.
Schiendelman 28 days ago
I'm not familiar with this! I've tried to investigate but I just get variable refresh rate. Tell me more?
deelowe 28 days ago
Voltage regulators. Voltage regulation technology is extremely advanced as even very small efficiency gains can save billions for hyperscalers. Unfortunately, I don't know of any specific products to share as power isn't my domain. I'm only familiar with the space because we sometimes have to pull telemetry directly from the VRs when doing system level RCAs. Some of our BMCs can do this directly via I2C.
Schiendelman 27 days ago
No that's okay, thank you for the pointer in the right direction!

It doesn't look to me like anything out there can take voltage from 48v to 2-3v; at least not obviously.

deelowe 20 days ago
https://www.digikey.com/en/products/detail/texas-instruments...

There should be plenty. 48-54 VDC is the standard for OCP powershelf designs. Hyperscalers such as Google have been working for nearly two decades now to eliminate voltage conversion steps. When I left, the power plane within the server PCB ran at the busbar voltage, which could float up to 54VDC. Given this, I'd expect them to convert from 48-54VDC down to 3.3 directly or at most something like 5VDC and then use smaller VRs near components such as ram and cpu.

readingnews 29 days ago
>> Connectors are actually extremely difficult to make.

While your points listed are valid, we have been making connectors that overcome these points for decades, in some cases approaching the century mark.

>> I'm not surprised at all that they are running into issues here, these cards are pulling 500+ watts. That is a LOT of current.

Nonsense. I used to work at an industrial power generation company. 500W is _nothing_. At 12VDC, that is 41.66A of current. A few, small, well made pins and wires can handle that. It should not be a big deal to overcome that. We have overcome that in cars (which undergo _extreme_ temperature and environmental changes, in mere minutes and hours, daily, for years), space stations (geez), appliances, and thousands of other industrial applications that you do not see (robots, cranes, elevators, equipment in fields and farmlands, equipment in mines, equipment misused by people)... and those systems fail less frequently than Nvidia connectors. But your comment would lead one to think that building a connector with twelve pins on it to handle a whopping (I am joking) 500W (not much, really, I have had connectors in equipment that needed to handle 1,000,000Watts of power, OUTDOORS, IN THE RAIN, and be taken apart and put back together DAILY) is an insurmountable task.

userbinator 29 days ago
One word: cost.

Look up how much industrial/automotive connectors cost, and you'll see the huge difference in quality.

drdaeman 29 days ago
Those GPUs aren’t particularly cheap, even a $100 connector and cable wouldn’t be a huge deal breaker for a $2000-3000 device if it means it’s reliable and won’t start a fire (that’ll cost way more than $3100)
vollbrecht 29 days ago
Yes, cheap connectors exist and there is a market for it, like everything "cheap". But to what point does one want to "defend" a trillion-dollar company, on a product that was never marketed as "cheap", that actually comes with a hefty price tag, for skimping on something that is 0.01% of their BoM cost? If you sell for a premium price you should better make sure your product is premium.
Manozco 29 days ago
I've bought cars that cost me less than a nVidia card (and they were running).
FirmwareBurner 29 days ago
Which new cars cost less than $1000-$2000?
ziddoap 29 days ago
They didn't say new cars.
FirmwareBurner 29 days ago
Then what's the point of such an arbitrary comparison? It's normal that plenty of commodities that were expensive when new have been devalued by age and can cost less on the used market than the top of the line BRAND NEW cutting edge GPU today, which itself will be worthless in 10-20 years on the used market and so on.
ziddoap 29 days ago
Presumably, the point is that a working car is more complicated & cheaper (in this case) than the graphics card, while the graphics card can't figure out how to make a connector.

I read it as a kind of funny comment making a broader point (and a bit of a jab at nVidia), not a rigorous comparison. I think you might be taking it a bit more seriously than was intended.

FirmwareBurner 29 days ago
An old legacy car is definitely not more complicated than designing and manufacturing cutting edge silicon made for high performance compute.

The price difference is just the free market supply and demand at work.

People and businesses pay more for the latest Nvidia GPUs than for an old car because for their use case it's worth it, they can't get a better GPU from anywhere else because they're complex to design and manufacture en-masse and nobody else than Nvidia + TSMC can do it right now.

People pay less for an old beater car than for Nvidia GPUs, because it's not worth it, there's a lot better options out there in terms of cars and cars are interchangeable commodities easy and cheap to design and manufacture at scale at this point, but there's no better options easier to replace what Nvidia is selling.

Comparing a top GPU with old cars is like comparing apples to monkeys; it makes no sense and doesn't prove any point.

ziddoap 29 days ago
>An old legacy car is definitely not more complicated than designing and manufacturing a cutting edge silicon made for high performance compute.

A car is more complicated than a connector, at least.

Anyways, the rest of your comment is again taking a humorous one-liner way too seriously. Thanks for the econ lesson though, I guess. I liked the part where you explained to me the basics of supply and demand like I am in 5th grade.

FirmwareBurner 28 days ago
>A car is more complicated than a connector, at least.

The connectors on a new car cost more than the connectors on a new GPU part for part.

>I liked the part where you explained to me the basics of supply and demand like I am in 5th grade.

You'd be surprised about the state of HN understanding of how basic things in the world work.

numpad0 29 days ago
Used objects and imports from economically isolated lands are traded at meme value; doesn't count.
janalsncm 29 days ago
That would be relevant if the margins on GPUs weren’t astronomical.
broeng 29 days ago
No, not for a connector for 500W, on a $2000 GPU from one of the world's biggest companies. They can do better.
Henchman21 29 days ago
Well surely they can take that cost out of the $5090 people are paying for these cards.
state_less 28 days ago
They could use a common XT90 or something similar. You find high-amperage connectors on all the RC LiPo batteries, and they are cheap enough that you find them on $100 products (batteries).

I regularly work with 100A+ at 12V. It's obvious the connector NVidia is using is atrocious, and we all know it.

jnwatson 29 days ago
Nvidia is clearing 4 figures on each 5090. They can afford another few dollars on connectors.
ImHereToVote 29 days ago
"Nobody will pay $3 for a connector"

I would pay $10.

Henchman21 29 days ago
This whole conversation seems absurd! Of course you'd pay for the right power connector for your multi-thousand dollar card!

You don't buy a $200k sports car and then take it to Jiffy Lube for oil changes. You pony up for the cost of proper maintenance!

Modified3019 28 days ago
A quote I found the other day and saved (forgot where from):

>Like the classic trope says, it's not about affording to buy the Ferrari, it's whether you can afford to maintain it.

accrual 29 days ago
I know we're just ranting, and there are reasons for the seemingly bad designs. But I have a very recent 1200W Corsair (ATX 3.1/PCIe 5.1) which uses these special "type 5" mini connectors on the PSU side. It's painful to try and get your fingers between them to unclip a cable, and yesterday two of the clips broke off just trying to remove them. I ended up taking the whole PSU out just to make sure I didn't lose plastic clips into the PSU itself. It's fine now, but two of my cables will never latch again. Just, blah.

My first build used a Kingwin PSU from around 2007 which used "aircraft style" round connectors which easily plugged in then screwed down. It even had a ring of blue LEDs around the connectors. It was so cool and felt premium! Having that experience to compare to made the Corsair feel cheap despite being so much more powerful.

Workaccount2 29 days ago
I work in power electronics and there are ample connectors that can handle any type of power requirement.

What is happening in the computer space is that everyone is clinging to an old style of doing things, likely because it is free and open, and suggestions to move to new connectors get saddled with proprietary licensing fees.

D-sub power connectors have been around forever (they even look like the '90s still) and would easily be able to power even future monster GPUs. They screw in for a strong connection too, but no reason you couldn't make ones that snap in too.[1]

[1]https://i.ebayimg.com/thumbs/images/g/A0MAAOSwYGFUvkFg/s-l50...

unshavedyak 29 days ago
Man, would I prefer screw-in. I hate snap. All of those things on motherboards require serious force, and if you don't know what you're doing it's quite easy not to realize that the reason something isn't going in is a blockage/issue rather than not enough force. So the user adds more force and bam, something breaks.

Then of course there's just so much force in general that it's easy for a finger/hand to slip and bump/hurt something else, etc etc.

I tend not to enjoy PC building because of that. Screws on everything would be so nice imo. Doubly so if we could increase the damn motherboard size to support these insane GPU sizes lol.

rasz 29 days ago
You are proposing a connector with exposed live 12V pins.
RavSS 28 days ago
Would it be any less safe than a Molex connector? They sometimes still come with brand new PSUs for compatibility. They have 12 volt pins too (the yellow wire, if I remember correctly) that can be very loose. Back when they were more standard, I'd seen sparks go off after they touched a case's chassis, as a cable from the PSU could have multiple unused/unplugged Molex connectors just hanging somewhere. The older PSUs I've used never came with full covers for them, so wrapping them in electrical tape was the "fix".
xg15 28 days ago
Not a hardware guy, but I wonder if that's a factor in connector choice. Basically, if a significant fraction of PC building is done by teens or young adults building their gaming rig in their living room, with neither formal training nor oversight, do designers have to make sure this is "teenage proof"?
Workaccount2 28 days ago
The GPU and PSU would have female ports and the cable would be male.

12V isn't dangerous to humans, but it could spark quite a bit if it hit the computer chassis.

aaronmdjones 28 days ago
On the contrary, a system like this would most certainly be designed such that the PSU outlet is female, the GPU inlet is male, and you'd use a male to female power cable. This way, a cable plugged only into a device leaves exposed but dead pins on the other end, and a cable plugged only into a PSU leaves non-exposed pins on the other end.

Just like UPSes have C13 outlets, ATX PSUs have C14 inlets, and you plug a desktop PC into a UPS with a C14 to C13 cable.

delichon 29 days ago
There exists a perfectly balanced point between usability and affordability that, if it can be achieved, makes exactly nobody happy.
account42 29 days ago
GP's point is that "affordability" here is penny pinching considering the cost of the components those cables connect (and are usually included with).
stefan_ 29 days ago
My favorite is these shitty RGB connectors. They were obviously decided on very recently, yet somehow what we got is something without any positive retention or defined orientation, yet still obnoxiously big.
Dylan16807 29 days ago
What's wrong with the 4/6/8 pin plugs? I find them perfectly good. And they have a high power variant that would have worked much better here, rated for twice the current per pin.
BigJono 29 days ago
They're the best of the bunch when it comes to PC parts, but think how far off they are in terms of usability compared to USB, or Ethernet, or HDMI, or Displayport, or those old VGA cables you had to screw in, or literally anything else. They only look good in comparison to the other power connectors.
whywhywhywhy 29 days ago
> They're the best of the bunch when it comes to PC parts

Not really; the PSU side isn't standardized at all, and it's not obvious, because the cables will happily fit when you plug cables from PSU A into PSU B and fry your entire build.

There's no benefit to not having standards on that side, and the other side is all standard so they are able to follow standards there. "It's just the way it's always been", so they keep doing it.

cosmic_cheese 29 days ago
Even the now ancient and defunct FireWire 400 connector is nicer than most internal PC connectors.
hulitu 29 days ago
> how far off they are in terms of usability compared to USB, or Ethernet, or HDMI, or Displayport, or those old VGA cables

Those connectors were not designed to carry power.

account42 29 days ago
USB, especially USB-C, is very much designed to carry power. Not quite as much as high-end graphics cards guzzle these days, but it goes up to 240W. Ethernet, HDMI, DP and even VGA (with extensions) are also all used to carry power, even if at much smaller currents.
Dylan16807 29 days ago
It's designed for 5 amps. In this context, that's close enough to "not carrying power".

If we're considering the bigger voltages that allow higher power on USB C, then the existing GPU plugs are fine because we can deliver 600W using the same current as a 150W 8 pin plug.
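
The arithmetic behind that claim:

    # I = P / V: quadrupling the rail voltage keeps the current (and copper) the same
    for watts, volts in ((150, 12), (600, 48)):
        print(f"{watts}W at {volts}V -> {watts / volts:.1f}A")
    # both come out to 12.5A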

lenerdenator 29 days ago
> This space feels ripe for a radical re-design.

Making electrical connectors that do their job safely and properly is a solved problem in the engineering world.

Doing so in a way that allows for maximum profit is not.

numpad0 29 days ago
They want to use the Molex for some reason. That's what doesn't make sense. They could just, like, give it two ring connectors and let gamers screw them on. Bigger ring terminals take 50A (× 12V = 600W) just fine.
Schiendelman 29 days ago
Define the exact "they" you're talking about and you'll start to see the problem.
numpad0 29 days ago
I'm suspecting it's really less than half a dozen people at NVIDIA, like guys in the purchasing division or PCB designers not wanting to make a drastic parts/footprint change. An M8 SMD lug terminal in a gamer accessory is crazy, but not rocket science.
Schiendelman 29 days ago
That's what I wondered, you may not understand all the players. I believe the PCI standard specifies this Molex connector. Somewhere between what Nvidia ships and the power supply itself, that standard is the only common connection.
numpad0 29 days ago
No, NVIDIA's use of the connector and the first reports of melting predate the spec. They were never forced to use it.

Gaming GPUs have had sagging problems for years too, and little is done to solve it. The cards are bending under their own weight. They're not products of proper engineering.

Schiendelman 28 days ago
Oh, tell me more - what year was this added to the spec, and what year did nvidia start offering this connector?
structural 28 days ago
Good connectors are expensive. All-plastic connectors like these are extremely cheap. Here's an example of a connector style as used in internal PC power cabling:

https://www.digikey.com/en/products/detail/amphenol-icc-fci/...

This is $0.20/ea in bulk from a distributor, after import into the US, and after distributor markup. Probably $0.10-0.15 or even less at the scale board manufacturers are working at. You have 4 of these kinds of connectors in your system (1 on the GPU, one on the power supply, and two on the cable). So still <$1 total in volume.

A quality d-sub power connector that has a metal housing and screws in place is going to be about $10/each. That's $40 just in connector parts, just to power your GPU, not including every other power cable in the unit, and not including all the herding cats you need to do to get the entire PC industry to shift over to using your new connector.

So, yes, you could do this, but you'd probably double the cost of a PC power supply (if all connectors used were upgraded to the same standard) and increase the cost of every GPU by $100-200, minimum.

People are already complaining that modern GPUs cost too much, so businesses making parts have assessed that it hasn't been worth it to spend this kind of money on connectors. Now, this may change at 600+W... clearly something has to change soon, as we're seeing the limits of what the existing standards can do.

Wingman4l7 28 days ago
If you increased the cost of the GPU by the upper end of your estimate ($200), that's a 10% increase of the new top end GPU (MSRP $2000 for a RTX 5090). That seems significant... until you realize that that 10% is what would prevent that $2000 GPU from turning into a ruined $0 brick when the connector inevitably melts. All of a sudden, that 10% increase seems like a bargain.
rawling 29 days ago
> I still have to sit there stressing out because I have no idea if the PSU<->Mobo power connector is seated properly

I recently switched my PSU and my onboard audio volume halved.

There's no way I'm going to switch back to see if the problem goes away because that connector was such a **ache to undo and reconnect.

mmis1000 28 days ago
Even a middle school teacher will tell you that putting a large amount of current through a wire is a bad idea. Remember P = I²R? It should be in the first few classes where you learn about electricity.

And Nvidia engineers decided to put current originally carried by 24 wires (or even 32) into a 12-wire connector without changing the connector size. Wow, what a surprise that it would burn.

I just don't understand how the f*k the whole thing got approved in the first place. It's just insane.
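
To put rough numbers on it (the contact resistance is an invented ballpark, purely for illustration):

    # P = I^2 * R per contact: the same total current through fewer parallel pins
    R = 0.005     # ohms per contact, invented ballpark
    total = 50.0  # amps, ~600W at 12V
    for pins in (24, 12, 6):
        i = total / pins
        print(f"{pins} pins: {i:.1f}A each, {i**2 * R * 1000:.0f} mW of heat per contact")
    # every halving of the pin count quadruples the heat in each contact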

knowitnone 29 days ago
Power requirements of GPU cards are increasing with each generation, and pushing power to them becomes more difficult. Electricity through a wire causes heat. More power = more heat, resulting in things melting. Even the cable would melt (or explode) if high enough current ran through it. People here are talking about 48 volts instead of 12 volts, which is one solution. But more cabling to distribute the current would be easier.
soramimo 29 days ago
Part of it is likely backward compatibility.
hoseja 28 days ago
Electrical engineers are extremely proud when they can enshittify a $1000 article by replacing a ¢15 component with a ten times worse ¢12 component.
colechristensen 29 days ago
If we’re going to keep up these kilowatt scale cards, we’re just going to need higher voltage rails on PSUs. I had a bunch of similar dumb power connector problems when my 4090 was new.
lazide 29 days ago
Or just give up and provide a wall power cord.
anticensor 28 days ago
You mean ATX230VO.
knowitnone 29 days ago
actually, not a bad idea.
Dylan16807 29 days ago
Is it effective to step down from 24/48 volts to the 1-2 range? Or would cards need two stages of voltage conversion?
mastax 29 days ago
It is tricky but possible. It is being done in the data center for certain hyperscalers. I wonder if Oxide Computer is doing it also, they mentioned a high voltage DC bus I think. https://epc-co.com/epc/about-epc/gan-talk-blog/post/14229/48...

19V is pretty standard in notebooks so 19-24V could probably be done with fairly little trouble. 48V would entail developing a whole new line of capacitors, inductors, power stages (transistors), and controllers.^1

^1: yes, of course, 48V compatible components exist for all of those. But the PC industry typically gets components developed for its specific requirements because it has the volume.

aaronmdjones 28 days ago
The computer industry already has these. 48-54V PoE (power over ethernet) that ends up being used to power equipment designed to operate at ranges of 1.8-3.3V is extremely common.
progbits 29 days ago
TL;DR: Yes there is a small difference in efficiency, but it's still plenty efficient.

You need a switching regulator for the current 12V anyway (as opposed to a linear regulator, which is much simpler but basically just burns power to reduce voltage), so the question is whether increasing the voltage 2-4x while keeping the same power requirements makes a difference.

- You need higher voltage rated components (mainly relevant for capacitors), potentially a bit more expensive but negligible for GPU costs. The loss due to the inductor will be higher too (same DC resistance but higher voltage => higher ripple current, more power), but this is negligible.

- On the other hand you need less thick traces/copper, and have more flexibility in the design.

For some concrete numbers, here is a random TI regulator datasheet [1], check out figure 45. At 5V 3A output, the difference in efficiency between 12V, 24V and 42V inputs is maybe 5%.

I think the main problem is that the industry needs to move together. Nvidia can't require 24/48V input before there is a standard for it and enough PSUs on the market offer it. This seemingly chicken-and-egg situation has happened in the past a bunch of times, so it's not a big problem, but it will take a while.

[1] https://www.ti.com/lit/ds/symlink/tps54340.pdf

anticensor 28 days ago
What if we added one more stage, making it into 230V→24V→5V→1.2V?
zamadatix 29 days ago
Or 2 cables.
gnabgib 29 days ago
Discussion (22 points, 1 day ago, 10 comments) https://news.ycombinator.com/item?id=42996057
xg15 29 days ago
At this point, I'm waiting for the first RTX generation that just comes with its own separate PSU and wall plug cable.
ddalex 29 days ago
People are hooking air con units to liquid GPU coolers

https://www.tomshardware.com/pc-components/liquid-cooling/rt...

jffry 28 days ago
For what it's worth, the "air conditioner" is just a giant radiator. From the linked bilibili video it's clear (at 1:20 mark) that they've repurposed the radiator, fan, and case from an air conditioner but there is no compressor.
ddalex 28 days ago
My calculations show that they should have a working compressor to achieve the reported constant 20C under load.
bunabhucan 28 days ago
To get a video card during lockdown someone gave me an invite to a micro center fan discord where people would post timely photos of what was on the shelves. They would also post photos of their crazy hardware, one of which was an air conditioner duct window insert feeding outdoor air to cool the GPUs (mining rig) and exhausting the hot air into a grow tent.
albrewer 28 days ago
I've read about this exact practice in one form or another since the Pentium 3 (sometimes just directly cooling with R134).
moosedev 28 days ago
Not quite RTX, but it almost happened, 25 years ago :P http://www.x86-secret.com/articles/divers/v5-6000/v56kgb-6.h...
misantroop 29 days ago
You only paid $2000 for it, what did you expect.
revnode 29 days ago
Ha! Find me where I can buy it for $2000.
tcdent 29 days ago
Yeah, how do people have these in their hands already? Everywhere I look is sold out.
martinpw 28 days ago
It's sold out to the people who have these in their hands already.
tuananh 29 days ago
it's actually $5090 where I live.
blangk 28 days ago
Getting a 5090 for $5090 would be an absolute steal where I live
tuananh 28 days ago
it's crazy :-o I can't imagine paying that much for a GPU.
Animats 29 days ago
There are plastics that can deal with high temperatures.[1] They're heavily used in automotive applications. They're not often seen inside computers.

Still, 50 amps inside a consumer computer is excessive. At some point it's time to go to a higher voltage and get the current down.

[1] https://www.plastopialtd.com/high-temperature-plastics/

nubinetwork 29 days ago
Someone remind me again why GPUs need 600 watts? I never liked the concept of having to plug a power cable into a GPU, but these new connectors are just terrible...
Dylan16807 29 days ago
> Someone remind me again why GPUs need 600 watts?

Imagine a GPU that uses fewer watts. Now imagine someone in charge of the high end models makes it bigger until they hit a limit.

That limit is generally either chip size or power, and chips have gotten so dense that it's usually not chip size.

That's why the very top GPU uses as much power as you can get it.

perching_aix 28 days ago
Because it's a competitive industry, and further efficiency gains are either not available ("can't do" option) or were deemed strategically unsound to roll out for now ("won't do" option), possibly both. It's an active dimension of sprawl.
kookamamie 29 days ago
The GPUs require additional power, as the PCI-e slot they're connected to can only carry so much.

Obviously there are GPUs without aux power connectors, but they're considered low-tier.

imp0cat 28 days ago
Power overwhelming! ;)
eqvinox 29 days ago
der8auer got his hands on the actual card, cable and PSU: https://www.youtube.com/watch?v=Ndmoi1s0ZaY (I'm assuming the content is identical to the German https://www.youtube.com/watch?v=puQ3ayJKWds - I haven't watched the English one)

Notable is that on the PSU connector side, 5 pins show heat damage. That means at minimum those 5 must have been carrying some current; i.e. only one of the 6 connections could have failed completely open.

On the PSU one of the ground pins was melted into the PSU connector, this should allow verifying if the plug was fully inserted by a lab disassembling and cross-sectioning it.

karmakaze 29 days ago
The extended title and subtitle say a lot.

> -- Uneven current distribution likely the culprit

> One wire was spotted carrying 22A, more than double the max spec.
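
Back-of-envelope on why that one wire is so dangerous (assuming a ~5 mΩ contact resistance, an invented ballpark):

    # Heat dissipated in a single pin contact at various currents, P = I^2 * R
    R = 0.005  # ohms, assumed contact resistance
    for amps in (8.3, 9.5, 22.0):  # even 600W share, approx. per-pin spec, observed
        print(f"{amps}A -> {amps**2 * R:.2f} W in one tiny contact")
    # 22A dissipates ~7x the heat of an even share, concentrated in one pin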

fennecfoxy 21 days ago
Idk why we haven't migrated to better power standards for PC/computers.

I loathe those molex connectors, especially the 24-pin mobo one; it's just ridiculously massive/overcomplicated.

Like, instead of the very rectangular 24-pin with a flat but wiiiiide cable, why don't we have a circular connector with a circular cable that is a lot more manageable for mini-itx builds.

enricojr 28 days ago
I've believed for a while now that GPUs are eventually going to get so big they'd need to be external to satisfy power and cooling requirements, and this is just more proof of that IMO.

There's a meme image somewhere out there of a GPU crudely photoshopped to look like a split-type AC condenser unit, i.e the kind you mount outside your house. It's pretty much how I picture GPUs will end up if things keep going the way they are.

wnevets 29 days ago
Why are people using 3rd party cables after the 40 series disaster?
NoPicklez 28 days ago
Because often it is the case that third party cables are of higher quality than stock cables, if you purchase from reputable vendors
numpad0 28 days ago
Gamer judgement on "quality" is often misplaced and skewed towards cosmetic novelty though...
NoPicklez 27 days ago
Often, but not always; whilst third party cables may bring cosmetic appeal, many third parties provide quality cables.

Yes if you buy random cosmetic cables from random Amazon vendors you're running a heavy risk. However there are reputable 3rd party companies that provide both quality cables and cosmetic customization built for your particular power supply.

The same goes for any 3rd party products/accessories.

dylan604 29 days ago
they're cheaper?
wnevets 29 days ago
That sounds like someone buying a Lamborghini then trying to save money by fueling with regular gas.
bigstrat2003 28 days ago
But with gas the difference is at least somewhat legible to the layman. You see the octane rating and know you're getting something for that extra money. With cables, they all look the same to the uninitiated; after all, it connects, doesn't it? So people are going to go for the cheaper option because they can't discern a reason not to.
wnevets 28 days ago
The type of person who owns a 5090 must also be the type of person who knows about the 40 series and its cable problems. Little Timmy building his first PC wasn't camping outside of Microcenter for a week just for a chance to spend $2000 on a video card.
dylan604 29 days ago
Yeah, the people that buy a Lamborghini are some of the "cheapest" people and will squabble over the smallest of things.
tlb 27 days ago
I've designed PCBs using connectors with multiple parallel wires to carry high currents. It's hard to do, and very easy to mess up during later PCB changes. You have to match trace resistances from each pin to the destination. The resistance in the traces is probably higher than the resistance in the wires, so the wires don't help balance the current much.

It's easy to mess up during minor PCB changes because the netlist doesn't capture the important bit. An autorouter would happily connect the pins of the connector together and cause uneven current sharing.
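To make the imbalance concrete, here's a rough current-divider sketch in Python (the contact resistances are made-up, illustrative values):

    # How current divides across parallel pins: each pin carries a share
    # proportional to its conductance (1/R). Illustrative resistances only.
    total_a = 50.0  # ~600 W at 12 V

    def current_split(resistances_mohm, total):
        g = [1.0 / r for r in resistances_mohm]
        return [total * gi / sum(g) for gi in g]

    healthy = [5.0] * 6            # six pins, 5 mOhm contact each
    degraded = [5.0] * 5 + [50.0]  # one contact corroded to 50 mOhm

    print(current_split(healthy, total_a))   # ~8.3 A on every pin
    print(current_split(degraded, total_a))  # ~9.8 A on the good pins

One bad contact doesn't just starve its own pin; it quietly pushes every other pin well past its nominal rating.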

nickpsecurity 29 days ago
This needs a name along the lines of The Red Ring of Death. What will we call it?
gamesbrainiac 29 days ago
Black goo of doom?
28 days ago
kristianp 28 days ago
On a side note, there are no 5090s on amazon.com, except as part of a system. I suppose this is because any that become available are purchased immediately, and Amazon doesn't display out-of-stock items in its search results? There are 5080s.

https://www.amazon.com/s?k=nvidia+rtx+5090+32gb+ddr7&i=elect...

wasabinator 28 days ago
For the current running through these wires, I would never have imagined that the gauge they chose and the pin spacing could work. It's nuts.
snvzz 28 days ago
I hope these connectors are transitional and limited to NVIDIA, and not something that the industry adopts moving forward.

Instead, a proper standard should be developed.

HexPhantom 27 days ago
So after all the drama with the 4090, this is where we've landed? Another generation, another round of melted connectors.
Decabytes 28 days ago
I really hope this means we rein back in the power on these high-end cards. 600 watts is just too much for a connector like this. 450 watts seems much safer (though I wish the spec as a whole had better margins). Nvidia really just tried to pass the 5090 off as a new generation by pumping more power through it, and it shows.
bastard_op 28 days ago
This has been the situation since the last 4090s melting as well. Just check YouTube channels for daily videos of these repairs. It's fun to watch, less so if you actually bought one.

I wouldn't buy one after watching these, and haven't, period. Everyone else is a beta tester, now two generations into failure and still lining up in tents.

JimmyAustin 29 days ago
FWIW, I had the same issue with my 3090 (though I believe that uses a slightly different port?). I was using a custom cable like this guy. Nvidia replaced it under warranty, and I went back to using the (ugly) provided adapter.
marcyb5st 29 days ago
I hope a brand that produces both PSUs and GPUs develops a higher-voltage rail, and a card to go with it, as an open standard.

Wishful thinking, I know.

Especially because I don't even know if they can drift from the nVidia/AMD specs that much before being sanctioned or something.

Yeah, they will be more expensive, but I'd rather pay a few bucks more and be safer, rather than worry about burning my house down.

mastax 28 days ago
IIRC at the last PC trade show (the one in Taipei) one of the GPU+Motherboard makers was showing a prototype system where the PCIe slot had an additional slot behind it just for power, so no cables were required. Of course then that additional power needs to get into (and flow across) the motherboard somehow.
marcyb5st 28 days ago
That would be amazing, even though it would now require 3 non-standard components (PSU, motherboard, and GPU). I'd probably still go for it.
NotYourLawyer 28 days ago
They should pigtail the GPU straight into the AC wall voltage and build in a dedicated power supply to convert to DC.
renewiltord 29 days ago
The thing takes enormous power. Some people had trouble with the 4090s too but I haven’t and I run a shit ton of them.
wruza 29 days ago
Why can’t they just use a cable+socket similar to the PSU-to-wall-socket one? It’s not even in the multiple-kilowatt range.
ch_123 29 days ago
I'm aware of at least one card which did this, which was a custom OEM design (specifically from Asus) which put two Geforce 7000-series GPUs on a single card: https://pcper.com/2005/10/asus-n7800gt-dual-review-7800-sli-...

Thankfully, I've never seen something like it since then.

voxadam 29 days ago
3dfx did it even earlier with the Voodoo 5 6000 all the way back in 2000.[1][2]

[1] https://www.extremetech.com/gaming/325466-i-wrote-the-first-...

[2] https://www.techpowerup.com/gpu-specs/voodoo5-6000.c3536

hgomersall 29 days ago
It's 12V, which means the currents are very high (like >40A). It feels like perhaps they need higher voltage power supplies.
vvv5 29 days ago
I think the problem with this is that chips can only use a relatively low voltage around 1-1.5 volts. So if you supply 48 volts to the card it still has to be stepped down and this means more components and heat dissipation on the card. We are basically arriving at the idea of graphic cards having their own integrated PSUs, but this doesn't fit well with the current physical design of computers.
CharlieDigital 29 days ago
I feel like it could work with an external power brick and the card exposing a dedicated external port.
Dylan16807 29 days ago
For what purpose? That gives you similar cabling problems to internal connections, but now that cable is far less protected.
CharlieDigital 29 days ago
It would be a different connector and it moves some mass and heat outside of the case.
Dylan16807 29 days ago
If you want a different connector then just use a different connector.

Moving 5% of the mass and 1% of the heat outside the case is a bad thing. One of the main purposes of the case is to be one big hunk of mass.

sidewndr46 29 days ago
The cards are supplied 12 volts today, which is stepped down. There isn't a significant difference between 48 volts and 12 volts other than the amperage.
29 days ago
vvv5 29 days ago
That connector (C13) is rated for 15 amps. That's 180W at 12V.
wruza 29 days ago
So heat depends on current, not on power. My research: https://www.reddit.com/r/ElectricalEngineering/comments/15xf...
hgomersall 29 days ago
No, it depends on power: the power dissipated by the cable, not the power through the cable. The power dissipated is i^2 * r, where r is the resistance of the cable and i, crucially, is the current through the cable, which depends on the power it's supplying (which with a resistive load, in this case, is i * 12V).
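A quick numeric illustration (r_wire is an assumed, illustrative resistance, not a measured value):

    # Cable heating scales with the square of the current: P = I^2 * R
    r_wire = 0.01  # ohms, assumed resistance of one conductor path

    for load_w in (150, 300, 600):
        i = load_w / 12.0          # amps drawn at 12 V
        heat_w = i ** 2 * r_wire   # watts dissipated in the wire itself
        print(f"{load_w} W load -> {i:.1f} A -> {heat_w:.1f} W of heat in the wire")

Quadrupling the load (150 W to 600 W) multiplies the heat in the wire by sixteen.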
gorjusborg 29 days ago
Yes, but it seems the connectors, not the entire cable, are the high-resistance point.

Using a larger diameter wire would drop resistance in the cable, but if it has to go through the same connector, it will likely still get hot.

Also, NVDA might be telling the truth about poorly seated connectors; those could raise resistance and heat significantly. It could also be handwaving away a business decision to move forward with a design that has too little margin.

cwillu 29 days ago
And the card requires up to 600 watts, which is 50 amps if the supply is 12 volts.
dcrazy 29 days ago
I’m not sure what you’re trying to suggest here… the PSU is also connected to the wall with a C13 connector, and is able to supply 600W at 12VDC to the card.
vvv5 28 days ago
Yes, but the voltage on the wall side of the psu is not 12V it's 120-240V. And 600W at 120V requires only 5A of current. It's easier to handle more power at higher voltages because heat produced depends on current and not on voltage. If you take that puny C13 connector and move it to the 12V side with its 50A it would just melt.
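The same arithmetic, spelled out (normalized to the 5 A / 120 V case; resistive heating scales with the square of the current):

    # Same 600 W load at different supply voltages
    for v in (12, 48, 120, 240):
        i = 600.0 / v
        print(f"{v} V -> {i:.1f} A, relative cable heating: {(i / 5.0) ** 2:.2f}x")

At 12 V the cable dissipates 100x the heat it would at 120 V for the same power delivered.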
ChoGGi 28 days ago
Bring back screw in connectors.
eemil 28 days ago
Why don't they just put two connectors on the 90-series cards?
anthk 29 days ago
150 deg? You can nearly bake a pizza with that.
Havoc 29 days ago
What’s wild is that this is a company clearly capable of designing highly complex things with numerous tradeoffs, challenges and unknowns. And then the fuckin cable is the issue. Repeatedly.
DannyBee 29 days ago
This is because they are trying to parallel like 50 amps (it's 12 volts IIRC) over a few conductors to get to 600 watts.

If it becomes unbalanced due to any number of reasons, none of those individual cables can come close to handling it - they will all generate enough heat to melt lots of things.

Conservatively, they'd have to be 8awg each to be able to handle the full load without melting if they ended up taking the full load onto a single conductor.

That's the crappy part about low voltages.

If the voltage was higher (i believe 'low volt' classification tops out at 48v), it'd be more dangerous to deal with in some aspects, but it'd be easier to have small cables that won't melt.
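To put numbers on that, here's a sketch using the standard AWG diameter formula and copper's resistivity (per-metre heating if a single conductor had to carry all 50 A):

    import math

    RHO_CU = 1.68e-8  # ohm*m, resistivity of copper at ~20 C

    def awg_resistance_per_m(awg):
        d_mm = 0.127 * 92 ** ((36 - awg) / 39)  # standard AWG diameter formula
        area_m2 = math.pi * (d_mm / 2000) ** 2
        return RHO_CU / area_m2

    for awg in (16, 12, 8):
        r = awg_resistance_per_m(awg)
        heat = 50 ** 2 * r  # watts per metre at the full 50 A
        print(f"{awg} AWG: {r * 1000:.1f} mOhm/m -> {heat:.0f} W/m at 50 A")

16 AWG would dump roughly 32 W of heat into every metre of wire at that current; 8 AWG brings it down to about 5 W/m.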

rollcat 29 days ago
Can we talk about how absolutely terrifying that 600W figure is? We're not transcoding or generating slop as the primary use case, we're playing computer games. What was wrong with the previous-generation graphics that we still need to push for more raw performance, rather than reducing power draw?
ninth_ant 28 days ago
What was “wrong” is that enough people are willing to pay exorbitant prices for the highest-end gear that Nvidia can do most anything they want as long as their products have the best numbers.

Other companies do make products with lower power draw — Apple in particular has some good stuff in this space for people who need it for AI and not gaming. And even in the gaming space, you have many options for good products — but people who apparently have money to burn want the best at any cost.

xnyan 28 days ago
We must be thinking about very different types of games, because even though I’m completely bought into the Apple ecosystem and love my M3 macbook pro and mac mini, I have a windows gaming PC sitting in the corner because very few titles I’d want to play are available on the mac.
ninth_ant 28 days ago
Perhaps I phrased it poorly but I was trying to separate out GPU workloads for AI and gaming. The apple ecosystem is very poor for gaming overall, but in their ML and LLM related abilities they have very good performance at a fraction of the power draw of a modern nvidia card.

So the point being, nvidia is optimizing for gamers who are willing to throw top dollar at the best gear, regardless of power draw. But it’s a choice, and other manufacturers can make different tradeoffs.

28 days ago
truncate 28 days ago
Is gaming even the primary use case for the *090 series anymore? The 5070, which is probably the most popular gaming card, is 250W. If I recall correctly it can push 4K @ 60fps for most games.

But yes, I do agree that TDPs for GPUs are getting ridiculous.

wtallis 28 days ago
4k 60Hz is still largely unachievable for even top of the line cards when testing recent games with effects like raytracing turned up. For example, an RTX 4090 can run Cyberpunk 2077 at 4k at over 60fps with the Ray Tracing Low preset, but not any of the higher presets.

However, it's easy to get misled into thinking that 4k60 gaming is easily achieved by more mainstream hardware, because games these days are usually cheating by default using upscaling and frame interpolation to artificially inflate the reported resolution and frame rate without actually achieving the image quality that those numbers imply.

Gaming is still a class of workloads where the demand for more GPU performance is effectively unlimited, and there's no nearby threshold of "good enough" beyond which further quality improvements would be imperceptible to humans. It's not like audio where we've long since passed the limits of human perception.

0x457 28 days ago
4k@60 isn't all that good today and 5070 can do it with reduced graphics in modern games.

x90 cards IMO are either bought by people that absolutely need them (yay market segmentation) or simply because they can (affording is another story) and want to have the best of the latest.

jorgemf 29 days ago
This generation seems to be getting its performance from more power and more cores. It's not really an architectural change, just packing more things into the chip that require more power.
jgalt212 28 days ago
Too true. I've been looking to replace my 1080. This was a beast in 2016, but the only way I can get a more performant card these days is to double the power draw. That's not really progress.
UberMouse 28 days ago
Then get a modern GPU and limit the power to what your 1080 draws. It will still be significantly faster. GPU power is out of control these days; if you knock 10% off the power budget you generally only lose a few percent of performance.

Cutting the 5090 down from 575w to 400w is a 10% perf decrease.

jgalt212 28 days ago
Even if I knew how to do that, I'd still need double the power connectors I currently have.
UberMouse 28 days ago
The 5090 was an example; the same process applies to lower-tier GPUs that don't require extra power cables. I.e., a 3080 with the same power budget as a 1080 would run circles around it (a 1080 with its default max power limit of 180W gets approx. 7000 in TimeSpy; a 3090 limited to 150W gets approx. 11500). Limiting the power budget is very simple with tools such as MSI Afterburner and others in the same space.
0x457 28 days ago
That's because the 1080 and the whole 10xx generation were the pinnacle, the best GPUs Nvidia ever made. Nvidia won't make the same mistake any time soon.
NoPicklez 28 days ago
Because previous-generation graphics didn't include ray/path tracing or DLSS technologies. They had baked-in lighting and shaders that required much less compute to generate. Now that the cards do include them, they require more computing power, which (we assume) Nvidia hasn't been able to deliver through efficiency improvements alone, but simply by pushing more power through the card.

It's what Intel has been grappling with too; their CPUs are drawing more and more wattage at the top end.

_carbyau_ 28 days ago
Take a step back, perspectively.

1. People want their desktop computers to be fast. These are not made to be portable battery sippers. Moar powa!!!

2. People have a power point at the wall to plug their appliances into.

Ergo, desktop computers will tend towards 2000w+ devices.

"Insane!" you may cry. But a look at the history of car manufacture suggests that the market will dictate the trend. And in similar fashion, you will be able to buy your overpowered beast of a machine, and idle it to do what you need day to day.

rollcat 28 days ago
Well exactly my point. I'm "still" using an M1 Mac mini as my daily driver. 6W idle. In a desktop. It is crazy fast compared to the Intel Macs of the year before, but the writing was already on the wall: this is the new low-end, the entry level.

Still? It runs Baldur's Gate 3. Not smoothly, but it's playable. I don't have an M4 Pro Max Ultra Plus around to compare the apples to apples, but I'd expect both perf and perf per watt to be even better.

If one trillion dollar company can manage this, why not the other?

_carbyau_ 28 days ago
I imagine it's using more than 6W to play Baldur's Gate 3, but still, I get that it is far more efficient for the work being done. I'm a bit irked that my desktop idles at 35W. But then I recall growing up with 60W light bulbs as the default room lighting...

But other people will look at that and say "Not smooth = unplayable. If you can do so much with 100W or less, then let's dial that up to 2000W and make my eyes bleed!"

We're not the ones pushing the limits of the market it seems.

makeitdouble 28 days ago
Is your argument that computer games don't merit better performance (e.g. pushing further into 4K), and/or that they shouldn't expand beyond the current crop and we should give up on better VR/AR?
xxpor 28 days ago
Why should we reduce power draw? We live in an age of abundance.
fredoliveira 28 days ago
Can you point me to the abundance? Because I sure can point you to the consequences of thinking we live in an age of abundance.
xxpor 27 days ago
Disposable income has never been higher? One of the world's biggest health problems is everyone eating too much food?

Lack of electricity production is entirely a human choice at this point. There's no need to output carbon to make it happen.

KeplerBoy 29 days ago
If only we had connectors which could actually handle such currents. Maybe something along the lines of an XT90, but no, Nvidia somehow wants to save a bit of space or weight on their huge brick of a card. I don't get it.
michaelt 29 days ago
The USB-C connectors on laptops and phones can deliver 240 watts [1] in an 8.4x2.7mm connector.

12VHPWR is 8.4x20.8mm so it's got 7.7x the cross-sectional area but transmits only 2.5x the power. And 12VHPWR also has the substantial advantage that GPUs have fans and airflow aplenty.

So I can see why someone looking at the product might have thought the connector could reasonably be shrunk.

Of course, the trick USB-C uses is to deliver 5A at 48v, instead of 50A at 12v

[1] https://en.wikipedia.org/wiki/USB-C#Power_delivery
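A quick sanity check of those numbers:

    # Connector face area vs power, from the figures above
    usb_c_mm2 = 8.4 * 2.7
    hpwr_mm2 = 8.4 * 20.8
    print(f"{hpwr_mm2 / usb_c_mm2:.1f}x the area")   # ~7.7x
    print(f"{600 / 240:.1f}x the power")             # 2.5x
    print(f"{240 / usb_c_mm2:.1f} vs {600 / hpwr_mm2:.1f} W/mm^2")  # ~10.6 vs ~3.4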

6SixTy 28 days ago
Nobody thought that they could push 50A at 12V through half the connector. It's management wanting to push industrial design as opposed to safety. They made a new connector borrowing from an already existing design, pushed up the on-paper amperage by 3A, never changed the contact resistance, and made the parent connector push current near its limit (10.5A max vs 8.3A). And oh, the insertion force is so, so much higher than ever before. Previous PCIe connectors push about 4A through a connector designed for about 13A.

Also worth mentioning: the 12VHPWR connector was being market-tested during Ampere, the same generation where Nvidia doubled down on the industrial design of their 1st-party cards.

Also, there are zero devices out there that actually deliver or take 240W over USB-C. Texas Instruments literally only released the datasheets for a pivotal supporting IC within the last 6 months.

crote 28 days ago
> Also there's zero devices out there that actually deliver or take 240W over USB-C. Texas Instruments literally only released the datasheets for a pivotal supporting IC within the last 6 months.

The 16-inch Framework laptop can take 240W power. For chargers, the Delta Electronics ADP-240KB is an option. Some Framework users have already tried the combination.

michaelt 28 days ago
> Nobody thought that they could push 50A at 12V through half the connector.

If you're saying that the connector doesn't have a 2x safety factor then I'd agree, sure.

But I can see how the connector passed through the design reviews for the 40x0-era cards. The cables are thick enough. The pins seem adequate, especially assuming any GPU that's drawing maximum power will have its fans producing lots of airflow; plenty of connectors get a little warm. There's no risk of partial insertion, because the connector is keyed, there's a plastic latch that engages with a click, and there are four extra sense pins. I can see how that would have seemed like a belt-and-braces approach.

Obviously after the first round of melted connectors they should have fixed things properly.

I'm just saying to me this seems like regular negligence, rather than gross negligence.

Miraste 28 days ago
The spec may say it, but I've never encountered a USB-C cable that claims to support 240 watts. I suspect if machines that tried to draw 240W over USB-C were widespread, we would see a lot of melted cables and fires. There are enough of them already with lower power draw charging.
DebtDeflation 28 days ago
Search Amazon for "240W USB" and you get multiple pages of results for cables.

A few years ago there was a recall of OnePlus cables that were melting and catching fire, I had 2 of them and both melted.

But yes 240W/48V/5A is insane for a spec that was originally designed for 0.5W/5V/100mA. I suspect this is the limit for USB charging as anything over 48V is considered a shock hazard by UL and 5A is already at the very top of the 3-5A limit of 20AWG for fire safety.

makeitdouble 28 days ago
We've had a variety of 140W laptops for a few years already, so we've been far beyond the original spec for a while now.

The advantage of USB-C is the power negotiation, so allowing the higher rating only on circuits that actually support it should be doable and relatively safe.

The OnePlus cables melting give me the same impression as when hair dryer power cables melt: it's a solved problem; the onus is on the maker.

makeitdouble 28 days ago
240W cables are here but at around a 10x price premium. Also cables are chipped so e.g. a 100W cable won't allow 240 in the first place.

Users needing the 240W have a whole chain of specialized devices, so buying a premium cable is also not much of an issue.

kersplody 28 days ago
The connector could reasonably be shrunk. It just now has essentially no design margin so any minor issue immediately becomes major! 50A DC is serious current to be treated with respect. 5A DC is sanely manageable.
Joker_vD 29 days ago
If only we had electrical and thermal fuses that could be used to protect the connectors and wires.
baq 29 days ago
At these wattages just give it its own mains plug.
cesarb 29 days ago
> At these wattages just give it its own mains plug.

You might think you're joking, but there are gamer cases with space for two PSUs, and motherboards which can control a secondary PSU (turning both PSUs on and off together). With a computer built like that, you have two mains plugs, and the second PSU (thus the second mains plug) is usually dedicated to the graphics card(s).

immibis 28 days ago
I've done this, without a case, not because I actually used huge amounts of power, but because neither PSU had the right combination of connectors.

The second one was turned on with a paperclip, obviously.

Turns out graphics cards and hard drives are completely fine with receiving power but no data link. They just sit there (sometimes with fans at max speed by default!) until the rest of the PC comes online.

0x457 28 days ago
You can also hook up a little thingy that takes SATA power on one side and 24-pin on the other. As soon as there is power on the SATA side, the relay switches and the second PSU turns on.
aaronmdjones 28 days ago
This may not be fast enough for some add-in cards. It would be better to connect the PS_ON (green) cable from both ATX24 connectors together, so that the motherboard turns on both PSUs simultaneously.

This would still have the disadvantage that the PWROK (grey) cable from the second PSU would not be monitored by the motherboard, leaving the machine prone to partial reset quirks during brown-outs. Normally a motherboard will shut down when PWROK is deasserted, and refuse to come out of reset until it returns.

Dylan16807 28 days ago
The joke actually removes this connector problem though, while a secondary PSU does not.
ryao 28 days ago
Server systems already work like this for redundancy.
aaronmdjones 28 days ago
No they don't. Server-grade redundant PSUs usually use a CRPS form factor, where individual PSU modules slot into a common multi-module PSU housing known as a PDB (power distribution board). Each module typically outputs only 12V and the PDB manages the down-conversion to 5V, 5VSB, and 3.3V. From there, there is only one set of power cables between the PDB and the system's components including the motherboard and any PCIe add-in cards. Additionally, there is a PMBus cable between the PDB and the motherboard so that the operating system and the motherboard's remote management interface (e.g. IPMI) can monitor the status of each individual module (AC power present, measured power input, measured power output, measured voltage input, measured frequency input, fan speeds, temperature, which module is currently powering the system, etc).

PSUs can be removed from the PDB and replaced and reconnected to a source of power without having to shut down the machine or even remove the case lid. You don't even need to slide the machine out of the rack if you can get to the rear.

Example:

https://www.fspgb.co.uk/_files/ugd/ea9ce5_d90a79af31f84cd59d...

ryao 28 days ago
You can have the machine draw twice the amount from the server PSUs. It kills the redundancy, but it is supposed to work.
aaronmdjones 28 days ago
But that still only happens over one set of power cables, from the PDB. The post you replied to described using a separate PSU with separate component power cables to power specific components. Current sharing in server PSUs is handled by every PSU equally powering all of the components.

Edit: For example, in a 3+1 redundant setting, 3 PSUs would be active and contributing toward 1/3 of the total load current each; 1 PSU would be in cold standby, ready to take over if 1 of the others fails or is taken offline.

sho_hn 28 days ago
Not without precedent: The Voodoo 5 6000 by 3dfx came with its own external PSU almost 25 years ago.

https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQqLXew...

Joker_vD 29 days ago
Also put it in a separate case, and give it an OcuLink cable to attach to the main desktop tower. I suspect that's exactly where we're heading, to be fair.
dylan604 29 days ago
I've built video rigs that did just that: an external expansion chassis into which you could put additional PCIe cards when the host only had 3 slots. The whole eGPU thing used to seem cute, but it might have been more foreshadowing than we realized.
fooker 29 days ago
Have you measured latency?

In modern (last 4 years, approximately) GPUs, physical wiring distance is starting to contribute substantially to latency.

KeplerBoy 28 days ago
Latency due to wiring distances is far from being an issue in these scenarios. The signals travel at the speed of light: 186 miles per millisecond.

The problem you will encounter with PCIe gen5 risers is signal integrity.

fooker 28 days ago
> The signals travel at the speed of light

It's about 75-90% the speed of light, but even that's too slow.

Modern hardware components are getting to latencies in single-digit nanoseconds. Light travels about 30 cm in a nanosecond, so extending a PCIe port to a different box is going to make a measurable difference.
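A back-of-envelope version of that (the 70% propagation speed is an assumption, typical for cables):

    # Extra round-trip delay from 1 m of external cabling
    c = 3.0e8    # m/s, speed of light in vacuum
    v = 0.7 * c  # assumed signal propagation speed in the cable
    extra_ns = 2 * 1.0 / v * 1e9
    print(f"{extra_ns:.1f} ns added per round trip")  # ~9.5 ns

Small against a frame, but comparable to the single-digit-nanosecond component latencies mentioned above.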

rcxdude 28 days ago
A single round trip isn't going to register, but there are multiple in a frame, so it's not inconceivable that it could add up at some point. I would like to see it demonstrated, though.
dylan604 28 days ago
Without one of these rigs, you would not be able to do much at all because of the limited PCIe slots in the host. "Not much" here means render times of hours per clip, or even longer. With the external chassis and additional cards, you could achieve enough bandwidth for realtime playback. A specific workflow would have been taking RED RAW camera footage that takes heavy compute to debayer, running whatever color correction on the video, running any additional filters like noise removal, and finally writing the output back to something like ProRes. Without the chassis, not happening; with the chassis, you got realtime playback during the session and faster than realtime during rendering/exporting.

Also, these were vital to systems like the Mac Pro trashcan that had 0 PCIe slots. It was a horrible system, and everyone I know who had one reverted to their late-2012 cheese-grater systems with the chassis.

There was another guy I know who was building his own 3D render rig for home experimental use when those render engines started using GPUs. He built a 220V system that he'd unplug the dryer to use. It had way more GPU cards than he had slots for, using PCIe splitters. Again, these were not used to draw realtime graphics to a screen. They were solely compute nodes for the renderer. He was running circles around the CPU-only render farm nodes.

People think that the PCIe lanes are the limiting factor, but again, that's just for getting the GPU's data back to the screen. As compute nodes, you do not need full lanes to get the benefits. But for doubting Thomas types like you, I'm sure my anecdote isn't worth much.

dylan604 29 days ago
There were no latency concerns. These were video rigs, not realtime shoot-'em-ups. They were compute devices running color correction and other filters, not pushing a video signal to a monitor at 60fps/240Hz-refresh nonsense. These did real work /s
fooker 28 days ago
Ah makes sense, the other kind of graphics!
behringer 29 days ago
We could also do like we do in car audio: just two big fat power cables, positive and negative, 4 AWG or even bigger, with a nice crimped ferrule or a lug bolted on.
KeplerBoy 29 days ago
True. At these prices they might as well include a power brick and take responsibility for the current-carrying path from the wall to the die.
mschuster91 28 days ago
> If only we had connectors which could actually handle such currents.

The problem isn't connectors; the problem (fundamentally) is sharing electrical load between multiple conductors.

Sure, you can run 16A over 1.5 mm² wires, and 32A over 2.5 mm² (taken from [1]; yes, it's for 230V, but that doesn't matter: the current is what's important, not the voltage). And theoretically you could run 32A over 2x 1.5 mm² (you'd end up with 3 mm² of cross section), but it's not allowed by code. When, for any reason, either of the two legs disconnects entirely or has increased resistance, e.g. due to corrosion or a loose screw / wire nut (hence, please always use Wago-style clamps; screws and wire nuts are not safe, even if torqued properly, which most people don't do), suddenly the other leg has to carry (much) more current than it's designed for, and you risk anything from molten connectors to an outright fire. And that is what Nvidia is currently running into, together with bad connections (e.g. due to dirt ingress).

The correct solution would be for the GPU to not tie together the incoming individual 12VHPWR pins on a single plane right at the connector input but to use MOSFETs and current/voltage sense to detect stuff like different current availability (at least it used to be the case with older GPUs that there were multiple ways to supply them with power and only, say, one of two connectors on the GPU being used) or overcurrents due to something going bad. But that adds complexity and, at least for the overcurrent protection, yet another microcontroller plus one ADC for each incoming power pin.
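A minimal sketch of what that per-pin supervision logic might look like; everything here is hypothetical (the pin names, the thresholds, and read_current_a are made up for illustration):

    # Hypothetical firmware-style check: one current reading per 12 V pin
    PIN_LIMIT_A = 9.2  # assumed per-pin ceiling, just under the contact rating
    PINS = ["12V_1", "12V_2", "12V_3", "12V_4", "12V_5", "12V_6"]

    def check_power_pins(read_current_a):
        """read_current_a: callable mapping a pin name to measured amps."""
        currents = {pin: read_current_a(pin) for pin in PINS}
        for pin, amps in currents.items():
            if amps > PIN_LIMIT_A:
                return ("fault", pin, amps)  # throttle the GPU or cut power
        spread = max(currents.values()) - min(currents.values())
        if spread > 3.0:  # badly unbalanced even if no single pin is over
            return ("imbalance", None, spread)
        return ("ok", None, None)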

Alternatively each 12VHPWR pair could get its own (!) DC-DC converter down to 1V2 or whatever the GPU chip actually needs, but again that also needs a bunch of associated circuitry.

Another and even more annoying issue by the way is grounding - because all the electricity that comes in also wants to go back to the PSU and it can take any number of paths - the PCIe connector, the metal backplate, the 12VHPWR extra connector, via the shield of a DP cable that goes to a Thunderbolt adapter card's video input to that card, via the SLI connector to the other GPU and its ground...

Electricity is fun!

[1] https://stex24.com/de/ratgeber/strombelastbarkeit

Dylan16807 28 days ago
> The correct solution would be for the GPU to not tie together the incoming individual 12VHPWR pins on a single plane right at the connector input but to use MOSFETs and current/voltage sense to detect stuff like different current availability (at least it used to be the case with older GPUs that there were multiple ways to supply them with power and only, say, one of two connectors on the GPU being used) or overcurrents due to something going bad. But that adds complexity and, at least for the overcurrent protection, yet another microcontroller plus one ADC for each incoming power pin.

> Alternatively each 12VHPWR pair could get its own (!) DC-DC converter down to 1V2 or whatever the GPU chip actually needs, but again that also needs a bunch of associated circuitry.

So as you say, monitoring multiple inputs happened on the older xx90s, and most cards still do it. It's not hard.

Multiple DC-DC converters are something every GPU has; that's the only way to get enough current. So all you have to do is connect them to specific pins.

mschuster91 28 days ago
> It's not hard

It still is because in the end you're dealing with dozens of amps on the "high" voltage side and hundreds of amps on the "low" (GPU chip) voltage side. The slightest fuck-up can and will have disastrous consequences.

GPUs these days are on the edge of physics when it comes to supplying them with power.

Dylan16807 27 days ago
Let me rephrase.

Doing the power conversion is hard.

Realizing that you already have several DC converters sharing the load, and deciding to power specific converters with specific pins, is comparatively easy. And the 3090 did it.

short_sells_poo 29 days ago
This is the top-end halo product. What's wrong with pushing the envelope? Should we all play Tetris because "what's wrong with block graphics?"

I'm not defending the shitty design here, but I'm all for always pushing the boundaries.

KeplerBoy 28 days ago
Pushing the boundaries of a simple connector is not innovation, that's just reckless and a fire hazard.
29 days ago
lr1970 29 days ago
> If the voltage was higher (i believe 'low volt' classification tops out at 48v)

Yep, 48V through sensitive parts of the body could be unpleasant but 24V is almost as safe as 12V. Why didn't they use 24V and 25A to achieve required 600W of power instead of 12V and 50A?

armada651 29 days ago
Because no PC power supply has a 24V rail and even though there's a fancy new connector you can still use an adapter to get the old-fashioned plugs.

After all you don't want to limit your market to people who can afford to buy both your most expensive GPU and a new power supply. In the PC market backwards compatibility is king.

nullifidian 29 days ago
>Because no PC power supply has a 24V rail

Servers with NVIDIA H200 GPUs (Supermicro ones, for example) have power supplies with a 54-volt rail, since that GPU requires it. I can easily imagine a premium ATX (non-mandatory, optional) variant that has a higher-voltage rail for people with powerful GPUs. Additional cost shouldn't be an issue considering that top-level GPUs that would need such a rail cost absurd money nowadays.

armada651 29 days ago
A server is not a personal computer. We are talking about enthusiast GPUs here, whose buyers will install them into their existing setups, whereas servers are usually sold as a unit including the power supply.

> Additional cost shouldn't be an issue considering top level GPUs that would need such rail cost absurd money nowadays.

Bold of you to assume that Nvidia would be willing to cut into its margin to provide an optional feature with no marketable benefit other than electrical safety.

nullifidian 28 days ago
>Nvidia would be willing to cut into its margin to provide

Why would that be optional on a top of the line GPU that requires it? NVIDIA has nothing to do with it. I'm talking about defining an extended ATX standard, that covers PSUs, and it would be optional in the product lines of PSU manufacturers. The 12VHPWR connector support in PSUs is already a premium thing, they just didn't go far enough.

ObscureScience 28 days ago
Electrical safety -> not destroying your GPU does seem like something sellable.

It could probably be spun into some performance pitch if you really wanted to.

johnwalkr 29 days ago
A higher input voltage may eventually be used but a standard PC power supply only has 12V and lower (5V and 3.3V) available, so they'd need to use a new type of power supply or an external power supply, both of which are tough sells.

On the other hand, the voltages used inside a GPU are around 1V, and a higher input voltage introduces lower efficiency in the local conversion.

12V is only really used because historically it was available with relatively high power capacity in order to supply 12V motors in disk drives and fans. If power supplies were designed from the ground-up for power-hungry CPUs and GPUs, you could make an argument for higher voltage, but you could also make an argument for lower voltage. Or for the 12V, either because it's already a good compromise value, or because it's not worth going against the inertia of existing standards. FWIW there is a new standard for power supplies and it is 12V only with no lower or higher voltage outputs.

Neikius 29 days ago
Lower voltage would be OK, but then all of the cables and plugs would need to be redesigned. And there would still need to be voltage switching, unless we start adding a protocol and have the PSU switch voltages dynamically... which is also not efficient.

Since they went so far as to create a new cable that won't be available on old PSUs, they could easily have extended that slightly and introduced an entirely new PSU class with a new voltage as well. But they went the easy route and it failed, which is even worse, as they will have to redesign it now instead of it being done safely the first time.

johnwalkr 28 days ago
There are versions of the cables (and adapters) that work on older PSUs, although new PSUs are starting to come with the same connector that new GPUs have.

Anyway there are pros and cons to using 12V, or lower or higher, and anything except 12V would require a new PSU so it's a hard sell. But even without that detail, I have a feeling 12V is a reasonable choice anyway, not too low or high for conversion either in the PSU or in the GPU or other component.

In any case, at the end of the day sending 12V from the PSU to GPU is easy. The connector used here is bad, either by design or manufacturing quality, but surely the solution can be a better housing and/or sockets on the cable side connectors instead of a different voltage.

0x457 28 days ago
> Lower voltage would be OK, but then all of the cables and plugs would need to be redesigned. And there would still need to be for voltage switching unless we start adding a protocol and have PSU switch voltage dynamically... which is also not efficient.

It's not like that. It's a design where the PSU only provides 12V to the motherboard and the motherboard provides the rest; only the location of those connectors changes. It's called ATX12VO.

In a modern PC almost nothing draws from the 3.3V rail, not even RAM. I'm pretty sure nothing draws 3.3V directly from the PSU at all today.

The 5V rail directly from the PSU is only used for SATA drives.

opencl 29 days ago
Because nobody makes 24V power supplies for computers, they'd have to convince the whole industry to agree on new PSU standards.
cesarb 29 days ago
> they'd have to convince the whole industry to agree on new PSU standards.

We already have a new PSU standard; it's called ATX12VO and drops all lower voltages (5V, 3.3V), keeping only 12V. AFAIK it hasn't seen wide adoption.

masklinn 28 days ago
It's also of no use for the problem at hand, PCIe already uses 12V but that's way too low for the amount of power GPUs want.
Dylan16807 29 days ago
It's not great. Dropping 5V makes power routing more complicated and needs big conversion blocks outside the PSU.

I would say it makes sense if you want to cut the PSU entirely, for racks of servers fed DC, but in that case it looks like 48V wins.

immibis 28 days ago
There are already huge conversion blocks outside the PSU. That's why they figured there's no need to keep an extra one inside the PSU and run more wiring everywhere.

Your CPU steps down 12 volts to 1 volt and a bit. So does your GPU. If you see the big bank of coils next to your CPU on your motherboard, maybe with a heatsink on top, probably on the opposite side from your RAM, that's the section where the voltage gets converted down.

Dylan16807 28 days ago
Those are actually at the point of use and unavoidable. I mean extra ones that convert to 5V and then send the power back out elsewhere. All those drives and USB ports still need 5V and the best place to make it is the PSU.
immibis 28 days ago
Why is the PSU the best place to make 5 volts? In the distant past it made sense because it allowed some circuitry to be shared between all the different voltages. Now that is not a concern.
Dylan16807 27 days ago
The motherboard is cramped, the PSU has a longer life time, and routing power from PSU to motherboard to SATA drive is a mess.
jrockway 28 days ago
Yup, exactly. The VRMs on my Threadripper board take up quite a bit of space.
forza_user 28 days ago
24VDC is the most common supply for industrial electronics like PLCs, sensors etc. It is used in almost every type of industrial automation systems. 48VDC is also not uncommon for bigger power supplies, servos, etc.

https://www.siemens.com/global/en/products/automation/power-...

quickthrowman 29 days ago
Cutting the ampacity in half from 50A to 25A only drops the minimum (single) conductor size from #8 to #10; also, there is no 24V rail in a PSU.
ta988 29 days ago
But you would then need to bring it down to the low voltages required by the chips, and that would greatly increase the cost, volume, weight, electrical noise, and heat of the device.
michaelt 29 days ago
Nah, modern GPUs are already absolutely packed with buck converters, to convert 12v down to 2v or so.

Look at the PCB of a 4090 GPU; you can find plenty of images of people removing the heatsink to fit water blocks. They literally have 24 separate transistors and inductors, all with thermal pads so they can be cooled by the heatsink.

The industry could change to 48v if they wanted to - although with ATX3.0 and the 16-pin 12VHPWR cable being so recent, I'd be surprised if they wanted to.

BizarroLand 29 days ago
They could make a new spec for graphics cards and have a 24v/48v rail for them on a new unique connector.

I guess the problem is not only designing the cards to run on the higher voltages but also getting AMD and Intel on board because otherwise no manufacturer is going to make the new power supplies.

nyrikki 29 days ago
IIRC the patchwork of laws, standards, and regulations across the world for low-voltage wiring is what restricted voltages in the 36 V – 52 V range: some locations treat it as low voltage, some as intermediate, and others as high voltage.

It may be marine-market specific, but several manufacturers limit themselves to 36V even for high-amperage motors because of it.

Obviously I = P/V will force this in the future though.

masklinn 29 days ago
USB PD can go up to 48V so I'd assume that's fine from a regulatory standpoint.

Going from 12V to 48 means you can get 600W through an 8-pin with a 190% safety factor, as opposed to melting your 12VHPWR.
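Checking that claim (assuming 3 power pins per 8-pin connector, rated around 8 A each):

    # Per-pin load for 600 W over one 8-pin connector, 12 V vs 48 V
    pins, rating_a = 3, 8.0
    for volts in (12, 48):
        per_pin = 600.0 / volts / pins
        print(f"{volts} V: {per_pin:.1f} A/pin, "
              f"{rating_a / per_pin:.2f}x rated capacity vs load")

At 12 V each pin would be loaded to roughly double its rating; at 48 V the rating is about 1.9x the load, i.e. the ~190% figure.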

BizarroLand 27 days ago
Not even to mention the fact that the SXM format Nvidia cards have been running on 48-52v power for a few years already.
ta988 29 days ago
Of course there is, same on motherboards and to a smaller extent hard drives.
mpreda 29 days ago
The voltage step-down is already in place, from 12V to whatever 1V or 0.8V is needed. Doing the same thing starting from 48V instead of 12V does not change anything fundamentally, I guess.
ta988 29 days ago
It changes a lot. You are switching at different frequencies and although the currents are smaller, there is an increased cost if you want to do it efficiently and not have too many losses.

But anyway for consumer products this is unlikely to happen because it would force users to get new power supplies which would reduce their sales quite drastically at least for the first one they make like that.

The solution would maybe be to make a low-volume 48V card and slowly move people over to it, showing them it is better?

Anyway this is clearly not a case of "just use X" where X is 48V. It is much more subtle than that.

jeffbee 29 days ago
> a low volume 48V card

I wouldn't be shocked if someone told me that Nvidia already sells more 48V parts than consumer 12V parts.

29 days ago
fooker 29 days ago
48V would work with significantly cheaper wiring.
DannyBee 29 days ago
Yes. I'm not suggesting they increase the voltage; as I said, there are lots of tradeoffs.

But I'll also say: outside of heat, all of the things you listed are not safety concerns (obviously, electrical noise can be if it's truly bad enough, but let's put that one mostly aside).

Having a small, cost-efficient, low-weight device with no electrical noise is still not safe if it starts fires.

superjan 28 days ago
When you work with normal AC power, it is considered unsafe practice to use parallel wires to share load in a circuit. Reason: one might get decoupled somehow, you don’t notice, and when fully loaded the heat causes a fire risk. This problem sounds similar. A single fat wire is the easiest, but I guess it’s not that simple.
masklinn 28 days ago
> This problem sounds similar. A single fat wire is the easiest, but I guess it’s not that simple.

The problem is the 12V architecture, so the only way you can ramp power up is to increase amperage, and sending 50A over a single wire would probably require 8AWG. That's... really not reasonable for inside a PC case.

Then again, burning down your house is somewhat unreasonable too.

quickthrowman 28 days ago
> When you work with normal AC power, it is considered unsafe practice to use parallel wires to share load in a circuit.

The NEC permits using conductors #1/0 AWG or larger for parallel runs; it doesn’t forbid it entirely.

DannyBee 28 days ago
Yeah. I have 800 amp service which is basically always done with parallel 400 mcm or 500 mcm (depending on where it is coming from, since POCO doesn't have to follow NEC)

Within conduit, there is basically no other option. In free air there are options (750 mcm, etc).

Even if there were, you could not pay me to try to fish 750 mcm through conduit or bend it

quickthrowman 27 days ago
Yeah, (2) sets of 4”C 4#500MCM #1/0G (copper) is typical for an 800A service. My electricians feel the same way as you do about anything over #500, usually for 400A we parallel #4/0 instead of one set of #500.
xxs 29 days ago
The 8 AWG wire would still need a massive connector, else it will melt/desolder anyway.
btbuildem 29 days ago
Would be trivial to add a fuse / resettable breaker inline.
lanstin 29 days ago
That would be a novel failure mode: the GPU scheduler had an unbalanced work load across the cores and tripped a breaker. The OS can reset? Kill the offending process "out of power"?
dragontamer 29 days ago
Both of my previous cars had door recalls.

Ford Focus Mk3 and Prius Prime 2024.

Yup. The door had weird failure cases that needed a recall.

--------

Connectors and cables are damn near Masters-level knowledge and application. It's a rarely studied and often ignored piece of engineering. The more you learn about them, the crazier it gets.

That being said, the news that the 4090 and 5090 are using but one shunt resistor for all 6 power pins is horrifying to me. I'm not a power engineer, but it looks horrifyingly wrong to my napkin math.

People underestimate the problems of physical design or the design effort needed to make good designs.

gloxkiqcza 29 days ago
A bit off topic, but this is something HN might enjoy. It’s a video by a mechanical engineer that worked for Tesla, Apple and a NASCAR team about Tesla door handles over time.

https://youtu.be/Bea4FS-zDzc

wing-_-nuts 29 days ago
I see content like this, and it inspires me for what I want my retirement to be like. Just some crazy old eccentric puttering about, tinkering in his workshop. I want my back yard to look like a solar punk world's fair.
immibis 28 days ago
A card that draws 600 watts likely already has more than 6 phases of power conversion, so it could put a separate phase on each pin, and one more on the PCIe slot power, and then guarantee load balancing as well as being able to detect any single broken connection.
dragontamer 28 days ago
Could do yes.

But when Founders Edition cards made by NVidia are $2000 and the FE editions have no such mechanism, why would any AIB maker go above and beyond?

You just make your cards more expensive and it's difficult to tell consumers what the difference is exactly.

BonoboIO 29 days ago
Yeah, but we've also had reliable, mature connectors for decades, and yet they keep trying to make new ones; it's not rocket science to transfer that kind of load.
buffet_overflow 29 days ago
Reminds me a bit of BMWs and their infamous and persistent coolant pump woes. The running joke in those circles is "replacing the entire cooling system" counts as "basic, regular maintenance". BMW makes a fantastic engine, then makes the water pump impeller out of plastic. For what feels like decades.
CommieBobDole 29 days ago
My E39 (which is a beacon of reliability for the brand) had a radiator neck made of a kind of plastic that becomes brittle with prolonged exposure to heat. It's a good thing there's no heat associated with the radiator. "Replace the entire radiator" was a ~70k mile maintenance task.
CobaltFire 28 days ago
Yup, mine blew apart in traffic on H1 (Oahu) at 35MPH when the car (530i) was ~3 years old. I think it had maybe 35k miles on it.

The dealer offered to let me pay extra for an all-aluminum one, since that's what they recommended but the factory wouldn't cover.

karolist 29 days ago
E39 beacon of reliability, is this sarcasm? I really can't tell
CommieBobDole 29 days ago
It's not, actually - the E39s are incredibly reliable compared to more recent models. I drove mine for 20 years.

That said, it's the difference between "fairly unreliable" and "spectacularly unreliable".

CobaltFire 28 days ago
I miss mine enough I’ve been debating on buying another.

A 530i Sport with a manual (!!!) popped up near me for a song but I just can’t justify it.

isatty 28 days ago
I have the B58 which is fantastic and does come with an all metal, mechanical water pump which I thought would be a pleasant break, but my gasket still failed. BMW and water pumps, classic.
threeducks 29 days ago
It boggles the mind how a $10 space heater has better overheating protection than this $4000 space heater.
0x457 28 days ago
To be fair, a $10 space heater's main purpose is to get hot, so it makes sense that there's protection against getting too hot.
Denvercoder9 28 days ago
It's not an engineering problem, it's a political problem. If you present this problem to any power engineer at Nvidia, they'd probably say something akin to "yeah, delivering 600W at 12V over a 12-prong connector is insane, up the voltage". The issue is that 12V has been the standard voltage for ages, and if you want to sell a product that requires a higher voltage, you first need to get the industry to produce PSUs that deliver a higher voltage.
29 days ago
roland35 28 days ago
Cables are hard! Back when I did EE work we tried to avoid cables as much as possible because they cause all sorts of annoyances
bastardoperator 29 days ago
My 4090 connector melted too
sandos 28 days ago
So when will PCs migrate to 48V? :)
rkagerer 29 days ago
Recall in 3, 2, 1...
beebaween 28 days ago
Frankly I'd be fine using barrel connectors to power my 5090, heck even consider just directly soldering the connection.
29 days ago
ribcage 29 days ago
[dead]
elorant 29 days ago
All these incidents are from aftermarket cables.
sz4kerto 29 days ago
The current within a single cable is something like 25 A according to der8auer's measurements. That's crazy for those thin cables. So if current distribution is that uneven, then no cables will be completely safe IMHO.
aaronmdjones 28 days ago
All 12V-2x6 GPU power cables are aftermarket third-party cables. Nvidia doesn't make or supply one because PSUs have different pinouts (or may not be modular in design at all). This isn't any different to the previous situation of PCIe 8-pin power connectors on GPUs that don't come with 8-pin cables.

They do ship an adapter for older PSUs.

All of those cables and the adapter use the same 12VHPWR connector as standardised by PCI-SIG, with the same gauge wire. I find it highly unlikely that any of them are safer than any other.

We really need to get out of the mindset of blaming non-first-party stuff unless one can actually demonstrate an issue with a specific product, such as CableMod's (rightly) recalled right-angle adapter (which wasn't a cable, by the way).

At the end of the day this connector has a 10% margin of safety, while a PCIe 8-pin connector had a 92% margin of safety. It is incredibly delicate, much more prone to failure, and will result in a much more spectacular failure when it does fail.

It boggles my mind that it hasn't been written off and replaced yet.
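For reference, here's where those margin figures come from (per-pin ratings assumed here: 9.2 A for 12VHPWR, 8 A for the PCIe 8-pin):

    # Safety margin = rated per-pin current over actual per-pin current
    def margin_pct(watts, volts, power_pins, rating_a):
        per_pin = watts / volts / power_pins
        return (rating_a / per_pin - 1) * 100

    print(f"12VHPWR:    {margin_pct(600, 12, 6, 9.2):.0f}%")  # ~10%
    print(f"PCIe 8-pin: {margin_pct(150, 12, 3, 8.0):.0f}%")  # ~92%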

perching_aix 28 days ago
The cable and the connector are standardized, and no evidence of noncompliance has been demonstrated yet. ¯\_(ツ)_/¯

Also note that the only first-party cable NVIDIA provides is an adapter for converting between four 8-pin PCIe power connectors and the 16-pin 12V2x6 power connector. They do not provide a 12V2x6 to 12V2x6 cable nor a 12VHPWR to 12V2x6 cable, you're left with your PSU manufacturer's or other third parties' cables in those scenarios (which would include the reporter's).

I will say that it's regrettable that even der8auer's analysis was basically "idk man, looks good to me". He did at least use a cable from the manufacturer of his PSU for his own testing, so if you'd consider that a first- or second-party product, there you go.

29 days ago
whywhywhywhy 29 days ago
People are letting their hatred of Nvidia blind them to what happened here: a customer upgraded from a 4090 FE to a 5090 FE while using a third-party ASUS 12VHPWR cable, and didn't realize the 5090 FE actually uses 12V-2x6, not 12VHPWR; while the port is the same, the pins are different lengths.

At the end of the day, PC power cabling is such a shambles, with things that look standard but are not, that you should only ever use the cables that came with the product if you don't want to risk issues, especially given this specific port's poor history with 3rd-party cables.

stordoff 29 days ago
At least according to Corsair, there are no changes to the cable, only the PSU/GPU-side connectors:

> Cable: 12V-2x6 = 12VHPWR No difference!

> So what does this mean if you’ve already got hardware for 12VHPWR? Fortunately, existing 12VHPWR cables and adapters will work with the new 12V-2x6 connector as the new changes are only related to the GPU and some PSUs (Our new RMx PSUs for example). The cables you've got already will work fine, so don't worry.

https://www.corsair.com/uk/en/explorer/diy-builder/power-sup...

whywhywhywhy 29 days ago
Connectors are where the issue is, and there is a difference, even if they fit the same plugs and power can still go through them.

From your link

> Compared to the original 12VHPWR connector, the new 12V-2x6 connector has shorter sensing pins (1.5mm) while the conductor terminals are 0.25mm longer

stordoff 28 days ago
AIUI, the connector _on the GPU/PSU_ is slightly different, but the connector on the cable is the same:

> As with any new standard, things are likely to evolve quickly and we’re now seeing the introduction of a new connector on the GPU and the PSU side of things. To be clear, this is not a new cable, it is an updated change to the pins in the socket, which is referred to as 12V-2x6.

Corsair's messaging on Reddit[1] emphasises this:

> Cable is the same. a 12VHPWR cable is a 12V-2x6 cable. it is ONLY the plugs on the graphics card / power supply that have changed.

> The cable is the same. 12VHPWR = 12V-2x6. You will get the exact same cable if you upgrade to a new PSU.

> As mentioned in image one, the cable is the same. Only the plug on the graphics card / PSU changed from 12VHPWR to 12V-2x6.

[1] https://www.reddit.com/r/Corsair/comments/1ha9no1/ive_made_s...

erinaceousjones 28 days ago
That's inconsistent messaging from Corsair, then. The parent comment quotes the times they're like "ehh, they're the same thing, don't worry about it", and then they go on to say "well TECHNICALLY there's a teeensy difference in conductor sizes"???

Either they are confident that the 0.25mm terminal difference is within tolerance, such that they consider 12VHPWR functionally equivalent to 12V-2x6, or they're getting themselves confused, let alone the target audience of their article.

29 days ago
jiwidi 29 days ago
NVIDIA made the 12V-2x6 port backward compatible with the previous 4000-series 12VHPWR. If you make your port compatible with the past gen and it breaks with it, that's a design flaw; don't make it backward compatible then.

This is not a user error, this is an NVIDIA design error.

mastax 29 days ago
If the new 12V-2x6 connector was incompatible with the old 12VHPWR connector, they should have (and would have) made it physically incompatible. They didn’t. You cannot blame the user for doing something which is specifically allowed by design.
whywhywhywhy 29 days ago
> You cannot blame the user for doing something which is specifically allowed by design.

Are we really saying this when swapping one PSU for another and reusing existing cables, which all look and plug in the same, fries your build?

I think it's utterly absurd that this is the case but that's PC components for you.

Dylan16807 28 days ago
Those cables fitting is a mistake, not a design decision. Even then, users are only partly to blame, and in the case of deliberate compatibility there's zero blame.
account42 29 days ago
Wait, so Nvidia made a connector that is physically but not electrically compatible with their previous generation, and you think that's an argument for not blaming them?
Deathmax 29 days ago
The cables for 12V-2x6 and 12VHPWR are identical; it's the port that has different pin lengths (a shorter sensing pin, longer conductor pins) to allow better detection of poorly seated cables and better conductivity when loose.
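
To make that concrete, here's a minimal sketch in Python, using made-up engagement depths rather than the spec's actual geometry, of how the shorter sense pins gate power draw: the longer power pins mate first, and the sense circuit only closes once the plug is fully seated.

    CONDUCTOR_CONTACT_MM = 3.0  # assumed insertion depth where power pins touch
    SENSE_CONTACT_MM = 4.5      # assumed (deeper) depth where shorter sense pins touch

    def mating_state(depth_mm: float) -> str:
        power = depth_mm >= CONDUCTOR_CONTACT_MM
        sense = depth_mm >= SENSE_CONTACT_MM
        if power and sense:
            return "fully seated: card may draw full power"
        if power:
            return "partially seated: power flows, but sense line open -> limit load"
        return "not connected"

    for depth in (2.0, 3.5, 5.0):
        print(f"{depth:.1f} mm -> {mating_state(depth)}")
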
whywhywhywhy 29 days ago
Pedantry about "cable" versus "connector", and claiming one is the same as the other, doesn't help people understand the situation.
jchw 29 days ago
The two standards are compatible: the cable end hasn't changed, just the connector on the card/PSU side. The shorter sense pins are designed to help detect an improperly connected cable, so that the sense pins only make contact when the power pins are definitely connected.

Buildzoid and others have covered the design issues of the 12VHPWR cable, and especially its horrific implementation on the 5090FE, well enough[1] that I don't think it's worth going into too much detail here. For some god-forsaken reason, they decided to just dump all of the voltage lines in parallel and then run a single shunt resistor across them, so if a conductor fails and the load becomes improperly distributed, there's no way for the card or the user to know until it catches fire. It's hard to come up with a reasonable justification for this.
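
A toy current-divider model (assumed contact resistances, nothing measured) makes the blindness obvious:

    # Six paralleled 12V conductors sharing ~50A (600W at 12V); currents
    # split inversely to each contact's resistance. Numbers are illustrative.
    TOTAL_A = 50.0

    def split_currents(contact_mohm):
        g = [1.0 / r for r in contact_mohm]            # per-contact conductance
        return [TOTAL_A * gi / sum(g) for gi in g]     # current divider

    healthy = [5.0] * 6            # assume 5 mOhm per good contact
    one_bad = [5.0] * 5 + [50.0]   # assume one degraded contact at 50 mOhm

    for label, contacts in (("healthy", healthy), ("one bad pin", one_bad)):
        amps = split_currents(contacts)
        print(label, ", ".join(f"{a:.1f}A" for a in amps),
              f"| single shunt sees {sum(amps):.0f}A")

The single shunt reads 50A in both cases, while the surviving contacts quietly climb from ~8.3A to ~9.8A, past the ~8.5A pin rating. Per-line or per-pair shunts would flag this immediately.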

But just so we're clear, there are already two reports of catastrophic failures with the 5090, which should be even more alarming considering how few 5090s actually exist right now. The other failure didn't involve third-party cables.

Of course, if used improperly, you can burn through basically any cable, and any cable can fail... but when the failure rate of a specific cable is so far above the rest, it raises many questions. If a specific model of aircraft seems to have an oddly bad problem with pilot error, you can't just shrug that off. In my opinion, consumer computer equipment is the same. It shouldn't catch fire unless you've done something horribly wrong, and even then the hardware should at least be designed in a way that gives it a chance to fail gracefully first. The connectors that 12VHPWR replaced were specced with good safety margins, and previous NVIDIA cards were designed to ensure current was balanced across the voltage lines.
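
For contrast, here's a minimal sketch (my assumption of the general idea, not NVIDIA's actual firmware) of what per-pair shunt monitoring buys you: the card can see a skewed split and throttle before anything melts.

    PAIR_LIMIT_A = 17.0  # assumed limit for a 2-wire group (2 x 8.5A contacts)

    def check_pairs(pair_amps):
        for i, amps in enumerate(pair_amps):
            if amps > PAIR_LIMIT_A:
                return f"pair {i} at {amps:.1f}A over limit -> throttle/shut down"
        return "balanced: all pairs within limits"

    print(check_pairs([16.5, 16.7, 16.8]))   # healthy split of ~50A
    print(check_pairs([24.0, 24.5, 1.5]))    # one pair dropped out -> detected
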

It is unclear why NVIDIA didn't see the issue with 12VHPWR last generation and put some serious effort into fixing the problem. If they continue recklessly like this, there is a non-zero chance that the 12VHPWR connector is only retired after it finally causes loss of life.

[1]: https://youtu.be/kb5YzMoVQyw

0x073 29 days ago
"didn't realize the 5090FE actually uses 12V-2x6 not 12VHPWR, while the port is the same the pins are different lengths"

Broken by design.

I always use the cables that ship with the PSU, not the cables from the GPU.

bee_rider 29 days ago
Building PCs has gone pretty mainstream at this point. Cases where it is easy to melt the thing by plugging it in wrong should be pretty rare.
CarVac 29 days ago
Third party cables were never an issue on sanely designed ports with power balancing.
Algent 29 days ago
Another video today from Der8auer showed him reproducing the issue on his own card: a 22A load and a 150°C hotspot on a single wire. The problem seems to be much worse than assumed; there is clearly no balancing between the wires on the Founders Edition.
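
Some rough back-of-the-envelope numbers, using a typical ~13.2 mOhm/m figure for 16 AWG copper and an assumed degraded contact resistance, show why a single wire at 22A gets that hot:

    AWG16_MOHM_PER_M = 13.2  # typical resistance of 16 AWG copper wire

    def wire_watts_per_m(amps: float) -> float:
        return amps ** 2 * AWG16_MOHM_PER_M / 1000.0  # I^2 * R

    print(f"nominal  8.3A: {wire_watts_per_m(8.3):.1f} W per metre of wire")  # ~0.9
    print(f"measured 22A : {wire_watts_per_m(22.0):.1f} W per metre of wire") # ~6.4

    # An assumed degraded pin interface of ~10 mOhm at 22A dissipates
    # 22^2 * 0.010 ~= 4.8 W in a spot a few millimetres across.
    print(f"10 mOhm contact at 22A: {22.0 ** 2 * 0.010:.1f} W")

Roughly seven times the nominal heating in the wire itself, plus a few watts concentrated in one tiny pin interface, is easily enough for a 150°C hotspot.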
ChoGGi 28 days ago
I mean, unless you use the Nvidia adapter for the old-fashioned 8-pin connectors, technically every cable is a third-party cable. Or did Nvidia get into PSU sales?

Also, 12V-2x6 only changes the female connector on the card; the male end on the cable is unchanged.