- cross-posted to:
- games@lemmy.world
the nvidia 12VHPWR shitstorm continues!
This standard by Nvidia must have been horribly rushed, IMO, if it’s this hard to design one that won’t melt
yeah, it’s a sloppy standard with a lot of allowed variance considering how close to the electrical limits they were trying to run it… and the mechanical execution in manufacturing clearly cut corners.
So, I haven’t kept up on GPU news. I get that the connector isn’t robust enough for certain systems, but why? I read the recall notice, and it talked about melting connectors.
The 12VHPWR specification in ATX 3.0 was meant to simplify connections for ever-higher-power PCIe devices. Instead of chaining multiple 6- or 8-pin connectors together, it provides a single 12-pin connector (6 power, 6 ground) with 4 additional tag-along “sense pins” that let the cable communicate its own power rating and the power supply’s to the connected device. In theory it’s sound engineering; a single connector can run devices anywhere from 75W up to its stated max of 600W.
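As a rough illustration of what those sense pins buy you, here’s a little Python sketch of the power-negotiation idea. The 150/300/450/600W tiers are from the spec, but the particular pin-state-to-wattage assignment below is just a placeholder, not the real table:

```python
# Illustrative sketch only: the two sense lines are grounded or left open by the
# cable/PSU, and the GPU reads that state to learn the maximum power the cable and
# supply are rated for. The exact SENSE0/SENSE1-to-wattage mapping lives in the
# ATX 3.0 / PCIe CEM tables; the assignment below is a placeholder.
SENSE_TABLE = {
    ("gnd", "gnd"): 600,
    ("gnd", "open"): 450,
    ("open", "gnd"): 300,
    ("open", "open"): 150,
}

def allowed_power_w(sense0: str, sense1: str) -> int:
    """Max wattage the GPU should draw for a given sense-pin state."""
    return SENSE_TABLE.get((sense0, sense1), 150)  # fall back to the lowest tier

print(allowed_power_w("gnd", "gnd"))  # 600
```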
The key is that the 12VHPWR spec pushes the wires and connectors much closer to their safe limit in order to improve packaging and wire flexibility. For example, the old PCIe 8-pin spec officially provides only three 12V hot pins, each with one wire. The entire connector is rated for 150W, which means each pin/wire carries only 50W, or 4.2 amps at 12V.
Compare this to the 12VHPWR spec: there are 6 +12V and 6 ground pins/wires, meaning that when running at its max spec of 600W, each pin is carrying 100W, or 8.3 amps at 12V! That is DOUBLE the 8-pin per-pin figure, so we’re already flying close to the sun on single 16-gauge wires.
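Putting those two paragraphs of arithmetic side by side (nothing new here, just the same numbers in one place):

```python
def per_pin_current(total_watts: float, hot_pins: int, volts: float = 12.0) -> float:
    """Current through each +12V pin, assuming the load splits evenly across pins."""
    return total_watts / hot_pins / volts

print(per_pin_current(150, 3))  # PCIe 8-pin: ~4.2 A per pin
print(per_pin_current(600, 6))  # 12VHPWR:    ~8.3 A per pin, double the 8-pin figure
```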
Here’s where the problem arises: the electrical connection between the male and female contacts in the 12VHPWR standard is mechanically flawed and can sometimes have too much resistance due to poor fit tolerances and slop in the clip/insertion guides. And by too much, I mean only a fraction of an ohm caused by a pin being 0.1mm undersized is enough to cause failure, especially when being run so close to its design limits. Pair that with the RTX 4090 specifically, which pulls 450W or more depending on OC settings, and even though the plug is supposedly rated for 600W at 12V, that slightly elevated resistance causes a big heat buildup in the connecting pins and melts the plastic connector housing. Melted plastic is Very Bad; it can start fires and damage or destroy the GPU.
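To see why “a fraction of an ohm” matters at that current, here’s a quick I²R estimate; the contact-resistance values are my own illustrative guesses, not measured numbers:

```python
def contact_heat_w(amps: float, contact_ohms: float) -> float:
    """Heat dissipated inside a single pin contact: P = I^2 * R."""
    return amps ** 2 * contact_ohms

# A good crimped contact is in the milliohm range; a sloppy, undersized one can be far worse.
print(contact_heat_w(8.3, 0.005))  # ~0.34 W -- no problem
print(contact_heat_w(8.3, 0.1))    # ~6.9 W concentrated in one tiny pin inside a plastic housing
```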
CableMod was providing adapters meant to relieve the stress on the connector itself caused by trying to bend the wire around side panels, which in theory would help with the poor-contact melting problem. But due to manufacturing tolerances and existing flaws in the 12VHPWR spec, they ended up causing their own melting problems. Really, it’s not even CableMod’s fault… but they got themselves stuck in the middle of it by choosing to make products for this segment.
I really appreciate the detailed info!
…too much resistance due to poor fit tolerances and slop in the clip/insertion guides. And by too much, I mean only a fraction of an ohm caused by a pin being 0.1mm undersized is enough to cause failure, especially when being run so close to its design limits.
So, sounds like we need to graduate to bolt-on connectors or thicker wires. Maybe a mini breaker panel. h/j
- “Ah, I see you put liquid cooling on your GPU!”
- “Nah, that’s just my GPU’s power supply.”
of course, you’re welcome!
So, sounds like we need to graduate to bolt-on connectors or thicker wires. Maybe a mini breaker panel. h/j
the wires wouldn’t really help, since it’s the connector pins themselves that are causing the resistance bump. Why they chose to go to an even SMALLER pin spec while doubling the amperage is baffling, though. But yeah, bolt-on or even more traditional larger spade connectors would indeed have made this connection so much better…
Yeah, if I got a card that required it I think I’d just straight-up solder the wires.
Isn’t the newer 12V-2x6 connector supposed to fix this? I think Nvidia has been putting it on all new GPUs.
yes, the 12V-2x6 lengthens the power pins for more connection surface area, while also shortening the sense pins to prevent any operation at all if the plug is not fully seated. The updated standard seems to be doing better so far.
Do you need both a new cable and the updated GPU PCB port to take advantage of this?
The primary changes have been to the male pin connector, which lives on the GPU or PSU side, not the female socket connector, which lives on the cable. So the cable is fine, but the GPU is not.
It could likely be replaced on the GPU by a good repair shop. The PCB plug is a fairly easy solder job, all things considered.
Don’t you have that reversed? Isn’t the cable the male inputs? You have to slot them into the GPU/PSU.
well, the plug standard is a bit confusing.
The “female” mechanical socket on the GPU or PSU contains the MALE electrical contacts, and the “male” mechanical plug on your cable contains the FEMALE electrical contacts. This way they slip inside one another without exposed contacts.
I was talking about things from the electrical point of view, since that’s where the only spec modifications were made.
Plug/cable end: note the electrical contacts are hollow tubes (female).
Socket/PSU or GPU end: note the singular male pins in the holes.
ELI5, but wouldn’t going to 24V fix all of this? People buy new PSUs for their 4090 anyway.
Yes, because 24V would halve the current requirement back down to the older “safe” 8-pin per-pin level. However, it introduces a whole host of other compatibility problems, namely shared 12V rails between the PSU and motherboard that would now be mismatched (since the motherboard’s PCIe slot does provide some power to the GPU). It would require a significant redesign of either the GPU board, or the motherboard and the entire PC ecosystem, both of which are far more costly endeavors compared to “same voltage, new pins”.
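The arithmetic behind the halving (same 600W, same six hot pins, just a higher rail voltage; the helper function is my own illustration):

```python
def per_pin_current(total_watts: float, hot_pins: int, volts: float) -> float:
    return total_watts / hot_pins / volts

print(per_pin_current(600, 6, 12.0))  # ~8.3 A per pin on today's 12 V rail
print(per_pin_current(600, 6, 24.0))  # ~4.2 A per pin at 24 V -- back to 8-pin territory
```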
Now I’m glad I was too lazy to ever install mine.
procrastinators win once again!