Asus hidden cable alliance

https://www.extremetech.com/gaming/asus-announces-hidden-cable-hardware-alliance

Those cable-free ASUS GPUs are going on sale, and ASUS is trying to make the design a standard.
Gigabyte Project Stealth- https://www.extremetech.com/computing/329711-gigabytes-project-stealth-aims-to-hide-your-pcs-wiring
MSI Project Zero- https://www.tweaktown.com/news/9174...-keeping-motherboard-cables-hidden/index.html

I think hidden cable motherboards/GPUs will eventually become very prevalent.
 
I don't see this being a bad thing.
Just move the ATX 3.0 and/or PCIe AUX cables to the motherboard; it would be a lot cleaner.

Rarely do I see proprietary designs that should turn into universally accepted standards, but this is one of them.
The BTF (Back to the Future) standard with the HPCE (High-Power Card Edge) connector does seem like good marketing and a good standard.

 
Seems like a great idea to me. No more massive cables that are a pain to manage neatly, no more bent cables and contact issues. Can that small connector provide the same amperage as 3x 8 pins, though?
 
Seems like a great idea to me. No more massive cables that are a pain to manage neatly, no more bent cables and contact issues. Can that small connector provide the same amperage as 3x 8 pins, though?
Pretty sure it is supposed to do up to 600 watts under the current standard, which would be the equivalent of four 8-pin PCIe AUX connectors at 150 watts each.
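As a rough sanity check on the amperage question (assuming the connector's rated 600 W is all delivered on the 12 V rail, which is my assumption here, not a spec quote):

# Back-of-the-envelope current comparison: one 600 W connector vs. three 8-pin PCIe AUX
rail_voltage = 12.0          # volts
connector_rating = 600.0     # watts, assumed rating for the single card-edge connector
eight_pin_rating = 150.0     # watts per 8-pin PCIe AUX connector

single_connector_amps = connector_rating / rail_voltage       # 50.0 A
three_eight_pin_amps = 3 * eight_pin_rating / rail_voltage     # 37.5 A

print(f"Single connector: {single_connector_amps:.1f} A")
print(f"Three 8-pins:     {three_eight_pin_amps:.1f} A")

So at the full 600 W rating it would actually have to carry more current than three 8-pins combined, not just match them.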
 
So this "cable free" design is one that uses a connector on the motherboard that is powered... by a separate cable to the motherboard?

Snide remark aside, I can see this being a cleaner alternative, since those cables to the motherboard tend to look nicer. And who knows, maybe we'll get connectors that naturally have a 90° or 180° bend in them, so we don't have to torque the corners of our board by plugging the cable in straight and trying to make that 270-degree turn from the top of the motherboard, to the space behind the board, and then down the side of the case.
 
That is a lot of amps to be running through traces on a motherboard. I imagine there is going to be some considerable heat.
 
That is a lot of amps to be running through traces on a motherboard. I imagine there is going to be some considerable heat.
No more than the wires we use now?
1 extra PCB layer and problems solved. Just a matter of keeping the width of the trace equal to or greater than the circumference of the wires we use now.
 
How will the gpu sell if there are no motherboards to use it in the consumers space?
 
Humbug.

Cables are beautiful.

(my ex-wife 5 years ago. NOT)
 
No more than the wires we use now?
1 extra PCB layer and problems solved. Just a matter of keeping the width of the trace equal to or greater than the circumference of the wires we use now.
True, but if some issues do happen, it's not going to be as simple as ordering a replacement cable. You're going to have to go through an RMA process, and if you have a Windows OEM license, a new OS license to buy as well.

I personally don't mind the cables. I'd rather have the cables than the insanity of adding yet another thing that will cause a nearly $1k motherboard to be a paperweight because some PCB layer cracked at install due to the 30 pound heat sink the video card requires. Not to even mention having to deal with <insert manufacturer name here> support for days as they ask you to do the basic troubleshooting stuff you already did before you can get them to issue an RMA only to have it denied for one reason or another. An ugly cable in my case is 100000x better than any of that.
 
Just go ahead and have my soapbox moment right now.

Why is everyone so goddamn obsessed with looks?! I have watched cases move along from functional side panels, many of them with side fan slots that helped dissipate heat from aftermarket GPUs, to the disco light show of contemporary times where we have useless-as-hell tempered glass side panels. Those tempered glass panels basically only serve to accentuate your "wow you have a lot of fucking RGB, PCB, and clear consumerism with displaying your adherence to certain manufacturers in there bro" "thanks bro" moments, which happen like <=1% of the time you're using the damn thing. This is while GPUs have moved along from the ~250W that cards from the 780 through the 2080 used, to the 320-600W behemoths of contemporary times. So we're removing potential fan slots while increasing heat generated. The heat is also generated right on top of where your NVMe SSDs probably are. Great stuff. Good logic.

Now as another (at least likely) symptom of the "well I hope my computer build can double as a nerd fashion show" industry, let's take these perfectly good and working 8-pin power connectors and start replacing them with these crappy 12VHPWR or <insert whatever you want here> connectors, which introduce clear failure points. Yes, yes, blah blah user error blah, but when the design invites it (and I'm not sure whether it would happen even without any user error, given long enough with the cable bent in basically any fashion anyway), it's just broken. I'm also not too fond of shoving up to 600W into some traces on my motherboard (which is kind of a vital component) and giving up on the idea of modularity of failure points entirely.

That rant could be slightly misplaced because, sure, Nvidia probably did have other reasons for moving towards that one-cable design, but I wouldn't doubt that people's obsession with looks was at least one part of it. As a function-first person with my computer hardware, contemporary times kind of piss me off. Of course, more cables do also inhibit airflow slightly, but in my experience that is horribly, horribly overstated and also easily overcome.
 
If that is the standard, that is something I can get behind assuredly.

As for "hidden cable" motherboards. Wallace Santos, the founder of Maingear, has the patent on this. I remember talking to him about this forever ago.
Do you expect the Maingear patent to get in the way of this, be it coalescing around Asus or one of the others' plugs and way of doing things or coming up with an entirely new standard? I'd absolutely support a universal standard, but I wouldn't want a patchwork of proprietary implementations that varied either greatly or slightly but enough to make them incompatible between different generations or models of hardware.

Also, since you're well connected in these circles, are you at all familiar with Singularity Computers/Cases' "Powerboard" project? https://singularitycomputers.com/powerboard/ - Their cases seem to be extremely purpose-built "case as a distribution plate" watercooling-focused platforms, with part of this being their PowerBoard system, which greatly reduces the need for cables thanks to custom PCBs. Granted, the degree to which this works in removing cable necessity depends on the sophistication of the PowerBoard PCB version, and it is designed specifically for Singularity's case setup where the liquid cooling components such as pumps, fans for rads, etc. are all basically connected into the distro plate, so it's easier to run and predict what goes where, it seems. I don't think their system will be universally useful, but it's still interesting, if pricey and extremely focused on a certain kind of build.
 
On the surface that is an easy proposition. And a smart one. However, the legacy ecosystem produces some hurdles.

How long is it taking to move ATX forward?
I mean just the pushback from 12VO alone proves it’s really hard to make any changes to things that break legacy compatibility.

I can only imagine if they declared that the 5080 and 5090 were 48v and required a different PSU.
 
No more than the wires we use now?
1 extra PCB layer and problems solved. Just a matter of keeping the width of the trace equal to or greater than the circumference of the wires we use now.
You are thinking of AC. For DC it would be the cross-sectional area of the wire.
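For a rough sense of what that cross-section means in trace terms, here's a sketch with assumed values (16 AWG conductors and 2 oz copper are my assumptions, not anything from the thread, and this ignores thermal derating entirely):

# Rough comparison: cross-sectional area of a 16 AWG wire vs. the trace width
# needed to match it in a 2 oz copper layer (assumed values, ignores ampacity derating).
import math

wire_diameter_mm = 1.29        # 16 AWG, common for PCIe power leads (assumption)
copper_thickness_mm = 0.070    # 2 oz/ft^2 copper layer is roughly 70 microns (assumption)

wire_area_mm2 = math.pi * (wire_diameter_mm / 2) ** 2
equivalent_trace_width_mm = wire_area_mm2 / copper_thickness_mm

print(f"16 AWG cross-section:        {wire_area_mm2:.2f} mm^2")
print(f"Equivalent 2 oz trace width: {equivalent_trace_width_mm:.1f} mm per conductor")

That works out to roughly 19 mm of trace width per conductor just to match the copper area, and real ampacity guidance (IPC-2152 style tables) would be more conservative, which is why a dedicated layer or split plane keeps coming up.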
 
I think some of the worry is justified in a vacuum, but apparently this has been used in the server space, so there would be an actual track record to know whether it causes problems or not.

CPU power delivery already works largely like this, and GPU power delivery already partly works like this. Some CPUs reach 500 watts and it all goes through the motherboard (maybe a shorter route and more pins, but still). How often have we ever seen an issue with the power delivery traces for CPUs?

Why is everyone so goddamn obsessed with looks?!
From cars to washing machines, it is pretty much omnipresent. That said, here the advantage would be not only looks: it could make building a PC faster, make cases simpler, and make giant GPU wattages safer and more feasible (heat sensors and so on).
 
I think some of the worry is justified in a vacuum, but apparently this has been used in the server space, so there would be an actual track record to know whether it causes problems or not.

CPU power delivery already works largely like this, and GPU power delivery already partly works like this. Some CPUs reach 500 watts and it all goes through the motherboard (maybe a shorter route and more pins, but still). How often have we ever seen an issue with the power delivery traces for CPUs?


From cars to washing machines, it is pretty much omnipresent. That said, here the advantage would be not only looks: it could make building a PC faster, make cases simpler, and make giant GPU wattages safer and more feasible (heat sensors and so on).
It's also not a total looks thing; it's an airflow one too. Backside power delivery for a lot of this just makes for a clean bit of airflow. But it's mostly looks; people spend big money on RGB fans, and those are also trying their best to cut back on wires too.

Spend an extra $300 on lighting and a pretty case, only for these two ugly-ass wires to run smack into the middle of it all. Sure, you can spend more time and effort re-sleeving them, or maybe spend on custom cables that match. Or we work on doing away with the wires.
 
Also on board with this bandwagon. Since 2+ M.2 ports on motherboards are now more or less ubiquitous, cutting out 2.5" SSDs has been a dream for cable management. If we cut down on PCIe cable bulk, then the only thing left that mildly annoys me with DIY builds is the case's front panel jumpers:

[attached image]


Been doing DIY builds for 20+ years now and I'm still amazed that there's been virtually zero innovation here. I've seen some boards ship with an extension that makes the process a bit more convenient, but I'd be grateful if I never have to plug in another set of jumper cables in my life.
 
I think some of the worry is justified in a vacuum, but apparently this has been used in the server space, so there would be an actual track record to know whether it causes problems or not.

CPU power delivery already works largely like this, and GPU power delivery already partly works like this. Some CPUs reach 500 watts and it all goes through the motherboard (maybe a shorter route and more pins, but still). How often have we ever seen an issue with the power delivery traces for CPUs?


From cars to washing machines, it is pretty much omnipresent. That said, here the advantage would be not only looks: it could make building a PC faster, make cases simpler, and make giant GPU wattages safer and more feasible (heat sensors and so on).

It would make every motherboard have to have either four 8-pin jacks (and god forbid future cards end up using more than that somehow, welcome to a useless platform that's going to cost more to replace) or one of Nvidia's new connectors. It would just be on the other side of the motherboard. Most common cases have even less clearance in the back than they do in the front, so if it's Nvidia's connector, welcome to burn city. Also, most cases don't really have much of a window in the back to actually hook it into the board, which means it would have to go in either the side or the bottom somewhere. If it's the back, the only open area in most cases is around the CPU area, and god forbid you have a failure in that area. You're not recovering. Ideally the plugs would need to be right where the GPU slots in, or as close as possible.

I'm sure it's been used properly in the server space, but the server space is a different market; as far as I know, they have deep pockets. I have some serious doubts about it being done properly in the consumer space. If they were actually confident in this concept, why not make their new $4000 ASUS 4090 Platinum card use it and then release a motherboard to match? Why release this thing with a 4070, which barely needs any of the maximum wattage of the connector?

I'm mostly progressive with new ideas that have been introduced, but in my opinion the analog space, and analog power especially... is a space you don't mess around in. I think I would have maybe kept one of the 4090s I tried out if they didn't have that new Nvidia connector. I think this would at least make replacing the GPU slightly easier. I'm just not on board yet. And I sure as hell am not buying any GPU that uses it until it's been tested for at least a year. So many things can go wrong.

Also on board with this bandwagon. Since 2+ M.2 ports on motherboards are now more or less ubiquitous, cutting out 2.5" SSDs has been a dream for cable management. If we cut down on PCIe cable bulk, then the only thing left that mildly annoys me with DIY builds is the case's front panel jumpers:

[attached image]

Been doing DIY builds for 20+ years now and I'm still amazed that there's been virtually zero innovation here. I've seen some boards ship with an extension that makes the process a bit more convenient, but I'd be grateful if I never have to plug in another set of jumper cables in my life.

Aye, I'd be 100% on board with that. Everything else about system assembly I could almost do with my eyes closed; almost every build I've done has posted on first try. On my X670E Carbon board, they even had these twist locks for the M.2s, which was a great innovation since god I hate those tiny screws. But the pinout on the jumpers? Have to dig out the manual every single time...

It's also not a total looks thing; it's an airflow one too. Backside power delivery for a lot of this just makes for a clean bit of airflow. But it's mostly looks; people spend big money on RGB fans, and those are also trying their best to cut back on wires too.

I think it takes a hell of a lot of wiring being in the way to actually appreciably affect system temperatures at all. And the wonderful thing about 8 pin wiring is it's generally bendable because it's handling much less wattage per connector. I've generally bent mine almost straight to the back of the GPU and never had issues. Airflow is imo absolutely not a reason for this.
 
Just go ahead and have my soapbox moment right now.

Why is everyone so goddamn obsessed with looks?!

Now as another (at least likely) symptom of the "well I hope my computer build can double as a nerd fashion show" industry,

Well said.
Of course, more cables do also inhibit airflow slightly, but in my experience that is horribly, horribly overstated and also easily overcome.
Agreed.
 
Pass. I can already see it: if a card is poorly designed, or not correctly installed so it's not properly supported, it'll cause less pin contact on the power connections, creating heat.
 
I think it takes a hell of a lot of wiring being in the way to actually appreciably affect system temperatures at all. And the wonderful thing about 8 pin wiring is it's generally bendable because it's handling much less wattage per connector. I've generally bent mine almost straight to the back of the GPU and never had issues. Airflow is imo absolutely not a reason for this.
It all depends on the size of the case. I agree that 99% of the time wires aren't the thing buggering airflow, not in well-proportioned builds at least; shoebox builds are a different story though.

The big winners, if this can take off, are people who like showcase systems, and OEMs. Fewer wires means shorter assembly times, and backside power means a uniform cable length all on a single plane, with no need to cross from back to front. It also lets them try to go smaller, and if they choose, they could do whatever the hell they wanted with the connector on the back. I could easily see them using some custom 12-pin to replace the two 8s or something.
 
Not gonna lie, having 3 or 4 super thick PSU -> GPU cables -> 12VHPWR-whatever adapter -> GPU felt like a lot of failure points in my build. It was also just kind of a pain for routing, and I have a massive case (Fractal Define 7XL).

Just having much shorter PSU -> GPU cables that plug directly into a motherboard is gonna make cable management much better.

Also, it's not like the space behind the GPU is being used for anything right now. This is basically just an extension of the PCIe x16 slot.
 
Also on board with this bandwagon. Since 2+ M.2 ports on motherboards are now more or less ubiquitous, cutting out 2.5" SSDs has been a dream for cable management. If we cut down on PCIe cable bulk, then the only thing left that mildly annoys me with DIY builds is the case's front panel jumpers:

[attached image]

Been doing DIY builds for 20+ years now and I'm still amazed that there's been virtually zero innovation here. I've seen some boards ship with an extension that makes the process a bit more convenient, but I'd be grateful if I never have to plug in another set of jumper cables in my life.
And break the compatibility that standard has been providing for 20+ years.
 
When it comes to SATA cables, I'm frankly mind boggled that we haven't moved past SATA 6.0gbps yet.

I feel like they could easily implement a faster, more flexible, cable that also does power pretty damn easily. Using something like more modern USB cables.
 
When it comes to SATA cables, I'm frankly mind boggled that we haven't moved past SATA 6.0gbps yet.

I feel like they could easily implement a faster, more flexible, cable that also does power pretty damn easily. Using something like more modern USB cables.
Nah, no need. They could, but then we get into issues with either latency or PCIe lanes.
The consumer market is better served by NVMe; SAS can easily double those speeds, but the controllers needed to back them up add cost and are still slower than NVMe. There just isn't a consumer need for it.
 
When it comes to SATA cables, I'm frankly mind boggled that we haven't moved past SATA 6.0gbps yet.

I feel like they could easily implement a faster, more flexible, cable that also does power pretty damn easily. Using something like more modern USB cables.

For flash, the NVMe protocol is much better than ATA because it's designed around how flash works instead of how platters spin. For HDDs you're still only able to saturate a 6 Gb/s connection for the brief period it takes to fill the drive's onboard write cache. Until that changes there's no need to implement a more expensive controller on consumer devices.
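For a rough sense of the headroom involved, here's a quick sketch comparing usable SATA 6 Gb/s bandwidth with a typical HDD's sustained rate (the 250 MB/s figure is an assumed ballpark, not something from this thread):

# Rough headroom check: usable SATA 6 Gb/s bandwidth vs. a typical HDD's sustained rate.
sata_line_rate_gbps = 6.0        # raw SATA III line rate
encoding_efficiency = 8 / 10     # 8b/10b encoding overhead on the SATA link
usable_mb_per_s = sata_line_rate_gbps * 1000 / 8 * encoding_efficiency   # ~600 MB/s

hdd_sustained_mb_per_s = 250.0   # assumed ballpark for a modern 7200 RPM drive

print(f"Usable SATA 6 Gb/s bandwidth: ~{usable_mb_per_s:.0f} MB/s")
print(f"HDD sustained transfer rate:  ~{hdd_sustained_mb_per_s:.0f} MB/s "
      f"({hdd_sustained_mb_per_s / usable_mb_per_s:.0%} of the link)")

Outside of cache bursts, a spinning drive is only using around 40% of the link, which is why there's little pressure to move consumer SATA past 6 Gb/s.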
 
So this "cable free" design is one that uses a connector on the motherboard that is powered... by a separate cable to the motherboard?

Snide remark aside, I can see this being a cleaner alternative, since those cables to the motherboard tend to look nicer. And who knows, maybe we'll get connectors that naturally have a 90° or 180° bend in them, so we don't have to torque the corners of our board by plugging the cable in straight and trying to make that 270-degree turn from the top of the motherboard, to the space behind the board, and then down the side of the case.

90° connectors would be mandatory unless we want huge back chambers. I've got a 3-4 inch tall loop on my 24-pin connector where it comes out from behind the board to the mobo, just because I needed that much slack to get it to stop trying to flex the board.
 
No more than the wires we use now?
1 extra PCB layer and problems solved. Just a matter of keeping the width of the trace equal to or greater than the circumference of the wires we use now.

Probably no need for a full layer. Just put the connectors on the back, behind the GPU connector on the front, then split a pair of existing 12V and ground layers so a few square inches around the GPU power area are dedicated to them rather than carrying 12V to the rest of the board.
 
And break the compatibility that standard has been providing for 20+ years.
Technology innovation has a way of doing that. However, it would be trivial to include something like this with case and motherboard accessories during the transition period. I know because some already do this.

[attached image]


I'm sure there were plenty of folks here who were upset when we moved off of serial, parallel, IDE, AGP, etc. in the name of breaking compatibility. Will the two people still using their 20-year-old steel beige case even care? The front panel jumpers are infinitely less complicated than all of the above; "backwards compatibility" is a very weak excuse not to innovate there.
 