How long until high-end GPUs are external?

sirsad

Limp Gawd
Joined
Mar 12, 2006
Messages
303
Let's face it: the ATX motherboard layout was not designed for cooling GPUs, especially 300W GPUs, let alone FOUR 300W GPUs. In fact, the slot design of PCIe just plain sucks ass.

I have zero confidence that ATX will change to something useful any time soon (lol BTX), so I expect external GPUs will be the norm in a few years. Just imagine running one of these cables out the back of the case to a box with enough room for a cooler similar to current CPU air coolers.

Yeah, I know only the GTX 480 is hot as hell right now, but how long until the ATX slot design becomes a much bigger limiting factor on GPU design?

[image: external PCIe x16 cable adapter card]
 
A redesign where the video card has a dedicated area would be good, simply because you need more room for larger or custom cooling, and because of how big and long these cards are getting.

The CPU has plenty of space around the socket for all sorts of large cooling options, but video cards are always limited in what you can place in that area without blocking off needed slots or other components.

Video cards need not be as loud or hot; the problem is that we are cooling something with far more transistors and higher power consumption than the CPU, using tiny cooling solutions and tiny fans, mounted inefficiently, that pale in comparison to even the most generic CPU tower cooler.
 
You are completely backwards. There is no good reason to expect external GPUs to ever become the norm. The technology to do it exists, yet nobody has ever cared. Computers are getting smaller, not bigger.

The "norm" in a few years will have nothing to do with ATX motherboards.
 
I expect desktop form factors will cease to be relevant at all once the shift to mobile computing is complete. I'm predicting that in 12-15 years there won't even be much of a consumer market for video cards or desktop-sized components at all.
 
My 5850 was actually a step in the opposite direction: cooler running and drawing less power than the previous generation. I don't see external GPUs for desktops going anywhere.
 
Just increase the temperatures and you're fine. People are still stuck on this notion that something should run at 70°C or 80°C or whatever. Design chips that can withstand higher temperatures and you can run more power. Increase the number of fins and increase the airflow and you can run more power. Or just water-cool it and be done with it. Really, if you're willing to run your card at 80°C under water, you'll be measuring heat in kilowatts from a triple rad.
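For what it's worth, here's that argument as a first-order sketch using Newton's law of cooling; every coefficient below is invented for illustration, not measured from any real cooler:

```python
# First-order model: dissipated power P = h * A * dT, where h is the
# convective transfer coefficient (rises with airflow), A is the fin
# surface area, and dT is the temperature above ambient. All values
# below are invented for illustration.

def max_dissipation(h, area_m2, t_chip_c, t_ambient_c=25.0):
    """Steady-state power (W) a heatsink can shed at a given temp delta."""
    return h * area_m2 * (t_chip_c - t_ambient_c)

base      = max_dissipation(h=50, area_m2=0.15, t_chip_c=80)   # 412.5 W
more_fins = max_dissipation(h=50, area_m2=0.30, t_chip_c=80)   # 825.0 W (2x fin area)
run_hot   = max_dissipation(h=50, area_m2=0.15, t_chip_c=105)  # 600.0 W (higher temp target)
print(base, more_fins, run_hot)
```

More fin area, more airflow, or a higher allowable temperature each raise the power budget, which is exactly the argument above.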

External GPUs aren't coming any time soon.
 
I would be fine with an entirely separate "GPU box" beside the PC that houses only the video cards, has its own PSU and cooling, and connects to the computer via a custom cabled interface. I mean, at this rate, why not? Keep the heat and power away from everything else.
 
You are completely backwards. There is no good reason to expect external GPUs to ever become the norm. The technology to do it exists, yet nobody has ever cared. Computers are getting smaller, not bigger.

The "norm" in a few years will have nothing to do with ATX motherboards.
I've bought four cases in the last 15 years, and each one is bigger than the previous. Sure, the office PC is smaller, but this site isn't for office PCs. ;)

The reason no one has cared before is that the power draw of GPUs hasn't been an issue. Power draw has been increasing since the Voodoo 1, and games like Crysis 2 are reason enough for it to keep increasing each generation.
 
Just increase the temperatures and you're fine. People are still stuck on this notion that something should run at 70°C or 80°C or whatever. Design chips that can withstand higher temperatures and you can run more power. Increase the number of fins and increase the airflow and you can run more power. Or just water-cool it and be done with it. Really, if you're willing to run your card at 80°C under water, you'll be measuring heat in kilowatts from a triple rad.

External GPUs aren't coming any time soon.
Increase the temperature? Design to a higher temperature? Time for you to take a physics course! Basically, that isn't possible with the transistor technology currently in use.
 
I have seen some modding efforts at making external GPUs for notebooks; I wouldn't be surprised if something like that rears its head on desktops before long. Hell, that would be a good plan in my opinion. Let us run the system on a 350W PSU and have a separate built-in PSU for the GPU.
 
I would be fine with an entirely separate "GPU box" beside the PC that houses only the video cards, has its own PSU and cooling, and connects to the computer via a custom cabled interface. I mean, at this rate, why not? Keep the heat and power away from everything else.

That custom cabled interface is going to be its Achilles' heel. There's really no getting around the huge bandwidth needed to feed data to the "GPU box" you propose, and the communication needs to be bidirectional as well.

The fastest external interface currently is USB 3.0 if I'm not mistaken, and that provides what, 400 MB/s before overhead? Whereas a PCIe 2.0 x1 lane already gives you 500 MB/s after the 20% encoding overhead. There needs to be a breakthrough in external interface speeds before it can deliver the same high-end experience discrete graphics accelerators are capable of giving us right now.
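For reference, a quick sketch of the arithmetic behind those figures, using nominal link rates and the 20% 8b/10b line-coding overhead that both USB 3.0 and PCIe 1.x/2.0 use (real protocol overhead eats a bit more):

```python
# Usable bandwidth after 8b/10b line coding (20% overhead), per direction.
# Nominal link rates only; actual protocol overhead reduces this further.

def effective_mb_s(raw_gbit_s, coding_efficiency=0.8):
    """Raw line rate (Gbit/s) -> usable MB/s after line coding."""
    return raw_gbit_s * coding_efficiency * 1000 / 8

print(effective_mb_s(5.0))        # USB 3.0 (5 Gbit/s):    500.0 MB/s
print(effective_mb_s(5.0))        # PCIe 2.0 x1 (5 GT/s):  500.0 MB/s
print(effective_mb_s(5.0 * 16))   # PCIe 2.0 x16:         8000.0 MB/s
```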
 
That custom cabled interface is going to be its Achilles' heel. There's really no getting around the huge bandwidth needed to feed data to the "GPU box" you propose, and the communication needs to be bidirectional as well.

The fastest external interface currently is USB 3.0 if I'm not mistaken, and that provides what, 400 MB/s before overhead? Whereas a PCIe 2.0 x1 lane already gives you 500 MB/s after the 20% encoding overhead. There needs to be a breakthrough in external interface speeds before it can deliver the same high-end experience discrete graphics accelerators are capable of giving us right now.
There is already external PCIe cabling in use; it's been standardized since 2007. I haven't seen any x16 connections yet, but that should easily be possible. Most of the external PCIe connections I have seen were for large storage arrays in servers. I believe Nvidia uses something similar for its CUDA boxes.

http://www.pcisig.com/specifications/pciexpress/pcie_cabling1.0/
http://www.innovative-dsp.com/products.php?product=PCI Express X1 Cable Adapter
http://www.google.com/products/cata...g_result&ct=result&resnum=11&ved=0CFYQ8wIwCg#
 
There is already external PCIe cabling in use; it's been standardized since 2007. I haven't seen any x16 connections yet, but that should easily be possible. Most of the external PCIe connections I have seen were for large storage arrays in servers. I believe Nvidia uses something similar for its CUDA boxes.

http://www.pcisig.com/specifications/pciexpress/pcie_cabling1.0/
http://www.innovative-dsp.com/products.php?product=PCI Express X1 Cable Adapter
http://www.google.com/products/cata...g_result&ct=result&resnum=11&ved=0CFYQ8wIwCg#

Very interesting. I glanced at the specs and it is full duplex, pretty cool. I did a little more digging, and there already seem to be x4, x8, and x16 cable assemblies and connectors. Pretty cool indeed.
P.S.: I'm an idiot. How could I forget about the ASUS XG Station? http://www.asus.com/product.aspx?P_ID=cjytZpI6lGKd6XX8 This is already possible.
 
Increase the temperature? Design to a higher temperature? Time for you to take a physics course! Basically, that isn't possible with the transistor technology currently in use.

Please don't talk out of your ass. It smells when you do. The thermal limits on chips increase year over year; that's why chips today run at 90°C just fine, while 10 years ago they could not. Furthermore, none of that has anything to do with a basic physics course.
 
I like how people come up with their own theories about why this won't/can't happen. Fact is, there are already products on the market that do this, and have been for some time.

See:
http://www.elsa-jp.co.jp/english/products/pes/vridge_x100_quad8/index.html

http://www.magma.com/pciexpress.asp

http://www.netstor.com.tw/_03/03_02.php?ODI=#

http://www.villagetronic.com/vidock2/index.html

And a thread on some people trying to build a DIY version of the last link:
http://forum.notebookreview.com/gaming-software-graphics-cards/397667-lets-figure-out-how-make-diy-vidock.html

Shuttle also recently showed off their own version (which might only work with their laptops):
http://www.slashgear.com/shuttle-i-power-gxt-mini-external-gpu-gets-video-demo-0476667/


Granted, the products on the market are mostly geared toward external graphics for notebooks running over an external PCIe x1 link, plus (no doubt expensive) specialty racks for multi-GPU configs running at higher link speeds. But that's because there's no significant market for anything else yet. If GPUs start to exceed the thermal/power capacity of conventional cases, we might start seeing more of these kinds of solutions outside those niches.
 
I have seen some already. They are actually quite expensive and not really worth buying until that market expands, so don't get one for now.
 
Please don't talk out of your ass. It smells when you do. The thermal limits on chips increase year over year; that's why chips today run at 90°C just fine, while 10 years ago they could not. Furthermore, none of that has anything to do with a basic physics course.

He's right, and it has a lot to do with basic physics. Thermal limits on chips have increased only as our ability to design, manufacture, and operate those chips to more extreme tolerances has. You can only go so far, though. It's fine to design more efficient cooling systems capable of dissipating more energy, but we're fast approaching a fundamental limit on the energy levels that chip materials can reliably operate under. The solution so far has been to move to smaller and smaller processes, but we're going to hit a wall there soon as well.

The day is coming when (silicon) chips will be made at the smallest possible process and under the highest allowable thermal envelope before the materials themselves fail. Not many people are going to buy desktop components that require elaborate and bulky systems of fans, fins, pumps, hoses, blowers, radiators, fittings, ductwork, filters, compressors, etc. just so they can shoot zombies in higher detail.
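For a sense of scale, here's a ballpark of the heat flux on a GTX 480-class die, using approximate published figures (treat both numbers as ballpark, not spec-sheet exact):

```python
# Ballpark heat flux for a GTX 480-class chip. Figures are approximate:
# ~250 W TDP concentrated on a ~529 mm^2 (GF100) die.

tdp_w   = 250.0
die_mm2 = 529.0

flux = tdp_w / (die_mm2 / 100.0)   # W per cm^2
print(f"~{flux:.0f} W/cm^2")       # ~47 W/cm^2 -- for scale, a kitchen
                                   # hotplate is on the order of 10 W/cm^2
```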
 
He's right, and it has a lot to do with basic physics. Thermal limits on chips have increased only as our ability to design, manufacture, and operate those chips to more extreme tolerances has. You can only go so far, though. It's fine to design more efficient cooling systems capable of dissipating more energy, but we're fast approaching a fundamental limit on the energy levels that chip materials can reliably operate under. The solution so far has been to move to smaller and smaller processes, but we're going to hit a wall there soon as well.
I'm confused. Are you saying that there is a limit on the allowable heat flux through "chip materials"? Or just temperature limits?
Not many people are going to buy desktop components that require elaborate and bulky systems of fans, fins, pumps, hoses, blowers, radiators, fittings, ductwork, filters, compressors, etc. just so they can shoot zombies in higher detail.
The cooling subforum disagrees. We already do this.
 
Of course WE here on enthusiast computing forums will deal with ridiculous cooling. Most end users won't.
 
I'm confused. Are you saying that there is a limit on the allowable heat flux through "chip materials"? Or just temperature limits?
I'm saying there is a practical limit to allowable heat flux in small-die silicon chip packages. We are approaching the edge of what ordinary cooling solutions can accommodate.
The cooling subforum disagrees. We already do this.
That's why I said "Not many people".
 
I don't think there's even a question anymore. These things already exist if you need them, so "how long" depends only on how long you wait before buying one.
 
When I first read this, I couldn't help but think of running external GPUs off of a video RAID card...

SLI is RAID-0, obviously...

But RAID-1/10/5 is for those hardcore gamers who can't be sidelined for even a moment...

And you may have issues if you think RAID-60 is really something to consider...
 
What about Light Peak to connect the GPUs? 100 Gbit/s, I think.
 
What about Light Peak to connect the GPUs? 100 Gbit/s, I think.

Actually, it's currently only 10 Gbit/s, with the potential for 100. A PCIe 3.0 x16 slot will have 16 GB/s. So even at Light Peak's max (which might not come for years), PCIe 3.0 already has it beat by nearly 4 GB/s. Now, whether GPUs actually need that much, I don't know.
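For anyone checking the math, a quick sketch (PCIe 3.0 drops 8b/10b for 128b/130b encoding, which is why a lane delivers nearly its full 8 GT/s):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding (~1.5% overhead).
pcie3_lane_mb = 8e9 * (128 / 130) / 8 / 1e6   # ~984.6 MB/s per lane
pcie3_x16_gb  = pcie3_lane_mb * 16 / 1000     # ~15.8 GB/s per direction

light_peak_now_gb = 10 / 8    # 10 Gbit/s  -> 1.25 GB/s
light_peak_max_gb = 100 / 8   # 100 Gbit/s -> 12.5 GB/s

print(f"PCIe 3.0 x16: ~{pcie3_x16_gb:.1f} GB/s")
print(f"Light Peak:    {light_peak_now_gb:.2f} GB/s now, {light_peak_max_gb:.1f} GB/s promised")
```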
 
Actually, it's currently only 10 Gbit/s, with the potential for 100. A PCIe 3.0 x16 slot will have 16 GB/s. So even at Light Peak's max (which might not come for years), PCIe 3.0 already has it beat by nearly 4 GB/s. Now, whether GPUs actually need that much, I don't know.

We're still not saturating a PCIe 2.0 bus even with dual-GPU cards. 3.0 will be needed eventually, just not for current graphics cards, and it will be on most motherboards by the time the cards that need it hit the shelves.
 
The Nvidia Tesla rackmounts are external graphics cards, but they're more GPGPUs than graphics cards. I've always wondered why there can't be non-flat motherboards, where the GPU slot section is a cylinder in the middle with four slots facing each way. It could sit on a motor and spin for air cooling!
 
The future is OnLive, assuming we ever develop the proper broadband infrastructure. Forget about a box for your GPU; your whole hardware setup will be outsourced to some data center 20 miles away, and you will connect to it at 100 Mbit/s.

Or not. Depends on the state of the infrastructure.
 
The future is OnLive, assuming we ever develop the proper broadband infrastructure. Forget about a box for your GPU; your whole hardware setup will be outsourced to some data center 20 miles away, and you will connect to it at 100 Mbit/s.

Or not. Depends on the state of the infrastructure.

If the future happens by 2013, that is, which is the limit of OnLive's guaranteed support. And what happens if four of you want to play? We're going to need 400 Mbit/s.
 
The future is OnLive, assuming we ever develop the proper broadband infrastructure. Forget about a box for your GPU; your whole hardware setup will be outsourced to some data center 20 miles away, and you will connect to it at 100 Mbit/s.

Or not. Depends on the state of the infrastructure.

Sorry, but OnLive is not the future of gaming. Your holy grail is going to be plagued with lag issues and low resolution forever. Good luck streaming at 1080p, much less Eyefinity resolutions like 7680x1600.
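To put numbers on that, here's the raw (pre-codec) bitrate at those resolutions, assuming 60 fps and 24-bit color; lossy compression shrinks these enormously, but the relative gap between 720p and Eyefinity stays:

```python
# Raw, uncompressed video bitrate at 60 fps, 24 bits per pixel.
def raw_gbit_s(width, height, fps=60, bpp=24):
    return width * height * bpp * fps / 1e9

for name, (w, h) in {"720p": (1280, 720),
                     "1080p": (1920, 1080),
                     "Eyefinity 7680x1600": (7680, 1600)}.items():
    print(f"{name:20s} {raw_gbit_s(w, h):6.2f} Gbit/s raw")
# 720p                  1.33 Gbit/s raw
# 1080p                 2.99 Gbit/s raw
# Eyefinity 7680x1600  17.69 Gbit/s raw
```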
 
I love the idea of external GPUs. You should be able to buy any POS computer, run out to Best Buy, grab a GPU, plug it into the wall, and slap a fiber cable into the dedicated GPU port on the back of the computer. The only people with motivation NOT to do this are the OEMs.

It will never happen, but a man can dream!
 
There was a thread a month or so ago started by a guy who hooked his laptop up to a decent video card through some kind of PCIe x1-equivalent interface, or something to that effect. It seemed to work fairly well.
 