Two 240 Rad Ncase M1 V2

MrJerico

Hey guys, sorry for another Ncase M1 build... seemed like there was some interest in seeing the M1 with two 240 rads in it though.

I'll do the generic "what's in there" list, then pictures!

Computer:
- 4790K OC at 4.8GHz
- EVGA GTX 780 SC
- 16GB Trident X 2800MHz
- 256 GB 840 pro SSD
- Silverstone SX600-G
- Asus Impact VI

Cooling Loop:
- 2X Alphacool ST30 240mm rad
- Swiftech Apogee Drive 2
- EK FC780GTX water block (Copper)
- FrozenQ Ncase M1 reservoir
- 2X Scythe GT AP-15
- 2X Scythe Slipstream slim 1600RPM

Before I start, sorry for the potato quality; I took them on my iPhone.

[Image: OjQa87h.jpg]


[Image: tiZ4Kw7.jpg]


Everything fits, though it was tight; the two extra tubes for the second rad were definitely noticeable.

[Image: d4IeNXT.jpg]


[Image: 36X39qd.jpg]


[Image: RZa4AOm.jpg]


Probably the hardest water cooling loop to set up ever. Once I was finished I was bleeding from almost every knuckle.

[Image: XO59wcW.jpg]


[Image: 979LRXE.jpg]


This is probably the most interesting part. Since I placed the fans on the bottom of the radiator, the rad/fan combo is able to rest over the front I/O cables. Also, the rad cover sits about 5mm above the actual fins, so between the fins and the graphics card there is almost a whole centimeter. I know it's not much... it was a nice surprise though. Also, the radiator is only secured on one half because of how far the front I/O sticks out. Thank you, Necere, for putting that extra fan mounting hole on the bottom; it worked out great!

[Image: iHm8ab9.png]


Alright guys, here are my idle temps with my room at 75.7F (24.28C). These temps are with stock CPU voltage and clock. If you want to see my temps at my 4.8GHz clock, let me know; they idle more in the high 30s to low 40s. I thought the stock BIOS settings were more relevant.

[Image: S5jbRMv.png]


These temps are after playing an hour of Watch Dogs with "The Worse Mod" V1 and a full reskin. Scary seeing the max temps reached almost 70C at some point; when I was watching it, though, it stayed around the mid 50s to low 60s. Take it how you will.
 
Exactly. Very nice combo with the orange and black. :)
Really curious to see the temps on that setup. :D
 
That is pure pornography. Looks surprisingly clean too.

Thanks! It was a little cluttered at first, so I shortened all of the SX600-G cables. They were too long since there was no room to put the cables with the rad on the bottom.


Exactly. Very nice combo with the orange and black. :)
Really curious to see the temps on that setup. :D
Epic!
Could you post some temperatures?

Yea, I will update my post with temps as soon as I get home.


How are you powering all those fans and the pump?

The Asus Impact VI has a CPU fan header and 3 case fan headers. The pump uses a Molex connector to the PSU and a 4-pin fan cable to the CPU header. I have each of the GT AP-15s on its own header, and a splitter connected to both of the Slipstream slims on another header. The Slipstreams don't go above 20% unless my CPU temp gets above 50C, so they aren't working very hard most of the time. They're surprisingly quiet, actually.
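
In case it helps picture the fan curve, here's a minimal sketch (in Python) of the behaviour described above. The 20% floor and 50C threshold are from this post; the ramp slope and the 75C full-speed point are just illustrative assumptions, since the real control is whatever curve is set on the motherboard's fan headers.

Code:
def slipstream_duty(cpu_temp_c):
    """Return a fan duty cycle (percent) for a given CPU temperature."""
    idle_duty = 20.0        # stays at ~20% below the threshold (from the post)
    threshold = 50.0        # CPU temp where the slim fans start ramping (from the post)
    full_speed_temp = 75.0  # assumed temp at which duty would reach 100%
    if cpu_temp_c <= threshold:
        return idle_duty
    # Linear ramp from 20% at 50C up to 100% at the assumed full-speed point.
    slope = (100.0 - idle_duty) / (full_speed_temp - threshold)
    return min(100.0, idle_duty + slope * (cpu_temp_c - threshold))

if __name__ == "__main__":
    for t in (40, 50, 60, 70, 80):
        print(t, "C ->", round(slipstream_duty(t)), "% duty")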
 
As someone who's considering a dual 240mm radiator cooling solution, I am very happy to see others explore such a setup! Many thanks for sharing this. That build is simply fantastic.

I can't resist asking a few questions:

  1. How is the noise in your case? What's loudest?
  2. How did you go about hooking up hoses & installing cooled components - or, more specifically, what order did you do it? And how did you go about filling/bleeding the loop?
  3. How tight is the fit with the Apogee Drive II on the Impact motherboard?

Thanks a million for your answers, and for sharing! :D
 
As someone who's considering a dual 240mm radiator cooling solution, I am very happy to see others explore such a setup! Many thanks for sharing this. That build is simply fantastic.

I can't resist asking a few questions:

  1. How is the noise in your case? What's loudest?
  2. How did you go about hooking up hoses & installing cooled components - or, more specifically, what order did you do it? And how did you go about filling/bleeding the loop?
  3. How tight is the fit with the Apogee Drive II on the Impact motherboard?

Thanks a million for your answers, and for sharing! :D

  1. At idle it is completely silent, mainly because I have my fans set so low. Under load, though, they start to ramp up. For four fans in a tight enclosure they are still pretty quiet. I promise you will be happy with it.
  2. This was actually pretty tricky, and would be hard to explain without a few paragraphs, so I will try to paraphrase. The order was mobo/RAM/pump combo, then the GPU. Without screwing the card in, I had to tilt it up a little (being careful not to stress the PCIe slot) and slide the bottom rad under it, with one fan attached and the other free floating so I could secure it from the bottom (this took a long time to get right). Then I installed the tubing and the PSU. By the way, there is no room for a compression fitting on the bottom rad due to graphics card clearance, so you have to use barbs there. Hope that answers your question a little.
  3. The Apogee Drive 2 fits great on the mobo, no problems at all. However, fitting the short tubing to the reservoir was incredibly difficult.
 
You thought it out very well :)

I'm interested in which orientation you had the case when filling the loop, as I know it can be tricky to prime the AD2 with a non-standard reservoir orientation.
 
Do those load temperatures seem way too high for water at stock clocks to anyone else? What direction are your fans blowing? On non-fanned grills, how much passive airflow is there/how much pressure does it feel like there is in the case?
 
Do those load temperatures seem way too high for water at stock clocks to anyone else? What direction are your fans blowing? On non-fanned grills, how much passive airflow is there/how much pressure does it feel like there is in the case?

Max CPU seems high, but it's an i7 and I presume it's not delidded.
Max GPU temps seem fine, good even.
 
Do those load temperatures seem way too high for water at stock clocks to anyone else? What direction are your fans blowing? On non-fanned grills, how much passive airflow is there/how much pressure does it feel like there is in the case?
Max CPU seems high, but it's an i7 and I presume it's not delidded.
Max GPU temps seem fine, good even.

CPU is higher than what I would have guessed, but the GPU is (IMO) an excellent result.

It's worth mentioning that, at stock/reference clocks, the CPU and GPU have a combined TDP of 338W. That's just 20% lower than a configuration with the same CPU and dual GTX 980s. Even if you extrapolate that power consumption to temperature increases completely proportionately (which will grossly overestimate the actual temperature, but bear with me), it's not only feasible to support the thermal envelope of a high end dual-GPU configuration... It's actually an impressively good solution in its own right. You could even overclock that dual-GPU, since overclocking would be limited more by power and noise than thermals, in all likelihood... And, perhaps, CPU temps, if you've got a toasty chip.
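
(For anyone who wants to check that figure, the rough arithmetic - using nominal TDPs of 88W for the 4790K, 250W for a reference GTX 780, and 165W per GTX 980, rather than measured draw - looks like this:

88W + 250W = 338W for the current CPU + GTX 780 combo
88W + 2 x 165W = 418W for the same CPU with dual GTX 980s
338 / 418 ≈ 0.81, i.e. roughly 20% less heat to dissipate.)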

This is why I have been so enthusiastic with regards to experimenting with two 240mm radiators in the M1. Certainly the work in implementing that is significant, but if the right hardware is released, you're potentially achieving parity with desktops of any size when it comes to maximum gaming performance, within reason (3-4 GPU setups remain an unrecommended waste and a frequent performance killer). In a freakin' 12L case!

*Holds out hope for a GTX 990* ;)

I'd be quite curious as to what the temps are once you overclock the CPU, MrJerico. If I had that build I would also see how far I could overclock the graphics, too - you certainly wouldn't be limited by GPU temps, and you have plenty of power headroom. Again, your CPU temps seem to be the only real possible bottleneck here.
 
You thought it out very well :)

I'm interested in which orientation you had the case when filling the loop, as I know it can be tricky to prime the AD2 with a non-standard reservoir orientation.

Yeah, priming was tough; I had to hold the case at a very sharp forward angle and fill the res that way. On the FrozenQ res, both stops on the top had to be open while I had my funnel attached, or else a vacuum effect would prevent water from flowing into the AD2. This made for a very tricky situation where I had to tip it enough to make the water flow into the pump, but not so much that it would spill out of the other open port on the res.

Do those load temperatures seem way too high for water at stock clocks to anyone else? What direction are your fans blowing? On non-fanned grills, how much passive airflow is there/how much pressure does it feel like there is in the case?



I'm using 2 GT AP-15s on the top rad in push, which pushes a lot of air through the rad and out of the top of the case. I had to do it in this configuration (push instead of pull) so the fittings would sit on the inside of the case. They are pulling air through DEMCiflex filters, so I'm sure they are a little restricted.

I also feel air coming through the bottom rad when the Slipstream slims are maxed; otherwise it's basically passive. The bottom fans are also in push, with air blowing up into the case.

There is definitely positive pressure in the case, with upward airflow out of the top; it is very noticeable when all the fans are maxed.

CPU is higher than what I would have guessed, but the GPU is (IMO) an excellent result.

It's worth mentioning that, at stock/reference clocks, the CPU and GPU have a combined TDP of 338W. That's just 20% lower than a configuration with the same CPU and dual GTX 980s. Even if you extrapolate that power consumption to temperature increases completely proportionately (which will grossly overestimate the actual temperature, but bear with me), it's not only feasible to support the thermal envelope of a high end dual-GPU configuration... It's actually an impressively good solution in its own right. You could even overclock that dual-GPU, since overclocking would be limited more by power and noise than thermals, in all likelihood... And, perhaps, CPU temps, if you've got a toasty chip.

This is why I have been so enthusiastic with regards to experimenting with two 240mm radiators in the M1. Certainly the work in implementing that is significant, but if the right hardware is released, you're potentially achieving parity with desktops of any size when it comes to maximum gaming performance, within reason (3-4 GPU setups remain an unrecommended waste and a frequent performance killer). In a freakin' 12L case!

*Holds out hope for a GTX 990* ;)

I'd be quite curious as to what the temps are once you overclock the CPU, MrJerico. If I had that build I would also see how far I could overclock the graphics, too - you certainly wouldn't be limited by GPU temps, and you have plenty of power headroom. Again, your CPU temps seem to be the only real possible bottleneck here.

Yeah, my CPU is definitely on the toasty side even at stock, which is unfortunate. I didn't delid it, obviously, but I have considered reapplying thermal paste; it's possible the AD2 slid around a little too much while I was installing it and ruined my application. If the temps get any worse I will consider it, but it's a huge hassle.

I think it sounds amazing if they come out with a dual 900-series card, instead of the stupid $3000 Titan Z... Before you go out and try this, though, I should let you know that there is absolutely no room for a larger card; I got incredibly lucky that this card fit at all. There is maybe 1-2mm of clearance between the bottom rad tubing and the 780. I feel like the 990, if Nvidia ever graces us with one, may not fit. =(

Also I will get you overclock temps tonight, mind you they aren't going to be very impressive.
 
Yeah, my CPU is definitely on the toasty side even at stock, which is unfortunate. I didn't delid it, obviously, but I have considered reapplying thermal paste; it's possible the AD2 slid around a little too much while I was installing it and ruined my application. If the temps get any worse I will consider it, but it's a huge hassle.

What temps did you have, and on what cooling, when you ran 4.8GHz on that chip previously? I guess I'm just wondering how you could (presumably) have had good temps at 4.8 before, but pretty lame ones at stock on a WC loop whose GPU has much lower temps (suggesting that the loop has plenty of thermal headroom).

I think it sounds amazing if they come out with a dual 900-series card, instead of the stupid $3000 Titan Z... Before you go out and try this, though, I should let you know that there is absolutely no room for a larger card; I got incredibly lucky that this card fit at all. There is maybe 1-2mm of clearance between the bottom rad tubing and the 780. I feel like the 990, if Nvidia ever graces us with one, may not fit. =(

The Titan Z is just ridiculous, though marginally less so now that it's easily had for $1500. Still, the 375W TDP would be quite a bit of heat for the configuration (463W with the i7). I wouldn't do it, personally.

I am cautiously optimistic that we will see a GTX 990 with two 980 GPUs on one card, mostly because it is feasible to do with an air cooler, and because nVidia would be remiss to leave the ~$1000-2000 segment uncontested. It certainly feels like the right time to do it, at least.

You're right to be concerned about length. Here's the last two dual-GPU cards, for perspective:

GTX 690: Dual-slot, 11'' long, 300W.
GTX TITAN Z: Triple-slot, 10.5'' long, 375W.

Even though the TITAN Z consumes up to 75W more than the 690, it's actually the same length as your card. Its cooler is also three slots tall, but that wouldn't matter if you put a water block on it. However, the 690 was 11", and the theoretical maximum consumption of a 2xGTX 980 at stock is 330W - which is to say, closer to the 690. I'd bet that nVidia would prefer a longer two-slot card over a shorter three-slot one.

Could nVidia get consumption down to ~300W or lower through binning and mild downclocks, thus possibly keeping the size down? Certainly. What the length and height of this mysterious card would be is totally up in the air, though.

I'm guessing that you couldn't use angled fittings or anything to get a little more space? Doesn't necessarily look like you could, from your photos...

Also I will get you overclock temps tonight, mind you they aren't going to be very impressive.

If you could share BCLK, voltage, etc, that would be awesome!
 
[Image: w9MbV2f.png]


Here's my computer at 4.8GHz; I have to set it to 1.37V to get it to run with my RAM... The temps here are idle temps after Super PI, but you can see how high they got during the test. I will do another Watch Dogs test for science.
 
First of all, sorry for the double post.

What temps did you have, and on what cooling, when you ran 4.8GHz on that chip previously? I guess I'm just wondering how you could (presumably) have had good temps at 4.8 before, but pretty lame ones at stock on a WC loop whose GPU has much lower temps (suggesting that the loop has plenty of thermal headroom).

Yeah, I don't understand it, but my temps are very similar even though the voltage is quite a bit higher. I'm interested to see what your thoughts are.

The Titan Z is just ridiculous, though marginally less so now that it's easily had for $1500. Still, the 375W TDP would be quite a bit of heat for the configuration (463W with the i7). I wouldn't do it, personally.

I am cautiously optimistic that we will see a GTX 990 with two 980 GPUs on one card, mostly because it is feasible to do with an air cooler, and because nVidia would be remiss to leave the ~$1000-2000 segment uncontested. It certainly feels like the right time to do it, at least.

You're right to be concerned about length. Here's the last two dual-GPU cards, for perspective:

GTX 690: Dual-slot, 11'' long, 300W.
GTX TITAN Z: Triple-slot, 10.5'' long, 375W.

Even though the TITAN Z consumes up to 75W more than the 690, it's actually the same length as your card. Its cooler is also three slots tall, but that wouldn't matter if you put a water block on it. However, the 690 was 11", and the theoretical maximum consumption of a 2xGTX 980 at stock is 330W - which is to say, closer to the 690. I'd bet that nVidia would prefer a longer two-slot card over a shorter three-slot one.

Could nVidia get consumption down to ~300W or lower through binning and mild downclocks, thus possibly keeping the size down? Certainly. What the length and height of this mysterious card would be is totally up in the air, though.

I'm guessing that you couldn't use angled fittings or anything to get a little more space? Doesn't necessarily look like you could, from your photos...

[Image: QcMWc8a.jpg]


Maybe this will answer your question. It's pretty hard to see, but hopefully it helps you get a good idea of how close the tubes are to the card. They are almost touching; it's amazing how lucky I got.



If you could share BCLK, voltage, etc, that would be awesome!

Yeah, for sure. I want to actually save my BIOS settings this time, so give me a little while to undo my overclock, but I will let you know. I think it was 1.21V with a 44x multiplier at a 99.99MHz BCLK, though I'm not 100% sure.
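
(Doing the math on that, 44 x 99.99MHz only works out to about 4.4GHz, and it would take a 48x multiplier at that BCLK to hit the 4.8GHz I mentioned - so take those numbers with a grain of salt until I can check the saved profile.)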
 
Yeah, I don't understand it, but my temps are very similar even though the voltage is quite a bit higher. I'm interested to see what your thoughts are.

This graph should help some (Max temperatures for each component, by test):

[Image: eILhX5F.png]


At first blush, what I would say is happening is that, when your graphics card is at full bore (and your CPU is semi-loaded), CPU cooling suffers, and your GPU is just fine. But there's not enough here to be confident in that because none of these runs are comparable - both the CPU and GPU are running at different loads across all these tests.

Another observation: a pretty highly overclocked CPU would have a TDP approaching ~40-45% that of your graphics card, and yet the effect on your GPU temperatures is negligible, which suggests that this isn't an issue with the loop. Meaning... you may be getting not-so-great thermal transfer off of the processor, and/or your voltage increases are (relatively speaking) increasing power consumption far beyond what you would normally expect. I'm theorizing that it's a little of both, but I reiterate that we just don't know; these are guesses.

What would be really interesting is if we knew temps for CPU stress (GPU idle) at stock, and temps for CPU + GPU stress at your overclock. That way you can infer a lot more from these observations, since you can roughly isolate cause and effect between the components. Though, possibly, you'd hit a new peak for CPU temps with the latter test.

If you have a Kill A Watt meter, that would be even better - I could get into much more precise detail, since I would know the amount of power entering the system exactly, rather than inferring it via TDP and component wattage/voltage.
 
a pretty highly overclocked CPU would have a TDP approaching ~40-45% that of your graphics card, and yet the effect on your GPU temperatures is negligible, which suggests that this isn't an issue with the loop. Meaning... you may be getting not-so-great thermal transfer off of the processor, and/or your voltage increases are (relatively speaking) increasing power consumption far beyond what you would normally expect. I'm theorizing that it's a little of both, but I reiterate that we just don't know; these are guesses.

Not just guesses - that is correct and insightful analysis :)

Watercooling can be imagined as a two-stage process: heat from the chip into the waterblock, then heat from the water out through the rads. His radiators are obviously plenty for removing heat from the water, even at low fan speeds. So the issue is transfer between the chip and the water.

This could be due to incompatibility between the heat spreader on the chip and the bottom plate of the pump. I've lapped one Apogee Drive II and found it a little concave (i.e., when sanding on glass, the outside edge of the metal was smoothed first). If the heatspreader on the chip is 1) also concave, then the thermal paste may be too thick between the two metal plates and actually insulate a bit; or 2) convex, then there is better contact between heatspreader and waterblock, but the chip die underneath makes worse contact with the heatspreader. This is why people delid: by removing the glue holding the heatspreader, the pressure of the waterblock can flatten the plate, and liquid metal improves contact between the die and its cover. The core temp drop at load can be dramatic for an overclocked chip: 10C, 15C, even 20C on Ivy/Haswell chips.

Moreover, compare the heat transfer of the CPU with that of the GPU. The CPU's heat is concentrated in a very small area: the CPU die itself is tiny compared to its heatspreader, and the active cores are all bunched at one end. With a GPU, the cores are spread evenly and the die is close to the size of the heatspreader. I believe I once calculated that the hotspot of my i7-3770K was one sixteenth the area of the hotspot under my GTX 660 Ti. So although the GPU is producing 3x or 4x more watts, its rate of heat transfer is still much better.
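
(Purely as an illustration of how those two factors trade off: if the GPU puts out roughly 4x the watts but spreads them over roughly 16x the hotspot area, its heat flux works out to about 4/16 = 0.25x, i.e. around a quarter of the CPU's watts per unit area - which is why it dumps its heat into the block so much more easily.)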

TL;DR in watercooling: radiators are underestimated, chip die size and hotspot area often ignored.
 
I'm also seeing this after playing BF4 on my setup (see signature): my CPU rises to about 60-65°C, my GPU rises to 50-55°C. If I play Diablo 3, a graphically and "physically" less intensive game, the CPU temp is around 50°C and my GPU is about 45-50°C.

But the CPU has a lot more "spikey" temps: just loading an app shoots the package temp 10°C higher, but it also falls back down fast. I guess it's the "insulation" of the crappy TIM underneath the heat spreader.
 
But the CPU has a lot more "spikey" temps: just loading an app shoots the package temp 10°C higher, but it also falls back down fast. I guess it's the "insulation" of the crappy TIM underneath the heat spreader.

I can't find the link now, but someone did some testing of the Intel TIM and found it actually performed favourably compared to some popular aftermarket emulsion-type pastes. The problem is the fact that it's paste at all, and that the glue used to hold the heatspreader is too thick.

The conclusion of the test was really that while TIM can be effective between two sheets of metal, it's less effective between the die and the underside of the heatspreader. Solder or liquid metal is required for good thermal transfer from the glass of the die.
 
Hey guys, I have a new update for you. I reapplied thermal paste and downclocked back to stock; here are my idle temps now. Pretty significant difference. I should also note that it is probably closer to 22-23C in my room now.

[Image: AlppLy2.png]


I've been really busy this weekend, but I will get a Prime95 load temp at stock and see what happens. Great graph, PlayfulPhoenix; it's interesting to see how much more the CPU peaks compared to the GPU.
 
I reapplied thermal paste and downclocked back to stock; here are my idle temps now. Pretty significant difference. I should also note that it is probably closer to 22-23C in my room now.

A change of 2C in ambient temperature is essentially negligible, especially when using a water loop; your idle temperatures would not be measurably different.

Those new idle temperatures are much, much closer to what I would expect a stock 4790K to have on a custom loop, and in a good way! They now approximate the idle temperature of your GPU, which is pretty good, actually. Whatever you did to reapply the block worked well - I would love to know what temperatures you'll get if and when you reapply your overclock.

I've been really busy this weekend, but I will get a Prime95 load temp at stock and see what happens.

If possible, I would suggest performing tests in every combination, while you are at stock clocks (ranked in order of most-useful to least in case you don't feel like doing all of them):

  1. Stress the CPU; Leave the GPU at idle
  2. Stress the GPU; Leave the CPU at idle
  3. Stress the CPU and GPU
You can use something like Furmark to do GPU testing. But these tests will be helpful for a few reasons:

  • You'll know how good thermals are at stock clocks when everything is maxed out, which can inform what overclocks you want, among other things.
  • We will be able to infer how the thermals of one component can affect the other, if that's an issue.
  • We will have a very good idea as to how well the loop is performing overall (I shamelessly admit that this is where I am most interested, given my own intentions for a future build).

By the way, if noise is something you care about, I would ensure that the fans never breach your "noise limit" during these tests, and that you don't change their behaviour between tests. That way you can ignore the variable of noise (knowing it to be acceptable) unless you are compelled to trade temperatures for fan RPMs (or vice versa) after the fact.
 
I wouldn't recommend Furmark for GPU testing; the Heaven or Valley benchmarks are much better suited to giving a realistic idea of what your GPU is going to be doing. The same goes for Prime95: it's an unrealistic load that never happens (certainly not while gaming), so you'd be better off compressing a movie file with something like HandBrake, or using PCMark 8 or Cinebench.
 
I can't find the link now, but someone did some testing of the Intel TIM and found it actually performed favourably compared to some popular aftermarket emulsion-type pastes. The problem is the fact that it's paste at all, and that the glue used to hold the heatspreader is too thick.

The conclusion of the test was really that while TIM can be effective between two sheets of metal, it's less effective between the die and the underside of the heatspreader. Solder or liquid metal is required for good thermal transfer from the glass of the die.


Yup delid and CPU temps will improve significantly.

http://forums.anandtech.com/showthread.php?t=2261855
 
Yup delid and CPU temps will improve significantly.

I'm not convinced that a delid would be worthwhile here, considering the effort involved. If temps at OC under a stress test were only peaking at 61C, temperatures are already comfortable. And that's before the CPU block was even reseated - temperatures should be better yet while under stress.

Still, I actually remain somewhat confused by these results in aggregate. If we accept that the high CPU temps were due to bad contact with the block, and we accept that the loop can easily cool all the components, why is it that the CPU had higher peak temperatures during gaming at stock clocks, rather than during a stress test at a higher clock and voltage?

Playing devil's advocate, if we instead accept the inverse argument - that the CPU contact was good and the loop does not easily cool everything - how is it that a much hotter CPU isn't affecting the temperature of the GPU, at all?

My best guess is that the max temperatures represent fleeting spikes that are relatively random, and therefore not representative of what the sustained temperatures actually are during these loads. But the discrepancy seems too large for it to be explained by that alone. I'm wondering if there's some interaction going on across all these variables that explains it.

Doing the aforementioned tests will hopefully shed some light on this.
 
I'm not convinced that a delid would be worthwhile here, considering the effort involved. If temps at OC under a stress test were only peaking at 61C, temperatures are already comfortable. And that's before the CPU block was even reseated - temperatures should be better yet while under stress.

+1. Mid-60C peak temps at 4.8GHz on Devil's Canyon is quite impressive. If he can keep that same clock during Prime95 or AIDA64 blended tests with temps in the 70s and 80s, then I'd leave the IHS as-is. If he can do 4.8GHz on the FFT-only tests without throttling, then there's definitely no reason to delid imo.
 
how is it that a much hotter CPU isn't affecting the temperature of the GPU, at all?

It is, of course, but think back to your physics lessons: heat transfer is proportional to area, but more importantly to the temperature delta between the two surfaces. Because it's a dynamic system, we would have to integrate to model/calculate the exact interaction going on here.

But it should nevertheless be clear that the ~90W the overclocked CPU is putting into the water is not going to raise the water temperature enough to really 'heat' the GPU. When both are putting ~300W of heat into the water, however, the water temperature gets closer to that of the chips/blocks, and so the rate of heat transfer drops for both.
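
(The relationship being invoked here is just the standard convection equation, q = h · A · ΔT: for a fixed block area A and heat-transfer coefficient h, the rate of heat flow q scales with the temperature difference between block and water, so as the water warms toward chip/block temperature the transfer rate falls for everything in the loop.)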
 
It is, of course, but think back to your physics lessons: heat transfer is proportional to area, but more importantly to the temperature delta between the two surfaces. Because it's a dynamic system, we would have to integrate to model/calculate the exact interaction going on here.

My knowledge of thermodynamics is pretty limited, but I've used regression analysis in the past quite successfully to understand what causes temperature changes, and what to expect when overclocking. In theory this method becomes less helpful the closer you approach the limits of how much heat the radiators can dissipate, but in practice the estimates are quite good, to within a few degrees, if you have lots of points of reference. At a minimum it's a good sanity check for determining the upper limits of power consumption (which correlates with clocks) and heat.
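
If it's useful, here's a minimal sketch of what that kind of regression can look like (Python + numpy). The sample points are made-up placeholders - you'd substitute your own logged max temperatures and estimated package power from each test run - and the linear model is just the simplest thing that fits the "temperature scales with power" idea, not a claim about the exact physics.

Code:
import numpy as np

# (cpu_watts, gpu_watts, observed_cpu_temp_C) - hypothetical example points,
# e.g. one row per stress-test combination
samples = [
    ( 30,  30, 35.0),   # near idle
    ( 90,  30, 52.0),   # CPU stressed, GPU idle
    ( 30, 230, 44.0),   # GPU stressed, CPU idle
    ( 95, 240, 61.0),   # both stressed
]

X = np.array([[c, g, 1.0] for c, g, _ in samples])   # constant column for the offset
y = np.array([t for _, _, t in samples])

# Least-squares fit: temp ~ a*CPU_W + b*GPU_W + offset
(a, b, c0), *_ = np.linalg.lstsq(X, y, rcond=None)
print("temp ~ %.3f*CPU_W + %.3f*GPU_W + %.1f" % (a, b, c0))

# Rough extrapolation to a heavier (hypothetical) overclock: ~130W CPU, 250W GPU
print("predicted CPU temp:", round(a * 130 + b * 250 + c0, 1), "C")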

MrJerico, have you gotten around to taking any measurements? You'll have to forgive my eagerness ;)
 
Thanks to the OP for posting this build. I've been planning a dual-radiator setup in the M1 also. I'm still in the hardware acquisition phase, but it's good to see that it is possible. Since I want to keep the front panel I/O and use a dedicated pump, I was planning on trying to fit a single 120mm rad on the bottom instead of the 240mm you have, but I am waiting until most everything else is installed before committing on the rads...
 
Thanks to the OP for posting this build. I've been planning a dual-radiator setup in the M1 also. I'm still in the hardware acquisition phase, but it's good to see that it is possible. Since I want to keep the front panel I/O and use a dedicated pump, I was planning on trying to fit a single 120mm rad on the bottom instead of the 240mm you have, but I am waiting until most everything else is installed before committing on the rads...

I'm not sure you'll find an arrangement where you can have a 120mm radiator and a dedicated pump on the bottom. Or at least I can't imagine what parts you'd use, and how you would orient them... How were you thinking of doing it?
 
Thanks to the OP for posting this build. I've been planning a dual-radiator setup in the M1 also. I'm still in the hardware acquisition phase, but it's good to see that it is possible. Since I want to keep the front panel I/O and use a dedicated pump, I was planning on trying to fit a single 120mm rad on the bottom instead of the 240mm you have, but I am waiting until most everything else is installed before committing on the rads...

You're welcome! If you want help, I'd be happy to tell you what problems I faced. The biggest issue you are likely to face is getting all the tubes from under the graphics card up into the case, where your second rad, reservoir, and CPU block are.
 
I'm not sure you'll find an arrangement where you can have a 120mm radiator and a dedicated pump on the bottom. Or at least I can't imagine what parts you'd use, and how you would orient them... How were you thinking of doing it?

I'm not entirely sure. But, I do have a slim 30mm thick 120mm rad and a Swiftech MCP50x on the way so I will know soon. I will use a short PCB Zotac 970 with what seems to be a low profile water block (Heatkiller). Hopefully that combo will do the trick but we will see soon.
 
I'm not entirely sure. But, I do have a slim 30mm thick 120mm rad and a Swiftech MCP50x on the way so I will know soon. I will use a short PCB Zotac 970 with what seems to be a low profile water block (Heatkiller). Hopefully that combo will do the trick but we will see soon.

So are you thinking of putting the radiator under the card, and then having the pump sit at the bottom-front of the case? Have you thought about how you'll manage tubing?

I think I'm beginning to see how you want to do it but I'm still not sure that (even with angled fittings) you'll have quite enough room...
 
So are you thinking of putting the radiator under the card, and then having the pump sit at the bottom-front of the case? Have you thought about how you'll manage tubing?

I think I'm beginning to see how you want to do it but I'm still not sure that (even with angled fittings) you'll have quite enough room...

Yes, that is exactly what I was thinking. FrozenCPU came through with the parts last Friday, except for a fan that they forgot to package. I would ideally have run the system on air cooling to make sure everything works but I don't really have the time for that. Got the Heatkiller 970 GPU block and backplate installed on the Zotac 970 ... That's a damn small card! Will get the Bitspower motherboard block installed next and then get parts in the case. Then I will know if there is enough room for all the plumbing with a second rad. Still waiting on the FrozenQ reservoir ... Must be nearly 2 months now since I ordered it.

I can post pics here if it actually will fit and if the OP doesn't mind me tagging along on his thread, with the common theme being dual rad setup for the M1. I'm sure the world doesn't need another M1 thread.
 
I'd go ahead and post, it's certainly germane to the discussion, and I don't know that MrJerico ever got around to testing his build, unfortunately.

Sorry that you've had so many troubles ordering everything - I would be pretty bummed after so many hangups.
 