Pictures Of Your Dually Rigs!

Older pics but you'll get the idea.

Corsair 900D
EVGA SR-2
2 x Xeon X5675
48GB DDR3 1600MHz
2 x 120GB PNY SSD RAID0
2 x Titan V
1 x Titan Xp
1 x RTX 3090 KINGPIN
EVGA 1600 G+

View attachment 437222
View attachment 437223



I was curious how you use such a diverse set of video cards in the same build. Are you running them in "containers" with vGPU / GRID?

Clean build
 
My last dual system

20180325_095701.jpg

My first
20160325_100245.jpg
air-cooled and water-cooled

fa17477a_p32100798lp.jpeg
 
Took another picture of my server yesterday, so I figured I might as well share. It is a server, not a workstation build, so it will naturally be a bit different from many of the other builds in here.

PXL_20220126_203906206.jpg

(Click for larger)

It looks a little ghetto, but it does the job and is a beast, and that's what matters.

Specs are as follows:
- 2x Xeon E5-2650 v2 (16C/32T total)
- Supermicro X9DRI-F motherboard
- 256GB Registered ECC DDR3
- Boots off a ZFS mirror of two 500GB Samsung 850 EVOs (the motherboard does not support M.2 booting)
- Three Asus 16x-to-four-M.2 risers, populated with twelve Inland Premium MLC M.2 drives ranging from 256GB to 2TB each
- One 8x-to-two-U.2 riser card, populated with two 280GB Intel Optane 900p drives
- One LSI 9305-24i SAS HBA connected to a direct-passthrough backplane (no SAS expander)
- 12 Seagate Exos SATA hard drives in the backplane: 11 are 10TB drives, one is a 16TB drive (I have just started swapping in larger drives to grow the pool)
- Supermicro SC846 4U 24-drive chassis (modified)
- One Intel dual-port 10GbE SFP+ network card. One port is connected to my main switch using a DAC cable; the other is direct-linked to my workstation via fiber.


Storage Pools Configuration:

Code:
Boot Pool
      mirror-0
        Samsung 850 EVO 500GB (SATA)
        Samsung 850 EVO 500GB (SATA)
    logs
      mirror-1
        Intel Optane 900p 280GB
        Intel Optane 900p 280GB



MythTV Scheduled TV Recordings Pool
      mirror-0
        Inland Premium M.2 MLC 1TB
        Inland Premium M.2 MLC 1TB


VM and Container storage pool
      mirror-0
        Inland Premium M.2 MLC 256GB
        Inland Premium M.2 MLC 256GB
    logs    
      mirror-1
        Intel Optane 900p 280GB
        Intel Optane 900p 280GB


Data Storage Pool
      raidz2-0
        Seagate Exos 10TB
        Seagate Exos 10TB
        Seagate Exos 10TB
        Seagate Exos 10TB
        Seagate Exos 16TB
        Seagate Exos 10TB
      raidz2-1
        Seagate Exos 10TB
        Seagate Exos 10TB
        Seagate Exos 10TB
        Seagate Exos 10TB
        Seagate Exos 10TB
        Seagate Exos 10TB
    special    
      mirror-4
        Inland Premium M.2 MLC 2TB
        Inland Premium M.2 MLC 2TB
        Inland Premium M.2 MLC 2TB
    logs    
      mirror-3
        Intel Optane 900p 280GB
        Intel Optane 900p 280GB
    cache
        Inland Premium M.2 MLC 1TB
        Inland Premium M.2 MLC 1TB

For those of you unfamiliar with ZFS notation, for each pool the data drives are up top. Using the Data pool as an example, there are two RAIDZ2 vdevs that are striped, making it the ZFS equivalent of hardware RAID60.

Under that there is a special vdev: three mirrored M.2 drives. Special vdevs speed up a pool by storing pool metadata and small files. It is a three-way mirror because if the special vdev is lost, the whole pool is lost, so I matched its redundancy to the main data vdevs. In the worst case with two RAIDZ2 vdevs, I can lose all data if I lose three drives in the same vdev, so I made the special vdev tolerate losing two drives without losing data as well.

Under that we have the log drives. The LOG is also called the SLOG or ZIL (Log, Separate Log device, or ZFS Intent Log). These need to be very low-latency devices, which is why I used Optane drives. For sync writes only, they are written to in parallel with the main data pool, allowing the write to be reported as complete once it has hit the SLOG, even if the main drives haven't finished yet. Once the data is fully committed to the main storage pool, the copy on the SLOG is discarded. Under normal use these drives are never read from, only written to. The only time they are read is after a system crash or power loss that happens once the SLOG has finished writing but before the main storage pool has; in that case they are parsed when the pool is mounted and the data is replayed into the main storage pool. This speeds up sync writes tremendously, but does nothing for async writes, which are reported as complete as soon as they hit RAM.

Here I have done something rather unorthodox: I split the SLOG drives (I only have two, mirrored) into multiple partitions and share the same pair across my boot pool, VM/container pool, and data storage pool (the TV recording pool doesn't need one, because it only does async writes). This is typically not recommended, but because the pools are usually not all writing at the same time, and because the Optane drives are insanely fast in this role, I made the judgment call that it would be OK. Performance has been solid so far.
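
For anyone wanting to do something similar, here is a rough sketch of how a shared SLOG pair like that can be carved up and attached. Device and pool names below are placeholders for illustration, not my exact commands:

Code:
# Hypothetical device and pool names (illustration only).
# Split each Optane 900p into three small partitions; a SLOG rarely needs
# more than a few GB, so 16G each is already generous.
sgdisk -n 1:0:+16G -n 2:0:+16G -n 3:0:+16G /dev/nvme0n1
sgdisk -n 1:0:+16G -n 2:0:+16G -n 3:0:+16G /dev/nvme1n1

# Attach one mirrored partition pair as the log vdev of each pool.
zpool add bootpool log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
zpool add vmpool   log mirror /dev/nvme0n1p2 /dev/nvme1n1p2
zpool add datapool log mirror /dev/nvme0n1p3 /dev/nvme1n1p3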

The last vdev is the cache vdev, which serves as a read cache (L2ARC) for the pool: two 1TB M.2 drives that are striped. Striping adds no additional risk here, since nothing is lost if they fail; the system just re-reads the data from the main storage pool (only slower). This gives me 2TB of lightning-fast M.2 read cache.
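
For reference, creating a pool shaped like the data pool above would look roughly like this. I'm writing it from memory with placeholder device names, so treat it as a sketch rather than my exact commands:

Code:
# Placeholder /dev/sd* and /dev/nvme* names (illustration only).
zpool create datapool \
    raidz2 sda sdb sdc sdd sde sdf \
    raidz2 sdg sdh sdi sdj sdk sdl \
    special mirror nvme2n1 nvme3n1 nvme4n1 \
    cache nvme5n1 nvme6n1
# The mirrored Optane log partitions get added afterwards (see the sketch
# above), and cache devices are always striped: losing one only costs you
# cached reads.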



Other system notes:
- I'm still on Ivy Bridge era Xeons simply because upgrading will force me to buy all new RAM, and I haven't had the desire to do that. I'd be lying if I said I hadn't been eyeballing some single socket EPYC boards though.
- I have 12 empty drive slots to add more hard drives should I so choose, but for now I am deciding to replace the drives with larger ones and grow the pool instead, as I want to repurpose the old drives for my remote backup server.
- The case is modified to be easier on the ears in a home rack. It came with an 80mm fan wall, but it was pretty loud, so I removed it, installed three 120mm Noctua industrial fans instead, and ghetto-modded a little barrier on top of them to keep air flowing in one direction. I also replaced the original power supplies with two Supermicro 900W quiet models to keep the noise down.
- The MythTV recordings M.2 pool exists to avoid any stutter in recordings by providing fast writes. It is 1TB in size, and every night at 3am it automatically dumps the oldest TV recordings to the hard drive storage pool, leaving the pool 70% empty for new recordings. It is the only pool that does not use the LOG drives, as I have set it to always perform async writes (async writes don't use the SLOG). The rationale is that sync writes are unnecessary here: the only thing they would protect is the last bit of data in case of a crash or power interruption, and I don't want incomplete recordings anyway; I can always re-record them when they are re-run. (A rough sketch of the nightly move job is below.)
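
In case anyone is curious, the nightly move job boils down to something like the sketch below. Pool, dataset, and path names are made up for illustration rather than copied from my setup:

Code:
#!/bin/sh
# migrate-recordings.sh (illustrative sketch, hypothetical names/paths)
# Run nightly from cron, e.g.:
#   0 3 * * * root /usr/local/bin/migrate-recordings.sh
# The recordings dataset itself is forced to async-only writes with:
#   zfs set sync=disabled recpool/recordings

# Move the oldest recordings onto the big hard drive pool until the fast
# M.2 pool is back under ~30% used (roughly 70% empty).
while [ "$(zpool list -H -o capacity recpool | tr -d '%')" -gt 30 ]; do
    oldest=$(ls -tr /recpool/recordings | head -n 1)
    [ -n "$oldest" ] || break
    mv "/recpool/recordings/$oldest" /datapool/recordings/
done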


Damn, that became a larger post than I expected.
 
Never really added it up; it was put together over time. Some parts were carried over from my prior build (a 4x Titan V build, including the Titan V 32GB CEO Edition). I sold a lot of the prior components to help finance this build.

I think if someone added up the price of the individual parts they'd get a rough idea, but again, it was built by migrating prior systems and selling and buying.

My other build is at my wife's desk and she uses dual 3090s as well. We sold her prior Titan Vs to help finance that build too (but I kept the Titan V 32GB CEO Edition).

So it's transient, selling and buying and migrating from one build to the next; the PC wasn't a from-scratch cost.


A few of the prior builds and necessary components.
View attachment 436860
View attachment 436848
View attachment 436849
View attachment 436850
View attachment 436851
View attachment 436852
View attachment 436853
View attachment 436854
View attachment 436855
View attachment 436856
View attachment 436857
View attachment 436858
View attachment 436859

That is a thing of beauty.

Just chiming in to note that you don't have to be afraid of water. The added maintenance is minimal: just drain and refill once every year or two, and it can help the system be even more silent, even at full load.

I'm curious, what do you use it for? My last two multi-GPU builds (one CrossFire in 2011, one SLI in 2015) I found to be huge disappointments, at least for game/graphics loads. Dyno queens for sure, elevating the average framerates in benchmarks, but in actual games the 1% framerates were close to or the same as the same GPU running alone, meaning that when I needed the power the most, it wasn't there.

And then I had to fight all sorts of game bugs due to mGPU, microstutter, increased input lag (inevitable with AFR rendering modes), etc.

For me that meant it was a poor investment at best, so I shifted my focus to getting the absolute most out of a single high-end GPU instead, with overkill watercooling to maximize overclocks/boost clocks.

Do you use them for something else? Some sort of productivity/research?
 

Hi

Good observations and questions.

If you look at some of the other posts/pics underneath it, you'll notice that I did a couple of water builds. In fact, back when hardocp.com, 2cpu.com and nvnews.net were the only decent websites out there... I posted the first water-cooled dual-CPU rig on the net (other than Cray). I also ran RAID and dual power supplies and cooled the chipset and video card. But for this newer build I wanted ALL water or ALL air, and ALL water was too complicated if I wanted to retain the smaller, dense size. I wanted it full and dense with little wasted space. I also wanted it close to zero maintenance, with no worrying about the water, which I still did regardless of advancements. I was tempted to do a 4x 3090 / dual-CPU build using watercooling so that the card slot width was manageable, but in a mock build it became rather complicated. (Side note: I've been with hardocp longer than the 2004 member marker, since the beginning, but in 2004 my account was merged/reset, so it grabbed the date of the moment.)


I posted in this forum (before anyone else) how to re-enable 3- and 4-way SLI after nvidia hobbled it from the 1000 series cards onward. I was able to mitigate a lot of the micro-stuttering and get "ok" scaling through a LOT of experimenting in Inspector. Regardless, more than half of the games I like ran like crap in SLI. Then again, in my case the multiple video cards were for the medical software. (I think I also posted the first 4-way SLI PC with dual CPUs.)


What is it used for:
I literally have three high-level jobs, own two businesses, and am a full-time student (add in married with kids). I take my work home with me (a lot), which involves medical databases, isotope software, and other commitments. My dissertation (PhD) is in deep learning for clinical decision support systems (DL CDSS), and I run a lot of my algorithms on my own rig. I have a dummy training database as well (75% training / 25% verification).

Oh... and I love to play video games at max visuals (4K, 10-bit, NO DLSS, no film noise, no motion blur, no dynamic resolution, no vignette, etc.).

Hope that helps, and also explains why I have used higher-end PC builds since 1998, dual CPU, mGPU, etc. ...and it's also an addiction / OCD.

Here are some pics of my older water cooled rigs, first 4 way sli / 2cpu, etc - some of these are from a long time ago:

MVC-610F.JPG

d1.jpg

15photo.jpg

b1.jpg

584795_407117_98466_0AC411BB-EC91-40A3-A411-DF377CC93B45 (1).jpeg

e.JPG

sideboard.JPG
 
What is it used for:
I literally have three high level jobs, own two businesses and am a full time student (add in married with kids).
How do you even find time to do build these things? :eek: I manage about 22 different entities and basically live in two different states, but am married without kids and I had to pretty much let go almost all my hobbies when I took on all this responsibility. I was single with a single company before this and I had a lot more time to play with my cars and computers.
 
Simple, I have changed my day to 36 hours. I'll sleep when I'm dead.


Honestly, every day feels like a race, and every day ends without a sense of anything really being accomplished. Pleasure is measured in micro-moments. I tell myself it's just going to be like this for a little while... rinse, repeat. It's been that way for decades.

But it sounds like you personally know and understand as well
 


Taken from the retro thread, and while not a 2P, it is a 4P so I thought it might still count.
This is still in use to this day for 32-bit testing (not that 32-bit anything will be around much longer) and is also used for PrimeGrid with the GPU.

I will add that the Xeons in this unit are the very last ones that were 32-bit, circa 2004, right before Intel started releasing x86-64 CPUs in 2005, and I believe they are the only Netburst Xeons to feature 4MB L2 cache; for reference, the average desktop CPU had around 256KB-1MB L2 cache, and the average server CPU had around 512KB-2MB L2 cache at the time.

Dell PowerEdge 6600 6U Server (2004)
- 4x Intel Xeon MP 3.0GHz Socket 603 "Gallatin" 130nm CPUs with 4MB L2 Cache and 400MT/s (200MHz) FSB
- 4GB (4x 1GB RDIMMs) ECC PC-1600 DDR SDRAM
- ServerWorks CMIC-HE 4P quad-socket motherboard
- PERC 4/DC(LSI MegaRAID) hardware RAID controller U320 SCSI with BBU and 128MB cache
- 8x IBM 73.4GB 10K RPM U320 SCSI 80-pin SCA HDDs in RAID5
- 88E8001 1000Base-SX MM-fiber 64-bit PCI-X gigabit Ethernet
- MSI NVIDIA GeForce GT 520 (CF119) 512MB PCI GPU
- 3x 600 watt PSUs
- Ubuntu 18.04.6 LTS Server (i686 - 32-bit) - this officially reached EOL on May 31, 2023, which marks the tail-end of 32-bit Ubuntu support - truly the end of an era.
 
That's just the sound of your air cooling. :)
Honestly, those fans are spinning between 600 and 960 rpm; they don't make much sound, barely perceptible in a very quiet room. They don't need to run fast or loud anyhow. They are capable of 3000 rpm, but even under load the highest they have ever hit is 960. For the CPUs I use 4x Phanteks T30; I found them better than my prior Noctua A25s and less of an eyesore.

In a 70-71°F (21.7°C) house, the CPUs (8280L) normally hover around 79°F (26°C); under load they may get as high as 38°C, body temperature.

The 3090 FEs will actually stop their fans when not under much load. Under load a 3090 FE may get as high as 68°C, but the noise is not bad.

The RAID array has its own fans (NoiseBlocker series) and they keep the Micron 9300 array (52TB) at less than 38°C under full load (also body temperature). The motherboard controls the fan speeds, so at non-heavy moments all the fans sit at 600-700 rpm.

So no, there's not much air-cooling noise from a system that runs at body temperature under load.

IMG_9739.jpg

586029_584561_fan1.jpg

IMG_9787.jpg
IMG_9740.jpg
 
The wife's PC is similar in design to my dual-CPU rig, but hers is a single Xeon, 256GB RAM, and dual RTX 3090 FE in NVLink, with a slightly different RAID array using 4x 860 Pro (16TB array). It also runs silent and cool (body temp under load).

..yes, before you ask, she's a Deadpool fan.


wifePC.jpg
IMG_8867.JPG
 
Super nice rigs - and super expensive, too!

I can't afford that, but for me my dual Xeon workstation (dual E5-2680 v4, 128GB RAM, GTX 970) is sufficient.

Today I tried to render the POV-Ray wiki glass image at 1920x1080... it took ages!


Yours is also a super nice rig. I have one running a single E5 2680v4 with a GTX 750 and it's the fastest beast I own. Waiting to win the lottery before I upgrade. :D
 

Haha yeah me too.

Next upgrade will be something dual Epyc or dual S3647 like... But I assume not the next 2 years.
 


That's a really elegant system. I wish I had your patience to get the RGB right. Love the dual infinity mirrors on the CPUs.
 
Venturi: I've got the RGB super slow, only cycling blue and green and the colors in between... I don't like a disco-style light show under my desk 😂

The Aerocool Mirage are good coolers; of course not the best, but sufficient. And quite silent up to 1500rpm.
 
Well, it is not just for looks. This is the WRITE speed when making a FULL backup of my OS drive to the VROC RAID array:

50.6 Gb/s

View attachment 437465

That's really neat. I'm not familiar with VROC.

Is that bit or byte? The lower case b seems to suggest bit, so ~6.325GB/s, which is still really awesome. What kind of drives are those?

I use ZFS for mine. The 10gig Ethernet is usually the limiting factor for me. I don't do too many things locally on the server that truly challenge the performance of the pool, except maybe scrubs and resilvers.

I just decided to do a scrub (I have them on cron to kick off on the second Sunday of every month, but it doesn't hurt to do an extra one every once in a while).
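
For the curious: cron has no direct "second Sunday" syntax, so a schedule like that is usually expressed by restricting the day-of-month and testing the weekday, something along these lines (pool name is just a placeholder):

Code:
# /etc/cron.d/zfs-scrub: run at 03:00 on the second Sunday of each month.
# Days 8-14 always contain exactly one of each weekday, so test for Sunday.
0 3 8-14 * * root [ "$(date +\%u)" -eq 7 ] && /sbin/zpool scrub tank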

1643509734001.png


1.5GB/s isn't bad for old spinning rust!
 
Actually, your numbers are quite good, especially for spinning disk!


The drives are Micron 9300 MAX U.2 drives, 12.8TB usable each (4x, almost 53TB). They are really 15.6TB but have been overprovisioned to 12.8TB. They run really HOT (68°C+), which is why I made the custom caddy to keep them cool (see pic) and why the whole case HAS TO BE open air, to keep the whole PC no warmer than body temperature (38°C) under load.

Connected to the U.2 VROC and the Skylake RAID controller in each Xeon, also using main RAM as caching. A unique hardware-and-software direct-to-CPU RAID.

https://www.micron.com/products/ssd/product-lines/9300

https://www.amazon.com/Micron-12-8TB-Enterprise-Solid-State/dp/B07SJ7Q6Y6


One must also install the Premium VROC key in the motherboard to activate the various levels of performance. Intel makes you pay an additional fee for it, and it plugs right into the motherboard.


To fit all 4 of the U.2 connectors I had to lift all the components / video cards off the motherboard by 2.3 cm (see pics).

To answer your third observation: I run all my data locally, not across a network, when avoidable. On occasion I extend to my wife's PC (my hand-me-down) and run the verification on that node. I've linked to several correctly labeled taxonomy databases to expand the training data set, but the networking drags the process out for hours and hours. Hence the faster local array for most endeavors.

584562_interstitial_label.jpg

584563_rearangle.jpg
584561_fan1.jpg

586024_3.jpg

584557_scenelabel.jpg
 
Wow, that is a quite intense storage solution.

I have considered doing some form of RAID on my Desktop system using M.2 drives in all five of the slots (3 on board + 2 on Asus proprietary DIMM.2 expansion) using the RAID feature on the motherboard because I have been curious, but I haven't gotten around to it.

Because of the NAS in my previous post, I simply haven't had the need for large storage capacity on the desktop, but having something faster wouldn't hurt. Not sure how I'd set them up though. Maybe a 5-drive RAID5 for good measure? Or maybe three striped 2-drive mirrors? (I'd need to add a PCIe slot adapter, but the Threadripper has plenty of PCIe lanes, so that is not an issue.) The possibilities are endless and intriguing.

I'd do it with ZFS, but unfortunately it seems unlikely ZFS will ever get native support for Windows, so I'll have to use the onboard software RAID solution.
 

I've tried several approaches to get pretend ZFS through a Looking Glass in Windows, but only got a file browser to work and had no measurable (reliable) success. There are a few people who were more successful than me. I don't remember the exact thread, but it was on the Win-Raid forum, another great forum conceptually similar to hardocp.

https://www.win-raid.com


On your other observation, I think RAID 5 is your better option. In my case I only use RAID 0, but I also keep recent backups of the main data. I have backups built in so that full backups (not incremental) take less than 5 min.
I need the speed, so my options are limited. Maybe consider RAID 0 if backing up / imaging the drive data is easy for you.
 

Looks like I was wrong. RaidXpert2 (at least on my board) supports RAID0, RAID1 and RAID10 only. No RAID5 or RAID6.

Redundancy would be nice, but honestly I don't use it now on any of my clients, only on the server.

Already as it is, I save all of my working files to shared drives on the server (direct linked with 10gig fiber), so the only thing that would be lost if I lost a drive would be my installed programs and configurations. A pain to recreate, but not the end of the world.

I could set up some sort of script to dump things to the server overnight, but I've never bothered. Usually this winds up being a pain because I shut down the system when not in use. I could have it as part of shutdown scripts, but that starts getting complicated, especially due to the dual boot setup, so I have both Windows and Linux partitions to worry about...

This would be a whole lot easier if I could use ZFS send/recv; sending incremental block-based snapshots would be awesome and make things very convenient, but, well, I'm not holding out hope for good Windows support ever arriving.
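
For reference, the kind of incremental send/recv I mean looks roughly like this on the Linux/FreeBSD side (dataset, snapshot, and host names are made up for the example, and it assumes the earlier snapshot already exists on the receiving box):

Code:
# Take a new snapshot, then ship only the delta since the previous
# snapshot to the backup machine over ssh.
zfs snapshot workpool/projects@2022-02-01
zfs send -i workpool/projects@2022-01-01 workpool/projects@2022-02-01 \
    | ssh backupserver zfs recv -u backuppool/projects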

Thanks for the win-raid forum reference. I will have to check that place out.
 
My ""new"" CPU's finally showed up! a set of Xeon E5 2697 v2's - 12c/24t @ 2.7~3.5GHz
Benchmarks against the old CPUs

Out with the old
IMG_0519.jpg


and In with the (not really) new
IMG_0518.jpg


The heatsinks left interesting stripe marks in the (>5yr old) thermal paste when I went to swap them!
IMG_0517.jpg
IMG_0520.jpg

Excuse the cat hair; anything put down on the floor for more than a day and that's what happens! It got cleaned out before the new CPUs went in. I tried some of the Cooler Master MasterGel Maker - the big claim to fame is that it's easier to use... easier to waste, more like.

Needs a bit of tidying but I'm just playing around at the moment.
IMG_0522.jpg

IMG_0521.jpg



I'm curious where all those 4x SATA cables go. :eek: Looks like you've got a 24x drive setup. :)
They were just off to the side before; they go to a wonderfully unreliable Areca ARC-1680 16-port SAS/SATA controller card.

Loving all the updates on this thread!🤘
 
The wife's PC is also similar in design as my dual CPU, but hers is a single Xeon, 256GB Ram, and dual 3090 RTX FE nvLink, and a slightly different raid array using 4x 860 pro (16TB array). ---> also runs silent and cool (body temp under load)

..yes, before you ask, she's a Deadpool fan.


View attachment 438182
View attachment 438183

Love the clean look of the rig to go with the clean look of the room!
 
Since somehow no one was interested in the board I got from another forum a few months ago, and it has only been lying around since the construction of my dual Xeon E5-2680 v4 workstation, I thought I would build a sleeper. Hyper-V and stuff like that can run on it, at least temporarily.

A case from around 2005, an 850W power supply, and off you go. Very tight, but the temperatures are still within limits. In the front, a PWM 120mm fan and a YS-Tech 2.8W fan provide fresh air.

It's close, yes, but it fits... :) The little southbridge fan is really annoying... but at least that way you know the server is on. Power consumption is around 160W idle (Windows Server 2022), >400W under load.

a1knm.jpg


A few curses later :) I think in 2005 it would have taken a whole rack for that performance...

kgjtx.jpg


sxk1f.jpg


Plenty of power... At least relative to the volume of the case... :)

a8j7f.jpg


wzj0g.jpg
 
You know, there are silent versions of southbridge fans/heatsinks ;) I had the same issue on my prior Supermicro board: relatively silent except for the damn chipset fan.

That little fan sounding like a hair dryer would bother me as well. Enzotech has some really easy replacements.

http://www.enzotech.com



j
 
tbird_ YO Dynatron heatsinks, dang, haven't seen those since my Socket 940 (?) Opterons - they got beefier over time!

imga0275.jpg



Bonus pic of the venomous redback spider that lived in the rack the above server was in (if I remember correctly):

dscf1106.jpg
 

Hm... Seems that those coolers are not easily available here in Germany 🙄😔

However, I've found an adequate replacement - from an old GeForce MX card.

Works like a charm and it's not noisy anymore (at least not that fan 😂)

IMG_20220204_082407.jpg
 
I gave away my old Dell R610 to some college students about 6 months ago so they could get into IT with something to play with.

I figured the itch was gone. But I picked up a Dell R730 w/ 128GB and 2x E5-2680 v4 (14 cores/28 threads each); it has 4x 960GB SSDs in it.
$1400 or so, which seemed to be fairly market value.

I'll be running Win10 / Server 2019 and 2022 VMs. Considering it my AD --> Azure On-Prem connector and build platform for testing.
I figure 8GB each for Win10/2019 x 16 and 2022 x 16, and leave the rest for application servers (services).
I've been running VMware Workstation on my main rig to get me by, but 32GB is just not enough for what I wanna do, so I figured I'd just go all in again.
 
Server 2022 is not too bad for that. I'm using 2022 Datacenter as a workstation and have been able to get the OS to be fairly responsive and not communicative with the mothership.

The largest issue I had I was able to solve, and it may help you if you are interested. VM and drive performance seemed a little lethargic for what I would expect. This was solved by setting the mitigation overrides in the registry: FeatureSettingsOverride (3) and FeatureSettingsOverrideMask (3). On reboot it felt like the trailer hitch was taken off the Ferrari. The speculative execution mitigations kill look-ahead performance and caching, and in my case had an impact on my array and VMs; on Skylake it's particularly brutal. Anyhow, here are the keys; they helped me in VM and array performance (and helped in general). I also use a VROC array with a Premium key, and that is affected by the mitigations directly.



reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 3 /f

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f


Jay
 
Two HP z820s. Dual duallys! :)


The first one has two E5-2696 v2 processors and 8-channel 1866MHz memory.
The second one has two E5-2673 v2 processors and 8-channel 1600MHz memory.

P-20200318-063024.jpg

P-20200318-063413.jpg

P-20200318-064250.jpg

P-20200318-064515.jpg

P-20200318-064804.jpg

P-20200318-065450.jpg

P-20200318-094853.jpg

P-20200319-140632.jpg

P-20200319-140941.jpg

P-20200319-140951.jpg

P-20200319-141227.jpg

P-20200322-172021.jpg

P-20200402-175533.jpg

P-20200402-175555.jpg
 



I had no idea there were heatsinks like that, pretty cool. Now I'm going to have to do a little personal research...

Looks great!
 
Those are factory little AIO water cooled heatsinks. Dell has them in some of the Precision workstations too. It is an optional item on the Dells and are a LOT quieter than the basic blowers. Asetek is the OEM for the AIO WC ones from Dell. I'd be willing to bet they make the ones for the HP's as well. The ones in my T7910 (dual E5 2667 v3 with 128gb 2133 and a quad NVMe card) sit flat instead of at an angle. :)

IMG_4035.jpg
 
I had no idea there were heatsinks like that, pretty cool. Now I'm going to have to do a little personal research...

Looks great!
Yeah, it's a pretty wild concept. But despite the small size they are very efficient, and the pump is built right into the base of the radiator, so it's a very compact design as well. And they are actually mandatory for the 150-watt 2687W v2, Intel's highest-TDP processor from the entire 2600 v2 family.

They are socket LGA 2011 and use a PWM controller, so they would be adaptable to other rigs with a little bit of modification. You can also remove the LGA 2011 collar from the heatsink and probably swap it out for some other bracket to fit another socket, although I can't confirm that they even make other collars for them.

Those are factory little AIO water cooled heatsinks. Dell has them in some of the Precision workstations too. It is an optional item on the Dells and are a LOT quieter than the basic blowers. Asetek is the OEM for the AIO WC ones from Dell. I'd be willing to bet they make the ones for the HP's as well. The ones in my T7910 (dual E5 2667 v3 with 128gb 2133 and a quad NVMe card) sit flat instead of at an angle. :)

Yup, exactly correct. Asetek is the manufacturer. I modify all my z820s to liquid cooling because it's quiet, the CPUs turbo better, and temps are lower overall. You can usually pick up these coolers for about $89 on eBay if you want to experiment with them.

I already picked up a spare that I'm going to convert to a liquid metal cooling loop.
 
Those heatsinks are pretty wild, are they stock or something you added? I'd love more info about it, they almost look like tiny AIOs!
Yes, they are essentially tiny AIOs. Everything is built as compact as possible, making them very efficient. They (individually) perform on par with a decent 240mm AIO. I've highlighted the collar here with arrows so you can see how it is constructed.

1644844045361.png


They are both loaded with memory (the z820s). This is before and after the memory upgrade on one of the machines. AIDA64 falsely reports the CPU as its retail sibling, but it's actually running two OEM 2673 v2 chips, which still have the highly coveted 4.0GHz turbo and 8 cores, along with the 2667 v2 and the 2687W v2. The 2673 v2 is probably the rarest chip in the entire 2600 v2 family... and most times they cannot be found in the States at all. Sick motherboard is all I can say:

1644845318553.png


1644842444871.png


1644842472457.png



1644842530654.png


This never gets old! lol I'm future-proofed on a machine that's nearly a decade old!
This processor actually outperforms the flagship 2697 v2 in multi-core performance. The 2697 v2 has an all-core turbo of 3.0GHz vs the 2696 v2 at 3.1GHz, and the 2696 v2 does this with a lower TDP as well. It's an OEM processor.

1644844325560.png


1644844215278.png


Dell Dually
24GB? Triple channel
Dual x5650s

stock-cpu-z-1.png

stock-cpuz.png

image.thumb.png.5888bee89baeed0373a748589ff1f615.png

1644843625028.png


4 SSD raid zero just for kicks
1644843667364.png


1644843777727.png
 

Oh man, I love those 'look how big my dick is' things. :D

But only 64 gigs of Ram? You could update that! :)

I also love my dually Workstation. Next update will be dual Xeon Platinum 8280 as soon as they are affordable (so in 2 years or so...)

(Bench was taken with 4 VMs running, so it may not reflect real performance)
1644855493651.png


1644855560733.png


1644855506537.png
 
Yes, that must be it! I think I'm really going to blow everyone else away with decades-old systems that are still running DDR3.

It's a thread about dually rigs; I'm merely posting details of my build that some people might find interesting, without getting into the "mine is better than yours" mentality.

TBH I'm more interested in per-core performance than thread count anyway. I'm just popping the hood so ppl can see the engine.
 