unlimited budget server/workstation, EPYC 9174F?

Have you decided on a case yet? There are some cases out there in 4U footprints that can also be rack-mounted if you decide to make a server out of it later.

The new Lian Li V3000 Plus is finally getting released.

 
Bah, bring on the encouragement. There are tons of threads about bang for the buck -- it gets boring. You won the lotto, what do you build for a workstation/server hybrid monstrosity? No comments on the Graid SupremeRAID SR-1010?

Yep, case is Meshify 2, though everything is still laid out on a table. I'll do another build in a 4U case next year that is purely a server, posted about it above. Maybe Epyc for that one!

The Lian Li looks pretty crazy. The modes are pretty clever, though they don't help unless you actually need/want an unusual setup. I just wanted an unassuming case with great airflow and without a glass side or RGB. The V3000 front panel light does look pretty cool though.

So I thought I sent myself the RAID0 benchmark data, but I didn't. :( I won't be able to post that until Tuesday or Wednesday. Sorry, I know that's the most interesting part. I do have the other data though, so here's some charts of that for now:

(there are 8 charts; don't miss the next button, or just see them on imgur)

IIRC the 2x Optane 800GB in RAID0 (via RaidXpert2) is roughly the same as "Optane 800GB, dual U.2 card (1)" except ~14,000 MB/s read, ~7,500 MB/s write, and a little lower RND4K than non-RAID, but still a lot higher than the FireCuda. That's pretty much as expected: all the benefits of RAID 0 without downsides, comparable to what PCIe 5.0 SSDs are expected to be but with Optane benefits.
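A quick sanity check on those numbers (my own back-of-the-envelope math, ignoring protocol overhead): each drive is PCIe 4.0 x4, so two striped drives top out around the PCIe 4.0 x8 line rate, which is the same raw bandwidth as the PCIe 5.0 x4 link the upcoming SSDs will use.
Code:
# Rough link-ceiling math for two PCIe 4.0 x4 drives in RAID 0 (assumes
# 16 GT/s per lane and 128b/130b encoding; protocol overhead ignored).
gt_per_s = 16.0          # PCIe 4.0 signalling rate per lane
encoding = 128 / 130     # 128b/130b line code
lanes = 2 * 4            # two x4 drives striped
ceiling_gb_s = gt_per_s / 8 * encoding * lanes
print(f"link ceiling ~{ceiling_gb_s:.1f} GB/s")  # ~15.8 GB/s vs ~14 GB/s measured reads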

I put the 4090 in PCIe slot 2, since in slot 1 it would make slot 2 unusable. Slot 1 got the U.2 adapter card. Interestingly, the Ableconn card was a bit worse in RND4K Q1T1; otherwise both cards worked fine. There were some quirks with setting the slots to auto, 8X, or Asus's "RAID mode": some choices failed to boot and I had to clear CMOS to recover. I don't remember the exact details, but with enough fiddling it eventually worked like I wanted.

I ran the OS and apps off the FireCuda briefly and it felt quite fast. I didn't spend a lot of time on that and the drive was mostly empty. I haven't had much time with the Optane RAID0 either, but it also feels quite fast. I'm not sure I could pick which one is which in a blind A/B test if I'm just booting and opening apps. There are still some brief wait times with both, eg opening a heavy IDE that usually takes many seconds still takes maybe 1s -- not instant, but it's fast. I was surprised the un-7zipping was fastest with the FireCuda; I don't know what's going on there.

Once I live with it a while I'll be able to compare my common actions with the speed of my old systems, but I doubt I'll set up the FireCuda again, so I won't know for sure if that would be just as good for my particular usage. I can at least rest easy knowing the FireCuda wouldn't have been faster! Except for un-7zipping I guess!
 
Have you decided on a case yet? There are some cases out there in 4U footprints that can also be rack-mounted if you decide to make a server out of it later.

sorry for encouraging him, but I miss fun builds like this.
I blocked him, you guys have fun.
 
What a loss. Enjoy your MacBook.

Seems the Graid card is hardware that assists software RAID. The drives don't connect to it, and it seems the data doesn't even go through it (which would otherwise limit bandwidth to the card's PCIe 3.0 link). As I understand it, it only handles the parity work, the idea being to save CPU effort, and it's mostly useful with RAID levels above 1 and a large number of drives. Even then it's questionable, since it doesn't have the error handling that ZFS does.
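If it helps to picture what "only the parity work" means: for RAID 5 the parity block is just the XOR of the data blocks, so the card's job amounts to crunching something like this toy sketch (illustrative only, not Graid's actual implementation):
Code:
# Minimal RAID 5-style XOR parity -- the kind of computation a card like this
# is meant to take off the CPU. Toy example, not Graid's software.
def xor_parity(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

data = [b"\x01\x02", b"\x10\x20", b"\xaa\xbb"]  # toy data blocks
p = xor_parity(data)
# Any single lost block can be rebuilt from the others plus parity:
assert xor_parity([data[0], data[2], p]) == data[1]

With only two drives in RAID 0 there's no parity at all, which is part of why a card like this doesn't buy much here.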
 
The new Lian Li V3000 Plus is finally getting released.
That looks slick, but I was thinking more of something that can slide onto rails if you decide to move it into a rack later. Things like these:

Silverstone RM42-502

[photo: Silverstone RM42-502]


Supermicro CSE-745 (fugly, just an example)

[photo: Supermicro CSE-745]
 
Here are the previous charts with RAID 0 added:

[chart: all drives, reads (with RAID 0 added)]

[chart: all drives, writes (with RAID 0 added)]


Most interesting, I think, is the FireCuda vs the Optane RAID 0, so here are the same charts with just those two:

[chart: RAID 0 vs FireCuda, reads]

[chart: RAID 0 vs FireCuda, writes]

[chart: RAID 0 vs FireCuda, writes 2]

I think the writes are basically just measuring the FireCuda's DRAM buffer. That's fair though, since if you don't normally exceed the buffer then you never experience any downside.
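If I wanted to see past the buffer, one rough way would be to keep streaming writes and watch the per-GiB speed once the cache runs out. Just a sketch (not a substitute for a proper tool like fio or CrystalDiskMark; the path and sizes are made up):
Code:
# Stream 64 GiB to the drive under test in 1 GiB steps and print the speed of
# each step. If a DRAM/SLC cache is doing the work, throughput should drop
# once it's exhausted. Path and sizes are placeholders.
import os, time

PATH = r"D:\cache_test.bin"          # hypothetical file on the drive under test
CHUNK = os.urandom(64 * 1024**2)     # 64 MiB of incompressible data
STEPS = 64                           # 64 GiB total

with open(PATH, "wb", buffering=0) as f:
    for step in range(STEPS):
        start = time.perf_counter()
        for _ in range(16):          # 16 x 64 MiB = 1 GiB per step
            f.write(CHUNK)
        os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
        print(f"GiB {step + 1}: {1024 / elapsed:.0f} MiB/s")

os.remove(PATH)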

Without and with RAID caching:
[screenshot: CrystalDiskMark, RAID caching off vs on]


Interestingly, the write RND4K Q1T1 is the only number affected. I turned off caching and ran again; it dropped to 270.

Here's something confusing: running the FireCuda again, I get much lower numbers! Good thing I have the old screenshot or I'd wonder if it was a transcription error. Makes me wonder if CrystalDiskMark is a good test!
[screenshot: CrystalDiskMark, FireCuda re-run]


The older (left) run does seem strangely high for the 3rd row -- that shouldn't be possible. Here are charts with the run on the right, plus numbers from a new run of the RAID test, to be sure all the data is right:

[chart: updated RAID 0 vs FireCuda comparison]

[chart: RAID 0 vs FireCuda, writes (updated)]


Writes are similar and the Optanes have a hefty edge for reads (at least in CrystalDiskMark).

Anyway, it's cool to see charts, especially since there's almost no one out there with RAID 0 Optane, but I never intended to do a full-blown scientific review. I'll still probably dork around with some other storage benchmark software briefly before calling it a day. Tests that trace real-world app usage seem most interesting. I've been living with it for a few days now and it's certainly nice to have everything be fast.
 
That's an awesome build, thanks for pointing it out! It is super cool.

I'm a sucker for comparing everything to the latest tech, then it feels like I'm missing out if I use older tech. I think the timing will align better on my next build in mid-2023, when Epyc is available. I just need to avoid comparing that with the next new tech!



Here's ATTO disk benchmark for 2x Optane RAID 0:

[chart: ATTO, Optane RAID 0, throughput]

[chart: ATTO, Optane RAID 0, IOPS]


And for FireCuda 530:
[chart: ATTO, FireCuda 530, throughput]

[chart: ATTO, FireCuda 530, IOPS]


The same but queue depth of 1, Optane RAID:

[chart: ATTO at QD1, Optane RAID 0]


FireCuda has a little harder time:

[chart: ATTO at QD1, FireCuda 530]
 
I'm going to build a workstation, mostly for programming, that also does some server tasks (NAS, software builds). The budget is effectively unlimited, though I don't want to spend needlessly.

Looking around, I see AMD Genoa is out soon. What do you think of using the EPYC 9174F for a workstation/HEDT? Using ECC RAM sounds nice, as more stability than I usually get from my builds would be appreciated.
You can write code on a Raspberry Pi lol.... you literally just want to spend money. Haha, nothing wrong with that. Just get an EPYC 64-core, dual-socket machine. Prices start around 100k for the servers I'd recommend. All flash-based storage, 100Gbit stuff standard.
 
Why would somebody block this dude for blowing shit tons of money on something he thinks is cool?!?
2 points:
This is [H]ard|Forum.
I like to live vicariously.
 
Why would somebody block this dude for blowing shit tons of money on something he thinks is cool?!?
2 points:
This is [H]ard|Forum.
I like to live vicariously.
Yeah, I love the jumping head first into an Optane RAID array and then following up with benchmarks. It's a cool thread.
 
Sure, if it's not too much trouble for me to get it done. PC Mark is supposed to run productivity apps and trace how long things take. I tried it but it's junk -- after it checks system info, it stops and says "no result was provided". Maybe there are better real world benchmarks?

Here's Real Bench, Optane RAID 0:
Code:
Image Editing 241,891, Time:22.0264
Encoding 556,253, Time:9.57837
OpenCL 797,368, KSamples/sec: 146690
Heavy Multitasking 407,546, Time:18.7267
System Score 500,764

The OS is running on the RAID and it's too much work to move it to the FireCuda, but I copied the Real Bench software to the FireCuda and ran it from there:
Code:
Image Editing 245,894, Time:21.6679
Encoding 552,099, Time:9.65044
OpenCL 795,236, KSamples/sec: 146261
Heavy Multitasking 407,307, Time:18.7377
System Score 500,134

So, basically the same. The work probably needs to be heavier on storage to see a difference, if we are going to see one.
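One more "real world" test I may still run is timing the same 7-Zip extraction on each drive, since un-7zipping is the one place the FireCuda won earlier. Rough sketch (the archive, 7-Zip, and drive paths are placeholders):
Code:
# Time the same extraction on each drive. Paths are hypothetical.
import subprocess, time

SEVEN_ZIP = r"C:\Program Files\7-Zip\7z.exe"
ARCHIVE = r"C:\bench\big_project.7z"
TARGETS = {"Optane RAID 0": r"C:\extract_test",
           "FireCuda 530":  r"D:\extract_test"}

for name, dest in TARGETS.items():
    start = time.perf_counter()
    subprocess.run([SEVEN_ZIP, "x", ARCHIVE, f"-o{dest}", "-y"],
                   check=True, stdout=subprocess.DEVNULL)
    print(f"{name}: {time.perf_counter() - start:.1f} s")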

Finally got around to setting up in the case:

[photos: the build in the case]


Still waiting on a CableMod GPU power cable. That GPU is crazy large, but it has been working fine even against the bottom of the case like that. The bottom is a grille, but there's not much space between the grille and the PSU on the other side. The 3rd GPU fan is past the PSU, and there's a bottom intake fan putting fresh air into that 3rd GPU fan, which blows through the card and out the top.

All those Noctua fans run at 45% most of the time, making it silent. The GPU and PSU fans stop completely when cooling isn't needed. I had some extra absorptive acoustic panels (Rhino on Amazon) so I put them on the inside of both side panels and next to the mobo. Maybe it helps with some coil whine or other noises. I haven't been bothered by that, thankfully.

I ended up doing the delid. I was too curious to see the difference. It was super easy; no der8auer tools are needed. First, cut the glue with steel wire, like a #9 guitar string. There's zero chance of damage doing that. Next, grab the IHS with pliers and pull gently while heating up the IHS. You want the IHS to get hot rapidly, so the heat has less time to soak into the CPU die and PCB. I used a hot air tool on max, but a simple butane torch would work. I used a hot plate, but only to hold it down; it wasn't on, because you don't want the PCB hot. The IHS comes off as soon as the indium reaches 160C. There is also zero chance of damage doing this, unlike der8auer's tool, which uses shear force and puts mechanical stress on the CPU die and PCB.

[photo: the delidded CPU]


After that is the harder part: getting the damn indium off. It's commonly said that liquid metal melts it, but all you will do is make a mess if you put LM on a thick blob of indium. The indium is thick but soft and needs to be scraped off. That's not super hard, but it's stressful to use a razor so close to the CPU. Once 99% of it is off, then LM helps get the last bits.

The delidded CPU sits lower than the retention bracket, so that has to go. I kept the stock backplate though. All I did was reduce the height of the NH-D15 cooler standoffs by 3.8mm, which is the height removed along with the IHS. Doing that lets me still screw my cooler down all the way and not have to wonder if I'm putting too much pressure on the CPU. The NH-D15 only has 2 screws, which isn't great, but it works fine.

Temps are only a few C less at idle and still hit the 95C limit at max. I assume this is because air cooling can't remove the heat fast enough. Oh well, it was fun to dork around with. I don't plan to do water cooling, I'd rather keep it simple. It doesn't hit 95C as fast as before the delid. With Cinebench it hits 90C fast, 93C soon after, then takes a little while to creep to 95C. Previously it would jump to 95C. It never hits 95C in normal usage. Playing games it hits ~54C, which is roughly the same as before the delid.

I tried overclocking, but the 7950X just doesn't have much headroom, especially air cooled. It's stable with a few more aggressive settings than stock, but PBO2 and some other settings don't help -- clock speeds are either the same or worse, or they are faster but unstable. CoreCycler with P95 set to AVX2 is the fastest way to see instability. It's 100% stable with the settings I ended up with, where clock speeds are ~5200MHz all core and ~5550MHz single core. I'm OK with that and stability is important, even though I was hoping for more. Ultimately more aggressive settings give higher clock speeds but don't pass AVX2 stress tests and sometimes crash under light loads (eg YouTube), even though they appear stable when playing games and even when doing other stress tests.
 
After that is the harder part: getting the damn indium off. It's commonly said that liquid metal melts it, but all you will do is make a mess if you put LM on a thick blob of indium. The indium is thick but soft and needs to be scraped off. That's not super hard, but it's stressful to use a razor so close to the CPU. Once 99% of it is off, then LM helps get the last bits.
A razor is the way to go to get the indium off. You can feel the silicon, so it's not too hard to just get the metal off.

Now that it's delidded, watercool it?
 
I ended up doing the delid. I was too curious to see the difference. It was super easy; no der8auer tools are needed. First, cut the glue with steel wire, like a #9 guitar string. There's zero chance of damage doing that. Next, grab the IHS with pliers and pull gently while heating up the IHS. You want the IHS to get hot rapidly, so the heat has less time to soak into the CPU die and PCB. I used a hot air tool on max, but a simple butane torch would work. I used a hot plate, but only to hold it down; it wasn't on, because you don't want the PCB hot. The IHS comes off as soon as the indium reaches 160C. There is also zero chance of damage doing this, unlike der8auer's tool, which uses shear force and puts mechanical stress on the CPU die and PCB.
Now we’re talking! Love this…
 
Sure, if it's not too much trouble for me to get it done
You could just note the time it takes the next time you have to build something and see if there's a difference on the RAID; back in the day with regular CPUs there wasn't much difference, if any (even on a RAM drive).

If you use vcpkg or another C++ package manager, building something that has a lot of dependencies (say osg[collada]) could stretch the legs of a machine really well (both RAM and cores) -- the type of work that can take 45 minutes on a new laptop and could take less than 3 on this.
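Something like this could time it (quick sketch; the vcpkg path is a placeholder):
Code:
# Time a dependency-heavy vcpkg build as a real-world benchmark.
# osg[collada] is the example above; the vcpkg path is hypothetical.
import subprocess, time

start = time.perf_counter()
subprocess.run([r"C:\vcpkg\vcpkg.exe", "install", "osg[collada]"], check=True)
minutes = (time.perf_counter() - start) / 60
print(f"vcpkg install osg[collada] took {minutes:.1f} minutes")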
 
A razor is the way to go to get the indium off. You can feel the silicon, so it's not too hard to just get the metal off.

Now that it's delidded, watercool it?
Yep, the razor (scalpel) worked fine, but it's a dangerous step since one slip can mean buying a new CPU. All the other steps are 100% safe.

I'm tempted, just to see how much better delidded water cooling is, but nooo, no water cooling on this one. Keeping it simple! I'm away from it ~6 months/year and don't want to think about loop maintenance.

Re: build times, true. I've been lazy and haven't set up the toolchains needed to build my largest product, as it's been easier (if inconvenient) to keep building on a remote machine. Eventually I'll set it up and can give numbers.

I've had a few (maybe 3) graphics driver crashes, or that's what they seemed to be: the screen goes black, then comes back after a couple of seconds. I've also had two hard crashes: everything freezes, the mouse stops moving, and it needs a hard reset. Nothing in the logs indicates what's going on. Both happened at low loads. It's not temps; hopefully it's new-platform woes that get ironed out. I went back to stock (no OC) and updated the BIOS, GPU, and chipset drivers.

CableMod GPU cables came -- the final piece, so now she's complete!
 