New Server build

Harvestor

My current file server has been rock solid for the last 10 years, but it's really showing its age when trying to stream 4K rips, and most of the drives are over 7 years old, so I'm feeling the pressing need to build now and migrate while everything is still functional.

The current server is 6x 6TB WD Blacks in RAID. I'd like to at least double the capacity; looking for the best bang for my buck, but with pretty fast storage.

The main use is going to be a Plex server, but I'd like to try new things, like maybe local cloud storage for the family. I will also be storing my Steam library for the first time, since next year we are building rural with Starlink internet, so I won't be able to just download a game in 15 minutes like I do now.

Wishlist is something that can saturate my 2.5GbE switch, with future plans for 10GbE when the new house is built and wired.

Thinking about repurposing my current CPU/motherboard, but I am open to all suggestions.
 
At 10 years on your current server, it's definitely the right time to replace the CPU/motherboard. What are you running now? If you're interested in the big stuff, Skylake-SP Xeons and Zen 2 EPYCs are going pretty dang cheap.

For HDDs, I'd suggest looking at the big WD (Red Plus/Pro/Gold) or Seagate (Exos) drives depending on your budget and the deals being offered. Traditional RAID is dead, but Unraid and ZFS/RAIDZ(2) are a thing if you're interested in putting in the work for them (they have dedicated threads on this board, and they also benefit from lots of RAM and caching SSDs). Depending on the age of your RAID card (if you have one), you might want to snag a modern-ish PCIe 3.0 LSI/Avago/Broadcom HBA off of fleabay for cheap.

A big standalone SATA or NVMe SSD would work great for a Steam cache.
 
I'm in the process of upgrading my home server as well. In my old server I used mirrored disks (ZFS) for redundancy. In my new server I want, at least for some data, the speed of SSDs. But I sure don't want to pay for mirrored SSD storage! Realising that I want data redundancy but not necessarily high availability, I'm going to set up single-disk, non-redundant ZFS pools for the SSD storage, and set up automatic migration of the data (ZFS snapshots + send/recv) to HDDs (maybe once a day for personal storage and once a week for media libraries?). So a kind of automated, online/first-line backup rather than seamless failover.

If an SSD fails I can then switch over to the migrated file system(s) on the corresponding HDD while waiting for the replacement SSD, with some but not a lot of downtime, and at most 24 hours (or whatever) of data loss. I gain SSD speed plus cost savings. I will also reuse my existing HDDs for the replication, since speed isn't important in that role, and they can be easily replaced when they fail. (Obviously there will also be offline backups.)
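
For anyone curious what that looks like in practice, here is a minimal sketch of the snapshot + send/recv idea; the pool/dataset names (fastpool/personal, slowpool/personal) are made up, and it assumes an initial full send already seeded the HDD copy so the incremental below has a common base:

```
#!/bin/sh
# Daily replication of an SSD dataset to an HDD pool (run from cron).
SRC=fastpool/personal   # fast SSD dataset (hypothetical name)
DST=slowpool/personal   # HDD copy (hypothetical name)
TODAY=$(date +%Y-%m-%d)

# Snapshot the SSD dataset.
zfs snapshot "${SRC}@${TODAY}"

# Newest snapshot already present on the HDD copy, used as the incremental base.
LAST=$(zfs list -H -t snapshot -o name -s creation -d 1 "${DST}" | tail -1 | cut -d@ -f2)

# Send only the changes since that snapshot and apply them to the HDD dataset.
zfs send -i "@${LAST}" "${SRC}@${TODAY}" | zfs receive -F "${DST}"
```

Failover is then just pointing the share at the HDD dataset until the replacement SSD arrives.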

This solution might not be right for everyone, but it's an idea to mull over at least. :)
 
I would not continue to use a 10-year-old platform, for 1) reliability concerns and 2) performance. I'm a big fan of socket AM4 for stuff like this. All the CPUs that aren't APUs support ECC (and the Pro APUs, like the Pro 5650GE, support it as well). Most motherboards support unbuffered DDR4 ECC, except for MSI's. ASRock even has proper server boards with IPMI and video output without needing an iGPU, via the ASPEED AST2500, such as the X470D4U.

I've had two setups running TrueNAS Core (I think I started when it was still called FreeNAS...): an X370 Taichi / 2700X and now an X570S Aero G / Pro 5650GE (both with 128GB of Nemix unbuffered ECC), and they've been rock solid. In the X370 Taichi system I had some hiccups that were my own fault when I initially did the build (my launch Ryzen 1700 didn't like FreeBSD, and my 10 onboard SATA ports, some from a third-party ASMedia controller, didn't like all being populated), but moving to a 2700X and an LSI HBA totally solved everything.

For storage I went with the Seagate 20TB Exos models that have been on sale for a long while at $279.99. 5-year warranty, enterprise quality; so far so good, but I have plenty of redundancy and backups if the array were to fail for whatever reason. I know many people don't trust Seagate, but it's hard to beat that deal in terms of price/capacity on new drives. Not sure what your budget is; obviously you can buy whatever drives you feel comfortable with. ZFS planning is "fun": you have to consider whether you want mirrored vdevs and/or RAIDZ2/RAIDZ3, etc. You don't have to use TrueNAS Core or anything FreeBSD-based; Linux-based OSes have pretty good OpenZFS support nowadays, so choose whatever OS fits your use case best. Optionally get a GPU if you're doing something like Plex and need extra transcoding horsepower the CPU can't provide. I would at least encourage ECC memory, but it's ultimately your choice.

To saturate 2.5GbE is pretty easy: the read speed on your array only needs to be around 312 MB/s. To saturate 10GbE you're looking at around 1250 MB/s. I'm bottlenecked on network transfers pretty hard right now, but I've been too lazy to swap in a 10GbE NIC. I started with the onboard 2.5GbE just to get it up and running and said screw it. It works fine for what I'm doing.
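
If you want to sanity-check those numbers, it's just the line rate divided by 8, ignoring protocol overhead:

```
#!/bin/sh
# Rough MB/s a pool must sustain to saturate a given link:
# line rate in Mbit/s divided by 8 (ignores protocol/framing overhead).
for mbit in 1000 2500 10000; do
    echo "${mbit} Mbit/s link -> ~$(echo "scale=1; ${mbit} / 8" | bc) MB/s sustained reads"
done
```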

Last thing I'll mention: hosting a Steam library is fine. Just keep in mind you probably can't do it as an SMB share; you'll likely have to create a zvol and set it up as an iSCSI target for your gaming machine so it shows up as a local disk.
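
Roughly what that looks like on the ZFS side (the pool/zvol names and size are placeholders, and the iSCSI target itself is configured in the TrueNAS UI or your OS's target tooling rather than by these commands):

```
# Hypothetical: a sparse 1 TB zvol for the Steam library on a pool named "tank".
zfs create -s -V 1T -o volblocksize=64K -o compression=lz4 tank/steam

# Export /dev/zvol/tank/steam as an iSCSI target (TrueNAS: Sharing -> Block
# Shares (iSCSI); plain Linux: targetcli; FreeBSD: ctld). The gaming PC then
# connects with its iSCSI initiator, formats the disk as NTFS, and Steam
# treats it like any other local drive.
```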
 
I second some things already said.

Look into ZFS. And if you already have throughput problems, then don't do RAIDZ; do mirrors and stripes only. 4K video doesn't saturate even 1 Gb/sec, though.
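
A rough illustration of what that layout means in zpool terms (device names are placeholders):

```
# Striped mirrors (RAID10-style): four disks become two mirror vdevs striped
# together, so you get the IOPS/throughput of two vdevs and can lose one disk
# per mirror.
zpool create tank \
    mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Capacity and performance grow later by adding more mirror pairs.
zpool add tank mirror /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
```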

AM4 with ECC memory is a very attractive platform for small servers.

I still use Toshiba HDs.
 
4K video doesn't saturate even 1 Gb/sec, though.

Uncompressed 4K video does in fact saturate a 1 Gb/sec link.
The bandwidth a video needs isn't determined by its resolution; it's determined by its bitrate.
 
Uncompressed 4K video does in fact saturate a 1 Gb/sec link.
The bandwidth a video needs isn't determined by its resolution; it's determined by its bitrate.

Right, of course. I was thinking about compressed video at streaming bitrates, or as ripped from 4K Blu-rays.

Uncompressed 4K is probably rarely stored on file servers.
 
Last thing I'll mention: hosting a Steam library is fine. Just keep in mind you probably can't do it as an SMB share; you'll likely have to create a zvol and set it up as an iSCSI target for your gaming machine so it shows up as a local disk.
Good info, I have done zero research into Steam shares yet; I just know I'd like the option to store as much as possible.
Any good articles/guides on using SSDs as a cache? I have zero knowledge there and would love to research it.
 
I second some things already said.

Look into ZFS. And if you already have throughput problems, then don't do RAIDZ; do mirrors and stripes only. 4K video doesn't saturate even 1 Gb/sec, though.

AM4 with ECC memory is a very attractive platform for small servers.

I still use Toshiba HDs.
I will look into ZFS for sure, thank you. Any recommendations for an AM4 motherboard? Something with good ECC support and room to eventually add a 10Gb NIC.
 
Good info, I have done zero research into Steam shares yet; I just know I'd like the option to store as much as possible.
Any good articles/guides on using SSDs as a cache? I have zero knowledge there and would love to research it.
I'd recommend against using any sort of cache SSD. Just get as much RAM as you can and you'll be fine. You can read more about the types of ZFS cache drives here: https://www.45drives.com/community/articles/zfs-caching/
I will look into ZFS for sure, thank you. Any recommendations for an AM4 motherboard? Something with good ECC support and room to eventually add a 10Gb NIC.
And yeah, sorry, I saw bitnick mention it, mixed that up with you, and thought you were already aware of and familiar with ZFS.

Basically any motherboard that isn't from MSI, fits your budget, and has the features you want. ASRock Rack has many boards with IPMI and video output without the need for an APU or separate video card. Here is a Newegg search that has most of them, or look on ASRock Rack's website under server motherboards and filter by CPU socket AMD AM4. These are hands down the best for a server, since with IPMI you can control it remotely, right down to navigating the BIOS, booting ISOs, etc.

If you don't want one of the ASRock Rack boards, pretty much anything from Asus/ASRock/Gigabyte will work with unbuffered ECC. You can check the motherboard support page to verify; maybe some very low-end ones won't work. I have used an ASRock X370 Taichi (repurposed desktop board) and a Gigabyte X570S Aero G (I liked this one for the 2.5GbE, iGPU output capability, 4 NVMe slots and PCIe layout). For a 10GbE NIC later, I'd make sure the board has a free PCIe x4 slot (they're usually physical x16 slots at the bottom of the board).

Another random piece of advice: ideally connect your hard drives to an LSI HBA that's flashed to IT mode. ZFS needs direct access to the drives, so normal RAID controllers won't work, and motherboard SATA ports can be hit or miss (ESPECIALLY if you're using a third-party controller from ASMedia or whatever). Here is an example of what you would want; it can handle 8 SATA/SAS drives and will take one of your PCIe slots.
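
If you're ever unsure what firmware a used card is running, the LSI flash utilities can tell you (which utility depends on the chip generation; this is just a sketch):

```
# SAS2008/2308-era cards (9211/9207 etc.) use sas2flash; SAS3008 (9300) cards
# use sas3flash. "IT" in the firmware product ID means plain HBA mode, which is
# what ZFS wants; "IR" means the RAID firmware is still on the card.
sas2flash -list      # or: sas3flash -list
```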
 
I will look into ZFS for sure, thank you. Any recommendations for an AM4 motherboard? Something with good ECC support and room to eventually add a 10Gb NIC.

I use the Asus Prime x570. And I have a dual 10 Gb/s Ethernet card in there.
 
Researching that one now. Does the video card matter that much for transcoding in Plex? I have my 5600 XT that's going to be replaced, and I was thinking of putting it into the server if it would be a big improvement.

Who has the best drive warranty these days? I have heard nightmares about WD the last few years, but my WD Blacks have been great, and I'm thinking of going with them again unless there's another drive with better performance/dependability.
 
Leaning towards FreeNAS as of right now but still looking at other options. What else should I be looking at besides ZFS and RAIDZ?

Specs so far I'm thinking:

Asus Prime X570
64GB RAM
Ryzen 5 5600 (can pick up locally for $100); if it's not sufficient I will upgrade
8x 12TB Seagate IronWolf
WD Black boot drive
5600 XT (already have)

Looking for a PCIe solution for more SATA ports
 
Leaning towards FreeNAS as of right now but still looking at other options. What else should I be looking at besides ZFS and RAIDZ?
Looking for a PCIe solution for more SATA ports

Hardware seems OK. For more HD ports I would use a 12G / 8-port LSI HBA with a 9300 chipset. You can connect 6G SATA disks or (2x MPIO) 12G SAS disks.

Do you plan to use a bare-metal filer?
An option is an all-in-one config with a virtualizer as the base and all services, including storage, on guest VMs such as BSD, Linux, OSX or Solaris.
I prefer the ultra-minimalistic web-based ESXi as the base; Proxmox is another option.

Regarding the OS, you have the choice of BSD, Linux or Solaris.
Mainstream is Linux. The best ZFS integration and lowest resource needs are Solaris (native ZFS) or a free Solaris fork like OmniOS (OpenZFS).

The main advantage of a Solaris-based filer, besides easy up/downgrades, is the OS/kernel-based SMB server. Unlike Linux or SAMBA-based solutions, it offers full NFSv4 ACL integration into the ZFS filesystem (a superset of Windows NTFS ACLs, POSIX ACLs and classic Unix permissions), with Windows SIDs as the user/owner file reference and local Windows-compatible SMB groups. It is also much easier to configure than SAMBA via smb.conf, and it allows backup/restore/moves of ZFS filesystems with AD permissions intact, without additional UID-to-SID ID mappings.
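
For the curious, a minimal sketch of what that looks like on OmniOS/illumos (the dataset name and user are made up):

```
# The SMB server is part of the OS, so a share is just a dataset property.
zfs create -o casesensitivity=mixed -o nbmand=on tank/media
zfs set sharesmb=name=media tank/media
svcadm enable -r smb/server

# Permissions are NFSv4 ACLs stored in the filesystem itself:
/usr/bin/chmod A+user:alice:full_set:file_inherit/dir_inherit:allow /tank/media
```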

Btw, if you prefer ZFS on Linux, avoid OpenZFS 2.2 until the bug situation becomes clearer:
https://github.com/openzfs/zfs/issues
 
Leaning towards FreeNAS as of right now but still looking at other options. What else should I be looking at besides ZFS and RAIDZ?

Specs so far I'm thinking:

Asus Prime X570
64GB RAM
Ryzen 5 5600 (can pick up locally for $100); if it's not sufficient I will upgrade
8x 12TB Seagate IronWolf
WD Black boot drive
5600 XT (already have)

Looking for a PCIe solution for more SATA ports
An LSI HBA for the 8 drives; I linked one on eBay in a prior post. It NEEDS to be running in IT mode: ZFS needs direct access to the drives, and if the card is in traditional RAID mode you're bound for trouble.

If I was starting fresh I'd probably choose TrueNAS Scale (Debian Linux based) over TrueNAS Core (FreeBSD based). They basically have feature parity, but you can easily spin up Docker containers (soooo many easily deployable things) on Scale vs iocage jails on Core. VMs are better on Scale too. The plugins are poorly maintained on Core so you have to take the time to learn basic FreeBSD operation/management and maintain your own stuff. It's not that bad and there is a good bit of stuff supported but nowhere near the ease and amount of software available via Docker containers.

Scale will also be easier for GPU passthrough to transcode Plex or whatever. You'll probably need a separate GPU (at least initially) for video output, since you're not using an APU with integrated graphics or a motherboard with a dedicated BMC. After you get it set up, I think you can run headless and just use the web interface/SSH for management.
 
Hard to add much to the already great advice. I also have some 10+ year-old servers and have been trying to get the customers to upgrade for a few years now. I too echo: yes, it's time to replace and upgrade the hardware. Also, there is so much better support and monitoring tooling for a new server compared to one that is 10 years old.

Do take the time to consider the cost versus performance factors as well.
 
Picked up a few items in decent Black Friday sales; Canadian prices are in the crapper.

As far as software goes, I am completely in over my head. The current server is running Windows Server with a basic RAID card, and I am one step above a Linux noob, so most of it is foreign to me.

Since Plex is the number 1 use of this server, what's the best resource to research how to set up GPU passthrough?

Will most likely start off using the onboard network port but would like to go up to 10 gig. What is going to be the most user-friendly / easiest to set up?
 
Will most likely start off using the onboard network port but would like to go up to 10 gig. What is going to be the most user-friendly / easiest to set up?
NICs are basically plug and play. Just have the drivers downloaded in advance to save yourself any hassle. That said, if you're going straight 10Gb and not looking at intermediate NBASE-T options, an Intel X520 or X540 is a great option; both are available in RJ45 and SFP+ variants. For NBASE-T it would be either an AQC107 or the more expensive Intel X550 (with caveats).

Edit: Forgot that the X520/X540 are x8 cards for some dumb reason. If that's an issue, go for the AQC107 or X550, which are only x4.

What hardware did you buy?
 
I picked up a Ryzen 5 5600
MSI B550 Tomahawk
2TB WD Blue M.2
Corsair CX650M

Going to do more research on RAM. I have 16TB to get it up and running; no crazy deals on HDDs yet, going to keep looking.
 
Since Plex is the number 1 use of this server, what's the best resource to research how to set up GPU passthrough?
Well, first you should probably decide what OS you're going to use before looking for a guide. But assuming you go with TrueNAS Scale (which is what I would personally wholeheartedly recommend), it should be as simple as installing the Plex plugin and, in the resource reservation settings, making sure you select the 5600 XT.

Then in Plex settings, enable "Use hardware acceleration when available".

I think it should work with the 5600 XT you mentioned you have. Most people are using Nvidia or Intel integrated graphics, but maybe research it a little just to verify it will work. There are lots of good resources on the TrueNAS community forums (stick to the SCALE subforum, as the CORE subforum is FreeBSD-based and configured a lot differently). Another thing: I don't even use hardware transcoding on my Plex, and I'm using a 5650GE, which is slower than a 5600. All my content is 720p/1080p HEVC so most clients direct play, but if a transcode needs to happen the CPU can easily handle 3-4 at the same time with a fair bit of other software running in the background (various things running "natively" in FreeBSD jails and stuff running in Linux virtual machines). If you have 4K content it will be more demanding. You might be surprised; try it without hardware (GPU) transcoding and monitor your CPU usage. Then you can just throw in the cheapest potato GPU just to have video output for when you need to physically access the server.

Get ready to set basic permissions for your datasets. You will need to make sure that you (probably a user with SMB share access), Plex, and any programs that need to access the files (Sonarr, Radarr, whatever) can read/write data to the dataset.
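
One common pattern (just an illustration; TrueNAS normally manages users and ACLs through its UI, and the group/user names here are made up) is a shared group with group read/write on the media dataset:

```
# A shared "media" group that your SMB user and the Plex/*arr service users
# all belong to, with group rwx plus the setgid bit so new files keep the group.
groupadd media
usermod -aG media youruser
usermod -aG media plex
chown -R root:media /mnt/tank/media
chmod -R 2775 /mnt/tank/media
```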

Will most likely start off using the onboard network port but would like to go up to 10 gig. What is going to be the most user-friendly / easiest to set up?
Debian Linux (the underlying OS) will most likely be plug and play with pretty much ANY 10GbE NIC, so just get whatever. I'm a fan of SFP+ NICs; assuming your switch is in close proximity and has SFP+ available, you can just run a DAC cable. SFP+ NICs can usually be had for less money, and the same goes for switches.
 
I have been running TrueNAS/FreeNAS/NAS4Free for over a decade. I've never run ECC memory; Lawrence Systems on YouTube concurs that ECC memory is a "nice to have" in the home but not needed. As long as you run a file scrub job you should be fine. I have MP3s that I downloaded back in the Napster days that I rarely touch on that server, and I haven't lost one yet.

I run Intel "T" series processors, used from [H] or eBay, 35W, perfect. New motherboard, used RAM, new pico PSUs, and new SSDs or HDDs. Never had a problem.

My NAS is also now an NFS server for my Proxmox box to hold the VMs. No issues on a 10Gbps link between them.
 
I've never run ECC memory; Lawrence Systems on YouTube concurs that ECC memory is a "nice to have" in the home but not needed. As long as you run a file scrub job you should be fine. I have MP3s that I downloaded back in the Napster days that I rarely touch on that server, and I haven't lost one yet.

When you buy a car for private use, do you treat a safety feature like an airbag as merely nice to have?
If you want to be as well prepared for an accident as possible, use all the available/affordable safety options.

The risk of memory errors leading to data manipulation or corruption is not very high as a percentage of I/O or RAM size. But it is a statistical figure, which means it scales with time, RAM size/usage, and read/write I/O. That means a certain number of problems per year. If you are lucky it does not affect critical data, but it may result in a -100,000 instead of a +1,000,000, or some other wrong value. If you wait long enough you will have errors for sure.

So what can happen on a RAM error? The best case is a kernel panic (no data corruption in ZFS thanks to copy-on-write; the RAM write cache is lost unless you use sync writes). But a bit flip can also occur while data is being processed, after the read checksum has been verified or before the write checksum is computed. In such a case you have bad data, and even ZFS cannot do anything about it. If it happens prior to the write, you have bad data on the pool with correct checksums, and a scrub cannot detect the problem because the checksums match.

So, if you like good data, use ECC. Missing ECC is the only way to lose data with ZFS, besides bugs, human error, and hardware running amok.
 
The random chance to suffer a flipped bit from cosmic rays is one thing.

DIMMs or DIMM slots going bad or overheating or whatever are quite another. There can be masses of bit errors from that one day to the next. With ECC you will be warned about this condition.

Or in other words: if you don't have ECC you can't tell whether you need ECC.
 
With masses of bit errors you have a very high chance of a kernel panic. The number of detected checksum errors will also climb to a "too many errors" level, which means ZFS will take the disks offline. This is what I have seen more than once with unreliable RAM.

But you are right, without ECC you will never be told when RAM errors have happened,
or as we say in German, "Was ich nicht weiß, macht mich nicht heiß"
(what I do not know will not hurt me).
 
Found an example this morning of my 5650GE server transcoding one 1080P HEVC -> 1080P H264 and one Direct Play. This along with some VMs / a good bit of other software running in jails. CPU usage hovering mid 20s with a few spikes. Unless you're doing multiple 4k transcodes, or more than 4 simultaneous 1080P transcodes, I don't think you need to utilize a GPU. Plus you might be surprised at what clients can direct play different media without the need for transcoding.

I'm so impressed with my 5650GE setup. If I was starting fresh I'd probably take TrueNAS Scale over TrueNAS Core but at this point I have everything automated and working how I need it to with 0 downtime (shoutout APC XS 1500M running on the network.)

I picked up a Ryzen 5 5600
MSI B550 Tomahawk
2TB WD Blue M.2
Corsair CX650M

Going to do more research on RAM. I have 16TB to get it up and running; no crazy deals on HDDs yet, going to keep looking.

You'll not be able to use ECC, since MSI is the only vendor that disallows this as I had previously mentioned. I would at minimum start with 2 x 32GB, that way it leaves you the option later to upgrade to 4 x 32GB. ZFS cache loves to use RAM, and if you end up spinning up a ton of different containers / VMs whatever the extra RAM can come in handy. 3200MHz would be ideal. Here's a PCPartpicker link: https://pcpartpicker.com/products/memory/#b=ddr4&Z=65536002&S=3200&sort=price&page=1 (at the current time for only $10 more you can get C16 instead of C22 so I'd spring for that, but ultimately RAM speed/timings aren't going to give you a huge uplift). 3600MHz gets a little dicey when you're doing 4 dual rank sticks on Zen 2. It usually works but you could lose the silicon lottery on the CPU's IMC and have major instability. 3200MHz is a safer bet.

Don't install your host OS on the 2TB Blue; you only need about 128GB tops for a boot drive, and with TrueNAS you just back up your config file. You can easily restore it to another boot drive if yours ever fails (or you can RAIDZ1 the boot drives). Ideally use an inexpensive but decent-quality NVMe/SATA SSD. You can run it off a motherboard SATA port so it doesn't take up one of the LSI HBA drive spots, and keep all your hard drives in the ZFS zpool on the LSI.
 
You'll not be able to use ECC, since MSI is the only vendor that disallows this as I had previously mentioned.
I went with the MSI because it had really good reviews, onboard 2.5GbE, and was a door-crasher Black Friday sale; with the Canadian exchange rate in the toilet I grabbed it. I honestly forgot about ECC RAM; if it becomes an issue I'll sell it and go with something else.
 
I went with the MSI because it had really good reviews, onboard 2.5GbE, and was a door-crasher Black Friday sale; with the Canadian exchange rate in the toilet I grabbed it. I honestly forgot about ECC RAM; if it becomes an issue I'll sell it and go with something else.
I mean, there was already some ECC discussion in this thread, so I'm not going to expand on it. I would do some further research and ultimately decide whether it's worth it for you or not.

(screenshot: B550 Tomahawk motherboard slot layout)

Looking at the motherboard layout, it's not super ideal for you IMHO (but you can make it work), since you plan on going to 10GbE down the road. Looking at it like this:
The top metal slot will be for your 5600 XT if you end up using it; if not, you can populate it with your LSI HBA or 10GbE NIC.
The second black physical x16 slot (electrically x4) will be ideal for your LSI HBA or 10GbE NIC. You can potentially lose this slot or have it run at PCIe 3.0 x2 depending on how you populate the board (refer to pages 13 and 16 in the manual).
The PCIe 3.0 x1 slots can only move about 985 MB/s, or 8 Gbps, best case, and realistically a bit less. That will most likely limit your zpool read speeds if the LSI HBA lives there, and it won't run a 10GbE NIC at full speed. It would be fine for a potato GPU just for basic video output to get the server going, though.

You can use this RAIDZ calculator to plan out your array, and this one helps explain the read/write advantages. A 2.5GbE NIC will let you access your server at about 300 MB/s, while a 10GbE NIC will get you about 1200 MB/s. The read speed of your HDD array can easily exceed either of those values, and a single NVMe can as well.

You'll have to consider how to design your zpool, aka how many hard drives (multiple vdevs, or just one? RAIDZ1, RAIDZ2, or RAIDZ3?). Then you'll get to learn ZFS datasets, zvols, snapshots, and Unix/Linux permissions, and make sure your applications (and you and any other users) can access the stuff they need. Then you have sharing protocols like SMB/NFS/iSCSI. YouTube and the TrueNAS forums will probably be your biggest helps. Phew, /end rant and good luck in your endeavors.
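
To make the zpool part a bit more concrete, here's an illustrative 8-drive RAIDZ2 layout (device and dataset names are placeholders; on TrueNAS you'd do the equivalent through the web UI):

```
# One 8-wide RAIDZ2 vdev: any two drives can fail without losing the pool.
zpool create tank raidz2 \
    /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
    /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4 \
    /dev/disk/by-id/ata-DRIVE5 /dev/disk/by-id/ata-DRIVE6 \
    /dev/disk/by-id/ata-DRIVE7 /dev/disk/by-id/ata-DRIVE8

# Separate datasets per use, each with its own properties/snapshots/permissions.
zfs create -o compression=lz4 -o recordsize=1M tank/media    # large sequential files
zfs create -o compression=lz4 tank/family                    # documents, photos, etc.
```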
 
Looking at the motherboard layout, it's not super ideal for you IMHO (but you can make it work), since you plan on going to 10GbE down the road.
You'll have to consider how to design your zpool, aka how many hard drives (multiple vdevs, or just one? RAIDZ1, RAIDZ2, or RAIDZ3?).
Looks like I am returning/reselling the MSI.
I have been skimming the TrueNAS forums a bit; so much information to consume. I'm thinking about having two pools, one being RAIDZ3 for my super important data, as another layer of protection.
I am undecided on what to use for my Plex media, since I just need speed and am not too concerned about losing a movie.
 
I am undecided on what to use for my Plex media, since I just need speed and am not too concerned about losing a movie.
Does your Plex really need the ability to read at very fast speeds? What resolution, codec (e.g. AV1/HEVC/H264), and how many simultaneous streams are you potentially looking at?
 
Does your Plex really need the ability to read at very fast speeds? What resolution, codec (e.g. AV1/HEVC/H264), and how many simultaneous streams are you potentially looking at?

Since we have ditched all streaming services, the majority of the time it will be two 4K streams at a time, usually H.265 with full audio, but occasionally there will be a third 1080p stream as well.
I am trying to plan fast storage for the future in case I decide to store my Steam library on it as well.
 
Since we have ditched all streaming services, the majority of the time it will be two 4K streams at a time, usually H.265 with full audio, but occasionally there will be a third 1080p stream as well.
I am trying to plan fast storage for the future in case I decide to store my Steam library on it as well.
Google generative AI coming in clutch


So take that example of 4K @ 60fps HEVC at 54 megabits per second: that's only about 6.75 MB/s per stream. 4K @ 60fps H264 could be around 85 Mbps, aka 10.625 MB/s. Even if you've got weird encodes, I doubt you'll ever need to read each file at more than around 20 MB/s per stream. Any spinning-rust array will have you covered.
 
Google generative AI coming in clutch

So take that example of 4K @ 60fps HEVC at 54 megabits per second: that's only about 6.75 MB/s per stream. 4K @ 60fps H264 could be around 85 Mbps, aka 10.625 MB/s. Even if you've got weird encodes, I doubt you'll ever need to read each file at more than around 20 MB/s per stream. Any spinning-rust array will have you covered.
Fantastic info, saving that for later. Now to find a good deal on some hard drives. Just picked up three 2TB SSDs for $71 CDN each; I'm going to tinker with a RAIDZ1 pool of fast drives once I get up and running. Worst case, they go into the gaming PC.
 
Fantastic info, saving that for later. Now to find a good deal on some hard drives. Just picked up three 2TB SSDs for $71 CDN each; I'm going to tinker with a RAIDZ1 pool of fast drives once I get up and running. Worst case, they go into the gaming PC.
Good luck. I don't track the Canadian market at all so not sure how much I can help. Some people buy reputable used/refurbished enterprise drives (not the white label stuff) and it works out well for them. I prefer new stuff myself but to each their own. Enterprise / 5 year warranty drives > consumer 2/3 year warranty drives in general.

The one thing I'll tell you: make sure you get CMR drives! SMR drives should not be used in vdevs; if you ever have to resilver (aka replace/rebuild) a drive in your vdev, it will take for-ev-er to finish. Also, some people will advise against RAIDZ1 (I'm on the fence about this personally, especially in smaller vdevs) because when a single drive fails and you're doing the stressful resilver onto a replacement, if you get unlucky and lose one more drive, poof goes your zpool.
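
For reference, a resilver is basically one command plus a long wait (pool and device names are placeholders):

```
# After physically swapping the failed disk for the new one:
zpool replace tank ata-FAILED_DISK ata-NEW_DISK

# Watch progress; the pool stays online but has reduced (or, with RAIDZ1,
# no) redundancy until "resilver completed" shows up in the status output.
zpool status -v tank
```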
 
6 drives in RAIDZ1 can pretty much saturate 10GbE, provided you're using newer CMR, 4K-sector, large-capacity drives. I back up my Plex storage to a QNAP QuTS hero setup with a 6-drive array and it'll saturate the line.

I've had good luck buying Seagate and WD/HGST renewed drives from Amazon lately; just test them out for a day or two before throwing them into production.
 
6 drives in RAIDZ1 can pretty much saturate 10GbE, provided you're using newer CMR, 4K-sector, large-capacity drives. I back up my Plex storage to a QNAP QuTS hero setup with a 6-drive array and it'll saturate the line.

I've had good luck buying Seagate and WD/HGST renewed drives from Amazon lately; just test them out for a day or two before throwing them into production.
Been on Amazon looking for deals all week; I'm narrowing it down to CMR drives for sure.
If I decide to do a smaller RAIDZ1 pool just to move files quickly back and forth over the network, is it possible to mirror that onto the larger, more secure RAIDZ2 pool as a little safety net? Or just say screw it and stick with a larger RAIDZ2 for everything?
 
Been on Amazon looking for deals all week; I'm narrowing it down to CMR drives for sure.
If I decide to do a smaller RAIDZ1 pool just to move files quickly back and forth over the network, is it possible to mirror that onto the larger, more secure RAIDZ2 pool as a little safety net? Or just say screw it and stick with a larger RAIDZ2 for everything?

I use a RAID 6 array for my main storage and have the secondary QNAP for backup storage. Ideally I'd have them in different geographic locations, but different floors of the house is the best I can do. The 8-disk RAID 6 array will saturate 10GbE as well; it's what's writing to the RAIDZ1 array. As long as you have modern hardware and enough disks, your network is going to be your limitation when it comes to moving files back and forth.
 
Honestly, don't go Scale unless you like the containers getting borked and forcing a rebuild. It's happened twice to me and I'd had enough. Go Proxmox and virtualize TrueNAS Core with the SAS card passed directly through. For Plex transcoding a cheap Quadro P400 card will work; you can pass that directly to the VM.
 