icarus69

I've read some great threads on here on this topic, but I am struggling to know exactly what to do (and buy) in relation to my own issue.

First, my setup. I haven't built this yet and they're all still individually boxed up, but I have all the following:
  • Motherboard: MSI MPG Z390M GAMING EDGE AC Micro ATX LGA1151
  • CPU: Intel Core i5-9500T 2.2 GHz 6-Core
  • Memory: Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-2666 CL16
  • SSD (3x): TEAMGROUP MP34 4 TB M.2-2280 PCIe 3.0 X4 NVME
    • Before anyone comments on this, I got these for close to nothing (from a friend), so want to make use of what I have rather than going for HDDs right now - low budget
  • Case: Asus Prime AP201 MicroATX Mini Tower
  • PSU: Corsair RM650x
My motherboard only has 2x M-Key M.2 NVMe slots. But as you can see from my parts list, I've got 3 NVMe SSDs I want to install. These will be installed in a RAID configuration.

The motherboard has the following PCIe slots:
  • 2 x PCIe 3.0 x16 slots (support x16/x0 and x8/x8 modes)
  • 2 x PCIe 3.0 x1 slots
This leads me to assume that if I want to connect a third NVMe SSD to the motherboard, I'm going to need to split one of the x16 slots and fit an NVMe adapter card to it, because I don't want the spare lanes in the x16 slot to go to waste. If I'm only connecting one additional NVMe SSD (for now), that's only going to require x4 lanes, right? That leaves 12 lanes unused, and I'd like to keep those 12 lanes available for something else.
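To make the arithmetic concrete, here's the rough back-of-the-envelope calc I'm working from (the x4-per-NVMe-drive figure is standard; whether the leftover lanes are actually reachable is exactly what I'm asking):

Code:
# Back-of-the-envelope lane math for the paragraph above (my assumptions, not a verified spec).
slot_lanes = 16      # one PCIe 3.0 x16 slot
nvme_lanes = 4       # a single NVMe SSD only needs an x4 link
spare = slot_lanes - nvme_lanes

print(f"One extra NVMe SSD in the x16 slot uses x{nvme_lanes}, leaving {spare} lanes spare.")
print("Whether those spare lanes are usable by anything else is the bifurcation/switch question.")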

What I want to find out is the best way of connecting the third NVMe SSD to the motherboard.

Am I right in thinking that since the x16 slots support x8/x8, all I need to do is 1) get a splitter that splits the x16 into two x8 ports 2) connect a PCIe to M.2 adapter to one of the x8 ports via the splitter 3) connect the SSD to that. Is that how that works?

That said, it's important to note that my motherboard does not support PCIe bifurcation. Having read a bunch of threads here, I understand that it is still possible to bifurcate a PCIe slot, provided the splitter card has certain chips on it. I'm not quite sure I understand the difference between a PCIe slot being able to support x8/x8 while not supporting bifurcation.

I spoke to Chris Payne and he said to "get an asm2824 or pex8747 M.2 switch off aliexpress."

While I appreciate the advice, I haven't learnt any more about how this all works, and which specific product/s I need to purchase. As I understand it, ASM2824 and PEX8747 are chips that you can get, and so I need to find an adapter card with one of these chips.

From here, though, I still don't know what I need.

Is my thinking on how this would work correct?

1. PCIe splitter (with an ASM2824 or PEX8747 on the splitter)
a. Ideally this would be a splitter that divides the x16 into two separate x8 slots
2. PCIe to 2x M.2 NVMe adapter card (does this also need an ASM2824 or PEX8747 chip on the card?)
3. Connect 1 (or 2) SSDs to that card
4. An x8 PCIe slot remains free for me to connect other devices to (e.g. network cards, capture cards, port expansion, etc etc etc)

I'm sure someone will say this is incorrect and that I'm making some false assumptions; in fact, I suspect it is incorrect. If so, please could you explain what I am not getting? Also, if anyone has knowledge of which specific products I should buy (direct links appreciated), that would be incredible.
 
I would just raid the 2 drives and be done with it
Thanks, but (sorry if I didn't make this clear originally) I'm committed to using the 3x 4TB SSDs in a RAID configuration. I want 8TB usable space. So I really am just looking for responses that address my questions - both to actually achieve it as well as my pure desire to learn more about this.
 
Am I right in thinking that since the x16 slots support x8/x8,

I think you're misinterpreting this spec. As I read it, you can use 16 lanes in slot 1, or 8 lanes in each slot.

The easiest and cheapest path forward is to just get a simple adapter that lets you put the m.2 card into the second x16 slot. Your gpu, if you have one, will run fine at x8. You'll lose out on 4 lanes, but that's life. Low budget means taking the simple way.

Theoretically, you could get an active pci-switch on the adapter and have any number of m.2 slots, with 8 or 16 lanes back to the cpu depending on if you need both x16 slots or not. Depending on the switch chip, and the creativity of the board maker, there's lots of potential here, but none of it is going to be inexpensive. A passive adapter for one slot is not very expensive. I think you can go ahead and get a multi-slot passive adapter that needs bifurcation, and the first slot will work, but the others won't; those aren't much more and maybe you'll be able to use the other slot(s) at some other time, but honestly, probably not: budget boards don't usually include it in their firmware. If you're adventurous, you could try firmware editing, but I'm not sure if bifurcation is just a setting that needs to be enabled, or if there's more to it than that.

Edit to add: that cpu and board are max pci-e 3.0, so it's no big deal that your ssds are also 3.0. Not that it was much of a deal anyway. I didn't see a system diagram for the motherboard, but the cpu has 16 lanes of pci-e, so I think the board m.2 slots and the x1 slots must feed through the chipset. The cpu support page says it can do x8 + x4 + x4, so there's a chance of bifurcation working for you with bios shenanigans, but it'd be easier if MSI already did it.
 
Last edited:
I took a look at your motherboard's manual, and PCIE bifurcation (the splitting of a single PCIE slot's PCIE lanes for multiple devices attached) is not listed as being supported. There's no mention of bifurcation at all.
 
Bifurcation is exactly what you are trying to do, but like the previous poster said, your motherboard has to support this or it will not work. Also, as was mentioned before, when an x16 slot "supports x8/x8" it means that when you use the slot paired with it, they both run at x8: the x16 slot runs at x8 and the slot it is paired with runs at x8. It does not mean you can split two x8 slots out of one x16 slot. That's what bifurcation is: splitting up the PCIe lanes how you want. But if your board doesn't support it, it will not work.
 
Bifurcation is exactly what you are trying to do, but like the previous poster said, your motherboard has to support this or it will not work. Also, as was mentioned before, when an x16 slot "supports x8/x8" it means that when you use the slot paired with it, they both run at x8: the x16 slot runs at x8 and the slot it is paired with runs at x8. It does not mean you can split two x8 slots out of one x16 slot. That's what bifurcation is: splitting up the PCIe lanes how you want. But if your board doesn't support it, it will not work.
This is true, and I was already aware my motherboard doesn't support it (as I say in my original post).

However, I am also aware that certain adapter cards exist which have chips on them (such as PEX8747 chips) which manage bifurcation themselves, which makes it possible to bifurcate on this board.

My questions are related, but slightly different/separate to that issue.
 
I think you're misinterpreting this spec. As I read it, you can use 16 lanes in slot 1, or 8 lanes in each slot.

The easiest and cheapest path forward is to just get a simple adapter that lets you put the m.2 card into the second x16 slot. Your gpu, if you have one, will run fine at x8. You'll lose out on 4 lanes, but that's life. Low budget means taking the simple way.

Theoretically, you could get an active pci-switch on the adapter and have any number of m.2 slots, with 8 or 16 lanes back to the cpu depending on if you need both x16 slots or not. Depending on the switch chip, and the creativity of the board maker, there's lots of potential here, but none of it is going to be inexpensive. A passive adapter for one slot is not very expensive. I think you can go ahead and get a multi-slot passive adapter that needs bifurcation, and the first slot will work, but the others won't; those aren't much more and maybe you'll be able to use the other slot(s) at some other time, but honestly, probably not: budget boards don't usually include it in their firmware. If you're adventurous, you could try firmware editing, but I'm not sure if bifurcation is just a setting that needs to be enabled, or if there's more to it than that.

Edit to add: that cpu and board are max pci-e 3.0, so it's no big deal that your ssds are also 3.0. Not that it was much of a deal anyway. I didn't see a system diagram for the motherboard, but the cpu has 16 lanes of pci-e, so I think the board m.2 slots and the x1 slots must feed through the chipset. The cpu support page says it can do x8 + x4 + x4, so there's a chance of bifurcation working for you with bios shenanigans, but it'd be easier if MSI already did it.
Thanks for this!

Yeah, the more I scratch my head over this, the more I'm inclined to follow your advice and just whack a simple M.2 adapter in the second x16 slot.

But because I'm using this project to learn as much as possible, I would also like to iron out this more complicated approach, even if for theory's sake.


Don't have/need a GPU for what I'm doing.

the cpu has 16 lanes of pci-e

If I understand this correct (and please correct me if I'm wrong), you're saying that the CPU only supports up to x16 lanes of PCIe, meaning that even if the motherboard supported 32 PCIe lanes, they could never be utilised because the CPU can only manage 16 at most?

you can use 16 lanes in slot 1, or 8 lanes in each slot.
So going with the more simple approach like you suggest, would you advise getting something like this? My understanding is that this would go into one of the PCIe slots, but of course only one M.2 slot on this card could be utilised (due to no bifurcation support)?

Alternatively, would it be worth looking at putting an M.2 NVMe adapter card into one of the 6 SATA ports on the motherboard? Would that be easier/more reliable?
 
However, I am also aware that certain adapter cards exist which have chips on them (such as PEX8747 chips) which manage bifurcation themselves, which makes it possible to bifurcate on this board.
The PEX chips are PCIe switches; they don't do bifurcation. Bifurcation means e.g. splitting PCIe x16 into 4x PCIe x4 - the total number of lanes does not change, and lanes assigned to one x4 device cannot be used by any other device. In contrast the PCIe switch cards are able to change the number of available lanes and share the total bandwidth, so for instance if you have a PEX card with four NVMe M.2 slots, you should be able to put that card in an x8 slot and still be able to use all of the four M.2 slots (the drives will still all have x4 lanes to the switch IC, but will share the available x8 bandwidth to the host).

A bifurcation adapter is not visible to the host OS while a switch card is.
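If it helps to see the difference in numbers, here's a toy sketch (the ~0.985 GB/s per PCIe 3.0 lane figure is the usual rule of thumb, and the four-drive card is just a hypothetical example, not a specific product):

Code:
# Toy comparison: four NVMe drives behind (a) bifurcation vs (b) a PCIe switch (PCIe 3.0 numbers).
GBPS_PER_LANE = 0.985   # rough usable GB/s per PCIe 3.0 lane (8 GT/s, 128b/130b encoding)

def bifurcated(host_lanes, drives):
    # Lanes are carved up statically; each drive gets a fixed slice and can never borrow more.
    per_drive = host_lanes // drives
    return per_drive, per_drive * GBPS_PER_LANE

def switched(host_lanes):
    # Each drive still links to the switch at x4, but they all share the uplink's bandwidth.
    return host_lanes * GBPS_PER_LANE

lanes, ceiling = bifurcated(16, 4)
print(f"x16 bifurcated into 4: each drive gets a fixed x{lanes} (~{ceiling:.1f} GB/s), no sharing")
print(f"4 drives behind a switch in an x8 slot: ~{switched(8):.1f} GB/s shared between them")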
 
Alternatively, would it be worth looking at putting an M.2 NVMe adapter card into one of the 6 SATA ports on the motherboard? Would that be easier/more reliable?
You cannot, as far as I know, create PCIe lanes from a SATA port. So there should be no such thing as a SATA-to-NVMe adapter. What probably exists is a SATA-to-M.2 adapter, but the M.2 slot in the adapter will then only support M.2 SATA drives, not NVMe ones.

(The confusion here stems from the fact that the M.2 connector can support both the NVMe and the SATA protocol. But those protocols are not compatible.)
 
Your motherboard can split the x16 from the CPU into two x8 slots (two slots! not x8/x8 in one). That's the full extent of its bifurcation abilities (bifurcation requires extra hardware, it's not just a firmware thing). So you can add two m2-pcie cards using two passive adapters in two slots (not counting the x1 slots). There's no benefit in this case to using an expensive active adapter.
 
If I understand this correct (and please correct me if I'm wrong), you're saying that the CPU only supports up to x16 lanes of PCIe, meaning that even if the motherboard supported 32 PCIe lanes, they could never be utilised because the CPU can only manage 16 at most?

If the motherboard had 32 lanes, only 16 of them would come from the CPU directly; everything else would come from the chipset. But nobody puts together a motherboard that moves slots between the CPU and the chipset; that's a lot of complexity that nobody wants. On this board, the x16 slots are connected to the CPU, which is typical (but not always; sometimes you'll have a chipset-fed slot that's mechanically x16 but electrically x1 or x4).

So going with the more simple approach like you suggest, would you advise getting something like this? My understanding is that this would go into one of the PCIe slots, but of course only one M.2 slot on this card could be utilised (due to no bifurcation support)?

Yeah, the 4 slot one is way more expensive than the single slot one though; I thought they'd be closer in price. Just get the single slot and move on with your build.

Alternatively, would it be worth looking at putting an M.2 NVMe adapter card into one of the 6 SATA ports on the motherboard? Would that be easier/more reliable?

What bitnick said. This won't work unless you have SATA m.2 drives, which you don't.
 
Would a simple $10 M.2-to-PCIe adapter not work here?

You are only missing a single M.2 slot, from what I understand.

You can use software RAID (wouldn't it be the recommended way every time? Wendell from Level1Techs seems very confident that hardware RAID is dead). If you have Windows 10 Pro it's as easy as a couple of clicks: select your drives and then choose between a simple combined drive or RAID 0, 1 or 5.

P.S. You can pool hard drives together so they're presented as a single drive in your OS, if that's the main reason you want to RAID them (probably not, as you seem to want 8 TB out of 12 TB)
 
Last edited:
Would a simple $10 M.2-to-PCIe adapter not work here?
Could you send me a link to an example?
You are only missing a single M.2 slot, from what I understand.
Correct.
You can use software RAID
Can you explain this a bit further? I don't know what the difference between software and hardware raid is - I haven't set it up yet and am pretty new to this.

You're saying that software raid is better than hardware raid. Is software raid possible with 3x NVMe SSDs (and with 1 of those SSDs connected via a PCIe splitter)?

you seem to want 8 TB out of 12 TB
That's right. I want to set the three drives up in RAID 5, if possible.
 
Could you send me a link to an example?
I personally used one of those:
https://www.amazon.com/GLOTRENDS-Ad...d=1701809047&sprefix=m.2+to+pci,aps,83&sr=8-5

It was plug and play: I just installed the drive on it, put it in the x1 slot, and the drive showed up in the BIOS.

Can you explain this a bit further? I don't know what the difference between software and hardware raid is - I haven't set it up yet and am pretty new to this.
I'm really no expert; I always pool my drives without any RAID (I have no need for redundancy, and RAID is not really a good backup if that's the only need). Hardware RAID is something people have been saying is on its way out for a very long time (stuff like ZFS took over):

Wendell goes into detail:

https://www.youtube.com/watch?v=l55GfAwa8RI

If you are on windows:

How to set up RAID 5 storage with parity on Windows 10​

https://pureinfotech.com/setup-raid-5-windows-10/
 
put it in the x1 slot
Sorry if I'm wrong here, but I thought NVMe SSDs saturate up to 4 lanes quite easily? So why did you put it in a x1 slot? Does it perform a lot worse than if you gave the SSD 4 lanes?

I always pool my drives without any RAID
Are you saying that you pool your drives and then mirror them as backup? What is your personal backup solution?

If you are on windows
I'm building a home NAS, so will either use UnRaid, TrueNas or Proxmox.
 
Sorry if I'm wrong here, but I thought NVMe SSDs saturate up to 4 lanes quite easily? So why did you put it in a x1 slot? Does it perform a lot worse than if you gave the SSD 4 lanes?
It is a PCIe x4 adapter; sorry, I'm so used to x16 that I thought I was looking at an x1. I seem to be able to reach the max speed of my slow, cheap drive (a bit over 3000 MB/s).
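Rough numbers for why the x1 vs x4 distinction matters (rule-of-thumb per-lane throughput, not a benchmark):

Code:
# Why an x1 slot would have bottlenecked this drive while an x4 adapter doesn't (rough figures).
GBPS_PER_LANE = 0.985          # approximate PCIe 3.0 throughput per lane
drive_gbps = 3.0               # the ~3000 MB/s drive mentioned above

for lanes in (1, 4):
    link = lanes * GBPS_PER_LANE
    verdict = "bottleneck" if link < drive_gbps else "plenty"
    print(f"x{lanes} link: ~{link:.1f} GB/s -> {verdict} for a ~{drive_gbps:.0f} GB/s drive")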

Are you saying that you pool your drives and then mirror them as backup? What is your personal backup solution?
Yes, my backup solution is a really simple automated rsync on Linux between two pools of drives. More and more of the important stuff also goes on a git server.

I'm building a home NAS, so will either use UnRaid, TrueNas or Proxmox.
I would look into using the filesystem to do the job of RAID (like ZFS). Those solutions come with everything out of the box, and they tend to be exactly what you want:

https://unraid.net/zfs-pools-rc3

If you want to play around, learn, etc., you can use ZFS to create some redundancy and so on.

Or you can simply have two pools of drives of similar size (if you want to back up everything; you could use three if you have a lot of unimportant files you don't mind leaving out of the backup, to save space) and do a simple sync from time to time. That tends to be the most robust way, at the cost of less automation and less ability to keep the system running during a drive failure (things home users usually never need).
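For what it's worth, the "really simple automated rsync" boils down to something like this, run from cron or a systemd timer (just a sketch; the mount points are made up):

Code:
# Sketch of a scheduled one-way sync between two pools (mount points are hypothetical).
import subprocess

SRC = "/mnt/pool-main/"        # trailing slash: sync the contents, not the directory itself
DST = "/mnt/pool-backup/"

# -a preserves permissions/ownership/times; --delete makes DST an exact mirror of SRC.
subprocess.run(["rsync", "-a", "--delete", SRC, DST], check=True)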
 
It is a PCIe x4 adapter; sorry, I'm so used to x16 that I thought I was looking at an x1. I seem to be able to reach the max speed of my slow, cheap drive (a bit over 3000 MB/s).


Yes, my backup solution is a really simple automated rsync on Linux between two pools of drives. More and more of the important stuff also goes on a git server.


I would look into using the filesystem to do the job of RAID (like ZFS). Those solutions come with everything out of the box, and they tend to be exactly what you want:

https://unraid.net/zfs-pools-rc3

If you want to play around, learn, etc., you can use ZFS to create some redundancy and so on.

Or you can simply have two pools of drives of similar size (if you want to back up everything; you could use three if you have a lot of unimportant files you don't mind leaving out of the backup, to save space) and do a simple sync from time to time. That tends to be the most robust way, at the cost of less automation and less ability to keep the system running during a drive failure (things home users usually never need).
I think I am fairly committed to setting the 3 drives up in a way that allows for a single drive failure without any disruption.

Further down the line, I would look to add further drives, which would add more usable space as well as more redundancy. I am going to back the whole array up on Backblaze from day 1.

I'll maybe go with ZFS in UnRaid. But I don't quite understand how ZFS replaces the benefits of RAID 5 for my use case. The whole reason I am interested in RAID 5 is because it allows for one of the drives to fail.

Am I right in thinking that RAID 5 and RAID Z5 are identical in terms of how they function, with Z5 being ZFS' version of a RAID 5 configuration? I'd happily go with that.
 
I'll maybe go with ZFS in UnRaid. But I don't quite understand how ZFS replaces the benefits of RAID 5 for my use case. The whole reason I am interested in RAID 5 is because it allows for one of the drives to fail.
That's one of the big features of ZFS:
https://docs.oracle.com/cd/E53394_01/html/E54801/gcfof.html

You can decide how many drives you can lose (versus how much space you give up) before losing data in your pool; there are settings to mimic RAID 5 and RAID 6 levels of redundancy for people who want those.

And I feel like you did reach that conclusion in the rest of your message: yes, ZFS for a NAS, with the ability to have a disk fail, is a really common use for it, for home users, enterprises, etc.
 
Is software raid possible with 3x NVMe SSDs (and with 1 of those SSDs connected via a PCIe splitter)?
I don't know much about UnRaid, but both Proxmox and TrueNAS are "software raid" solutions. Both support the ZFS equivalent of RAID5 (called RAID-Z1 under ZFS; "1" for one drive of parity data). And both will work fine with your SSDs connected as described. (The only thing they don't support is drives connected via a card that's doing RAID in hardware.)

There are pros and cons with different ZFS pool layouts. Here is a good resource to learn more. In short, RAID-Z1 lets you get some redundancy with a moderate storage efficiency cost (33 % wasted using three drives, as you know) and a rather large loss of IOPS (you'll only get the IOPS of a single disk).

Also note that you cannot add a single disk to an existing RAID-Z1 "vdev". You'll need to either destroy the pool and recreate it as a 4-disk (or whatever) RAID-Z1 vdev, or you can add another complete RAID-Z1 vdev (i.e. three more disks). Again, read more in the linked document.
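To put numbers on the storage-efficiency point above (simple arithmetic; it ignores ZFS metadata and padding overhead, so real figures will be slightly lower):

Code:
# Usable space and efficiency of a RAID-Z1 vdev of equal-sized disks (rough; ignores ZFS overhead).
def raidz1_usable(n_disks, tb_per_disk):
    usable = (n_disks - 1) * tb_per_disk            # one disk's worth of space goes to parity
    return usable, usable / (n_disks * tb_per_disk)

for n in (3, 4):
    usable, eff = raidz1_usable(n, 4)
    print(f"{n} x 4 TB RAID-Z1: ~{usable} TB usable ({eff:.0%} efficiency)")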
 
That's one of the big features of ZFS:
https://docs.oracle.com/cd/E53394_01/html/E54801/gcfof.html

You can decide how many drives you can lose (versus how much space you give up) before losing data in your pool; there are settings to mimic RAID 5 and RAID 6 levels of redundancy for people who want those.

And I feel like you did reach that conclusion in the rest of your message: yes, ZFS for a NAS, with the ability to have a disk fail, is a really common use for it, for home users, enterprises, etc.
Cool. So to conclude, RAID Z5 should be possible to set up with 3x NVMe SSDs (with 1 of those SSDs connected via a PCIe x4 adapter)?
 
I'll maybe go with ZFS in UnRaid. But I don't quite understand how ZFS replaces the benefits of RAID 5 for my use case. The whole reason I am interested in RAID 5 is because it allows for one of the drives to fail.

Am I right in thinking that RAID 5 and RAID Z5 are identical in terms of how they function, with Z5 being ZFS' version of a RAID 5 configuration? I'd happily go with that.

ZFS has single drive, mirrors, and raidz1-3

raidz1 is more or less equivalent to RAID 5: lose one disk and your data is intact; lose two disks and the vast majority of your data is gone. raidz2 allows losing two disks, raidz3 allows losing three. There is no raidz4 or raidz5, although you can mix and match mirrors and raidz if you want to get really complex. With three disks and ZFS, your options are really: stripe (lose any disk and lots of data is gone); a three-way mirror (you can lose two disks); raidz2 (you can lose two disks, but you also lose performance, so don't run raidz2 on a three-disk pool!); or raidz1 (you can lose one disk, but there's a performance cost).

raidz1 is what you want based on your messages.
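Since you said you're using this build to learn: the reason single-parity layouts like RAID 5 / raidz1 survive exactly one lost disk is XOR parity. A toy illustration of just that core idea (real RAID-Z adds variable stripe widths and checksums on top, so don't take this as how ZFS literally lays data out):

Code:
# Toy single-parity stripe: any one lost chunk can be rebuilt by XOR-ing the survivors.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data_chunks = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]  # three "disks"
parity = reduce(xor_bytes, data_chunks)             # parity chunk = XOR of all data chunks

# Pretend the first disk died: XOR the surviving data with the parity to get it back.
rebuilt = reduce(xor_bytes, data_chunks[1:] + [parity])
assert rebuilt == data_chunks[0]
print("rebuilt chunk:", rebuilt.hex())              # 01020304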
 
ZFS has RAID "built in". RAID5 like functionality can be requested when creating the zpool.

The bifurcation question only becomes relevant when you want more than one NVMe SSD in a x8 or x16 slot. If you only use one you don't need any support from the mainboard other than a slot of at least x4 lanes.

If you want more than one SSD in one x8 or x16 slot then you either need bifurcation on the board, or you need a PCIe card with a PCIe bridge chip on it.
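For reference, requesting that RAID 5-like layout when the pool is created looks roughly like the sketch below (pool name and device paths are placeholders; the TrueNAS/unRAID GUIs do the equivalent for you):

Code:
# Sketch: build a RAID-Z1 pool from three NVMe drives (pool name and device paths are placeholders).
import subprocess

# Ideally use stable /dev/disk/by-id/... paths so device renumbering can't confuse the pool.
devices = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1"]

# zpool create <pool> raidz1 <devices...>   (raidz1 = single parity, the RAID 5-like layout)
subprocess.run(["zpool", "create", "tank", "raidz1", *devices], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)   # confirm the vdev layout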
 
Thanks, but (sorry if I didn't make this clear originally) I'm committed to using the 3x 4TB SSDs in a RAID configuration. I want 8TB usable space. So I really am just looking for responses that address my questions - both to actually achieve it as well as my pure desire to learn more about this.

Cool. So to conclude, RAID Z5 should be possible to set up with 3x NVMe SSDs (with 1 of those SSDs connected via a PCIe x4 adapter)?

No matter what route you go (whether ZFS or traditional hardware RAID) I wouldn't advise this. Two drives running off the motherboard (both x4 provided by the PCH) and then whatever third party controller/adapter you end up using to get the third working is a recipe for inconsistent and even poor performance.

ZFS (technically OpenZFS but most people just call it ZFS) has poor support on Windows, I think it's basically still in a beta / release candidate stage. You want to run Linux or FreeBSD when using ZFS. You'll also probably want ECC memory for your ZFS system (which you can't use with your CPU/mobo), but that's an entirely different conversation.
 
Ugh, rather than spend all your time here, you could just buy one (or two) of these and be done:

https://a.co/d/24LXfz8
Or if you're satisfied with consumer grade hardware you could buy something like this:
https://a.co/d/0ei2pbd
or
https://a.co/d/4wK4mJZ
or even cheap knock-off Chinese brands like this:
https://a.co/d/cIbBM60

There are tons of these, especially considering you don't even care whether your system is running a graphics card (or at least not a high-end one).
 
Update:
To recap, I bought these parts. I have listed the exact price I paid for each.

I haven't built it yet. They are all in their individual boxes. And yes, I didn't pay for the RM650x, as a friend gave it to me for free.

However, after some digging, I have found these parts (the exact prices I found them for are listed), which would give me an ECC setup.
  • Note: the CPU in that list is the 9300, but the actual CPU I would purchase is the 9300T - the 9300T isn't available to include on pcpartpicker.
This would mean selling what I have just bought and going for this ECC-compatible setup.

My use-case is building my first DIY home server, with the aim of using it primarily as media storage and music streaming, and occasional video streaming.

3x NVMe SSDs means that I will be using a PCIe to M.2 NVMe adapter card in one of the PCIe slots. I would have to do this for either build.

I haven't decided which OS to go with yet (i.e UnRaid, TrueNas, Proxmox). In any case, I'm looking at setting it up in a RAIDZ1 configuration.

Anyway, my question is: is it worth me selling what I've just bought and getting the parts for the ECC build instead? Also, would be great to get a sanity check and confirmation that these parts will work together. Seems like they will.




P.S. Apologies in advance if this isn't the right way to tag other users:
 
Last edited:
This will allow you to run 4x NVME in a single PCIe 3 x16 slot.
https://a.co/d/56yeqKr

HighPoint 4-Port M.2 SSD7105 PCIe Gen3 Bootable NVMe RAID Controller for Windows & Linux Systems​

It's not a real common use these days.
 
This will allow you to run 4x NVME in a single PCIe 3 x16 slot.
https://a.co/d/56yeqKr

HighPoint 4-Port M.2 SSD7105 PCIe Gen3 Bootable NVMe RAID Controller for Windows & Linux Systems​

It's not a real common use these days.
Thanks, but wow that's expensive. I really only need 1 additional M.2 NVMe slot.
 
Anyway, my question is: is it worth me selling what I've just bought and getting the parts for the ECC build instead? Also, would be great to get a sanity check and confirmation that these parts will work together. Seems like they will.

Here is my recommended reading for ECC and ZFS. I've done two custom built servers so far in my adventures in ZFS and ultimately decided to use ECC in both cases.

Just random advice from my personal experience. I'm familiar with both Proxmox and TrueNAS Core/Scale.

Proxmox is a type 1 hypervisor aimed at spinning up LXC containers and VMs. Managing zpools is harder since you're relying entirely on the CLI, and you have to assign resources to each LXC or VM, whereas with iocage jails or docker containers you don't.

TrueNAS Core/Scale, on the other hand, has much easier management for automated smartctl tests and a nice GUI for pretty much whatever you need to do with your zpools. TrueNAS Core is based on FreeBSD and its plugin system is deprecated/unmaintained (even though they still advertise it), so if most of the software you want to deploy has FreeBSD ports available, creating iocage jails works great if you want to learn FreeBSD. Bhyve VMs are fine too in Core, but not as good as QEMU, especially if you do passthrough on any devices.

Enter TrueNAS Scale, based on Debian Linux. You can easily deploy docker containers, which is less hassle than iocage jails and has more supported software. You can also do QEMU/KVM on Scale.

So ultimately, since I don't know what you're trying to achieve, any one of them is valid. I don't see a need for unRAID, which always lagged in supporting ZFS (now it does though) in favor of Btrfs or whatever and costs money when you have free alternatives with essentially feature parity.
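As a flavour of what deploying a container looks like on a Linux host with Docker installed (just a sketch; the bind-mount paths are placeholders, the image is the official Jellyfin one):

Code:
# Sketch: run Jellyfin in Docker on a Linux host (bind-mount paths are placeholders).
import subprocess

subprocess.run([
    "docker", "run", "-d", "--name", "jellyfin",
    "-p", "8096:8096",                       # Jellyfin's default web UI port
    "-v", "/tank/apps/jellyfin:/config",     # persistent config
    "-v", "/tank/media:/media:ro",           # media library, mounted read-only
    "jellyfin/jellyfin",
], check=True)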

Storing absolutely everything on NVMEs/SSDs is something I've never done. I personally think a nice spinning-rust (hard drive) zpool is great for most types of data storage, and then you can have applications/VMs/databases (stuff that benefits from very fast speeds) on SSD zpools. What you're doing is cool and will give 8 TB of very fast storage for everything; I just hope you can get it set up right so all three NVMe drives can achieve good speeds. If one of them performs slowly, it bogs down the entire zpool.

Good luck on your endeavors.

This will allow you to run 4x NVME in a single PCIe 3 x16 slot.
https://a.co/d/56yeqKr

HighPoint 4-Port M.2 SSD7105 PCIe Gen3 Bootable NVMe RAID Controller for Windows & Linux Systems​

It's not a real common use these days.
ZFS requires direct access to the disks. If you run it through a RAID controller you're bound to encounter issues, so in this case I'm not sure that will work. Typically on LSI SAS/SATA HBAs this is achieved through "IT mode" firmware rather than the normal hardware RAID firmware; that NVMe HBA would need to support that kind of mode for use with ZFS. I believe bifurcation would also need to be supported on the PCIe slot to present all the drives "directly" to the machine.
 
Thank you for all the advice, this is really useful.

I don't see a need for unRAID, which always lagged in supporting ZFS (now it does though) in favor of Btrfs or whatever and costs money when you have free alternatives with essentially feature parity.
As I understand it, unRAID is a bit more user-friendly for beginners?

In terms of my use-case, it's really just going to be media storage and music streaming, and occasional movie streaming. Probably using Plex, Jellyfin or Emby.

Storing absolutely everything on NVMEs/SSDs is something I've never done.
I ended up with 3x SSDs just because I happened to get a good price on them, so am keen to see if I can make it work.

ZFS requires direct access to the disks. If you run it through a RAID controller you're bound to encounter issues, so in this case I'm not sure that will work. Typically on LSI SAS/SATA HBAs this is achieved through "IT mode" firmware rather than the normal hardware RAID firmware; that NVMe HBA would need to support that kind of mode for use with ZFS. I believe bifurcation would also need to be supported on the PCIe slot to present all the drives "directly" to the machine.
I've noticed some people refer me to RAID controllers, and others refer me simply to PCIe to M.2 NVMe adapter cards like this. Would the latter allow for the necessary direct access?
 
Anyway, my question is: is it worth me selling what I've just bought and getting the parts for the ECC build instead?
I like the idea of ECC, but I've never been willing to pay for it for personal builds. From experience at work, the vast majority of our servers were assembled, racked, provided to us, set up, run for 3+ years, and returned with no RAM errors and no other hardware errors. On the other hand, some machines did show ECC errors, in different patterns: sometimes just one correctable error somewhere in the middle of the run; sometimes fine for 2 years, then one or two correctables a day; sometimes fine and then thousands of correctables per minute (which doesn't take down the machine, but makes it run so slow you wish it did); sometimes 2 one day, then 100 the next, then 10,000 the next, and then it gets the RAM swapped. Less often, a detectable but uncorrectable error; we'd try those machines again, and some would be fine, while others would fail again within 24 hours and get a RAM swap.

Anyway, my point is, if your ram continues working the whole time, you didn't really need ECC, and if you have lots of ram errors, your system is probably going to crash quickly... But if you have a small number, you won't know you needed ECC unless you have ECC.

ZFS without ECC is probably less safe than ZFS with ECC. But it's still safer than most filesystems. Additionally, there are plenty of avenues for corruption that ECC and ZFS can't protect against: if your data is corrupted in memory by wild writes from kernel bugs, or by DMA from a device, before it makes it to the ZFS writing routines, you'll have a bad time later and not know.
 
Don't do hardware RAID. Even without ZFS software RAID is better these days.

You can mitigate the need for ECC by running memtest86 on a regular basis. You could do a monthly safety thing where you scrub the pool and then memtest a bit.
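The "monthly safety thing" can be as small as the sketch below (pool name is a placeholder; memtest86 has to be booted from its own media, so the script only covers the scrub half):

Code:
# Sketch of the scrub half of a monthly check (pool name is a placeholder).
import subprocess

POOL = "tank"

subprocess.run(["zpool", "scrub", POOL], check=True)         # starts the scrub in the background
subprocess.run(["zpool", "status", "-x", POOL], check=True)  # -x only complains if something is unhealthy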
 
Don't do hardware RAID. Even without ZFS software RAID is better these days.

You can mitigate the need for ECC by running memtest86 on a regular basis. You could do a monthly safety thing where you scrub the pool and then memtest a bit.
This is the consensus among everyone and I have taken it on board already. Has anything I've said made you think I am going to do hardware RAID? Maybe what I'm saying is hardware RAID without me realising?:
  1. These parts
    a. Plus something like this to connect the 3rd SSD to the mobo
  2. Plus any other drives required by the respective OS (TrueNas, unRAID, etc) to configure the SSDs
  3. RAIDZ1 on the 3x SSDs
Is this hardware RAID?
 
Is this hardware RAID?
No, RAIDZ1 is ZFS: the filesystem does the job at the software level, which will tend to be the superior solution (at least with consumer hardware, and arguably even with a lot of enterprise hardware, I think).

For stuff like ECC, it depends what you are trying to do. If you just want to learn and no actual data will be stored, you do not need to spend the money, and even for a movie NAS regular RAM is perfectly fine. I'm going a bit blind here, because if all you want to do is learn about ZFS or RAID, there isn't really a bad or good way; making mistakes would be a great way to learn to start with. For the goal of learning, depending on the level, not using an already pre-made solution like TrueNAS would be the way; if the goal is to have a NAS and just learn a little bit, using a pre-made solution like TrueNAS/OMV/etc. would be the way to go.

Also, it may be pointless to build a solution that gets any performance increase out of M.2 SSDs if you are on gigabit networking, but for learning purposes that would not matter.

If the NAS is for big files (compressed backups, movies, etc.), a single regular modern HDD can saturate a gigabit port (110-115 MB/s or so) on sustained work. A single M.2 drive's speed is obviously way over gigabit; a good M.2 drive can sustain more than 1.15 GB/s for a long time.
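Quick sanity check on those numbers (rule-of-thumb figures, not benchmarks):

Code:
# Rough numbers behind the point above (rule-of-thumb figures, not benchmarks).
gigabit_usable = 115    # MB/s you realistically get over 1 GbE after protocol overhead
hdd_sustained = 115     # MB/s, the HDD figure quoted above for sequential work
nvme_gen3 = 3000        # MB/s, a decent PCIe 3.0 x4 NVMe drive

print(f"HDD vs gigabit:  {hdd_sustained / gigabit_usable:.1f}x (already enough to fill the link)")
print(f"NVMe vs gigabit: {nvme_gen3 / gigabit_usable:.0f}x (the network, not the SSDs, is the bottleneck)")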
 
Last edited:
No, RAIDZ1 is ZFS: the filesystem does the job at the software level, which will tend to be the superior solution (at least with consumer hardware, and arguably even with a lot of enterprise hardware, I think).

For stuff like ECC, it depends what you are trying to do. I'm going a bit blind here, because if all you want to do is learn about ZFS or RAID, there isn't really a bad or good way; making mistakes would be a great way to learn to start with. For the goal of learning, depending on the level, not using an already pre-made solution like TrueNAS would be the way; if the goal is to have a NAS and just learn a little bit, using a pre-made solution like TrueNAS/OMV/etc. would be the way to go.

Also, it may be pointless to build a solution that gets any performance increase out of M.2 SSDs if you are on gigabit networking, but for learning purposes that would not matter.

If the NAS is for big files (compressed backups, movies, etc.), a single regular modern HDD can saturate a gigabit port (110-115 MB/s or so) on sustained work. A single M.2 drive's speed is obviously way over gigabit; a good M.2 drive can sustain more than 1.15 GB/s for a long time.
Ok cool. I think your mention of hardware RAID when I had not suggested it in the thread made me second-guess my assumptions. But by the sounds of it, I'm not doing any hardware RAID here.
 