Anyone heard about NZFS (from the FlexRAID guy[s])?

Not sure how credible he is with remarks like:

"Mirroring is crap and greatly inferior to multi-parity in all facets including protection level, space utilization, and ultimately cost."
 
It's all words so far. As I said in another thread, the coder is talented, no doubt about that, but he doesn't finish anything, or at least hasn't so far. I assume that now he's going commercial he'll find some way to get help, so he can get FlexRAID released and supported at version 2.0 while he develops NZFS.

If / when he delivers, I don't think it will be a "compact car" - it does sound quite well specified (to me anyway). It's one thing to build a parity system on top of an established file system - it's another to build a file system itself.
 
It's all words so far. As I said in another thread, the coder is talented, no doubt about that, but he doesn't finish anything, or at least hasn't so far. I assume that now he's going commercial he'll find some way to get help, so he can get FlexRAID released and supported at version 2.0 while he develops NZFS.

If / when he delivers, I don't think it will be a "compact car" - it does sound quite well specified (to me anyway). It's one thing to build a parity system on top of an established file system - it's another to build a file system itself.

From what I read, you'll need to format it using an existing filesystem: NTFS, EXT, etc.
 
I don't really understand FlexRAID. Can someone explain?


Question 1) It says that FlexRAID runs on top of another filesystem, such as NTFS, EXT, etc. It also says that you can pull out one disk and read that disk in any other OS:
"A drive taken from a FlexRAID pool is fully readable outside of the pool and on any other computer"
How can that be? If the disk is using EXT, how can it be read on a Windows PC that only supports NTFS? I don't get this claim.


Q2) It says
"Better power saving features (only the disk where the data resides needs to be active)"
Thus, the data only resides on some disks, not all disks. Say I have 12 disks in a FlexRAID pool, but a movie is spread onto 2 disks only. How can I get redundancy in this case? If one of those disks crashes, is my movie lost? I don't understand the point of this. In raidz3 I can lose any three disks, because the data is evenly distributed across every disk. But not in FlexRAID. How is the data distributed to the disks in FlexRAID?


Q3) No RAID provides data integrity. There is a lot of research on this. Why does he state that?
http://www.openegg.org/2012/01/16/nzfs-news-zero-scrubbing-zero-maintenance-design/
"Virtually every RAID implementation relies on scrubbing as part of its maintenance to ensure continuous data integrity."
Here are the problems with RAID:
http://en.wikipedia.org/wiki/RAID#Problems_with_RAID


Q4) FlexRAID / NZFS does not need a separate hardware RAID card, right? It is software RAID?


I also wonder how he will implement checksums. Normally, you need to alter the on-disk data structures to hold a checksum value. But he stores the checksum somewhere else on the disk. It is good that he tries to give better data protection, just as MS ReFS, Btrfs, and Hammer do.

NZFS? Not ZFS? Just as GNU is Not Unix? But GNU is a strict clone of Unix; it is even source-code compatible with Unix (Windows is not source compatible with Unix). Does this mean that NZFS will be a clone of ZFS?
 
1. You can put the drive in another PC which uses the same OS / supports the same filesystem.
2. The data on FlexRAID is not spread over the array as with normal RAID/ZFS; it is basically just normal (independent) disks with a separate parity drive (or drives). You can lose as many drives as your parity count. And if you lose more, you don't lose the array, just those drives. Also, only the drive that is reading/writing is active; the others can be put into standby and thus save power.
3. You can have a snapshot RAID (live will come in 2.0?), which means you have a backup from the time you created that snapshot. Any actions done later will not be covered. If you have a mostly read/stream archive (which most of us home users do), this is tolerable.
I hope I summed it up correctly, since I don't have any setup of that kind ;)
 
FlexRAID in realtime RAID mode (there is also snapshot RAID) is akin to RAID4, except that you can have several parity units. Files are only on one drive at a time, so if your movie is only one file, it's only on one drive. If it's a DVD folder, it could be spread out, depending on your parameters. When reading, the parity drive(s) doesn't need to be accessed, and neither do the drives other than the one where the movie is.

I've been using the storage pool feature of FlexRAID for some time, and it's great when it works, but I've experienced problems several times, usually when upgrading (which you're forced to do, since beta versions have an expiry date). Right now it's broken, so I went on the forums and discovered it was being commercialized. Honestly, what I could let slide for a free product, I couldn't for a paid-for one, and I doubt the guy behind FlexRAID can manage what he's trying to do: NZFS, when FlexRAID isn't finished.
 
I've been using the storage pool feature of FlexRAID for some time, and it's great when it works, but I've experienced problems several times, usually when upgrading (which you're forced to do, since beta versions have an expiry date). Right now it's broken, so I went on the forums and discovered it was being commercialized. Honestly, what I could let slide for a free product, I couldn't for a paid-for one, and I doubt the guy behind FlexRAID can manage what he's trying to do: NZFS, when FlexRAID isn't finished.

This is the thing: when it comes to your data, I'm not sure how one can afford to let things slide, whether it's free or commercialized. I suppose it depends how much the data is worth, but then I guess one wouldn't be going to all this hassle if it wasn't of some value :)
 
2. The data on FlexRAID is not spread over the array as with normal RAID/ZFS; it is basically just normal (independent) disks with a separate parity drive (or drives). You can lose as many drives as your parity count. And if you lose more, you don't lose the array, just those drives.
I don't understand this. So the data is not evenly distributed to every disk in the pool? Instead, only some of the disks get the data. How many disks get the data? Always two disks? What is the point of this design?
 
This is the thing: when it comes to your data, I'm not sure how one can afford to let things slide, whether it's free or commercialized. I suppose it depends how much the data is worth, but then I guess one wouldn't be going to all this hassle if it wasn't of some value :)

Well, any data you value shouldn't be put on beta software. For small setups, I'd recommend Windows 8 Storage Spaces when it comes out, but since you can only have one parity drive, with anything more than 8 drives you're starting to tempt fate. I'm looking to start archiving my Blu-rays, and it's going to need A LOT of storage. Probably 100TB+.
 
This is the thing: when it comes to your data, I'm not sure how one can afford to let things slide, whether it's free or commercialized. I suppose it depends how much the data is worth, but then I guess one wouldn't be going to all this hassle if it wasn't of some value :)

The beauty of flexraid is that it doesn't touch the data.
 
Well, any data you value shouldn't be put on beta software. For small setups, I'd recommend Windows 8 Storage Spaces when it comes out, but since you can only have one parity drive, with anything more than 8 drives you're starting to tempt fate. I'm looking to start archiving my Blu-rays, and it's going to need A LOT of storage. Probably 100TB+.

Personally, I have full backups; I would only use parity on top of that. As for Win8, it will be beta for a few years; it's from Microsoft, after all :D

I don't understand this. So the data is not evenly distributed to every disk in the pool? Instead, only some of the disks get the data. How many disks get the data? Always two disks? What is the point of this design?

You can choose folder priority or balanced space. In folder priority mode, FlexRAID will try to keep files put in the same folder on the same drive. Of course, if a drive is full (you can set at what amount of free space a drive counts as full), it will continue on another drive. Balanced space is self-explanatory.
 
You can choose folder priority or balanced space. In folder priority mode, FlexRAID will try to keep files put in the same folder on the same drive. Of course, if a drive is full (you can set at what amount of free space a drive counts as full), it will continue on another drive. Balanced space is self-explanatory.
OK, that is cool. It seems that FlexRAID collects data on some disks, so the data on those few disks is vulnerable. What happens if two disks crash in my pool? Could I lose all the data on those disks?

Is there something similar to RAID-6?
 
Is there something similar to RAID-6?

Think more RAID-4, but without the striped data. The theory is that since files aren't striped, no matter how many disks you lose, at least the data on the disks you do have is still valid (say I've ripped my 600 DVDs and lose 2 disks; now I only need to re-rip 50 of them instead of all 600). Also, if you just need to read a single file (like watching a movie), you only need to spin up that single disk.
 
FlexRAID runs "above the OS". You have 2 ways to run it: A) Pure Parity, B) RealTimeRaid.

Pure Parity is simple: it calculates parity on "another drive":
Drive1 : Files A B C D
Drive2 : Files E F G H
Drive3 : Files I J K L
Drive4 : Files M N O P
DriveP : Parity 0101

Basically, you MANUALLY tell it to scan Drives 1 to 4, and it then creates a parity of your data on DriveP.

It's fault tolerant (1 failure) => you can rebuild the failed drive using the parity.
And disaster proof => if 2 drives die, you lose "at worst" those 2 drives (this is a HUGE plus).

You can also add a Drive5 or Drive6 without any problem (again, very cheap & convenient).

The downside is that everything is "MANUAL": you have to tell it to update the parity after every update/change.
If you forget to do it, your data is not totally safe. It's perfect for a media server.

RealTimeRaid works kind of like RAID4/5/6, but it's a bit funky to understand (pooling & folder meta-management).
BUT it does not stripe data, so at worst you lose the "dead drives".
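The Pure Parity scheme described above is plain XOR parity computed across independent, unstriped drives. A minimal sketch in Python (the function names and toy data are my own illustration, not FlexRAID's actual code or API):

```python
from functools import reduce

def xor_parity(blocks):
    """Parity block = byte-wise XOR of all data blocks (RAID4-style)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving, parity):
    """Recover a single missing block by XOR-ing the survivors with parity."""
    return xor_parity(list(surviving) + [parity])

# Four "drives", each holding its own independent files (no striping)
drives = [b"AAAA", b"EEEE", b"IIII", b"MMMM"]
parity = xor_parity(drives)

# Drive 2 dies; XOR of the remaining drives plus parity gives it back
lost = drives.pop(1)
assert rebuild(drives, parity) == lost
```

The two properties mentioned above fall straight out of the math: one parity drive can rebuild any one failed drive, and losing two drives costs you only those two drives, since every other drive still holds complete, unstriped files.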
 
dastral, you've done a fantastic job of explaining the different modes of FlexRAID. Wanna go 3 for 3 and explain this NZFS for us? :)
 
I've been trying to figure out what he intends NZFS to be, and reading his blog posts hasn't helped much. At first it sounded like a block-level RAID layer (like MD in Linux), but with some more ZFS-like features (as he said: checksums, ZIL, de-dup, copy-on-write). But in his most recent blog post he talks about NZFS being able to do both "RAID under filesystem" and "RAID within filesystem". I've got no idea what he means by the second part, unless he intends to write an entirely new filesystem.
 
The beauty of flexraid is that it doesn't touch the data.

I know, but you are still relying on FlexRAID to give you some benefit, i.e. the parity files it has created to help you recover from a failure of a disk with actual data on it. If you can't rely on it, then what use is it? Of course, you can't rely on any software 100%, but my appetite for risk doesn't run to using beta, sporadically supported software with a track record of never completing a project to a single version release.
 
Unfortunately, this seems to be a risk with any single-developer project. Look what happened to zfsguru when the dev dropped off the net for several months (not blaming him, just pointing out the risk). Like I said earlier too, the credibility is not that great when you are throwing around comments about competing architectures being crap, etc.
 
Thinking of switching over from ZFS to FlexRAID for media files.

I like the sound of lower power use, and of files being recoverable even if drive failures exceed the redundancy.

Just about to see how it goes with Ubuntu / Btrfs.

I reckon 16x 3 TB drives should be a good test for it :p

Would Btrfs help protect from datarot?
 
^^
FlexRAID has silent datarot detection. Data which has suffered silent corruption can be restored from the parity.
 
^^
FlexRAID has silent datarot detection. Data which has suffered silent corruption can be restored from the parity.
It has? How can you be sure? Just because one guy says so? Is there any research on this? Most storage solutions don't even have datarot detection. And almost no one (except ZFS) has silent datarot detection.

To guarantee that NZFS has datarot detection, without any research, is a hasty remark. And to guarantee that NZFS has silent datarot detection is an unwise remark.

When you have seen research on NZFS's datarot detection abilities, and its silent datarot detection abilities, you can say so. But until then, I would not trust someone who says so. NZFS runs on top of a normal filesystem, and normal filesystems do not detect datarot, much less silent datarot. It is a very brave statement: "NZFS detects not only datarot, but silent datarot too! How do I know? Because the author says so."


There are guys in cryptography who have created cryptosystems, but they do not say their crypto is safe. Just because they cannot break their own crypto doesn't mean it is safe. You need third-party research from others to conclude that it is safe.

It is not serious to say NZFS is safe against datarot, and safe against silent datarot, without something to back it up. The only reason people say it is safe is that the author has datarot detection on his wishlist.
 
It has? How can you be sure? Just because one guy says so? Is there any research on this? Most storage solutions don't even have datarot detection. And almost no one (except ZFS) has silent datarot detection.

To guarantee that NZFS has datarot detection, without any research, is a hasty remark. And to guarantee that NZFS has silent datarot detection is an unwise remark.

When you have seen research on NZFS's datarot detection abilities, and its silent datarot detection abilities, you can say so. But until then, I would not trust someone who says so. NZFS runs on top of a normal filesystem, and normal filesystems do not detect datarot, much less silent datarot. It is a very brave statement: "NZFS detects not only datarot, but silent datarot too! How do I know? Because the author says so."

No need to attack the previous poster. Nowhere did he even mention NZFS. He specified FlexRAID.

Testing data integrity doesn't exactly require a research project; it's not rocket science. CRC checksumming and hashing have been around for decades. It's quite easy to verify whether FlexRAID detects corruption: change or delete a byte in a file with a hex editor and watch FlexRAID's verify/validate process pick it up. You can do the same with SnapRAID, or with any of a multitude of murmur3- or md5-based hashing tools. Hate to burst your bubble, but ZFS didn't invent data integrity.

Again, try not being condescending. People might be more open to the message. Whatever it is.
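For what it's worth, the hex-editor test described above is easy to sketch. This is a toy illustration of file-level hashing in general (hypothetical function names; not FlexRAID's or SnapRAID's actual implementation):

```python
import hashlib

def snapshot_hashes(files):
    """Record one MD5 per file, as file-level snapshot-RAID tools do."""
    return {name: hashlib.md5(data).hexdigest() for name, data in files.items()}

def verify(files, hashes):
    """Return the names of files whose current hash no longer matches."""
    return [name for name, data in files.items()
            if hashlib.md5(data).hexdigest() != hashes[name]]

files = {"movie.mkv": bytearray(b"perfectly good data")}
hashes = snapshot_hashes(files)

# Simulate silent corruption: flip one byte; the file date is untouched
files["movie.mkv"][3] ^= 0xFF
assert verify(files, hashes) == ["movie.mkv"]
```

This only demonstrates detection of a flipped byte against a previously recorded hash; it says nothing about the end-to-end guarantees debated later in the thread.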
 
Testing data integrity doesn't exactly require a research project; it's not rocket science.
Well, CERN, NetApp, Oracle, etc. all do heavy research on data integrity. Maybe you think it is easy to provide data integrity, but a lot of large companies don't agree with you.


CRC checksumming and hashing have been around for decades.
This is true. Hard disks have had checksums for an eternity, and still they corrupt data.
Researchers at CERN conclude:
w3.hepix.org/storage/hep_pdf/2007/Spring/kelemen-2007-HEPiX-Silent_Corruptions.pdf
"checksumming? - not necessarily enough
-end-to-end checksumming (ZFS has a point)"

"silent corruptions are a fact of life
- first step towards a solution is detection
- elimination seems impossible"

In another study, CERN concludes that "silent corruptions are seen everywhere, even on very expensive enterprise storage systems".

The CERN researchers' conclusion? Checksums are not sufficient; you need end-to-end checksums, not ordinary checksums. Read the research. Do you really think it is as easy as adding checksums, and then you are done? See what Amazon writes about how difficult it really is to provide data integrity:
http://perspectives.mvdirona.com/20...ErrorsCorrectionsTrustOfDependentSystems.aspx
"Every couple of weeks I get questions along the lines of “should I checksum application files, given that the disk already has error correction?” or “given that TCP/IP has error correction on every communications packet, why do I need to have application level network error detection?”"


It's quite easy to verify whether FlexRAID detects corruption: change or delete a byte in a file with a hex editor and watch FlexRAID's verify/validate process pick it up. You can do the same with SnapRAID, or with any of a multitude of murmur3- or md5-based hashing tools.
Look, if all these large companies spend so much time and money on data integrity, do you really think it is easy to provide? "Just add some checksums, and then you are done" - is that it? Have you read the research? Have you seen how difficult it is to provide data integrity?


Hate to burst your bubble, but ZFS didn't invent data integrity.
Again, try not being condescending.
Some would say that this is a condescending remark from you. But, agreed. I just don't like it when I read a lot of research on this, and see how much money and time the large companies invest, and then some guy comes along and says "I rely on NTFS, and I have added checksums, so my solution is safe". If it were that easy, there would be no need for research from Amazon, NetApp, EMC, Oracle, CERN, etc. on a matter that is apparently very difficult. Except for some people who believe that it is easy. Again: read the research and see for yourself how extremely difficult it is.
 
^^
FlexRAID has silent datarot detection. Data which has suffered silent corruption can be restored from the parity.
Would like to add that this is also possible with other snapshot RAID systems: SnapRAID, disParity.
Would also like to add that this only works if you catch the data rot BEFORE you update the parity. If you update after data rot, your bad data is now considered good and this point is moot.
 
Would like to add that this is also possible with other snapshot RAID systems: SnapRAID, disParity.
Would also like to add that this only works if you catch the data rot BEFORE you update the parity. If you update after data rot, your bad data is now considered good and this point is moot.

I don't think that's the case. I believe the Update task will only calculate parity for the files whose date has changed. Data rot doesn't change the date, only the bits in the data. So the parity for those corrupted files should remain unchanged and can be recovered from should there be datarot.
 
I don't think that's the case. I believe the Update task will only calculate parity for the files whose date has changed. Data rot doesn't change the date, only the bits in the data. So the parity for those corrupted files should remain unchanged and can be recovered from should there be datarot.

It is more complicated than that. If you add or change data on a drive, in a block parallel to a block on another disk that had a bit flip (if this were standard RAID, they would be in the same stripe), then that bit flip could be incorporated into parity on the next update. But at least with SnapRAID, I think it does verify the checksums on the blocks that it reads to create parity, so I don't think parity will become corrupted in that case.

And since these programs maintain a separate hash (checksum) of the data at either the block or file level (SnapRAID: block, FlexRAID: file, disParity: file), it will still be possible to detect the bit flip if you run a verify. It should then be possible to restore from parity and verify the checksum again.
 
Well, CERN, NetApp, Oracle, etc. all do heavy research on data integrity. Maybe you think it is easy to provide data integrity, but a lot of large companies don't agree with you.

These companies do heavy research on data integrity?
They do? How can you be sure? Just because their websites say so? Do you have any proof of their research? Are you sure it is not just marketing to sell more NetApp boxes? More Oracle software? Are you sure it is not just marketing to justify the insanely high cost of their equipment to corporations?
 
It is more complicated than that. If you add or change data on a drive, in a block parallel to a block on another disk that had a bit flip (if this were standard RAID, they would be in the same stripe), then that bit flip could be incorporated into parity on the next update. But at least with SnapRAID, I think it does verify the checksums on the blocks that it reads to create parity, so I don't think parity will become corrupted in that case.

And since these programs maintain a separate hash (checksum) of the data at either the block or file level (SnapRAID: block, FlexRAID: file, disParity: file), it will still be possible to detect the bit flip if you run a verify. It should then be possible to restore from parity and verify the checksum again.

I was only talking about how FlexRAID deals with datarot. Thanks for the info though.
 
Well, CERN, NetApp, Oracle, etc. all do heavy research on data integrity. Maybe you think it is easy to provide data integrity, but a lot of large companies don't agree with you.

People can argue theoreticals until they're blue in the face, but don't forget context. In the context of what most people on forums like these are doing with their storage, it's mostly personal data, videos, and photos. If you're storing scientific or industrial data, or data that business and finance depend on, then sure, knock yourself out with research studies and protection against one-in-a-billion scenarios. Those applications demand industrial-strength solutions. But for home data, getting hung up on these kinds of theoreticals approaches the obsessive, and most people don't care - it's hard enough getting most people to even consider backup.

The parity protection that solutions like SnapRAID and FlexRAID offer is excellent and provides great protection for home and SOHO use, especially with the proper application of scheduled scrubbing.
 
I was only talking about how FlexRAID deals with datarot.

The problem is that you were wrong for all types of snapshot RAID. Or at least so incomplete as to be completely misleading.

Here's a very simple example:

D = data drive
P = parity drive

Code:
DDP
101  initial data and parity (even)
001  random bit flip on first drive, parity NOT updated yet
011  data added or changed on 2nd drive, parity NOT updated yet
011  parity updated

110  parity should be 0 if the first bit had not flipped
Note that parity is wrong, since if the first bit had not randomly flipped, then after adding data on the second drive, parity should be 0.

Although as I said before, I think some snapshot RAID programs may verify the checksum on old data before calculating parity during an update for new or changed data. If so, the example above would not occur.
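As a sketch, "verify the checksum on old data before calculating parity" could look something like this for the two-drive example above (assumed toy logic for illustration, not any tool's actual code):

```python
import hashlib

def xor(a, b):
    """Byte-wise XOR, used both for parity and for reconstruction."""
    return bytes(x ^ y for x, y in zip(a, b))

def md5(block):
    return hashlib.md5(block).hexdigest()

def safe_update(d1, d2_old, d2_new, old_parity, d1_hash):
    """Recompute parity after drive 2 changed, but verify drive 1 first;
    if drive 1 silently rotted, restore it from the old parity."""
    if md5(d1) != d1_hash:           # data rot detected on the untouched drive
        d1 = xor(d2_old, old_parity)  # reconstruct it from the old parity
    return d1, xor(d1, d2_new)        # new parity now protects the good data

# One-byte "drives" mirroring the 3-bit example above
d1, d2 = b"\x01", b"\x00"
p = xor(d1, d2)                       # initial parity
d1_hash = md5(d1)

d1_rotted = b"\x00"                   # random bit flip on drive 1
d2_new = b"\x01"                      # data changed on drive 2
good_d1, new_p = safe_update(d1_rotted, d2, d2_new, p, d1_hash)
assert good_d1 == b"\x01" and new_p == b"\x00"
```

A naive update that skipped the hash check would compute parity from the rotted block, baking the bit flip into the new parity exactly as in the 3-bit example.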
 
The problem is that you were wrong for all types of snapshot RAID. Or at least so incomplete as to be completely misleading.

Here's a very simple example:
Excellent point, and example.
Although as I said before, I think some snapshot RAID programs may verify the checksum on old data before calculating parity during an update for new or changed data. If so, the example above would not occur.
SnapRAID does it right, and would not be fooled.
 
These companies do heavy research on data integrity?
They do? How can you be sure? Just because their websites say so? Do you have any proof of their research? Are you sure it is not just marketing to sell more NetApp boxes? More Oracle software? Are you sure it is not just marketing to justify the insanely high cost of their equipment to corporations?
How can I be sure about the research from these companies? Well, as I said, I read the research papers. Do you think I made this up? Made up all the research papers? I already linked to research from CERN and Amazon. But here are 21 research papers on data corruption, and a PhD thesis, from NetApp researchers:
http://pages.cs.wisc.edu/~laksh/

More research:
www.cs.wisc.edu/adsl/Publications/corruption-fast08.pdf

Google's research on disk failures:
research.google.com/archive/disk_failures.pdf

Oracle's research on data corruption:
www.usenix.org/event/lsf07/tech/petersen.pdf

Computer science researchers on ZFS:
http://www.zdnet.com/blog/storage/zfs-data-integrity-tested/811

IBM also does research on data corruption; it was IBM that helped propose the T10 DIF standard to combat data corruption on disks. And many more companies do research on this.

I have plenty more links. You see, I collect research papers on data corruption and READ them, and I follow the data corruption scene. Why do you think I can say that data integrity is very difficult? Is it because I made this stuff up? Or maybe because I have followed the data corruption research for years and read the research papers? Somebody should tell all these researchers and companies that they only have to add a checksum to be safe; then they can lay off all the researchers and engineers trying to design and build safe solutions.



People can argue theoreticals until they're blue in the face, but don't forget context. In the context of what most people on forums like these are doing with their storage, it's mostly personal data, videos, and photos. If you're storing scientific or industrial data, or data that business and finance depend on, then sure, knock yourself out with research studies and protection against one-in-a-billion scenarios. Those applications demand industrial-strength solutions. But for home data, getting hung up on these kinds of theoreticals approaches the obsessive, and most people don't care - it's hard enough getting most people to even consider backup.

The parity protection that solutions like SnapRAID and FlexRAID offer is excellent and provides great protection for home and SOHO use, especially with the proper application of scheduled scrubbing.
I agree with this, and I have written as much myself. I have recommended that people use SnapRAID / FlexRAID for a media server instead of a ZFS solution. I agree with parts of what JoeComp(?) says about FlexRAID / SnapRAID being a sufficient solution for media servers. Personally, I would prefer a pooled RAID system instead of dabbling with umpteen separate disks (on which disk do I have this file? which disk has the newest version?). I have several individual disks, and when I pooled them, life got so much easier. So I prefer a RAID solution personally, because it scales up to several TB.

But I react when people say that some home-brewed solution relying on NTFS provides data integrity. Just read the research and see how difficult it is! I agree that SnapRAID / FlexRAID are safer than just using a filesystem, but it is a very brave statement to say they provide data integrity. Even NetApp has problems with that, and they spend tons of money on data corruption. And Oracle. And EMC. And CERN. Etc. What do you think a researcher would say if he heard "I add checksums just as hard disks do, and then my solution provides data integrity"? If it were as simple as adding a checksum, why all this research? Why does CERN write that "checksums are not enough"? Conclusion: it is highly non-trivial to provide data integrity.

FlexRAID / SnapRAID / NZFS and other checksummed solutions are safer than running only a filesystem, but a few checksums alone do not provide data integrity. Even Oracle writes in a white paper that ZFS only provides 99.9999999999999% data integrity; even enterprise ZFS is not 100% safe, says Oracle. I have a hard time believing these solutions are safer than ZFS, which is designed from the ground up to combat data corruption.

When people try to make others believe things that are not true, you should react. Instead, one should say that these home-brewed checksummed solutions are "safer" than only running NTFS (which is most probably true). And these solutions might even be safer than running hardware RAID - I don't know, because I have not seen research comparing these home-brewed solutions to hardware RAID. But to say they are safe is a very bold statement. And untrue. So please don't say so, and we won't have these debates.
 
Does FlexRAID work well? Yes. Does the developer make me suspicious of the product? Yes. Does the developer have my money? Yes. FlexRAID works because it covers a specific niche that more and more people want, and it's a simple and effective product. It has a good user interface with a simple design that most moderately tech-savvy users can understand and set up in an hour. That being said, FlexRAID users are a different class from people who use ZFS, people I consider just more [H] about their data. How will a FlexRAID user, or a user around that level, translate to NZFS? Probably not well, because if your data were that important, you would have taken the next step up to ZFS and therefore learned *nix.

Bottom line, in my opinion: NZFS is essentially for people who hear about ZFS, want its benefits, don't want to learn *nix, and want to stick with Windows + NZFS for its GUI. It seems to essentially be a stopgap for those who want more but don't have (or aren't willing to invest) the time necessary.
 
Does FlexRAID work well? Yes. Does the developer make me suspicious of the product? Yes. Does the developer have my money? Yes. FlexRAID works because it covers a specific niche that more and more people want, and it's a simple and effective product. It has a good user interface with a simple design that most moderately tech-savvy users can understand and set up in an hour. That being said, FlexRAID users are a different class from people who use ZFS, people I consider just more [H] about their data. How will a FlexRAID user, or a user around that level, translate to NZFS? Probably not well, because if your data were that important, you would have taken the next step up to ZFS and therefore learned *nix.

Bottom line, in my opinion: NZFS is essentially for people who hear about ZFS, want its benefits, don't want to learn *nix, and want to stick with Windows + NZFS for its GUI. It seems to essentially be a stopgap for those who want more but don't have (or aren't willing to invest) the time necessary.
Agreed. Well said.
 
Bottom line, in my opinion: NZFS is essentially for people who hear about ZFS, want its benefits, don't want to learn *nix, and want to stick with Windows + NZFS for its GUI. It seems to essentially be a stopgap for those who want more but don't have (or aren't willing to invest) the time necessary.

No, I think you have the appeal of snapshot RAID quite wrong there.

Snapshot RAID offers many features and benefits for certain types of home media fileservers that ZFS does not. The fact that data checksumming on ZFS is more automatic than with snapshot RAID solutions is so far down the list of concerns that it barely registers for most people who need a home media fileserver.
 
No, I think you have the appeal of snapshot RAID quite wrong there.

Snapshot RAID offers many features and benefits for certain types of home media fileservers that ZFS does not. The fact that data checksumming on ZFS is more automatic than with snapshot RAID solutions is so far down the list of concerns that it barely registers for most people who need a home media fileserver.

Again, from someone who uses FlexRAID yet has considered ZFS, my perspective simply comes down to the consumer that NZFS is trying to attract; I am not talking about the specifics of the product, just who is going to buy it. As for a home media center using FlexRAID's snapshot mode vs. ZFS, the main feature and benefit that FlexRAID has over ZFS is convenience. Really, anything FlexRAID can do, ZFS can do too, but ZFS is a lot more complex (cumbersome if you just want media), with a lot of features that may be of no use, and with maybe a thousandfold increase in trouble if you have no *nix experience.

The question the consumer asks is whether I need 1) an average 180-pound human who takes 1 hour to train to lift a 20-pound box (i.e., FlexRAID), or 2) a 300-pound Viking on crack who takes 400 hours to train to lift the same box (i.e., ZFS). Sometimes you need more heavy lifting; sometimes you don't want to spend 100 hours learning another language; sometimes you want something in between. NZFS, in this absolutely terrible analogy, would be the 250-pound weightlifter who takes 2 hours to train: convenient yet strong. Whether you'll ever be lifting more than 20-pound boxes is up to the consumer, but it's the middle ground that NZFS is trying to hit.
 