http://www.openegg.org/2012/01/07/announcing-nzfs-not-zfs/
From what I can see, it seems to be a user-friendly ZFS alternative.
It's words so far. As I said on another thread, the coder is talented, no doubt about that, but he doesn't finish anything, at least so far. I assume now that he's going commercial he'll find some way to get help, so he can get FlexRAID released and supported at version 2.0 whilst allowing himself to develop NZFS.
If/when he delivers, I don't think it will be a "compact car" - it does sound quite well specified (to me, anyway). It's one thing to build a parity system on top of an established file system; it's another to build a file system itself.
I've been using the storage pool feature of FlexRAID for some time, and it's great when it works, but I've run into problems several times, usually when upgrading (which you're forced to do, since beta versions have an expiry date). Right now it's broken, so I went on the forums and discovered it was being commercialized. Honestly, what I could let slide for a free product, I can't for a paid one, and I doubt the guy behind FlexRAID can manage what he's trying to do with NZFS when FlexRAID isn't finished.
2. The data on FlexRAID is not spread over the array as with normal RAID/ZFS; it is basically just normal (independent) disks with separate parity drive(s). You can lose as many drives as your parity count. If you lose more, you don't lose the array, just those drives.

I don't understand this. So the data is not evenly distributed to every disk in the pool? Instead, only some of the disks get the data? How many disks get the data? Always two disks? What is the point of this design?
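A minimal sketch of the independent-disk layout described above, assuming a hypothetical three-disk pool with a single XOR parity disk (disk names and contents are made up for illustration): each file lives whole on one data disk, the parity disk stores a block-wise XOR of the data disks, and losing more disks than the parity count only loses the files on the failed disks.

```python
# Hypothetical sketch of a snapshot-RAID layout: files stay whole on
# independent data disks, and a separate parity disk holds their XOR.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (pad shorter ones with zeros)."""
    size = max(len(b) for b in blocks)
    padded = [b.ljust(size, b"\x00") for b in blocks]
    return bytes(reduce(lambda a, c: bytes(x ^ y for x, y in zip(a, c)), padded))

# Three independent "disks": each holds its own files untouched.
disk1 = b"file-A lives only here"
disk2 = b"file-B lives only here"
disk3 = b"file-C lives only here"
parity = xor_blocks([disk1, disk2, disk3])

# One disk fails: rebuild it from the survivors plus parity.
rebuilt = xor_blocks([disk1, disk3, parity])
assert rebuilt == disk2

# If TWO disks fail with single parity, those two are unrecoverable,
# but the surviving disk's files are still intact and readable on
# their own - the array as a whole is not lost.
```

The point of the design is exactly that last comment: an over-threshold failure degrades gracefully instead of taking the whole pool with it.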
This is the thing: when it comes to your data, I'm not sure how one can afford to let things slide, whether it's free or commercialized. I suppose it depends on how much the data is worth, but then I guess one wouldn't be going to all this hassle if it weren't of some value.
Well, any data you value shouldn't be put on beta software. For small setups, I'd recommend Windows 8 Storage Spaces when it comes out, but since you can only have one parity drive, anything more than 8 drives and you're starting to tempt fate. I'm looking to start archiving my Blu-rays, and that's going to need a LOT of storage - probably 100TB+.
I don't understand this. So the data is not evenly distributed to every disk in the pool? Instead, only some of the disks get the data? How many disks get the data? Always two disks? What is the point of this design?
You can choose folder priority or balanced space. In folder priority mode, FlexRAID will try to keep files placed in the same folder on the same drive. Of course, if a drive is full (you can set what amount of free space counts as full), it will continue on another drive. Balanced space is self-explanatory.

OK, that is cool. It seems that FlexRAID collects data on some disks, so the data on those few disks is vulnerable. What happens if two disks crash in my pool? Can I lose all the data on those disks?
Is there something similar to RAID-6?
The beauty of flexraid is that it doesn't touch the data.
FlexRAID has silent datarot detection. Data which has suffered silent corruption can be restored from the parity.
It has? How can you be sure? Just because one guy says so? Is there any research on this? Most storage solutions don't even have datarot detection, and almost no one (except ZFS) has silent datarot detection.
To guarantee that NZFS has datarot detection, without any research, is a hasty remark. And to guarantee that NZFS has silent datarot detection is an unwise remark.
When you have seen research on NZFS's datarot detection abilities, and its silent datarot detection abilities, you can say so. But until then, I would not trust someone who says so. NZFS runs on top of a normal file system, and normal file systems do not detect datarot, even less silent datarot. It is a very brave statement: "NZFS detects not only datarot, it detects silent datarot too! How do I know? Because the author says so."
Well, CERN, NetApp, Oracle, etc. all do heavy research on data integrity. Maybe you think it is easy to provide data integrity, but a lot of large companies don't agree with you.

Testing data integrity doesn't exactly require a research project; it's not rocket science.
CRC checksumming and hashing have been around for decades.

This is true. Hard disks have had checksums for an eternity, and still they corrupt data.
Look, if all these large companies spend so much time and money on data integrity, do you really think it is easy to provide? "Just add some checksums, and then you are done" - is that it? Have you read the research? Have you seen how difficult it is to provide data integrity?

It's quite easy to verify whether FlexRAID detects corruption. Change or delete a byte in a file with a hex editor and watch FlexRAID's verify/validate process pick it up. You can do the same with SnapRAID, or any of a multitude of murmur3- or md5-based hashing tools.
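The byte-flip test described above can be sketched in a few lines of Python - a hypothetical stand-in for what a hex editor plus an md5-based verify tool would do (the file name is invented for the example):

```python
# Sketch: record a file's md5, flip one byte in place (what a hex
# editor would do), and show that a re-check flags the file.
import hashlib
import os
import tempfile

def file_md5(path):
    """Hash a file incrementally, as verify tools do for large files."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "movie.mkv")          # hypothetical file
    with open(path, "wb") as f:
        f.write(b"pretend this is a large media file")

    recorded = file_md5(path)                    # the "snapshot" hash

    # Simulate silent corruption: flip one bit of one byte in place.
    with open(path, "r+b") as f:
        f.seek(8)
        byte = f.read(1)
        f.seek(8)
        f.write(bytes([byte[0] ^ 0x01]))

    assert file_md5(path) != recorded            # verify catches it
```

This only demonstrates *detection*, of course; what a given tool then does about the corrupted file is a separate question.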
Some would say that is a condescending remark from you. But, agreed. I just don't like it when I read a lot of research on this, and see how much money and time the large companies invest, and then some guy comes along and says "I rely on NTFS, and I have added checksums, so my solution is safe." If it were that easy, there would be no need for research from Amazon, NetApp, EMC, Oracle, CERN, etc. on a matter that is apparently very difficult - except for some people who believe it is easy. Again: read the research and see for yourself how extremely difficult it is.

Hate to burst your bubble, but ZFS didn't invent data integrity.
Again, try not being condescending.
Would like to add that this is possible with other snapshot RAID systems as well: SnapRAID, disParity.
FlexRAID has silent datarot detection. Data which has suffered silent corruption can be restored from the parity.
I don't think that's the case. I believe the Update task will only calculate parity for the files whose date has changed. Data rot doesn't change the date, only the bits in the data, so the parity for those corrupted files should remain unchanged and can be recovered from should there be datarot.
It is more complicated than that. If you add or change data on a drive, in a block parallel to a block on another disk that had a bit flip (if this were standard RAID, they would be in the same stripe), then that bit flip could be incorporated into the parity on the next update. But at least with SnapRAID, I think it verifies the checksums on the blocks it reads to create parity, so I don't think the parity will become corrupted in that case.
And since these programs maintain a separate hash (checksum) of the data at either the block or file level (SnapRAID: block; FlexRAID: file; disParity: file), it will still be possible to detect the bit flip if you run a verify. Then it should be possible to restore from parity and verify the checksum again.
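A toy sketch of that detect-then-restore flow for a single block, assuming single XOR parity and SHA-1 checksums (the layout and names are illustrative, not any tool's real on-disk format):

```python
# Sketch: a stored checksum detects a silent flip, the untouched parity
# rebuilds the block, and the checksum confirms the repair.
import hashlib
from functools import reduce

def xor(blocks):
    """XOR equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

d1, d2, d3 = b"blockA==", b"blockB==", b"blockC=="
parity = xor([d1, d2, d3])
checksums = {i: hashlib.sha1(b).hexdigest() for i, b in enumerate([d1, d2, d3])}

# Silent bit flip in drive 2's block; the parity was never rewritten.
corrupted = bytes([d2[0] ^ 0x40]) + d2[1:]

# Verify: the stored checksum no longer matches...
assert hashlib.sha1(corrupted).hexdigest() != checksums[1]

# ...so rebuild the block from the other drives plus parity, and
# confirm the restore against the stored checksum.
restored = xor([d1, d3, parity])
assert hashlib.sha1(restored).hexdigest() == checksums[1]
```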
Well, CERN, NetApp, Oracle, etc. all do heavy research on data integrity. Maybe you think it is easy to provide data integrity, but a lot of large companies don't agree with you.
I was only talking about how FlexRAID deals with datarot.
D D P
1 0 1  initial data and parity (even)
0 0 1  random bit flip on first drive, parity NOT updated yet
0 1 1  data added or changed on 2nd drive, parity NOT updated yet
0 1 1  parity updated
1 1 0  parity should be 0 if the first bit had not flipped
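The table above can be checked mechanically: with even parity, the parity bit is just the XOR of the two data bits.

```python
# Walk the D D P table step by step (even parity = XOR of data bits).
d1, d2 = 1, 0
parity = d1 ^ d2                      # 1 0 1: initial data and parity
assert (d1, d2, parity) == (1, 0, 1)

d1 = 0                                # 0 0 1: silent flip on first drive
d2 = 1                                # 0 1 1: data changed on 2nd drive

parity = d1 ^ d2                      # parity updated from on-disk bits
assert (d1, d2, parity) == (0, 1, 1)  # 0 1 1: the flip is now baked in

# Had the first bit not flipped, the row would have read 1 1 0:
assert (1, 1, 1 ^ 1) == (1, 1, 0)
```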
Excellent point, and example.

Problem is, you were wrong for all types of snapshot RAID - or at least so incomplete as to be completely misleading. The example above is very simple, and SnapRAID does it right: it would not be fooled.

Although, as I said before, I think some snapshot RAID programs may verify the checksum on old data before calculating parity during an update for new or changed data. If so, the example above would not occur.
These companies do heavy research on data integrity? How can you be sure? Just because their website says so? Do you have any proof of their research? Are you sure it is not just marketing to sell more NetApp boxes? More Oracle software? Are you sure it is not just marketing to justify the insanely high cost of their equipment to corporations?

How can I be sure about the research from these companies? Well, as I said, I read the research papers. Do you think I made this up? Made up all the research papers? I already linked to research from CERN and Amazon. But here are 21 research papers on data corruption, and a PhD thesis, from NetApp researchers:
I agree with this, and I have written as much myself. I have recommended that people use SnapRAID/FlexRAID if they are running a media server, instead of a ZFS solution. I agree with parts of what JoeComp(?) says about FlexRAID/SnapRAID being a sufficient solution for media servers. Personally, I would prefer a pooled RAID system instead of dabbling with umpteen separate disks (on which disk do I have this file? Which disk has the newest version?). I have several individual disks, and when I pooled them life got so much easier. So I prefer a RAID solution personally, because it scales up to several TB.

People can argue theoreticals until they're blue in the face, but don't forget context. In the context of what most people on forums like these are doing with their storage, it's mostly personal data, videos, and photos. If you're storing scientific or industrial data, or data that business and finance depend on, then sure, knock yourself out with research studies and protect against one-in-a-billion scenarios - those applications demand industrial-strength solutions. But for home data, getting hung up on these kinds of theoreticals approaches the obsessive, and most people don't care - it's hard enough getting most people to even consider backup.
The parity protection that solutions like snapraid and flexraid offer is excellent and provides great protection for home and SOHO use, especially with the proper application of scheduled scrubbing.
Agreed. Well said.

Does FlexRAID work well? Yes. Does the developer make me suspicious of the product? Yes. Does the developer have my money? Yes. FlexRAID works because it covers a specific niche that more and more people want, and it's a simple and effective product. It has a good user interface with a simple design that most moderately tech-savvy users can understand and set up in an hour. That said, FlexRAID users are a different class from people who use ZFS - people I consider just more [H] about their data. How will a FlexRAID-level user translate to NZFS? Probably not well, because if your data is that important, you would have taken the next step up to ZFS and therefore learned *nix.
Bottom line, in my opinion: NZFS is essentially for people who hear about ZFS, want its benefits, don't want to learn *nix, and want to stick with Windows+NZFS for its GUI. It seems to be a stopgap for those who want more but aren't able (or willing) to invest the time necessary.
No, I think you have the appeal of snapshot RAID quite wrong there.
Snapshot RAID offers many features and benefits for certain types of home media fileservers that ZFS does not. The fact that data checksumming in ZFS is more automatic than in snapshot RAID solutions is so far down the list of concerns that it barely registers for most people who need a home media fileserver.