Migrating from FreeNAS to Windows Server 2012 Storage Spaces

For the last two years I have been running a FreeNAS 6TB RAID5 ZFS server.

Recently the Storage Spaces feature in Windows Server 2012 came to my attention, and it got me intrigued...

Is my understanding correct that you cannot simply re-attach the ZFS pool to Server 2012 Storage Spaces?

What I am trying to find out is whether anyone has migrated from solutions such as FreeNAS, FlexRAID, SnapRAID, unRAID, DrivePool, etc. to Server 2012 Storage Spaces.
 
ZFS is a different file system from NTFS (which is what your array would be with Storage Spaces), which is the main reason you couldn't simply attach the array in Windows Storage Spaces. You would effectively have to back up all your data, as those drives would have to be completely wiped.
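
If you do go that route, it's worth verifying the backup before anything gets wiped. Here's a minimal Python sketch of the idea (the share and backup paths are placeholders, not anything from your setup): copy everything off, then confirm each file by checksum.

```python
# Minimal sketch: copy data off the ZFS share and verify each file by
# SHA-256 before the pool's drives are wiped. Paths are placeholders.
import hashlib
import shutil
from pathlib import Path

SRC = Path(r"\\freenas\share")   # assumed: the old ZFS SMB share
DST = Path(r"D:\backup\share")   # assumed: temporary backup location

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

for src_file in SRC.rglob("*"):
    if not src_file.is_file():
        continue
    dst_file = DST / src_file.relative_to(SRC)
    dst_file.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_file, dst_file)
    if sha256(src_file) != sha256(dst_file):
        raise RuntimeError(f"Checksum mismatch: {src_file}")

print("Backup verified; safe to wipe the old pool.")
```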

If your ZFS solution (FreeNAS) is working well, I wouldn't recommend switching to a Parity Space in Storage Spaces. Read performance is pretty good, but write performance is far from good. With some tweaking you can get it to acceptable levels (I would say around 100-130MB/s writes, or so), which is probably still not as fast as what you have with your ZFS array.
 
Thanks for the reply, tycoonbob. Fortunately there isn't too much data, so I can back it up and restore it.

I've just come across Windows Storage Server 2012 and am wondering whether anyone has any experience with it?
 
There are several threads on Storage Spaces here. Read them; they contain useful information for people in your situation. And then maybe you could summarize your migration experience here, so other people can benefit from your lessons...
 
Thanks for the reply, tycoonbob. Fortunately there isn't too much data, so I can back it up and restore it.

I've just come across Windows Storage Server 2012 and am wondering whether anyone has any experience with it?

It's basically a purpose-trimmed version of Server 2012 Standard. It finally does iSCSI out of the box, and the software RAID is OK, but from my research, Storage Spaces shouldn't be trusted. I did a 3-drive storage space to test, and I had all sorts of hell when I tested things by booting up with one of the member drives unplugged.

Honestly I would stay with ZFS unless you have a compelling reason to ditch it.
 
One of the drivers behind the move to Server 2012 is the potential to use it for multiple purposes: VPN, monitoring, etc.

Also I have to spend some time every few months to upgrade sabnzbd, sickbeard, couchpotato, time which I don't really have. :(

I have been doing a little bit of reading on SnapRAID, Drive Bender and FlexRAID, but I am not sure how they will perform in the long run, given that an application sits on top of the system managing the storage pools. What happens if the developer stops supporting that application?

From searching around the forum, a lot of the discussions and linked reviews on Server 2012 Storage Spaces date from as early as March last year. So I am not sure whether such old reviews are still a clear representation of today's situation?

I came across these last night: http://blogs.technet.com/b/mspfe/ar...rity-storage-spaces-might-perform-slowly.aspx and http://social.technet.microsoft.com...storage-spaces-designing-for-performance.aspx - but as they come from MS, no doubt the articles may be biased?
 
I have been doing a little bit of reading on SnapRAID, Drive Bender and FlexRAID, but I am not sure how they will perform in the long run, given that an application sits on top of the system managing the storage pools. What happens if the developer stops supporting that application?

If the developer stops supporting the application, nothing happens, because those are JBOD pooling solutions - your drives retain their individual formatting.

If you care about your data, then stay away from Storage Spaces, at least until SP1.
 
Why would you want to downgrade from ZFS to NTFS? If you want a multi-purpose server, use virtualisation. But keep the storage on ZFS; it is the only way to properly protect your data.
 
Storage Spaces doesn't require NTFS. In fact, I just recently had to downgrade TO NTFS because my Storage Space, which was ReFS formatted, could not be used as an NFS share.

Parity with Storage Spaces is even more dismal than tycoonbob suggests. Your read speeds are decent, but your write speeds are flat out god-awful. With 6x3TB drives in a parity space, write speeds were under 25MB/s. Not a typo. Twenty-five-freaking-megs-per-second.

ZFS is a better solution all around. In fact, it's what I would be using right now if HyperV and Solaris 11.1 worked together nicely with a virtual SCSI controller. Instead, I had to surrender parity and use a Simple volume. That's actually pretty nice. 825MB/s read, 850MB/s write. 'Course, if a drive fails, now I get to restore from backup instead of just swapping out the bad one.
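
If you want to sanity-check numbers like that on your own pool, a quick-and-dirty sequential-write test is enough for a ballpark figure. A rough Python sketch (the test path and size are placeholders):

```python
# Rough sequential-write benchmark sketch; prints approximate MB/s.
# TEST_FILE should live on the volume you want to measure (placeholder here).
import os
import time

TEST_FILE = r"E:\bench\testfile.bin"   # assumed path on the storage space
SIZE_GB = 4                            # total amount to write
CHUNK = 4 * 1024 * 1024                # 4 MiB per write call

buf = os.urandom(CHUNK)
os.makedirs(os.path.dirname(TEST_FILE), exist_ok=True)

start = time.perf_counter()
with open(TEST_FILE, "wb") as f:
    for _ in range((SIZE_GB * 1024 ** 3) // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())               # make sure the data actually hits disk
elapsed = time.perf_counter() - start

print(f"Sequential write: {SIZE_GB * 1024 / elapsed:.0f} MB/s")
os.remove(TEST_FILE)
```

Write a file a fair bit larger than your RAM if you want to take caching out of the equation.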
 
But ReFS is pretty much still in development and very much beta quality. NTFS is usable and very stable, but has poor data protection, making it suitable only for pre-2000-era storage needs. Windows today simply lacks modern storage technologies that are actually usable.

But instead of Hyper-V, what about ESXi + ZFS if you want virtualisation? Many people run this kind of configuration. However, personally I prefer ZFS not being virtualised.
 
But ReFS is pretty much still in development and very much beta quality. NTFS is usable and very stable, but has poor data protection, making it suitable only for pre-2000-era storage needs. Windows today simply lacks modern storage technologies that are actually usable.

But instead of Hyper-V, what about ESXi + ZFS if you want virtualisation? Many people run this kind of configuration. However, personally I prefer ZFS not being virtualised.

With ESXi, you can pass through an entire HBA (or a RAID card with the RAID flashed away or turned off, like the M1015), and point that entire controller at the ZFS OS - assuming your CPU and mobo support VT-d (pretty much any Sandy+ that isn't a K/unlocked part).

I am using that functionality right now for JBOD (2 x 16 bay JBODs) and passing a buttload of 3TB drives as individual NTFS volumes for my home network.

As an aside, if I wanted to go ZFS with that setup (probably 12 x 3TB drives, WD Greens, individual NTFS volumes right now), how many drives would I need to buy to get started with ZFS (probably RAID-Z2), and is it plausible to read each 3TB drive's NTFS contents into the ZFS-hosting OS, wipe the 3TB drive I just read in, and add that wiped drive to the pool?

(I have an existing ZFS setup for VM storage, and some other things, but this migration thread actually has me thinking.) No dedupe or anything insane; I don't think I'd even need a ZIL (intent log) drive.
 
But instead of Hyper-V, what about ESXi + ZFS if you want virtualisation? Many people run this kind of configuration. However, personally I prefer ZFS not being virtualised.
Unfortunately, this machine is my workstation AND the datastore for my ESXi hosts, so whatever I use has to be Windows-based. It's actually the reason I got into Storage Spaces myself. I've done plenty of benchmarking, and my final verdict is that Windows flat out sucks as a NAS. NFS speeds are dismal. iSCSI speeds are worse. Storage Spaces with parity has abysmal write speeds. The only good thing about Windows Server 2012 as a NAS is SMB multipathing. Assuming Oracle adds support for it into the next version of ZFS, I'm going to be very happy. Until then, the cross-platform aspect of my network has me wincing at gigabit wire speeds and lusting after the cheap 40Gb/s controllers on eBay.
 
Interesting stuff guys!

My current setup is FreeNAS installed on an 80GB HDD, with 4 x 2TB drives making up a RAID-Z.

99% of the storage use is read-only, and I have to admit that in the two years the FreeNAS server has been running non-stop, I only had to hard-shut it down twice, both times because a kernel panic kicked in for some reason.

What I am really looking for is a way to make better use of the actual server, rather than just occasionally serving content a few times a week.

So, like some people mentioned, ReFS is not as stable as NTFS? Indeed, as you mentioned odditory, I'll wait for SP1, see the feedback then, and make a decision.

Though I am intrigued by the idea of running Server 2012 with FlexRAID or SnapRAID and seeing how flexible those setups are. I think a few users on the forum have this setup running...
 
What I am really looking for is a way to make better use of the actual server, rather than just occasionally serving content a few times a week.
Other than serving files, what else would you like your server to do?
 
Well, what I have been wanting to do, without adding another server, is run a VPN, ownCloud, a VM host...

As the usage on my NAS is so low it is barely even measurable, why not use it for other purposes?
 
Well, what I have been wanting to do, without adding another server, is run a VPN, ownCloud, a VM host...

As the usage on my NAS is so low it is barely even measurable, why not use it for other purposes?

IMHO, VPN/NAT/Firewall belongs on a separate edge box. An atom would do fine. Get that box rooted and exploited, and they aren't at your server.

ESXi with something like the (LSI) M1015 (IT Mode) JBOD PCIe adapter passed through should do the trick. That's what I use. 32 drives / card with 2 expander-chassis, more if you buy expanders. You don't need mega bandwidth, so there's an almost infinitely expandable (SATA over SAS) solution for you.

Then any additional RAM/Proc you have on that box is gravy to install whatever you want, to test, play with, or use. The box can go down and you still have internet, DHCP, and DNS for the rest of the house, and VPN to get in to fix it if need be.
 
Interesting stuff guys!

My current setup is FreeNAS installed on an 80GB HDD, with 4 x 2TB drives making up a RAID-Z.

99% of the storage use is read-only, and I have to admit that in the two years the FreeNAS server has been running non-stop, I only had to hard-shut it down twice, both times because a kernel panic kicked in for some reason.



So, like some people mentioned, ReFS is not as stable as NTFS? Indeed, as you mentioned odditory, I'll wait for SP1, see the feedback then, and make a decision.

Though I am intrigued by the idea of running Server 2012 with FlexRAID or SnapRAID and seeing how flexible those setups are. I think a few users on the forum have this setup running...

If I were M$, I would eliminate 'Storage Spaces' via SP1, send that team packing, and either start from scratch or just buy out some medium startup venture, like they did with Executive Software for their built-in disk defragmenter.

SS is an inefficient NIGHTMARE IMHO. ReFS should be in the next release, not pushed out half-baked as it is, with little to no industry acceptance.

Now, putting those features into a desktop OS, or maybe at worst a "Home Server" product, fine. But embedding them into their enterprise product I see as a bad move. After reading about it and playing with it, I cannot and will not deploy it at any clients, or even for my own needs. It's almost software RAID with all the downsides of HW RAID built in.
 
I'm using 2012 Essentials and couldn't be happier.

Everything is working very fast, my LSI 9260-8i worked out of the box, and somehow all the drivers were the newest after a few rounds of updates.

SS is doing great so far, no problems detected.

My only gripe is with the NFS server (my media player can't see the files in the directory :confused: !)

Also, I installed a start-button replacement to get rid of that Metro interface :p
 
I'm using 2012 Essentials and couldn't be happier.

Everything is working very fast, my LSI 9260-8i worked out of the box, and somehow all the drivers were the newest after a few rounds of updates.

SS is doing great so far, no problems detected.

My only gripe is with the NFS server (my media player can't see the files in the directory :confused: !)

Also, I installed a start-button replacement to get rid of that Metro interface :p

Drop a drive out of your pool and reboot. Then try to recover with a blank identical drive. I was unable to do it in 3 of the 4 configurations I tried.
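
If you run that test, it's worth checksumming the data before and after so you know whether the rebuild actually brought everything back intact. A rough Python sketch of the idea (the pool path and manifest name are placeholders):

```python
# Sketch: record SHA-256 checksums of every file before pulling the drive
# ("record"), then compare after the rebuild ("verify"). Paths are placeholders.
import hashlib
import json
import sys
from pathlib import Path

ROOT = Path(r"E:\pool")          # assumed: the storage space mount point
MANIFEST = Path("manifest.json")

def checksum(p: Path) -> str:
    h = hashlib.sha256()
    with p.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def snapshot() -> dict:
    return {str(p.relative_to(ROOT)): checksum(p)
            for p in ROOT.rglob("*") if p.is_file()}

if sys.argv[1:] == ["record"]:
    MANIFEST.write_text(json.dumps(snapshot()))
    print("Manifest written.")
else:  # verify
    before = json.loads(MANIFEST.read_text())
    after = snapshot()
    missing = [k for k in before if k not in after]
    changed = [k for k in before if k in after and before[k] != after[k]]
    print(f"missing: {len(missing)}, corrupted: {len(changed)}")
```

Run it with `record` before pulling the drive and with `verify` once the rebuild finishes.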
 
IMHO, VPN/NAT/Firewall belongs on a separate edge box. An atom would do fine. Get that box rooted and exploited, and they aren't at your server.

ESXi with something like the (LSI) M1015 (IT Mode) JBOD PCIe adapter passed through should do the trick. That's what I use. 32 drives / card with 2 expander-chassis, more if you buy expanders. You don't need mega bandwidth, so there's an almost infinitely expandable (SATA over SAS) solution for you.

Then any additional RAM/Proc you have on that box is gravy to install whatever you want, to test, play with, or use. The box can go down and you still have internet, DHCP, and DNS for the rest of the house, and VPN to get in to fix it if need be.

I agree that VPN/owncloud should be segregated from NAS activities, but I am not wanting to fork out any cash on anything to be honest...

If I were M$, I would eliminate 'Storage Spaces' via SP1, send that team packing, and either start from scratch or just buy out some medium startup venture, like they did with Executive Software for their built-in disk defragmenter.

SS is an inefficient NIGHTMARE IMHO. ReFS should be in the next release, not pushed out half-baked as it is, with little to no industry acceptance.

Now, putting those features into a desktop OS, or maybe at worst a "Home Server" product, fine. But embedding them into their enterprise product I see as a bad move. After reading about it and playing with it, I cannot and will not deploy it at any clients, or even for my own needs. It's almost software RAID with all the downsides of HW RAID built in.

I think because of this, and Storage Spaces still essentially being beta, I will probably just do a straight upgrade to NAS4free and take advantage of the new features that come with it.

I'm using 2012 Essentials and couldn't be happier.

Everything is working very fast, my LSI 9260-8i worked out of the box, and somehow all the drivers were the newest after a few rounds of updates.

SS is doing great so far, no problems detected.

My only gripe is with the NFS server (my media player can't see the files in the directory :confused: !)

Also, I installed a start-button replacement to get rid of that Metro interface :p

Thanks for the insight. Why don't you use NTFS? Are you using non-Windows systems to access the shares?
 
I agree that VPN/owncloud should be segregated from NAS activities, but I am not wanting to fork out any cash on anything to be honest...



I think because of this, and Storage Spaces still essentially being beta, I will probably just do a straight upgrade to NAS4free and take advantage of the new features that come with it.



Thanks for the insight. Why don't you use NTFS? Are you using non-Windows systems to access the shares?

NFS is pretty much the standard outside Windows. Samba can be put in place to serve Windows or other SMB clients - these are network sharing protocols. The underlying disk filesystem can be NTFS, ExtX, or even iSCSI-backed. If he's on Windows, he's most likely using NTFS.
 
Thanks for clearing that up, Sean.

I know I asked this earlier on, but what is your take on a Windows (2012) + FlexRAID/Drive Bender/SnapRAID/etc. configuration?

My concern is whether the utility integrates nicely with the underlying OS. Also, as some of these tools are fairly new, my worry is that a new update will require a full pool rebuild, etc.!
 
Thanks for clearing that up, Sean.

I know I asked this earlier on, but what is your take on a Windows (2012) + FlexRAID/Drive Bender/SnapRAID/etc. configuration?

My concern is whether the utility integrates nicely with the underlying OS. Also, as some of these tools are fairly new, my worry is that a new update will require a full pool rebuild, etc.!

How about this - this thread has gone back and forth a bit - what exactly do you have going on now for hardware, OS, and filesystems, what type of data is it (I know it's home use, I'm assuming media of some sort), and what are the important factors (redundancy, low cost per GB, etc.)?

In general, the solutions that operate like Drive Extender leave the underlying NTFS volumes intact, so you can take the drives out and read them individually if the set breaks up or, as you point out, if the product's development/support ends.
 
My setup is as follows:

Some Pentium 4, I think
3 GB RAM
1 x 80GB (OS drive)
4 x 2TB (ZFS pool)
FreeNAS 7.3
RAID-Z (RAID 5-like) ZFS serving as one SMB share
90% media (.avi/.mkv/etc.), the rest individual workstation backups (docs, pics)

Redundancy is important, as I do not want to lose the data. That said, the feature of being able to read the disk(s)/pool on another workstation by just re-attaching one or more HDDs is very appealing, especially if it works between Windows environments.
 
My setup is as follows:

Some Pentium 4, I think
3 GB RAM
1 x 80GB (OS drive)
4 x 2TB (ZFS pool)
FreeNAS 7.3
RAID-Z (RAID 5-like) ZFS serving as one SMB share
90% media (.avi/.mkv/etc.), the rest individual workstation backups (docs, pics)

Redundancy is important, as I do not want to lose the data. That said, the feature of being able to read the disk(s)/pool on another workstation by just re-attaching one or more HDDs is very appealing, especially if it works between Windows environments.

OK, so there are several things I can see here. (This is all from the outside looking in, and only my $0.02.) First, the limits:

• The P4 will likely not support VT-d (passing through hardware to VMs with ESXi), and may not support 64-bit (some did, some didn't). Not a good candidate for VM host.

• I assume the drives are all attached to onboard SATA (probably an Intel ICH controller).

• At the current hardware specs, this is probably best kept as a dedicated NAS box. 3GB doesn't leave a whole lot of wiggle room even if your platform supported ESX or other virtualization.

• ZFS will recover on another machine, controller- and board-independent, as long as the kernel can see the controller via compiled-in support or a kernel module. So, as long as you have the disks needed, you should be able to recover/migrate to a new machine without much hassle. This includes going to a newer platform running ESXi - you could pass through the entire controller to the ZFS-hosting guest OS and import the array (see the sketch after this list).

• If you have a spare x8/x16 slot, consider the M1015. It's an LSI controller re-branded to IBM's specifications, and they are available fairly cheaply and in good supply (< $100 for this controller, and it works very well for many applications). You can then move the controller, along with all the drives, between machines and even virtual machines.

• ZFS can be processor-intensive, so I'd say a P4-era proc is just enough for that alone. I wouldn't try to split its duties any further.

• Windows Server 2012 requires 64-bit hardware. Again, some of the P4s supported it, some didn't, and regardless, BIOS support is another issue. The latest version of Windows Server that comes in a 32-bit flavor is 2008 (non-R2).

• RAM: I would rather see 3GB booting a dedicated distro than a Windows Server installation of any sort, except possibly a "Core" installation that comes without the GUI - and even then, be prepared to trim services you don't need/want auto-starting.
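
On the ZFS migration point above: once the new box (or the VM with the passed-through controller) can see the disks, the import itself is just a couple of commands. A minimal sketch, wrapped in Python only for illustration; the pool name "tank" is a placeholder for whatever you called it in FreeNAS:

```python
# Sketch of importing an existing ZFS pool on a new machine or VM.
# The pool name is a placeholder; ZFS locates the member disks itself.
import subprocess

POOL = "tank"  # assumed: whatever the pool was named in FreeNAS

# List pools that ZFS can find on the attached disks.
subprocess.run(["zpool", "import"])

# Import the pool; -f forces it if it wasn't cleanly exported on the old box.
subprocess.run(["zpool", "import", "-f", POOL], check=True)

# Confirm all vdevs show up ONLINE.
subprocess.run(["zpool", "status", POOL], check=True)
```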

So then I ask: what is this box not doing that you want it to do? With regard to VPN, you could install those packages on this box (against my way of doing things, but there's always more than one right answer), since it is, after all, a FreeBSD box. Then you would need to port-forward GRE and the PPTP control port if you want the quick and easy VPN, or use Racoon for IPsec. That duty should be fine on this box. Once you are VPN'd into this box, you basically have your own cloud - what services were you looking for? I can point you in the right direction for adding packages to do what you want to do.

What are you using for a router right now? Do you have a static IP?

Basically, I think you are on the right track as it stands, given your hardware configuration. If you wanted a neat little gateway appliance, you could fire up pfSense (again a BSD distro, but very easy to work with; it supports very powerful VPN, IPsec, and packet scrubbing). I have a stack of Pentium 3 1GHz small-form-factor machines with 512MB of RAM that I have routed 24-48Mbit/second of WAN through without stressing them. If you wanted to pay shipping, I could easily ship you one of these machines before they go to the scrap heap. There you can control which ports on which IPs, get VLAN support, and add plugins like a tiny web server or a web caching/proxy server - you would just need to add a 2nd NIC (I think I still have some Intel PCI NICs)...

Are you using SMB sharing, meaning Windows clients, exclusively with FreeNAS? What are you not getting from FreeNAS that you want to get from it? If it's lightweight stuff, we can get you onto packages that will get it done.

As an aside, what type of 2TB drives are you using (brand, model)?
 
First of all thanks for the overwhelming information and effort!

On reading your post carefully: ESXi pass-through to the HDDs would have been nice. The HDDs are Samsung F4s, from memory (I do not have remote access). One other reason for the whole drastic move to a different OS is that I am well aware my drives' firmware revision requires an emergency firmware upgrade, as apparently any SMART query can cause bad blocks...
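
When I do get access, I'll probably script a quick check of each drive's firmware before touching anything. A rough Python sketch (the device names and the affected-firmware string are guesses I would have to confirm against Samsung's advisory, so treat them as placeholders):

```python
# Sketch: read each drive's firmware revision via smartctl and flag ones that
# still look like they need the Samsung F4 emergency update.
# Device names and the AFFECTED set are placeholders, not verified values.
import re
import subprocess

DEVICES = ["/dev/ada1", "/dev/ada2", "/dev/ada3", "/dev/ada4"]  # assumed names
AFFECTED = {"1AQ10001"}   # placeholder firmware string; check the advisory

for dev in DEVICES:
    out = subprocess.run(["smartctl", "-i", dev],
                         capture_output=True, text=True).stdout
    match = re.search(r"Firmware Version:\s*(\S+)", out)
    fw = match.group(1) if match else "unknown"
    flag = "NEEDS UPDATE?" if fw in AFFECTED else "check against advisory"
    print(f"{dev}: firmware {fw} -> {flag}")
```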

I do actually have another box with a Core 2 Duo and 4GB RAM in a small-form-factor case, which I actually have ESXi 4.5 on, but I just wasn't prepared to have yet another system draining power if it's not really being used. Unfortunately, as it is in an SFF case, I cannot fit the 4 HDDs in it, so that's out of the question.

For the router, yeah, it's an old Netgear running a custom ROM, so I am not going to hold my breath for VPN on it.

I believe that most of my needs can be met with the current version of NAS4free plus the required packages - which, being based on FreeBSD 9.1, should make the additional packages much easier to install and configure.

Thank you very much for the offer of one of your spares; unfortunately I do not live where the NAS is currently located, and I only visit about once a year...

Yes, the NAS is solely for SMB sharing to Windows clients. The reason I went for FreeNAS was that it had ZFS, and to my understanding data corruption is greatly reduced in the case of an unexpected power-down.
 
I agree that VPN/owncloud should be segregated from NAS activities, but I am not wanting to fork out any cash on anything to be honest...



I think because of this, and Storage Spaces still essentially being beta, I will probably just do a straight upgrade to NAS4free and take advantage of the new features that come with it.



Thanks for the insight. Why don't you use NTFS? Are you using non-Windows systems to access the shares?

My media player is lagging over SMB, so I thought I'd give NFS a shot, but so far it hasn't worked.

Drop a drive out of your pool and reboot. Then try to recover with a blank identical drive. I was unable to do it in 3 of the 4 configurations I tried.

I'll try this weekend!
 
Unfortunately, this machine is my workstation AND the datastore for my ESXi hosts, so whatever I use has to be Windows-based.
Hmm... I am running Solaris on bare metal. On top of Solaris I have installed VirtualBox. Inside VirtualBox I am running Windows. So I do all my work in virtualized Windows, and use Solaris mainly as ZFS storage. It is easier to work in Windows or Linux - both of them are virtualized. Solaris is more the back end, and I don't touch it. Windows and Linux work great in VirtualBox. Virtualized Mac OS X does work, but not that great - laggy graphics. In Windows I can play older FPS games without problems, such as Quake 3, etc.

The reason I run on top of Solaris is that I prefer to run the back end on bare metal. I don't really trust running virtualized Solaris in ESXi...
 
Aah, but I want to play the newer games! I've got a GTX690, and I love putting it to use!

Why not? In my testing, it's been perfectly stable.
If you are a gamer, then you cannot game in virtualized Windows. Actually, I have an external 3.5" disk just for gaming - new FPS games. I plug it in and reboot when I want to game, which does not happen too often.

I don't really like running virtualized Solaris. Why? Just my taste. The more layers, the more unstable/inefficient it tends to be. I prefer to run on bare metal. For instance, when you talk about large stock exchanges, they never run virtualized. They always run on bare metal.
 