Virtualized NAS, any experience?

evandena

Limp Gawd
Joined
Apr 17, 2006
Messages
274
I've built my new server with the following specs:

Intel x3430
8GB DDR3 ECC RAM
3x 750GB HD (RAID-5)
Dell Perc 5/i

I'm running XenServer 5.6 (beta) as a hypervisor and would like to run a NAS as a VM. So far I have Nexenta installed, but the performance is pretty mediocre. On a gigabit network I can only get about ~10 MB/s through CIFS, while a Server 2008 VM will pull about 30-50 MB/s. Not the greatest, I know; I'm sure my Perc isn't tuned perfectly.

I'm starting to think Nexenta might not be the best choice for a virtualized NAS. Does anyone have any experience or suggestions? It will mainly be used for CIFS for my home media and music server, but I'd like speeds good enough for DVD copies between the download server and the NAS.

Thanks
 
Are you sure you don't want a dedicated VM? Virtualization can kill your performance in this case, as it's latency-sensitive. You're also using a hardware controller rather than a plain SATA controller. Does your motherboard have no onboard SATA? I would recommend using those instead.

CIFS is slow; use NFS when possible. If you really have to use CIFS and care about performance, be ready for some heavy tuning and testing out many settings. Generally it helps a lot if you make sure your hardware isn't at fault first; chatty protocols like CIFS are very latency-sensitive and offer poor performance if latencies rise too high.
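One quick way to rule the hardware in or out is to measure raw sequential write speed inside the NAS VM itself and compare it against the CIFS number. A minimal sketch (the scratch filename, 256 MB size, and 1 MB block size are arbitrary choices, not anything specific to Nexenta):

```python
import os
import time

def write_throughput(path, total_mb=256, block_kb=1024):
    """Sequentially write total_mb of zeros to path and return the rate in MB/s."""
    block = b"\0" * (block_kb * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range((total_mb * 1024) // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the OS cache onto the disk
    elapsed = max(time.time() - start, 1e-9)
    os.remove(path)  # clean up the scratch file
    return total_mb / elapsed

if __name__ == "__main__":
    print("Sequential write: %.1f MB/s" % write_throughput("bench.tmp"))
```

If this comes back well above the ~10 MB/s you see over CIFS, the array and controller are fine and the bottleneck is in the network stack or the protocol.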
 
Do you mean a dedicated machine?
Ideally I'd like to keep my entire home lab running on this one box. It's beefy enough to do everything I want.

I'm also not sure why I would want to use my motherboard's SATA controller when I have a dedicated RAID card. The performance advantage there alone is pretty big.

I understand CIFS is pretty crappy, but with a vanilla 2k8 VM I am able to bust out 50 MB/s through CIFS. I also tried FTP with Nexenta, but it's not much better.
 
Well, you can try FreeNAS; it's pretty quick to set up. It uses Samba, though, which has its own performance issues. But it may be worth a shot.

About RAID: you may want to test which is faster, software RAID 5 or hardware RAID 5; I think the driver in FreeNAS can do a pretty good job. I'm unsure whether virtualizing the NAS operating system has any effect on performance, though. In VirtualBox you can give the VM physical access to the disks; I'm unsure how this works with something like Xen.
 
My work file servers are all virtualized: Windows Server 2008 running on Hyper-V. On large files I get between 60-100 MB/s, and that's during the work day with heavy file access and Exchange access. This is running RAID 6 with SATA drives. Operating in a VM does not have to be a performance hindrance; it's your underlying hypervisor and hardware that are the bottlenecks.

At home, my vSphere farm has Xeon X3363 2.83 GHz quad-cores and 4x 500GB in RAID 6 on a Perc 6/i. iSCSI will max gigabit on everything except FreeBSD. On Windows file sharing, 70-80 MB/s is typical on a large file.

Your hardware should be doing better.
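For reference, the numbers in this thread bump into a hard ceiling: a single gigabit TCP stream can't move file data at the full 125 MB/s once Ethernet framing and TCP/IP headers are counted. A rough back-of-the-envelope (standard 1500-byte MTU, no jumbo frames assumed):

```python
# Approximate ceiling for file data over a single gigabit TCP stream.
link_rate_bps = 1_000_000_000              # 1 Gb/s

# On the wire, each 1500-byte frame also carries an 8-byte preamble,
# a 14-byte Ethernet header, a 4-byte FCS, and a 12-byte inter-frame gap.
wire_frame_bytes = 8 + 14 + 1500 + 4 + 12  # 1538 bytes per frame on the wire
tcp_payload_bytes = 1500 - 20 - 20         # MTU minus IP and TCP headers

efficiency = tcp_payload_bytes / wire_frame_bytes     # ~0.949
max_mbytes_per_s = link_rate_bps / 8 / 1e6 * efficiency

print("Gigabit payload ceiling: ~%.0f MB/s" % max_mbytes_per_s)  # ~119 MB/s
```

So a 101 MB/s copy is already close to wire speed, 70-80 MB/s is respectable, and 10 MB/s means something other than the link itself is the bottleneck.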
 
Maybe it's a stupid setup, but I'm using WHS in a Windows Server 2008 R2 Hyper-V VM. The last 4TB+ copy I did ran at 101 MB/s with the VM having only one dedicated NIC. I also have OpenFiler and FreeNAS working decently in Hyper-V (see: http://www.servethehome.com/the-big-whs-it-multi-tasks-thanks-to-hyper-v-virtualization/), but I have not had much luck with unRAID or OpenSolaris (I can get them mostly working, but I wouldn't consider them great solutions at this point).
 
For the life of me I cannot get vSphere to "format" the datastore, so I'm stuck with XenServer. At least 5.6 beta is out now, which allows memory overcommit.

I'll keep tinkering with it.
 
Maybe it's a stupid setup, but I'm using WHS in a Windows Server 2008 R2 Hyper-V VM. The last 4TB+ copy I did ran at 101 MB/s with the VM having only one dedicated NIC. I also have OpenFiler and FreeNAS working decently in Hyper-V (see: http://www.servethehome.com/the-big-whs-it-multi-tasks-thanks-to-hyper-v-virtualization/), but I have not had much luck with unRAID or OpenSolaris (I can get them mostly working, but I wouldn't consider them great solutions at this point).

is that your site?
 
That's what I'm starting to think.... but there are so many options. Trial and error time.

I'm liking Nexenta, so hopefully I can get it to work.
 
It is. Not anything remotely exciting, but I use it mostly to keep links/pictures/information handy. Saves re-typing stuff repeatedly and remembering image tags and image URLs.

Too modest. I really like your blog, Patrick. It's good to see you experimenting with different NAS solutions, because I go back and forth myself wondering what I might be missing out on with some of them.

Historically I've just run RAID arrays, but for media storage I think the holy grail is non-striped RAID 4 similar to what unRAID is doing (one or two parity disks protecting a bunch of non-striped data disks), and something about unRAID rubs me the wrong way. FlexRAID also creeps me out in a few ways. And if the WHS v2 beta is any indication, MS also has no intention of modernizing WHS with any parity protection of the drive pool. And I really want to avoid Linux/FreeBSD because I want the ability to remove a drive and read the NTFS data on a standalone PC. So there's not yet a perfect way to parity-protect NTFS drives without resorting to homegrown/hobby software, that I'm aware of at least. Maybe some day.

I've really been getting more into Hyper-V lately. Granted, I'm a big fan of ESX and ESXi, but I recognize anyone in the IT business owes it to themselves to learn Hyper-V inside and out, because let's face it: it's going to be a major force in virtualization for the simple reason that it comes bundled with Server 2008 and can be enabled with a few clicks and a reboot. A strategy not unlike IE getting bundled with Windows and basically killing Netscape back in the day. Not that VMware is going anywhere; it is still the de facto enterprise-grade solution. But Hyper-V is going to have a huge installed base and growth whether we like it or not.

That, and many people don't feel like running a whole 'nuther box for ESXi; in cases where all you have is one 2008 server and you want to virtualize a few machines, it's easy to just turn it on and go.

I've had a WHS virtualized for some time; it works nicely, though I don't do all that much with WHS. Now I'm experimenting with virtualizing my pfSense router/firewall with a quad-port Intel NIC, and so far it's running beautifully. Wish I'd done this sooner!
 
Wow, I have been following you for some time now.

Yeah, Hyper-V is how I have had my WHS set up for about a year now. It's the best way to do it, IMO.

Been playing with WHS2 a lot in a VM, and it's been pretty stable, yet feature-incomplete.
Oh, and on WHS2 you still have a 2TB volume limit and a 32-drive limit :( as of right now.
 
That's going to be short-sighted of MS if they don't change the 2TB volume limit, because >2TB 4K drives are going to be released well within WHS v2's lifecycle. Maybe they're thinking they'll deal with it only when they have to, and then put out a patch or service pack to address it, but it still sucks for people with RAID arrays in the meantime.

@nitrobass: have you played with SCVMM (System Center Virtual Machine Manager) at all? Once you're running 3-4+ VMs it's worth a look; it makes things like cloning VMs *much* easier. I got tired of all the hassle with cloning VMs manually, so I installed SCVMM, and besides a real "Clone VM" option it has tons more features than the built-in Hyper-V Manager in 2008 R2. Worth a look; I grabbed it off TechNet.
 
That's going to be short-sighted of MS if they don't change the 2TB volume limit, because >2TB 4K drives are going to be released well within WHS v2's lifecycle.

Yeah definitely.
It is much more robust than WHS v1, although I attribute that to the 2008 R2 code base.
 
I'm a little hesitant about Hyper-V strictly due to the lack of memory overcommit. With only 8GB of RAM, I would like a higher density than physically allowed.

pjkenned, have you by chance tested Nexenta?
 
Are you talking about Hyper-V not dynamically allocating memory like ESX/ESXi? Funny you mention it; I had a "hey, wait a minute" moment just last night wondering why it wasn't handling memory more intelligently. One of the killer features of VMware, for sure. I suspect it has to do with the hypervisor cohabitating with / running on top of the Windows kernel. Too bad MS didn't see fit to make their bare-metal Hyper-V *actually* bare metal with a tiny footprint like ESXi, rather than merely Server 2008 stripped down to just Hyper-V and made available for free.

OTOH, it's cheaper to buy more RAM than to spin up a whole new ESX server if you're going to be running Win2008 Server anyway.
 
I'm a little hesitant about Hyper-V strictly due to the lack of memory overcommit. With only 8GB of RAM, I would like a higher density than physically allowed.

pjkenned, have you by chance tested Nexenta?

I haven't tested it yet, but will in the future. As you can see from the Hyper-V window, having that many VMs running just takes lots of RAM. That's why I built the server on an i7/X58-based platform. I'm also dedicating my i3-530 rig to testing, and I actually installed VirtualBox on the i5-650 Windows 7 "box" (well, it's currently open-air since it doesn't have a case).

Personally, one MAJOR issue I'm having with Hyper-V and some of the other distributions is the virtual SCSI controller. Most of the installs above don't like it.

Another thing about ESXi: I was going to build my new box around it, but it didn't work with the Asus P6T7 WS Supercomputer. I know Realtek NICs aren't great, but there's really little reason for both the NICs and the chipset to be an issue. On the other hand, Hyper-V/2008 R2 works with... well, everything (more or less), since you can use Windows 7 drivers.

Anyway, busy week this week at work. I did post a quick update to that virtualization link above with the 7 OSes I have running in Hyper-V VMs right now (FreeNAS, OpenFiler, Ubuntu, WHS, CentOS, EON, and unRAID). I will give some initial thoughts soon, but it is going to be one of those 80-hour weeks at work.

Finally, thanks for the kind words guys! They are very much appreciated.
 
I haven't tested it yet, but will in the future. As you can see from the Hyper-V window, having that many VMs running just takes lots of RAM. That's why I built the server on an i7/X58-based platform. I'm also dedicating my i3-530 rig to testing, and I actually installed VirtualBox on the i5-650 Windows 7 "box" (well, it's currently open-air since it doesn't have a case).

Personally, one MAJOR issue I'm having with Hyper-V and some of the other distributions is the virtual SCSI controller. Most of the installs above don't like it.

Another thing about ESXi: I was going to build my new box around it, but it didn't work with the Asus P6T7 WS Supercomputer. I know Realtek NICs aren't great, but there's really little reason for both the NICs and the chipset to be an issue. On the other hand, Hyper-V/2008 R2 works with... well, everything (more or less), since you can use Windows 7 drivers.

Anyway, busy week this week at work. I did post a quick update to that virtualization link above with the 7 OSes I have running in Hyper-V VMs right now (FreeNAS, OpenFiler, Ubuntu, WHS, CentOS, EON, and unRAID). I will give some initial thoughts soon, but it is going to be one of those 80-hour weeks at work.

Finally, thanks for the kind words guys! They are very much appreciated.

Thanks, pjkenned. I'll keep tweaking Nexenta. I really like it so far, so I don't want to give up that easily. It would just be nice to know that someone else is able to run it with better results in a somewhat similar virtualized environment.
 
Are you talking about Hyper-V not dynamically allocating memory like ESX/ESXi? Funny you mention it; I had a "hey, wait a minute" moment just last night wondering why it wasn't handling memory more intelligently. One of the killer features of VMware, for sure. I suspect it has to do with the hypervisor cohabitating with / running on top of the Windows kernel. Too bad MS didn't see fit to make their bare-metal Hyper-V *actually* bare metal with a tiny footprint like ESXi, rather than merely Server 2008 stripped down to just Hyper-V and made available for free.

OTOH, it's cheaper to buy more RAM than to spin up a whole new ESX server if you're going to be running Win2008 Server anyway.

The footprint of Windows 2008 R2 Core isn't really any bigger than ESXi. Full 2008 R2 server is just a bit bigger than ESX.

The dynamic memory feature of vSphere is wonderful, but I think MS was focusing on the Live Migration feature for R2. I guess they figure that RAM is cheap and it is. Still, I opted for the $495 vSphere Essentials bundle over more RAM for my home servers. :)

I'm betting that Hyper-V R3 or whatever will have dynamic memory and PCIe device passthrough. They seem to be going for the features that will get the most use in the most environments.
 
The footprint of Windows 2008 R2 Core isn't really any bigger than ESXi. Full 2008 R2 server is just a bit bigger than ESX.

The dynamic memory feature of vSphere is wonderful, but I think MS was focusing on the Live Migration feature for R2. I guess they figure that RAM is cheap and it is. Still, I opted for the $495 vSphere Essentials bundle over more RAM for my home servers. :)

I'm betting that Hyper-V R3 or whatever will have dynamic memory and PCIe device passthrough. They seem to be going for the features that will get the most use in the most environments.

They have partnered with Citrix for desktop virtualization and are giving away free licenses if you switch from VMware View.

MS is getting serious about virtualization, so I think we will see a big step forward with Server 2011/2012.
 
I'm more interested in the overcommit, so I can assign 12GB of RAM even though the server has 8GB, similar to thin disk provisioning.
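The arithmetic is the same as thin provisioning for disks: add up what the VMs are promised and divide by what the host physically has. A toy sketch (the VM names and sizes here are made-up examples for an 8GB host, not anyone's actual lab):

```python
# Toy memory-overcommit calculation: promised RAM vs. physical RAM.
physical_gb = 8

vms = {                 # hypothetical VMs and their assigned memory, in GB
    "nexenta-nas": 4,
    "win2k8":      4,
    "pfsense":     1,
    "ubuntu-test": 3,
}

assigned = sum(vms.values())
ratio = assigned / physical_gb
print("Assigned %d GB on %d GB physical -> %.1fx overcommit"
      % (assigned, physical_gb, ratio))  # Assigned 12 GB on 8 GB physical -> 1.5x overcommit
```

A hypervisor with ballooning or page sharing can honor a ratio like this as long as the VMs don't all touch their full allocation at the same time; without those features, the assigned total has to stay at or below physical RAM.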
 