Anyone Clustering Hyper-V R2?

wingnut691983

Just curious if anyone else out there is running a Hyper-V R2 cluster for live migration.

I just got ours going a couple of months ago and am using it along with MSVMM08 R2, and it's totally sweet. Definitely an ESX competitor now.

Curious to hear how others like/dislike it.
 
I'm about to purchase four SuperMicro servers and move all of our physical servers over to VMs.

I'm getting dual quad-cores and 48GB of RAM in each server, and then putting the storage on an iSCSI SAN with dual gigabit switches and NIC teaming. Hopefully it will all work OK.
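
The rough plan for hooking each host up to the SAN is just the built-in initiator driven from the command line, something along these lines (the portal IPs and target IQN are made-up placeholders, and since 2008 R2 has no iSCSI cmdlets this is just iscsicli called from PowerShell):

# Point the initiator at both SAN controller portals (addresses are placeholders)
iscsicli QAddTargetPortal 10.0.1.10
iscsicli QAddTargetPortal 10.0.2.10

# See what targets the array exposes, then log in to the one we want
iscsicli ListTargets
iscsicli QLoginTarget iqn.2001-05.com.example:target0

# Repeat on every host -- the shared LUNs have to be visible to all
# nodes before cluster validation will pass.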

Were there any guides that you followed to make it work like you want it to?
I too am curious to hear if others like/dislike this setup and live migration.
 
Supermicro? Dude, get a Dell ;)

Surprisingly, there really aren't that many good guides out there. I'd recommend you spend about a solid week reading up if you aren't familiar with it as far as the required networking, protocols, etc. There are a lot of good blog posts out there about it but no really great walkthrough, though it's not rocket science. You will need to have a deep understanding of CSV (Cluster Shared Volumes) if you hope to have any success.
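
To give you an idea of the scale of it, once the shared LUNs are visible on every node, the core of the cluster build is only a handful of cmdlets -- roughly this (node names and addresses are made up, so sanity-check each step against your own build):

Import-Module FailoverClusters

# Run full validation first -- a clustered Hyper-V setup isn't supported
# without a passing validation report
Test-Cluster -Node HV01,HV02,HV03,HV04

# Build the cluster with its own management name and IP
New-Cluster -Name HVCLUSTER -Node HV01,HV02,HV03,HV04 -StaticAddress 10.0.0.50

# CSV has to be switched on at the cluster level on 2008 R2; each clustered
# disk you add then shows up under C:\ClusterStorage\ on every node
(Get-Cluster).EnableSharedVolumes = "Enabled"
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Get-ClusterSharedVolume

# Live migrate a clustered VM to another node
Move-ClusterVirtualMachineRole -Name "VM-FILE01" -Node HV02 -MigrationType Live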

What iSCSI SAN are you planning to use? How many VMs? If you are using four servers with dual quads and 48GB, I'd expect you to have at least 100 VMs, because that's a lot of horsepower.

I'd consider myself an expert on it now and really love it so if you have any questions let me know.
 
We will be moving about 30 machines over to VMs. Some of them are pretty intense, so that's the reason. All of our current servers are Dell servers, but I've had some issues lately with some of the ones we've purchased, so basically whitebox servers are fine with me. That stuff is easy to do, so no issues there.

Right now, we're looking at getting the Dell MD3220i iSCSI SAN with about 14TB of storage.
 
We use the Dell MD3000i with an MD1000 expansion enclosure (all SAS) and I love it, so I'm sure you'd be happy with that.

For virtualization, I would advise strongly against whitebox servers. Do what you want, but buying prebuilt from a reputable company (Dell, HP, or even Sun) has great value in virtualization, because they will guarantee that the hardware is compatible and works properly. Spend a few extra bucks and go that route, and you will be a happier person down the road.

We have 50+ Dell servers, some only a month old, others 6+ years old, and they have an exceptional track record.
 
For virtualization, I would advise strongly against whitebox servers. Do what you want, but buying prebuilt from a reputable company (Dell, HP, or even Sun) has great value in virtualization, because they will guarantee that the hardware is compatible and works properly.

For VMware I would agree.
For Hyper-V, not so much. All it has to do is 1) run Windows Server 2008 R2 and 2) have a CPU that supports hardware virtualization. Supermicro will make sure of that.
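
If the box boots 2008 R2 and VT-x/AMD-V (plus DEP) is switched on in the BIOS, the rest is just adding the role and the clustering feature, something like this (feature names from memory -- Get-WindowsFeature will list the exact ones on your build):

Import-Module ServerManager

# Hyper-V role plus the failover clustering feature; -Restart reboots if the install needs it
Add-WindowsFeature Hyper-V, Failover-Clustering -Restart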
 
Ha! For the price of one Dell you can get three Supermicro boxes with the same specs. Dell support is useless IMO, but that's just been our luck. :(
What level of Dell support did you have? My experience is that if you have Silver it's almost like consumer level Dell support with shorter wait times, whereas with Gold they treat you very well. In one instance, I had an issue with one server with Silver support but was so annoyed at how I was getting the runaround that I called back with the service tag of an identical server with Gold support just so I'd get the part quicker :eek:.
 
What level of Dell support did you have? My experience is that if you have Silver it's almost like consumer level Dell support with shorter wait times, whereas with Gold they treat you very well. In one instance, I had an issue with one server with Silver support but was so annoyed at how I was getting the runaround that I called back with the service tag of an identical server with Gold support just so I'd get the part quicker :eek:.

lol, I guess I've yet to have any issues big enough with Supermicro over the last five years to warrant the Dell premium. However, if it's in the budget, why not, I guess. :)
 
What level of Dell support did you have? My experience is that if you have Silver it's almost like consumer level Dell support with shorter wait times, whereas with Gold they treat you very well. In one instance, I had an issue with one server with Silver support but was so annoyed at how I was getting the runaround that I called back with the service tag of an identical server with Gold support just so I'd get the part quicker :eek:.

We have Silver on most of our stuff, and on the SAN, mostly NBD 5x10 parts coverage. I've had good experiences with them. For most things I know what is wrong anyway, so I either get on the chat (some things it makes you call for, such as the SAN), tell them, and they send the part the next day.

But of all of our servers I have only had to make about two calls in the past two years, and they were both dead hard drives, a no-brainer. No hardware issues other than that on any of them, even the old ones.
 
I will say that my PE 2800 series have been the most reliable. Four of them in production (24/7/365) since 2004 with only a single SCSI drive failure, and luckily we got a drive overnighted. No data loss and no down time. If all of them were like that I'd be all sorts of happy. Again, our flip of the coin on the other stuff, I guess. It just did not always turn out as well as our 2800s. :(
 
Just wanted to say I just put up a VMware environment. We had Supermicro servers that were only a few months old but had been retired, so we put them to use here. We ended up swapping out a lot of parts and spending some more money to get it to work with VMware. So if you're going the ESX/ESXi route, buy HP, Dell, or something that says VMware Compatible (they are very good these days about stamping that on the product before you buy).

If I had gone with Hyper-V, I'm sure I wouldn't have had any problems. But with whiteboxes, even if you get it to run, certain features (specifically with VMware) will not work with certain hardware. By all means, if you're going Hyper-V, I think the cheap whiteboxes are a good solution as long as you can boot to Windows and it supports virtualization in the BIOS.

Dragging on a little more about the VMware specifics - Fault Tolerance and overall vMotion performance greatly depend on the proper hardware being supported. It's really picky, so watch out.
 
Hyper-V has come a long way, and you seem to be using it in the most optimized way to get the most performance, but Hyper-V is targeted at small and maybe medium business at best. It is no competition to ESX in the enterprise market. It is a very cost-effective way to compete with ESXi, but until they can compete with the enterprise features of VMware, they will remain a nice addition for the SMB space.
 
Hyper-V has come a long way, and you seem to be using it in the most optimized way to get the most performance, but Hyper-V is targeted at small and maybe medium business at best.

Yeah, go tell that to my clients :rolleyes:

Hyper-V may not have ALL the features ESX4 has, but it has the most important ones.
It also has one major feature that no other environment has.....it's FREE.
 
Yeah, go tell that to my clients :rolleyes:

Hyper-V may not have ALL the features ESX4 has, but it has the most important ones.
It also has one major feature that no other environment has.....it's FREE.

Well, if you are in a large enterprise, which is where I said VMware is far beyond the technology of Hyper-V, and you are going to base your whole virtual infrastructure on the fact that it is free, then you have problems that no debate here is going to solve. Hyper-V has no answer to distributed virtual switching, storage-level vMotion, reclaiming unused memory and memory deduplication, or vendor cooperation. I don't see virtual Cisco Nexus switching anytime soon for Hyper-V. These are VERY important features in large environments!
 
There is never a need to go OEM if your requirements are clearly pre-defined, and if you purchase wisely and have the skill, you can likely outperform an OEM setup, as those are built to suit a wide variety of needs, not necessarily for a specific project.

Supermicro is actually highly compatible with VMware; a quick search of the VMware HCL using the partner 'Supermicro' returned a ton of results.
(http://www.vmware.com/resources/compatibility/search.php)

Back on topic: I've been using VMM 2008 R2 over a gigabit network to manage about a dozen hosts, and it's been freakin' awesome. One server has about 4TB of storage; I set up the VM Library share on that server and can spin out a VM from an image to any of the hosts in about 10 minutes. The integration with PowerShell and sysprep has made the setup a no-brainer for our needs.
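
For what it's worth, the deploy-from-library part boils down to a few VMM cmdlets, roughly like this (names are placeholders and the parameter set is from memory -- the 'View Script' button in the VMM console gives you the authoritative version of whatever you click together):

Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "vmmserver.example.local"

# Pick a host and a sysprepped template out of the library
$vmhost   = Get-VMHost -ComputerName "HV01"
$template = Get-Template | Where-Object { $_.Name -eq "W2K8R2-Base" }

# Spin out a new VM from the template onto that host
New-VM -Template $template -Name "NewVM01" -VMHost $vmhost -Path "C:\ClusterStorage\Volume1" -RunAsynchronously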
 