Pretty sure there is a free version of the Cisco Nexus 1000v for vSphere too, although to use it you do have to have distributed switches. Perhaps that was what was meant, versus Cisco's pricing.
What about VirtualBox? *dons flame suit*
Until I figure out a few hardware things, I'm stuck on VirtualBox. Mainly wireless adapter compatibility.
Hyper-V 2012 R2 is better than ESXi 5.5.
For Hyper-V 2012 and below, ESXi is better.
That's kind of a broad-brush statement. I think the comparison is more nuanced than that (and thanks to Child Of Wonder for pointing out a lot of those nuances).
For some users your claim here might be true. For others, not so much. It depends on a lot of details about your application, your requirements and your experience.
why the xen hate?
Child,
Thinking of moving from napp-it as my primary storage to Server 2012 R2. Thinking of using four 2TB hard drives as a 3+1 with tiered Storage Spaces with two columns, with two 120GB SSDs as a tier, then passing it back to ESXi via iSCSI. Mainly for home experimentation, but I have 5 family members at home who share internet/file storage etc, and I'm finding it a pain in the ass to do permissions correctly on ZFS when it would be ten times easier on R2. Thoughts? Not overly concerned about speed, and I figure this will be fairly fast for its intended use.
/me is waiting for storage to pounce
Virtual FC Cards for VMs
That one is Hyper-V only.
True. Hyper-V offers an actual virtual FC HBA, so the guest OS thinks it has an FC card and is natively accessing FC storage, while VMware uses NPIV to allow you to present a physical RDM directly to the guest, but it still appears as a SCSI disk to the guest.
I'll change the post to reflect that.
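As a rough illustration of the Hyper-V side (the SAN and VM names below are placeholders, and this is a sketch rather than a full walkthrough), creating a virtual SAN backed by the host's physical FC ports and handing a virtual FC adapter to a guest looks roughly like this:

```powershell
# Create a virtual SAN backed by the host's physical FC HBA port(s)
# ("VMSAN01" is a placeholder name)
$hba = Get-InitiatorPort | Where-Object ConnectionType -eq 'Fibre Channel'
New-VMSan -Name "VMSAN01" -HostBusAdapter $hba

# Add a virtual FC adapter to the VM and connect it to that virtual SAN;
# the guest then sees its own FC HBA with its own WWPNs
Add-VMFibreChannelHba -VMName "SQL01" -SanName "VMSAN01"
```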
I'll second the "don't do the DNS trick" recommendation here - it's a bad idea and some really goofy things can happen if you try. Also, NFS is stupid fast these days - especially on 10G, so there's not a performance problem for lacking MPIO. NFSv4 will fix this, in the next release of ESX.

STORAGE
VMware Storage Requirements
- Hardware RAID, local drive, SAN, or USB/SD flash to boot from
- Minimum of 1GB boot device, 5.2GB required for VMFS volume and 4GB scratch space
- VM Datastores can be local hardware RAID, local disk(s), SAN, or NFS NAS
https://pubs.vmware.com/vsphere-55/...UID-DEB8086A-306B-4239-BF76-E354679202FC.html
Hyper-V Storage Requirements
- Hardware or software RAID, local drive, or SAN to boot from
- Minimum of 32GB boot device for Windows with full GUI, 8GB minimum for Hyper-V Core but 20GB recommended
- VM Datastores can be local hardware or software RAID, local disk(s), SAN, or SMB3 share
- Technically you can find a way to boot Hyper-V from USB flash or SD but I wouldn't recommend it
http://technet.microsoft.com/en-us/library/dn610883.aspx
How They're the Same
MPIO, Including Round Robin
Storage Offloading (VAAI for VMware and ODX for Hyper-V)
Storage VMotion (requires vCenter in VMware)
Block Protocols - FC, FCoE, iSCSI
Thick and Thin Virtual Disks
Pass-through Disks and Shared Virtual Disks for VM Clustering
Use NPIV to present FC LUNs directly to the guest (VMware presents them as a physical RDM which the guest sees as a SCSI disk; Hyper-V presents a virtual FC card the guest uses to access FC LUNs just like a physical server)
VM Snapshots
Using Labels to Identify Datastores Based on Performance, SAN, etc. (requires vCenter for VMware and VMM for Hyper-V)
How They're Different
File Protocols - NFS3 vs SMB3
- VMware uses NFS as a file protocol for datastores while Hyper-V uses SMB3. NFS v3 is the same NFS we've all come to know and love while SMB3 is a new protocol introduced with Windows 2012.
- NFS v3 does not support any sort of MPIO. You'll still want to provide at least 2 uplinks to the vSwitch your NFS vmkernel lives on, but it won't load balance across those two uplinks unless you change things up, like mounting datastores with different IPs (NFS Datastore1 mounts via IP1, NFS Datastore2 mounts via IP2 on the NAS, and so on) or DNS Round Robin (which I wouldn't do since I don't want NFS relying on DNS). Even then, you'll still only get a single uplink's speed when accessing a single datastore, even if you use a vDS and LACP. Because of this, 10Gb networking is definitely a big plus when using NFS. Compared to SMB3, however, NFS is very simple to set up and manage.
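As a rough sketch of the "different IP per datastore" workaround described above (the host name, datastore names, IPs, and export paths are placeholders), mounting each NFS datastore through a different NAS interface with PowerCLI might look like:

```powershell
# Mount two NFS datastores, each through a different IP on the NAS,
# so traffic to each datastore can ride a different uplink
$vmhost = Get-VMHost -Name "esxi01.lab.local"
New-Datastore -VMHost $vmhost -Nfs -Name "NFS-Datastore1" -NfsHost "10.0.10.11" -Path "/vol/datastore1"
New-Datastore -VMHost $vmhost -Nfs -Name "NFS-Datastore2" -NfsHost "10.0.10.12" -Path "/vol/datastore2"
```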
You can encrypt VMFS as well, assuming the array supports native encryption, or you have an inline encryption device (I'll point out that I really don't recommend inline devices - it's quite amusing when they go sideways).
- SMB3 does perform load balancing and path failover. The more network adapters you throw at SMB3 on the client and NAS, the more bandwidth you can get. For example, in my lab each Hyper-V host has 4x 1Gb dedicated connections for access to my SMB3 NAS, which also has 4 links. Because SMB3 actually load balances the traffic across all 4 NICs, I can get 4Gb of bandwidth. Inside a VM, I can read and write at 4Gb to its virtual disk (yay all-SSD NAS!). If one NIC or path goes down, I'll still get 3Gb without interruption.
- SMB3 support is still emerging on a lot of 3rd party storage products and even those that support it may not support all the features yet. NetApp comes to mind in that they support path failover but not load balancing yet (at least, as of 4 months ago when I last checked). Also, you may experience some quirks when trying to set up the 3rd party SMB3 storage. EMC VNX supports SMB3 but it isn't as simple as creating a share. You'll need to go into the CLI of the VNX to enable some features and create the share in a specific way. On top of all this, you'll need to ensure share and NTFS permissions are all set properly. You'll also want to use SMB Multichannel Constraints (a PowerShell cmdlet) to limit which interfaces are used to access the SMB3 shares; otherwise, if your NAS is also serving storage on the management subnet your host uses, it will use that path to access the NAS as well.
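A minimal sketch of the constraint mentioned above (the server and interface names are placeholders): restrict SMB3 traffic to a given NAS to the dedicated storage NICs so it doesn't spill onto the management network.

```powershell
# Only use the two dedicated storage interfaces when talking SMB3 to NAS01
New-SmbMultichannelConstraint -ServerName "NAS01" -InterfaceAlias "Storage-NIC1","Storage-NIC2"

# Verify which constraints are in place
Get-SmbMultichannelConstraint
```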
- To make matters worse, some 3rd party products have difficulty working with VMs living on SMB3 shares. Up until several months ago, Veeam backups didn't work properly if you used SMB3 storage exclusively. Even some of Microsoft's own products, like Windows Server Backup, won't work. You also can't perform a P2V or V2V directly onto a SMB3 share. You'd first have to convert the server and store it on a block device, then Live Storage Migrate it to a SMB3 share.
- Both NFS and SMB3 support offloading such as VAAI and ODX which enables supported storage arrays to handle certain tasks rather than the host, like cloning files.
http://www.vmware.com/files/pdf/techpaper/VMware-NFS-Best-Practices-WP-EN-New.pdf
http://technet.microsoft.com/en-us/library/jj134187.aspx
http://blogs.technet.com/b/yungchou...s-server-2012-hyper-v-over-smb-explained.aspx
Block Protocols - VMFS vs CSV
- In the block protocol arena, I feel VMware has a big advantage here. The VMFS file system was built specifically for virtualized workloads on block storage. Windows still uses NTFS, which is a great file system but it wasn't built with virtualization in mind. As such, Microsoft had to create Cluster Shared Volumes (CSV), so NTFS could be shared between multiple Hyper-V hosts. CSVs are basically a file system over top of NTFS so Hyper-V can use it as a shared block datastore.
- A CSV works by allowing all the members of a Hyper-V cluster to simultaneously read and write to a shared block device, but one of the cluster members owns the metadata of the file system. This works fine under normal conditions with only a very small performance hit for a cluster member writing to a CSV it does not currently own. However, if access to a LUN is lost by a cluster member, or during certain operations (whether initiated intentionally or not), the CSV can go into Redirected Mode. This means all access to the block device MUST go through the cluster member that owns the metadata. Essentially the other cluster members access the block device via SMB over the CSV network. As you can imagine, performance in this scenario is very poor. Bear in mind, the number of situations in which Redirected Mode occurs has been reduced in 2012 R2 vs earlier versions of Hyper-V, but it is still a consideration, whereas it is not in VMware.
- CSVs do have two advantages: CSV Encryption and Caching.
o CSVs can be encrypted using BitLocker, which comes natively with Windows Server. This can be helpful if your company requires everything to be encrypted. With Hyper-V you can do so right through the OS rather than encrypting at the guest level or using a 3rd party solution.
Separate licensing as well again, and some limitations in the product (great for VDI though!)
o You can also use host RAM as read cache on a CSV. This works great for avoiding VDI boot storms or simply taking some of the IO off the storage array. Technically you can allocate up to 80% of the host's RAM as cache if you'd like, but Microsoft doesn't recommend more than 64GB. Bear in mind, this amount of cache is per CSV, so if you set the cache to 2GB and your cluster has 4x CSVs, then each host will allocate 8GB of cache (2GB times 4 CSVs).
o VMware does not offer RAM caching unless you purchase VMware Horizon (formerly View), in which it is designed to help combat boot storms.
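For reference, a sketch of the CSV cache sizing described above and of checking for Redirected Mode (the cluster name is a placeholder, and the property name shown is the 2012 R2 one; earlier versions use a different property):

```powershell
# Set the CSV block cache to 2GB (value is in MB); this amount is allocated per CSV on each node
(Get-Cluster -Name "HVCLUSTER01").BlockCacheSize = 2048

# Check whether any CSV is currently running in Redirected Mode
Get-ClusterSharedVolumeState -Cluster "HVCLUSTER01" | Select-Object Name, Node, StateInfo
```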
- Both VMware and Hyper-V support offloading such as VAAI and ODX for block datastores as well so long as the storage supports it.
http://technet.microsoft.com/en-us/library/dn265972.aspx
vFRC or local SSD swap caching
- VMware does not offer their own local RAM caching, but they do offer local SSD caching, called vFRC. This feature is available only in Enterprise Plus, but it enables you to use local SSD space as read cache for VMDKs. vFRC is enabled on a per-VMDK basis so you'll need to manually manage which VM and VMDK get how much cache. It's a powerful tool if you want to accelerate the reads on some VMs and keep their heavy IO off the storage array.
- In VMware you can also use local SSD for VM swap files. This way, if a host runs out of RAM and is forced to use swap space to serve VMs, that swap can come from local SSD and not shared storage. When VMs are forced to swap on shared storage it kills performance. At least this way, the VMs will still suffer a performance hit from having to swap, albeit to fast SSD, but it won't affect every other VM on the shared storage whose hosts are NOT swapping.
- Hyper-V does not offer local SSD caching, but you can manually select where a VM's swap (Smart Paging) file goes, which could be local SSD if you wanted, but that same local path needs to exist on all the hosts in the cluster.
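For the Hyper-V side, a minimal sketch (the VM name and path are placeholders) of pointing a VM's Smart Paging file at a local SSD path that exists on every host in the cluster:

```powershell
# Point the VM's Smart Paging file at a local SSD path
# (the same path must exist on every host in the cluster)
Set-VM -Name "SQL01" -SmartPagingFilePath "S:\SmartPaging"

# Confirm the setting
Get-VM -Name "SQL01" | Select-Object Name, SmartPagingFilePath
```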
https://pubs.vmware.com/vsphere-55/...UID-07ADB946-2337-4642-B660-34212F237E71.html
https://pubs.vmware.com/vsphere-55/...A85C3EC88.html?resultof=%22%73%77%61%70%22%20
vSAN
- VMware offers an add-on product called vSAN which enables you to use local SSD and hard drives in the hosts as a shared datastore. This eliminates the need for a shared storage array and is an excellent product.
Dead as a doornail, and let's hope the door doesn't hit it on the way out.
- VMware even offers a product called the vSphere Storage Appliance (lopo can correct me here but I think it's eventually going away) which uses virtual appliances to virtualize the hosts' storage to leverage it as a shared datastore, whereas vSAN actually runs in the hypervisor itself. It, too, is an add-on product.
- As of now, Microsoft's official stance is that they do not believe in hyper-convergence because compute and storage resources do not scale the same. Their focus is on the Scale-Out File Server cluster, which works great as a highly available SMB3 storage option for Hyper-V virtual machines but is not hyper-convergence (like Simplivity or Nutanix). 3rd parties like StarWind do offer products that enable hyper-convergence on Hyper-V, but MS has no official plans to offer anything of their own.
http://www.yellow-bricks.com/2013/08/26/introduction-vmware-vsphere-virtual-san/
https://pubs.vmware.com/vsphere-55/...UID-7DC7C2DD-73ED-4716-B70D-5D98D02F545B.html
VMware Storage IO Control and SDRS
- VMware offers two cool storage features: Storage IO Control and Storage DRS. Storage IO Control acts as a Quality of Service mechanism for all the VMs accessing a datastore. By using shares, you can grant certain VMDKs higher priority over others when a datastore is experiencing periods of high latency (30ms is the default). This feature can be highly beneficial by preventing noisy neighbors from hogging all the IO on a datastore and choking out the other VMs. Hyper-V offers nothing like Storage IO Control except the ability to set minimum and maximum IOPS on each virtual disk.
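For comparison, a rough sketch of both knobs (the datastore, VM name, disk location, and IOPS values are placeholders): enabling Storage IO Control on a datastore with PowerCLI, and setting the per-virtual-disk IOPS limits that are Hyper-V's closest equivalent.

```powershell
# VMware: turn on Storage IO Control for a datastore (PowerCLI)
Get-Datastore -Name "Datastore01" | Set-Datastore -StorageIOControlEnabled $true

# Hyper-V: cap a single virtual disk at 500 IOPS and reserve a floor of 100
Set-VMHardDiskDrive -VMName "SQL01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 500
```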
- VMware also has Storage DRS. Like regular DRS, Storage DRS can automatically assign VMs to datastores based on available capacity and can automatically move VMs between datastores based on performance imbalance. You can create a cluster of block or file datastores (you can't mix and match block and file) so when you Storage vMotion or create a datastore, you can simply point at the datastore cluster resource and let SDRS decide where it should go. However, bear in mind that in some scenarios, such as when using a storage array that does tiering, you don't want the automated VM migrations to occur since they will appear as hot data to the storage array, causing it to needlessly tier data that doesn't need to be. You can also use Storage DRS to put a datastore in maintenance mode and, like a host in maintenance mode, all the VMs on the datastore will automatically be evacuated so you can be sure nothing is running on it.
- Hyper-V does offer the ability to label datastores and assign them to a cloud. It will also assign new VMs to the datastore with the most available free space out of the datastores contained within that label, but it does not take performance into account nor does it monitor datastore performance and proactively migrate VMs around to balance the load.
http://www.yellow-bricks.com/2010/09/29/storage-io-fairness/
http://www.yellow-bricks.com/2012/05/22/an-introduction-to-storage-drs/
Software RAID
- VMware will not install to software RAID or fake RAID. For most hardware this isn't an issue since many servers come with hardware RAID of some sort. Windows does support software RAID if you're using supported drives, and Windows itself can create software RAID after installation so you can mirror your boot disk.
VMware boot from SD/USB flash
- VMware can install to SD cards or USB flash disks. This is very convenient when you don't want to waste hard drives on the ESXi host, and once ESXi boots it's just running in RAM anyway, so even if the flash card/drive fails, ESXi will continue to run; it just can't boot up again. While you can install Windows on the same media, I would strongly advise against it. Even Hyper-V Core is more disk intensive than ESXi and performance in the host OS will suffer. Being able to boot to SD or USB flash is a great bonus with VMware.
Converting disks from Thick to Thin and vice versa
- Both hypervisors offer thin and thick provisioned virtual disks. However, only VMware allows you to change a virtual disk from thick to thin or thin to thick while a VM is powered on by using Storage VMotion. In Hyper-V the VM has to be powered off to perform the conversion.
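On the Hyper-V side, a rough sketch of the offline conversion (the paths and VM name are placeholders; the VM must be powered off since Convert-VHD writes a new copy of the disk):

```powershell
# Convert a fixed (thick) VHDX to a dynamic (thin) copy; the VM must be off
Convert-VHD -Path "D:\VMs\SQL01\SQL01.vhdx" -DestinationPath "D:\VMs\SQL01\SQL01-dynamic.vhdx" -VHDType Dynamic

# Point the VM's disk at the converted file
Set-VMHardDiskDrive -VMName "SQL01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -Path "D:\VMs\SQL01\SQL01-dynamic.vhdx"
```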
Hyper-V Differencing Disks
- Hyper-V does offer a type of virtual disk that VMware does not: the differencing disk. A differencing disk is really just a snapshot of a parent virtual disk. You can use a differencing disk to test changes on a VM without actually affecting the real data. When you're done, just delete the differencing disk. There is a performance hit for using a differencing disk, just like for snapshots, and you don't want to keep it around too long since the more writes occur, the bigger the differencing disk gets. It can be handy for VDI deployments, though, if the storage array can handle the load and you're not using them as persistent desktops.
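A minimal sketch of creating and attaching a differencing disk (the paths and VM name are placeholders):

```powershell
# Create a differencing disk whose parent is an existing VHDX
New-VHD -Path "D:\VMs\Test\Test-diff.vhdx" -ParentPath "D:\VMs\Base\Base.vhdx" -Differencing

# Attach it to a test VM; the parent stays untouched while writes land in the child
Add-VMHardDiskDrive -VMName "TestVM" -Path "D:\VMs\Test\Test-diff.vhdx"
```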
- VMware Horizon's linked clone technology is similar to differencing disks but can only be used for VDI deployments, not to mention purchasing Horizon.
http://technet.microsoft.com/en-us/library/hh368991.aspx
CBT Changed Block Tracking
- VMware has a feature called Changed Block Tracking, or CBT. Many backup products rely on CBT to tell them what blocks have changed in a VMDK since the last backup so the VM can be backed up much more efficiently and without needing the software to scan the VM's file system. Hyper-V has nothing like CBT right now and must rely on 3rd party storage filter drivers to perform the same task. This works, but adds another layer of complexity to Hyper-V and yet another 3rd party add-on that can fail. Sometimes these 3rd party drivers can even cause a CSV to go into Redirected Mode, which will really hurt performance on the cluster.
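For reference, a sketch of flipping CBT on for a VM manually with PowerCLI (the VM name and disk key are placeholders; backup products normally enable this for you, and the VM should be powered off when changing it):

```powershell
# Enable Changed Block Tracking on the VM and on its first SCSI disk
# (backup software usually sets these advanced options for you)
$vm = Get-VM -Name "SQL01"
New-AdvancedSetting -Entity $vm -Name "ctkEnabled" -Value $true -Confirm:$false
New-AdvancedSetting -Entity $vm -Name "scsi0:0.ctkEnabled" -Value $true -Confirm:$false
```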
http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1020128
VHD/VHDX disks
- One cool thing about VHD and VHDX disks is that they're easily mountable in any modern Windows OS. Simply go to Disk Management, choose to attach a VHD, then browse to its location. Very easy way to connect a VHD and grab data out of it.
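Or from PowerShell on a host with the Hyper-V module (the path is a placeholder):

```powershell
# Attach the VHDX read-only, browse the data, then detach it
Mount-VHD -Path "D:\VMs\SQL01\SQL01.vhdx" -ReadOnly
# ...copy whatever you need out of the mounted volume...
Dismount-VHD -Path "D:\VMs\SQL01\SQL01.vhdx"
```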
Various Hyper-V Storage Weirdness
- Can't mount a local ISO or one from a SAN datastore in Hyper-V like you can in VMware. It must either be on the host, on a network share, or in a Library Share in VMM, and when you do use a Library Share you'll need to set up Constrained Delegation in AD for the Library server so the hosts can mount the ISO without copying it locally first! Much easier to mount up an ISO in VMware.
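A minimal sketch of mounting an ISO from a file share in Hyper-V (the VM name, server, and path are placeholders; this is where the constrained delegation requirement bites when you're managing the host remotely):

```powershell
# Attach an ISO that lives on an SMB share to the VM's virtual DVD drive
Set-VMDvdDrive -VMName "TestVM" -Path "\\fileserver\iso\WindowsServer2012R2.iso"
```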
- Can't hot-add a SCSI controller to a VM in Hyper-V, but you have been able to in VMware for a long, long time.
- Hyper-V still requires virtual IDE controllers to boot a Generation 1 virtual machine. Hyper-V has Gen 1 and Gen 2 VMs, something analogous to VMware's virtual hardware versions. If a VM is Generation 1 it must boot from a virtual IDE disk. Only Windows 8/2012 or newer guest OSes can be Generation 2 VMs, which can boot from a virtual SCSI disk.
- When you Live Storage Migrate a VM to another datastore, the folder on the old datastore isn't deleted. First noticed this in Windows 2012 and figured it would be corrected in 2012 R2, but it wasn't. Doesn't affect anything but does make it confusing when you look at the folder structure inside a datastore.
The Web client is the way of the future with VMware. I don't use anything but Windows so how well the web client works on an Apple I have no clue.
As for Hyper-V, that will likely always be Windows management clients and tools only. Same goes for clients provisioning their own VMs; App Controller uses Silverlight.
Thanks for the feedback. I'm happy people are finding these interesting and helpful. At least I get to pour all this information in my head out somewhere. Was hoping I would get to work with both Hyper-V and VMware in the job I recently started, but it's not turning out that way.
Nothing about SR-IOV in networking? Last I checked (which was quite a while ago) VMware didn't support migrating guests with SR-IOV enabled where Hyper-V did. I think Hyper-V was easier to get SR-IOV working on as well but that was a long time ago when BIOS and drivers were still being updated to support it.