Experiences and opinions on VSA?

vumemu

n00b
Joined
Aug 19, 2012
Messages
9
Hi,

I was looking for anyone's experiences with VMware VSA. I'm planning to build two server nodes for a small business and was wondering if anyone had any thoughts on or experience with running VSA.

I'd also be interested to know if the HP LeftHand is any good as a VSA.

I was looking to spec something like the following:
2x Dell R520, 8x 600GB 10k SAS, 48GB RAM, RAID 6, H710 controller
VMware Essentials Plus Kit

Thanks!
 
I can't answer your questions in regards to VSA but I can offer a cheaper (free) alternative for virtualized shared storage. Have you considered FreeNAS?
 
VSA is included in almost all vSphere license levels now so it's "free"...and actually supported.

VSA works fine and is the perfect solution to a setup like that.

EDIT: Also VSA isn't just a simple storage appliance. It replicates data amongst the nodes so you can fail a disk or whole node and still be up and going.
 
To add on to NetJunkie's response, the VSA included with the latest release of vSphere 5.1 is much better than the "version 1" appliance.

Some of the changes:
1. RAID 5/6 support
2. Increased capacity, and the ability to increase capacity on the fly.
3. Remote Office support
4. You can now run vCenter on the VSA.
 
I wouldn't want to run FreeNAS on a system I'm supporting for a client, especially because I can't figure out why I keep losing iSCSI on my FreeNAS box in my lab. I either have to restart iSCSI on the box or the entire box. Needless to say, ESXi hosts aren't excited about having their storage ungracefully removed from underneath them.

I don't have any experience with the VSA yet, but I'm considering it for the next phase of my virtualization project at work.

Make sure you have enough resources leftover to run your VM load after the VSA. If I recall correctly (check the docs), it will allocate half of that machine's CPU and RAM to the VSA.
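To make the headroom point concrete, here's a rough back-of-the-envelope sketch. Note the half-of-host figure is this poster's recollection, not a documented number, so treat the fraction as an assumption and check the VMware docs for your version:

```python
def vm_headroom(host_ram_gb, host_cores, vsa_fraction=0.5):
    """Return (RAM in GB, cores) left for your VM load after the VSA
    takes its share. vsa_fraction=0.5 is an assumed figure -- verify
    against the VSA documentation for your vSphere version."""
    return (host_ram_gb * (1 - vsa_fraction),
            host_cores * (1 - vsa_fraction))

# Example: the R520 spec in this thread (48GB RAM, assume 8 cores)
ram_left, cores_left = vm_headroom(48, 8)
print(f"Left for VMs: {ram_left:.0f} GB RAM, {cores_left:.0f} cores")
```

On that assumption, a 48GB host only leaves ~24GB for the actual VM load, which is worth factoring into the sizing.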

Also, I believe you can't change the configuration (number of disks or hosts) once the VSA is initialized, so make sure you've got everything the way you want it. [Looks like this may be relaxed in 5.1 per Vader's post above]

With two hosts each with 8x600GB disks in R6, you'll have approx 3.6TB of usable disk after the full replication in a 2 node setup.

Is that enough? Far more than enough? I'd consider R10 if ~2.4TB would be enough.
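The capacity math above can be sketched like this (assuming per-host hardware RAID first, then the VSA mirroring each datastore to the other node, so cluster-usable capacity equals one host's usable RAID capacity):

```python
def usable_tb(disks, disk_gb, raid):
    """Per-host usable capacity in TB for RAID 6 or RAID 10.
    In a 2-node VSA with full replication, this is also the
    cluster-usable capacity (the other node holds the mirror)."""
    if raid == "raid6":
        data_disks = disks - 2       # two disks' worth of parity
    elif raid == "raid10":
        data_disks = disks // 2      # mirrored pairs
    else:
        raise ValueError("unsupported RAID level")
    return data_disks * disk_gb / 1000.0

# 8x 600GB per host, as specced in this thread:
print(usable_tb(8, 600, "raid6"))    # 3.6 TB
print(usable_tb(8, 600, "raid10"))   # 2.4 TB
```

This is raw marketing-TB arithmetic; filesystem overhead and base-2 conversion will shave the real usable figure down a bit further.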
 
VSA is included in almost all vSphere license levels now so it's "free"...and actually supported.

VSA works fine and is the perfect solution to a setup like that.

EDIT: Also VSA isn't just a simple storage appliance. It replicates data amongst the nodes so you can fail a disk or whole node and still be up and going.

I'm probably overlooking the link for the new licensing; which versions is it included with? We're using the SvSAN right now, but if I can switch over to the VSA under my Enterprise Plus licensing, I will. If it's still a $3500 license, though, I'll stick with the SvSAN.
 
I wouldn't want to run FreeNAS on a system I'm supporting for a client, especially because I can't figure out why I keep losing iSCSI on my FreeNAS box in my lab. I either have to restart iSCSI on the box or the entire box. Needless to say, ESXi hosts aren't excited about having their storage ungracefully removed from underneath them.

I don't have any experience with the VSA yet, but I'm considering it for the next phase of my virtualization project at work.

Make sure you have enough resources leftover to run your VM load after the VSA. If I recall correctly (check the docs), it will allocate half of that machine's CPU and RAM to the VSA.

Also, I believe you can't change the configuration (number of disks or hosts) once the VSA is initialized, so make sure you've got everything the way you want it. [Looks like this may be relaxed in 5.1 per Vader's post above]

With two hosts each with 8x600GB disks in R6, you'll have approx 3.6TB of usable disk after the full replication in a 2 node setup.

Is that enough? Far more than enough? I'd consider R10 if ~2.4TB would be enough.
Because the BSD target is shite. ~shrug~

It's never worked well, and probably never will. It also causes massive corruption. Don't use it.
 
I can't answer your questions in regards to VSA but I can offer a cheaper (free) alternative for virtualized shared storage. Have you considered FreeNAS?

totally unsupported, as neat as it can be.

Personally, I've just gone to Ubuntu 12.04 + knfsd for all my lab fast-storage. Works great, and fast as hell too for everything I need.
 
Lefthand has horrible performance issues with high IO db's.

Ditto. Had similar issues. Had an HP rep explain something about the backend replication between nodes needing more bandwidth, and they came back with 10GbE cards for the 2-node system. Installed and reconfigured, got some but not a great deal of improvement. The client ended up swapping them out for EqualLogic and has been happy.
 
Yeah, my homelab is small by comparison and I haven't had any issues with it... the one thing I gripe about most with the LeftHand is that the failover node needs two because of the local storage thing... but that's me.
 
Sorry, I was wrong. VSA is now included in the Acceleration Kits, not just individual licenses. So if you started with an AK, you should be able to get it.
 
Lefthand has horrible performance issues with high IO db's.

Ditto. Had similar issues. Had an HP rep explain something about the backend replication between nodes needing more bandwidth, and they came back with 10GbE cards for the 2-node system. Installed and reconfigured, got some but not a great deal of improvement. The client ended up swapping them out for EqualLogic and has been happy.

If you're putting a high-IO DB on a VSA, you're doing something very very wrong.

Just because I'd love to see a viking longboat crewed by pandas orbit the moon, doesn't mean it's a good idea to strap ping-ping and some lumber to a Saturn V.
 
If you're putting a high-IO DB on a VSA, you're doing something very very wrong.

Just because I'd love to see a viking longboat crewed by pandas orbit the moon, doesn't mean it's a good idea to strap ping-ping and some lumber to a Saturn V.

Well, things like budgets and unforeseen or unplanned growth do happen. Not everyone has the luxury of planning ahead or having a budget that allows it. Unfortunately, 95% of the time it's a hand-out and a "make it work" statement.

On the VMware VSA I am successfully running a high-IO DB without an issue. LeftHand is also marketed as a SAN, not a VSA.
 
Sorry, I was wrong. VSA is now included in the Acceleration Kits, not just individual licenses. So if you started with an AK, you should be able to get it.

Oh, that makes more sense, we've just been upgrading our licenses since 2.5 with maintenance, it was hard enough getting them to Enterprise Plus during the 4.0 upgrade. Can't see them throwing in the VSA for free unfortunately.
 
If you're putting a high-IO DB on a VSA, you're doing something very very wrong.

Just because I'd love to see a viking longboat crewed by pandas orbit the moon, doesn't mean it's a good idea to strap ping-ping and some lumber to a Saturn V.

Absolutely! :D Some of us get called in to clean up the mess.
 
Well, things like budgets and unforeseen or unplanned growth do happen. Not everyone has the luxury of planning ahead or having a budget that allows it. Unfortunately, 95% of the time it's a hand-out and a "make it work" statement.

On the VMware VSA I am successfully running a high-IO DB without an issue. LeftHand is also marketed as a SAN, not a VSA.

There is a physical LHN device and the VSA - both can do quite a bit, it just depends on how you optimize them.

And I'll grant budgets, but it's a VSA still ;) Pinto, NASCAR, all the other comparisons out there :p
 
I've got several Lefthand VSA setups at smaller sites and they work great. There are some gotchas that aren't really documented everywhere that can cause problems but properly setup I've had no issues.

Well I take that back, the update/upgrade process is idiotic especially if you don't have internet access where the management console is installed.

Did they address the scaling issues for the VMware VSA? If I remember right, you could only have 2 or 3 nodes max?
 
Well, things like budgets and unforeseen or unplanned growth do happen. Not everyone has the luxury of planning ahead or having a budget that allows it. Unfortunately, 95% of the time it's a hand-out and a "make it work" statement.

On the VMware VSA I am successfully running a high-IO DB without an issue. LeftHand is also marketed as a SAN, not a VSA.

Let's not mix terms. There is a LeftHand VSA and a LeftHand appliance.

Both suck. :)

There is very little actual unforeseen growth. Lack of budget is one thing, but if something is just dropping out of the sky to be provisioned, it's the org's internal structure that is broken. Fix that. Then fix the infrastructure.
 
Let's not mix terms. There is a LeftHand VSA and a LeftHand appliance.

Both suck. :)

There is very little actual unforeseen growth. Lack of budget is one thing, but if something is just dropping out of the sky to be provisioned, it's the org's internal structure that is broken. Fix that. Then fix the infrastructure.

I wish you could come out to Vancouver Island and see the messes here. This is why I am trying to get the hell off the island to work in the oil industry once more. Van Island is bizarro land as far as economics and skill go.
 
Let's not mix terms. There is a LeftHand VSA and a LeftHand appliance.

Both suck. :)

There is very little actual unforeseen growth. Lack of budget is one thing, but if something is just dropping out of the sky to be provisioned, it's the org's internal structure that is broken. Fix that. Then fix the infrastructure.

Oh I dunno, we pushed 200k IOPS to one :D
 
Like the OP, I'm speccing a 2-host setup, and I'm questioning whether to use local storage in each server with VMware's VSA or go diskless with a Synology DS1512+, which comes highly recommended.

If we leave cost out of the discussion, can users of both the VSA and DS1512+ give me some guidance on what option to choose and what to consider. Thanks.
 
After reading a number of threads on VMware's VSA forum I'm leaning more towards diskless servers with a DS1512+.

Apparently one VSA requirement is 4 NICs per server, and those aside, I feel there is more complexity involved vs. using a NAS.
 
Well...the 4 NIC requirement is a good thing. Remember, those hosts will be hosting storage for others across those NICs too. If you buy your licensing as part of an Accelerator Kit you'll get the VSA included and can try it out. But I consider 4 NICs to be the minimum on any NAS connected vSphere hosts.

The downside to a DS1512+ (and I love my DS1511+) is no redundant power supplies.
 
Another thing I read about the VSA is the complication of running virtualised vCenter in a two host setup with the VSA. In the thread I linked it seems that you would need a third physical server for vCenter to deal with maintenance and failover unless I'm mistaken.

NetJunkie have you played with the VSA in your lab? Would you recommend it over a DS1512+?
 
Let's not mix terms. There is a LeftHand VSA and a LeftHand appliance.

Both suck. :)

Out of interest why - against comparable priced systems of course. I assume the VMware VSA is at the level of the HP VSA now, or maybe surpassed? I don't like the way the replication in the HP Lefthand's is over the "client facing" NICs - is the VMware VSA better in segregating traffic?
 
Really good feed back here thank you everyone.

I am pretty close to locking in the following spec now with a 2-host cluster. The thought of VSA now being included in the licensing seems like a good option for a small business solution, especially when you consider that you are running two VSAs, so they are redundant, instead of a single QNAP, Iomega, etc. NAS. Even more so now that you can run vCenter inside the VSA. :) So you can really get away with a 2-node cluster.



2x Dell R520
- 8x 600GB RAID 6 with H710
- Xeon 2.40 GHz
- Intel quad-port NIC
- Redundant PSU
- Internal SD module with 2GB SD card

Still working on a switch; possibly going with 2x Cisco 2960S 24-port gigabit.
 
Oh, do you need a hardware RAID controller card for VSA, or does VSA do that for you at the software level? My preference, of course, has always been a hardware RAID controller.

Anyone know?
 
so you can really get away with a 2 node cluster.

Just keep in mind that with a 2 node cluster and virtual vCenter on the VSA you will need a third physical system to host the VSA Cluster Service!

vCenter Server Running on the VSA Cluster – Considerations

A vCenter Server instance running in a virtual machine is supported on a two-node cluster configuration when the VSA Cluster Service is running outside of the VSA storage cluster (on an external system or GuruPlug).

SOURCE

Oh, do you need a hardware RAID controller card for VSA, or does VSA do that for you at the software level? My preference, of course, has always been a hardware RAID controller.

Anyone know?

I believe that the VSA does not do any software RAID. It "just" shares the local storage between hosts and keeps a replica of a host's datastore on another one. Therefore, to provide disk redundancy you will need a hardware RAID controller.

P.S. The following videos might be useful.
 
cheers Joshu

Thanks for the note. I will throw in a R320 for management then and probably run Veeam on that as well for backups.

Going to go through the videos now. Thanks
 
Be interesting to see the price differences between:

2x hosts + 3rd physical machine + VMware licensing

vs.

2x hosts + Veeam and doing replications from one host to another

You give up vMotion/failover in favor of a simpler setup with the latter option. I did the Veeam thing; I don't believe VSA was out when I made this choice.
 
Hi Marty,

Well the pricing for VMware Essentials Plus Kit is around $4582 AUD where I am.
Plus the three servers.

All up I am looking close to $26,000 without networking. (2x R520, 1x R320, vSphere, Support)


How has your setup been running with Veeam and replication? Have you done any DR testing with it? Level of downtime? This kit will be pitched for a production environment. Are you using the free hypervisor with the Veeam VM running on one of the hosts?

Thanks!
 
Veeam: once set up right (95% of issues are user error), it's been pretty stable. The latest versions of Veeam are a ton better than the v4 days.

I've done complete DR testing excluding actually failing over the entire network and having users work in the DR environment. Networking quirks are the toughest part, data is there and the servers work.

For me, I've got an offsite DR network I replicate to, a business requirement. Not sure how VSA would play into that equation. 1x production host, 2x standby hosts, so to speak.

I did this in 2010 on HP servers. 60GB RAM, 8x 10k SAS drives, etc. (7.2k in the standby servers), ~$25,000 excluding the 3rd DR server (another $10k). With VMware Essentials I have no yearly support fees, just per-incident support as needed (used once).

Downtime: it's a manual failover if needed. RPO 24 hours at most, RTO ~1 hour.

Using VMware Essentials (needed for Veeam to work), and yeah, Veeam is a VM.

Based on your pricing, either I overpaid for the hardware or you're getting a good deal. I know Dell is cheaper than HP by a good margin. At your pricing, VSA'ing the mess and buying Veeam on the side makes sense, as Veeam is cheap compared to the rest.
 