Time for a HW Upgrade, What are real world performance differences?

mikecLA

Weaksauce
Joined
Jan 20, 2011
Messages
101
I'm running a production Dell R710 box on ESXi 5:
Dual X5570, 2.93 GHz, 4 cores per socket
96 GB DDR3 RAM
PERC 6/i controller
6 × 300 GB SAS drives in a RAID 10 array with 2 hot spares.

My first thought is that the biggest improvement will come from SSD storage. It's not worth upgrading the R710: I'd have to stick an H700 card in it to get 6 Gbps instead of 3 (which is cheap), but once new drives are added, the cost difference to go to an R620 1U with a newer CPU, etc. is negligible. Plus, VMware beyond 5.5 is not supported on the R710.

The R620 would have 128 GB DDR3, 8 × 800 GB 6 Gbps SSDs, and dual E5-2667 v2 (3.3 GHz, 8 cores).

The R630 will take the same CPU, but the v4 instead. CPU cost is an extra $300 over the v2. Considering these were over $2K each when released, that's negligible now.

Where costs start to go up is having to use DDR4 RAM instead of DDR3, and the consideration of 12 Gbps drives as opposed to 6 Gbps.

Another factor in favor of the v4 chip is that ESXi 7 is supported, whereas the v2 only goes up to 6.5. But I've been relatively happy with 5.1 and don't need all the extra stuff thrown in. It'd be nice if they offered vMotion in the Essentials package, but I'm not paying $5K for it. $500 is fair enough for what it does and for unlocking the otherwise crippled APIs so backups work.

So the main questions are:
1) How big a difference will moving to 12 Gbps drives make as opposed to 6 Gbps? (Quick line-rate math below.)
2) How much of a performance increase is DDR4 over DDR3?
3) For S&G, how big a difference would NVMe drives make over the other two options?
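
For reference on 1) and 3), here's a quick sketch of the raw line-rate arithmetic (theoretical interface ceilings only; real drives and controllers land well below these, and this says nothing about question 2):

```python
# Theoretical per-link ceilings for the three interface options in question.
# Line-rate math only; actual sustained throughput is lower.

def sas_mb_s(gbps):
    """SAS/SATA use 8b/10b encoding: 10 bits on the wire per data byte."""
    return gbps * 1000 / 10  # MB/s

PCIE3_LANE_MB_S = 985  # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding

print(f"6 Gbps SAS : ~{sas_mb_s(6):.0f} MB/s per link")
print(f"12 Gbps SAS: ~{sas_mb_s(12):.0f} MB/s per link")
print(f"NVMe x4    : ~{4 * PCIE3_LANE_MB_S} MB/s of PCIe 3.0 bus headroom")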

I'm going to continue to keep the SSDs in a 6-drive RAID 10 array with 2 hot spares. I understand most people use RAID 5, but with such low storage needs, the cost difference is negligible. That's one of the reasons I'm wondering if the jump to 12 Gbps will be noticeable.
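
For anyone sanity-checking the capacity side of that trade-off, a minimal sketch of the usable-space math for the proposed 800 GB SSDs (plain arithmetic, nothing vendor-specific):

```python
# Usable capacity for the two layouts under discussion: 8 drives total,
# 2 held back as hot spares, drive size from the post above.

DRIVE_GB = 800

def raid10_usable(total_drives, hot_spares):
    """RAID 10 keeps one mirrored copy, so usable = half the array members."""
    members = total_drives - hot_spares
    return (members // 2) * DRIVE_GB

def raid5_usable(total_drives, hot_spares):
    """RAID 5 loses one drive's worth of capacity to parity."""
    members = total_drives - hot_spares
    return (members - 1) * DRIVE_GB

print("RAID 10, 6 members + 2 spares:", raid10_usable(8, 2), "GB")  # 2400 GB
print("RAID 5,  6 members + 2 spares:", raid5_usable(8, 2), "GB")   # 4000 GB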

The VM workload is a mix: a few CentOS boxes running web servers, a MySQL server, a pfSense firewall, and one Windows box running IIS / SQL Server, which will eventually be ported over to LAMP to avoid MS licensing issues if/when the Windows boxes need upgrades. They're Windows Server 2008 R2 and would just be copied over for now.
 
Honestly, not sure you'll see a huge performance difference from 1 and 2. NVMe, however, can give you a huge boost: at 6 Gbps you're effectively limited to ~550 MB/s, versus PCIe 3.0 NVMe, which can hit over 2 GB/s. Whether you get a tangible increase really depends on where your bottlenecks are, though. Personally, I would isolate storage into different pools and match the speed tiers to the task.
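
If it helps to see whether the 6 Gbps link is actually the bottleneck today, here's a rough Python sketch that times a fsync'd sequential write (the path and sizes are placeholders, and fio is the proper tool for a real benchmark):

```python
# Rough sequential-write throughput check. Writes 1 GiB in 1 MiB chunks and
# fsyncs before stopping the clock, so the page cache doesn't inflate the
# number. Run it inside a guest against the datastore-backed volume you care
# about; PATH below is just a placeholder.
import os
import time

PATH = "/tmp/throughput_test.bin"  # hypothetical test location
CHUNK = 1024 * 1024                # 1 MiB per write
TOTAL = 1024 * CHUNK               # 1 GiB total

buf = os.urandom(CHUNK)
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
start = time.perf_counter()
for _ in range(TOTAL // CHUNK):
    os.write(fd, buf)
os.fsync(fd)                       # flush to disk before taking the time
elapsed = time.perf_counter() - start
os.close(fd)
os.remove(PATH)

print(f"~{TOTAL / elapsed / 1e6:.0f} MB/s sequential write")
# A result pinned near ~550 MB/s on SSDs points at the 6 Gbps link itself;
# NVMe over PCIe 3.0 x4 has roughly 4 GB/s of bus headroom by comparison.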
 
ESXi 6.5/7 will still run on older hardware, depending on drivers; "supported" only matters if you actually need VMware support.

Depending on load, you may see little difference in performance at all, really. How much are you actually using in terms of CPU load and RAM on your current box? SSD/NVMe will make things snappier for sure, but again, unless you're doing heavy I/O you may not notice it. (MySQL should mostly be running all in RAM.)
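
To put numbers on that before spending anything, here's a quick way to sample utilization from inside each Linux guest (a sketch using the third-party psutil library; esxtop on the ESXi host is the native option):

```python
# Snapshot of CPU and memory utilization from inside a Linux guest.
# Requires: pip install psutil
import psutil

cpu = psutil.cpu_percent(interval=5)  # average busy % over a 5 s window
mem = psutil.virtual_memory()

print(f"CPU: {cpu:.1f}% busy")
print(f"RAM: {mem.used / 2**30:.1f} GiB used of {mem.total / 2**30:.1f} GiB "
      f"({mem.percent:.0f}%)")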
 
Can confirm that vSphere 6.7 will run on it - I have three in my lab. I think moving to SSD, etc. only makes sense if you try to get dense on the host(s), where IOPS matter more than throughput. If you're just running a handful of VMs, it's probably not worth investing in that box.

If you do upgrade to a newer box, don't go lower than Rx30 if you're sticking with Dell, so that you have vSphere 7.x support - we really drew the line on CPUs this time around. Alternatives would be IBM/Lenovo System x3650 M4/M5, HPE 5xx Series Gen9+, or HPE Gen10 hardware. You could also build a SuperMicro or go with Intel NUCs. Good luck!
 