Synology DSM 4.2 introduces HA to all x86 models

-Dragon-

http://www.synology.com/releaseNote_enu/package_HighAvailability.php?lang=enu#general

Bellevue, Washington - Synology America Corp. announced the launch of the public beta for DSM 4.2, the latest advance in Synology's award-winning NAS operating system. The new DSM version brings improvements for both home and business users at no additional cost.

...

DSM 4.2 also includes a number of offerings catering specifically to business users:

• Synology High Availability (SHA) is now available on all x86-based models, allowing even smaller businesses to minimize the risk of downtime
That puts the DS213+, DS412+, DS713+, DS1512+, and DS1812+ on the table as HA options under $1,000 per unit. Considering that in DSM 4.1 the cheapest option is around $5k per unit, that's a pretty substantial drop in the entry price.

I have a second DS1512+ on the way and a bunch of 3TB Seagates I plan to put into two 4-drive SHR-2 (Synology's RAID-6 equivalent) arrays for 6TB of usable storage. At least 6 drives would have to fail at the same time for there to be any data loss or even downtime (rough math in the sketch below).
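For anyone who wants to sanity-check that claim, here's a quick back-of-the-envelope in Python. It assumes SHR-2 behaves like RAID-6 (two-drive fault tolerance per array), which is my understanding, not gospel:

[code]
# Assumptions: 4x3TB SHR-2 per unit, two units in an HA pair,
# SHR-2 tolerating 2 failed drives per array like RAID-6.
DRIVES_PER_UNIT = 4
DRIVE_TB = 3
PARITY_DRIVES = 2   # SHR-2 reserves two drives' worth of parity
UNITS = 2           # active + passive

usable_tb = (DRIVES_PER_UNIT - PARITY_DRIVES) * DRIVE_TB
print(f"Usable per unit: {usable_tb} TB")  # 6 TB

# Each array survives 2 dead drives, so it takes a 3rd to kill it,
# and BOTH units' arrays have to die before any data is actually lost.
min_failures_for_loss = (PARITY_DRIVES + 1) * UNITS
print(f"Simultaneous failures needed for data loss: {min_failures_for_loss}")  # 6
[/code]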
 
Sweet! I'm buying a pair of 1812s and was just going to have one back up to the other.
 
<insert "RAID/HA isn't backup" rant here, even though HA with Time Backup will cover about 99% of the situations you'd actually need backup for>

I'm probably still going to get one of the 400 series to back up the HA cluster to eventually, but it's a ways down my list now.
 
HA isn't a good means of backing up? I know RAID isn't. I can stick with my original plan and have the data copy automatically from one to the other.

This is for my home btw, not a business.
 
The thing about backup is there's always more you can do; the question is how much you want to be protected against. HA will protect you against numerous simultaneous HD failures and general HW failures as well, and since it's a NAS it's more resilient to damage done by malware and viruses, since it's not physically attached to a computer. HA + Time Backup/Windows Previous Versions can even protect against most of the damage malware could do, since you could always roll back.

Things HA can't protect you against: say you had an iSCSI target that wasn't being backed up some other way, and that server's file system got corrupted or a virus wiped it out. HA would just be replicating the destruction of that data instantly, so without some other form of backup that data would be lost. Additionally, if you have 2 units in your house and the house gets hit by a meteor or tornado, or the pipes burst in your server room, or it burns down, or gets hit directly by lightning, or some other "act of god" causes the simultaneous and complete destruction of both units, then HA is again rendered useless.

Using the second unit as a scheduled backup would give you protection against the whole corrupted-VM scenario, but in the event of a data loss event you lose anything done since the last backup, and the backup unit can't act as a server without some manual reconfiguration, so you're hard down until you "restore" to the backup or fix the primary device and restore to it.

HA would probably be better for your usage scenario, but data isn't REALLY backed up if it only exists in one geographic location. They do have Amazon Glacier support in the new version as well, which seems fairly cheap if you're not storing a whole heck of a lot; if you are storing a whole lot, there's always CrashPlan.
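For the curious, this is roughly what pushing an archive to Glacier looks like if you script it yourself with boto3 rather than using DSM's built-in client (no idea what DSM actually does internally); the region, vault name, and file path here are made up:

[code]
# Hypothetical example: upload one archive to a Glacier vault.
# You'd create the vault in AWS beforehand.
import boto3

client = boto3.client('glacier', region_name='us-east-1')

with open('/volume1/backups/weekly.tar', 'rb') as archive:
    response = client.upload_archive(
        vaultName='nas-backup',              # made-up vault name
        archiveDescription='weekly NAS backup',
        body=archive,
    )

# Hang on to this ID; Glacier retrievals need it and vault
# inventories are slow.
print(response['archiveId'])
[/code]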
 
Some notes so far:
  • The two boxes do not have to be identical, contrary to what I had been led to believe. The passive server just needs at least as much available storage as the active server.
  • [s]SHR (and SHR-2) don't seem to be supported with HA; you need to use real RAID (1/5/6) if you want drive redundancy within the nodes themselves.[/s]*
  • SHR (and SHR-2) don't seem to be supported with HA; you need to use real RAID (1/5/6) if you want drive redundancy within the nodes themselves.**
  • All the share services are down during the initial replication, so if you have an existing device in service and are thinking of adding a second unit as an HA option, be prepared for some downtime while the two units sync. It's almost 3 hours per TB for all the data you have allocated to volumes and raw block LUNs, regardless of how empty or full they are with real data (rough downtime math in the sketch after these notes).
  • One of the NICs is required for cluster communication (and should be a direct connection with no router in between), so the days of 200MB/s+ iSCSI MPIO transfers are over.
*[s]I found out it does support it, but it has to be set up on both boxes before you link them; I had RAID-5 as a test on my first unit and it wouldn't link with a second unit that was using SHR.[/s]
**I was right the first time; I swore I saw a page on synology.com that said SHA supported SHR, but of course I can't find it now.
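To put that 3-hours-per-TB figure in perspective, a trivial sketch (assuming the rate really does scale linearly with allocated space):

[code]
# Rough downtime estimate for the initial HA sync, using the
# ~3 hours/TB figure observed above. Allocated (not used) space
# is what counts.
HOURS_PER_TB = 3

def sync_downtime_hours(allocated_tb):
    return allocated_tb * HOURS_PER_TB

for tb in (2, 6, 12):
    print(f"{tb} TB allocated -> ~{sync_downtime_hours(tb)} h of services down")
[/code]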
 
I love my DS713+. Granted, it's only a two-drive NAS, but I could always expand that with a DX513. I mainly use it as a media and file server. The Download Station is beyond awesome. I'm really impressed with the flexibility it offers. It does so much that I named mine "SKYNET". For $1k you get 2x4TB and a kick-ass NAS server.

As a systems admin I would love to use them in our network. However, we are currently using other solutions... for now.

I feel like an ad now.
 
I'm wanting to see how HA affects performance, so obviously the first step was to get some non-HA baselines. Using 3TB Seagate Barracudas, I did various tests against the iSCSI target with CrystalDiskMark:

[Image: iscsi_results_small.png - CrystalDiskMark iSCSI baseline results]


The top row is 100GB thin file-based LUNs, the bottom row is 500GB block LUNs; I only did the first test on unit 1 as it was busy testing other things after that. I did some RAID-5 testing on unit 2 that isn't shown, but it mirrored the U1 results; obviously parity overhead gives some speed disadvantage. Synology Hybrid RAID seemed to hold up better for the file-based LUNs but a bit worse for block LUNs. It's worth noting 4k performance was much, much higher on the file-based LUNs; not sure how that would hold up once thin LUNs started fragmenting, but definitely worth noting. Did some share-based testing too, which was all in the 50-60MB/s range for the most part, with a few peaks at 80MB/s. Overall I was a bit disappointed with the single-unit performance; I was hoping to see it cap out the gig link. Seems CPU might be the limiting factor.

As I noted above, HA does support SHR; I was just using a funky setup playing around at first.

edit: Crap, I don't even know how accurate these results are now. I was watching the resource monitor on the Synology during further testing and noticed that while the network transfer rate was what the above said, the disk access was practically nothing, probably because the 2GB data set for these tests is smaller than the 4GB of RAM I stuck in my DS1512s... damn caches fucking up all my shit. Not as bad as when I initially did iSCSI testing on my server and thought the Synology was pushing 200+ MB/s over a 16GB file copy... turns out a 16GB VHD fits quite nicely in the 32GB of free RAM that server had.
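If you want to rule the caches out without CrystalDiskMark, a dumb sequential pass with a file bigger than the RAM on both ends does the trick. Rough Python sketch; the drive letter and sizes are just examples for my setup:

[code]
# Sequential write/read against a mapped NAS volume, sized to blow
# through the DS1512+'s 4GB of RAM.
import os
import time

PATH = 'M:/cache_test.bin'   # hypothetical mapped iSCSI volume
SIZE_GB = 8                  # > the NAS's 4GB of RAM
CHUNK = 4 * 1024 * 1024      # 4MB per write
chunks = SIZE_GB * 1024 // 4

buf = os.urandom(CHUNK)
start = time.time()
with open(PATH, 'wb') as f:
    for _ in range(chunks):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())     # don't let the client buffer lie to us
print(f"write: {SIZE_GB * 1024 / (time.time() - start):.1f} MB/s")

# Careful: the CLIENT's RAM can still cache the read-back (same trap
# as the 16GB VHD story), so size the file past both ends' RAM.
start = time.time()
with open(PATH, 'rb') as f:
    while f.read(CHUNK):
        pass
print(f"read: {SIZE_GB * 1024 / (time.time() - start):.1f} MB/s")
[/code]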

I'm testing jumbo frames now; they seem to be helping greatly.
 
When I went up to 9 passes of 4000MB to compensate for the 4GB of RAM my DS1512s have, I got these results... seems the extra RAM still helps with write caching, but files over 4GB negate the read caching:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 59.952 MB/s
Sequential Write : 66.593 MB/s
Random Read 512KB : 46.565 MB/s
Random Write 512KB : 53.132 MB/s
Random Read 4KB (QD=1) : 0.613 MB/s [ 149.8 IOPS]
Random Write 4KB (QD=1) : 8.632 MB/s [ 2107.3 IOPS]
Random Read 4KB (QD=32) : 1.127 MB/s [ 275.2 IOPS]
Random Write 4KB (QD=32) : 26.201 MB/s [ 6396.6 IOPS]
Test : 4000 MB [M: 0.1% (0.1/99.9 GB)] (x9)
Date : 2013/02/10 21:20:02
OS : Windows 8 [6.2 Build 9200] (x64)
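Side note on reading those numbers: the IOPS column is just the MB/s figure re-expressed per 4KB block (CrystalDiskMark counts 1 MB as 1,000,000 bytes). Quick Python check:

[code]
# Convert CrystalDiskMark MB/s to IOPS at a given block size.
def iops(mb_per_s, block_bytes=4096):
    return mb_per_s * 1_000_000 / block_bytes

print(iops(0.613))   # ~149.7, matches the 149.8 IOPS above (display rounding)
print(iops(26.201))  # ~6396.7, matches the 6396.6 IOPS above
[/code]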
 
Found a whitepaper under one of the XS units that actually answered a lot of my questions, specifically Storage Manager limitations with HA:


5.3 Storage Manager Limitations
Once a high-availability cluster has been formed, Storage Manager will no longer be able to perform certain actions:
  • Edit or expand volume and iSCSI LUN (block-level) size.
  • Change RAID types.
The following actions will remain available after the formation of the high-availability cluster:
  • Expand RAID Groups by adding or replacing hard disks (only for RAID Groups for multiple volumes or iSCSI LUNs).
  • Create, delete, or repair volumes and iSCSI LUNs.
  • Change iSCSI LUN (file-level) size and location.
  • Change iSCSI LUN target.
 
I was making a big old chart of different HA performance metrics, but I seem to be getting really inconsistent results... Seems to be either some glitch in the Synology OS in general or something with the beta, but sometimes I get REALLY bad write speeds on the block LUN, like 4MB/s bad; other times I get relatively inconsistent results from everything, like one CrystalDiskMark run will be 99MB/s and the next will be 64MB/s (using 9 passes). I might need to wait longer between tests; could be something to do with the cache...
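A quick way to quantify that bounce instead of eyeballing it; the run values here are just the two extremes I mentioned plus made-up filler:

[code]
# Mean / spread across repeated benchmark runs (MB/s).
from statistics import mean, stdev

runs = [99, 64, 88, 72, 95]   # hypothetical sample
print(f"mean {mean(runs):.1f} MB/s, stdev {stdev(runs):.1f}")   # ~83.6, ~15.0
print(f"spread {max(runs) - min(runs)} MB/s")                   # 35
[/code]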

The general consensus was that file LUNs seem to perform better than block LUNs for most things, especially if you load your DS up with RAM; I had to use 4000MB file sizes to get non-cached read performance metrics.

When I was getting poor performance, reads and writes were generally in the upper 50s to 70MB/s; good performance was 85+MB/s for sequential and 512k. 4k random was around 1-2MB/s with lots of cache misses and 10+MB/s with lots of cache hits for file LUNs. Block LUNs were a consistent 1-2MB/s for 4k and 4k QD32, while the file LUNs were around 50MB/s for 4k QD32, so obviously a bit of caching going on there.

Just got in 2 more drives, so I can play around with expanding a RAID-5 setup, then finally go full RAID-6 and see how that performance is.
 