Isn't the 840 EVO lacking the capacitors needed to survive a sudden power loss without losing the contents of its internal cache? This is one of the most important features of enterprise drives, and it is missing from most consumer SSDs.
Proxy ARP is, as you have found out, the cause of your issues. I still can't understand why this technology even exists, let alone why it is enabled by default. It is just a horrible crutch for bad network designs, a bit like the APIPA protocol. The more I read about it, the more this is confirmed...
Just a side note on PPTP/IPsec. If you use an IKEv2-based IPsec solution, there are no issues with having dynamic addresses and NAT at both ends. The problem is that many hardware firewalls, including enterprise ones, do not support this properly. A good implementation that I have used for many years...
If you can do things according to professional best practice for free and without a significantly bigger time investment or configuration overhead, I do not see your argument for "home usage" vs "enterprise requirements".
Layer 2 is never the way to go when using VPN, unless as a point to point between routers to be able to use interior gateway routing protocols. GRE over IPsec is a commonly used alternative for this.
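To make the GRE-over-IPsec idea concrete, here is a minimal sketch of the GRE side using iproute2. The interface name, tunnel addresses and WAN endpoints are all assumptions for illustration; the IPsec policy encrypting the tunnel is configured separately in your IPsec daemon.

```shell
# Hypothetical WAN endpoints: 198.51.100.1 (local), 203.0.113.1 (remote)
ip tunnel add gre1 mode gre local 198.51.100.1 remote 203.0.113.1 ttl 255
ip addr add 10.0.0.1/30 dev gre1
ip link set gre1 up
# Run OSPF or another IGP over the 10.0.0.0/30 point-to-point link;
# have the IPsec policy match GRE (IP protocol 47) between the two peers.
```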
Bridging whole networks into the same broadcast domain over the WAN is just bad network design and...
I would normally not do it this way if you want to deploy this as a VPN into your network. In a small home setup, the smoothest networking experience comes from terminating the VPN at the actual gateway.
Unless you do some magic on the VPN server you will get the following scenario:
1. To reach the...
He has a 50 Mbit/s downlink. Why limit yourself to anything less than the full line rate in either direction? That said, I have seen some new routers that can actually manage 50 Mbit/s in one direction.
I do not know of any embedded solution cheaper than a computer that can run OpenVPN at 100/100 Mbit/s.
OpenVPN at full throughput is very important if you are not just using it to connect to your home, but also as a common VPN gateway for hosts on your network for sharing a...
It seems to me like the Zyxel is not a layer 3 switch. I don't know if this is important for the OP, but the layer 3 support with OSPF and VRRP makes the 500X series a lot more flexible, as it can serve in both the access and distribution layers (on paper, that is; I only have experience with the...
You can use almost anything that can run Linux as a router (or run it directly on your client if it runs Linux). Use tc with HTB rate limiting and fq_codel queueing to keep things fair and latencies low. Use iptables for extremely granular classification of traffic. fq_codel combined with HTB rate limiting...
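A minimal sketch of the HTB + fq_codel combination described above. The interface name and rates are assumptions; the usual trick is to set the HTB rate slightly below your real line rate so the queue builds on your box (where fq_codel manages it) instead of in the modem.

```shell
IF=eth0        # assumed WAN-facing interface
RATE=45mbit    # assumed: a bit below a 50 Mbit/s line

# Root HTB qdisc; unclassified traffic falls into class 1:10
tc qdisc add dev $IF root handle 1: htb default 10
tc class add dev $IF parent 1: classid 1:10 htb rate $RATE ceil $RATE

# fq_codel on the leaf class keeps per-flow fairness and latency low
tc qdisc add dev $IF parent 1:10 fq_codel

# iptables can steer traffic into other HTB classes, e.g.:
# iptables -t mangle -A POSTROUTING -o $IF -p tcp --dport 22 \
#          -j CLASSIFY --set-class 1:20
```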
Same issue here. I commented in the issue thread, as I have actually gotten one annoyingly subtle network-breakage bug for the SB200-08 fixed that way already.
There are absolutely no issues with enabling SSH if you:
1. Use a decent password!!
2. Disable root
3. Use non-standard port
4. Use iptables to allow only 5 connection attempts per 10 minutes, or something like that
Example of the last:
$ipt -A WAN_FORWARD -p tcp --syn --dport 22 -m...
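For reference, here is a hypothetical completion of that kind of rule using the `recent` match; the chain name and `$ipt` variable follow the fragment above, while the exact limits (5 attempts per 600 seconds) and rule order are my assumptions, not necessarily the original rule.

```shell
ipt=/sbin/iptables

# Drop sources that have made 5 or more new SSH attempts in the
# last 600 seconds, then record every new attempt
$ipt -A WAN_FORWARD -p tcp --syn --dport 22 \
     -m recent --name SSH --update --seconds 600 --hitcount 5 -j DROP
$ipt -A WAN_FORWARD -p tcp --syn --dport 22 \
     -m recent --name SSH --set -j ACCEPT
```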
6TB is nothing if you are going to rip 500 movies, many of them Blu-rays, unless you strip away everything and encode the movies down to crap quality. I can't really see why someone would go through the pain of ripping their entire collection and not pick at least lossless MKV remuxes...
I use my own offsite backup fileserver with incremental rsync snapshots, checksum verification and mail reporting (just a Python script I have made). I back up 6 servers on two separate sites with this. I started with about 1 TB over a 0.35 Mbit/s DSL line, but now I have about 17 TiB of data over...
I went for 4TB WD SEs, and do not regret it. I have had bad experiences with non-enterprise drives in large RAID arrays at home. My 2TB REs have been perfect, and WD finally launched a much cheaper alternative that fills the gap between the Reds and the REs, with 7200 rpm drives and a 5-year warranty, so I...
After having used underpowered and expensive Cisco ASAs, I finally landed on a powerful router/firewall setup based on a modern dual- or quad-core Intel PC with good NICs, even 10 Gbit/s in one setup. The CPU usage for 1 Gbit/s of routed IP traffic with iptables firewalling/NAT on those systems is just barely...
There is no problem for home usage, only for scientific setups where an absolute guarantee that not a single bit is corrupted is needed to not invalidate the dataset or the resulting conclusions.
I have run weekly checksum verifications against precalculated backup checksums of 8TB of data (in the...
I use the current stable kernel 3.11.1 and btrfs-progs 0.20-rc1 and have had zero issues with btrfs on my fileserver. As a bonus, it is much easier to use than fdisk/LVM/mdadm too. I use it with RAID10 and compress=lzo and make heavy use of snapshots. I also have a 22TB EXT4/mdadm RAID6 array in...
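For anyone curious what that setup looks like, here is a sketch; the device names and mount points are assumptions, but the RAID10 profile, compress=lzo and snapshot commands match what is described above.

```shell
# Create a btrfs filesystem with RAID10 for both data and metadata
mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Mount any member device with lzo compression
mount -o compress=lzo /dev/sdb /srv/storage

# Read-only snapshot of a subvolume, e.g. before a backup run
btrfs subvolume snapshot -r /srv/storage/data \
    /srv/storage/snapshots/data-$(date +%F)
```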
If you don't need RAID5/6 at the moment (they are not ready for general use) and can make do with RAID10, you may consider running a new kernel and using btrfs, which in its current state is actually very suitable for large home setups, as long as parity RAID is not a requirement and you do not depend on some...
I will save only about 170 NOK a year using 10 Red drives instead of 4TB SEs (a 50W difference) here in Norway (39 øre/kWh). That is about 1/13th of the price of a new WD SE 4TB drive, which is what I currently use.
As most people who buy Reds have fewer than 10 drives (except maybe on this forum :) ), this is a moot point...
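For the curious, the 170 NOK figure above works out like this (assuming the full 50W difference running 24/7 at 0.39 NOK/kWh):

```shell
# 50 W extra * 24 h * 365 days = kWh/year, times the price per kWh
awk 'BEGIN { watts = 50; nok_per_kwh = 0.39;
             kwh_year = watts * 24 * 365 / 1000;
             printf "%.0f kWh/year, %.0f NOK/year\n",
                    kwh_year, kwh_year * nok_per_kwh }'
# -> 438 kWh/year, 171 NOK/year
```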
If you have 100 drives, then of course there is no doubt that you would feel a huge difference. Generating 500W less heat 24/7 is a big deal in a home.
Nice system by the way. That must be one of the largest home setups I have seen :)
Unless you have several tens of drives, there is no chance in hell that using Reds in place of standard 7200 rpm drives is the reason your house is cooler.
Let's say you have ten drives. Then you will save about 40-50W compared to SEs in a RAIDed 24/7 NAS build. In other words you are...
I just bought 4 x 4TB SEs too (just before the price suddenly increased by 20%). I have 11 WD RE 2TB drives in that chassis too.
I do not understand why someone would pick a drive for NAS purposes based on things that are completely irrelevant in the long run, like a couple of watts saved or a few dB of...
I prefer Debian because it is more thoroughly tested as a whole. It is the little things that matter when using it for server purposes, like Ubuntu shipping open-vm-tools that fails to compile its kernel modules against their default kernel 3.5, or not having a real...
EXT4 works fine with filesystems above 16TB, but you need kernel version > 3.7 and e2fsprogs > v1.42.6 for online resizing to more than 16TB given that you created the filesystem with the 64bit flag.
If you are creating a fresh filesystem > 16TB I think you actually can create it out-of-the-box...
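A short sketch of both cases; the device name is an assumption. Note that creating with `-O 64bit` up front is what later lets you grow past 16TB online.

```shell
# Create the filesystem with the 64bit feature enabled
mkfs.ext4 -O 64bit /dev/md0

# Later, after enlarging the underlying array: online grow of the
# mounted filesystem to fill the new space (kernel > 3.7,
# e2fsprogs > 1.42.6 for the >16TB case)
resize2fs /dev/md0
```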
I would have used RAID6 across 15 drives plus 1 hot spare, not RAID50 in one combination or another. Better space utilisation and added reliability.
With so many concurrent writes (64 writers) I would personally have opted for RAID10 instead of the parity based RAIDs, but that may not be an option...
I agree that no scientific conclusions can be drawn from my small sample (which is actually much larger, as I am responsible for many servers at work, many of which have WD RE4 drives). It is only my personal conclusion based on experiences with consumer-grade drives vs enterprise drives...
I bought 8 Seagate Barracuda 2TB LP drives. 6 of them have either died or are dying (failing SMART tests and accumulating read errors/sector reallocations). In their defence, they are not made to withstand the vibrations of RAID setups or 24/7 usage. In other words, you can blame this on "usage...
The algorithm I have implemented is something like this:
1. Create list of files in the current backup
2. Create list of files that have changed between the current backup and the previous snapshot. The list is created by running rsync in dry run mode comparing the snapshots with...
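The first two steps above can be sketched roughly like this; the paths, filenames and the exact rsync flags are my assumptions about the setup, not the actual script.

```shell
# Assumed layout: current backup in $NEW, previous snapshot in $PREV
NEW=/backup/2013-10-01
PREV=/backup/2013-09-30

# 1. List all files in the current backup
(cd "$NEW" && find . -type f) > all-files.txt

# 2. Dry-run rsync against the previous snapshot; each file that
#    would be transferred (i.e. has changed or is new) is printed
rsync -a --dry-run --out-format='%n' "$NEW/" "$PREV/" > changed-files.txt
```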
I have the same concerns for my backup as the OP regarding integrity and checksum verification, so I thought I could share my backup setup, which does this automatically for me and seems to scale very well without additional management.
I have the following setup for my private system:
- My old...
Because I don't trust encryption not to create problems when I really need the files, I don't use it for my main backup. The only reason for using it here is that the disk sits in a SATA dock in my office and could easily be snatched by anyone coming into our office.
Regarding RAID6 and silent...
Robocopy is a very nice tool, but it has some serious flaws as a backup system compared to e.g. rsync.
- It doesn't support hardlinking, which means you have to make complete copies of all your data if you want things like daily snapshots of your system.
- If it detects a change in...
This research was the reason I implemented the md5 checksumming in the first place.
But I do not completely agree with you regarding RAID, nor does the research. In normal day-to-day usage RAID won't do anything good, as parity is not checked on every read, but during scrubbing, given a...
When I think about it, I could calculate md5sums only once a week, for example. Since I use rsync with --link-dest, all unchanged files are hardlinked anyway, so they point to the same area on disk. If the check from last week validates, it is safe to presume that the files from the last nightly backup...
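The hardlink property above is easy to demonstrate: two snapshot directories sharing an unchanged file (as rsync --link-dest produces) point at the same inode, so a checksum of one copy covers the other. A small illustration with assumed names:

```shell
mkdir -p snap1 snap2
echo "data" > snap1/file
ln snap1/file snap2/file            # hardlink, like rsync --link-dest makes

stat -c '%i' snap1/file snap2/file  # identical inode numbers
md5sum snap1/file                   # this checksum is valid for snap2/file too
```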
In a RAID6 setup, the monthly mdadm scrub will find and resolve most bitrot caused by the hard drives, unless it is of massive proportions. I also run continuous SMART monitoring with daily self-tests of every drive and weekly long self-tests to catch any errors early.
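For reference, a manual mdadm scrub (distributions usually schedule this via cron) looks something like this; the array name is an assumption:

```shell
# "check" reads everything and counts parity mismatches;
# "repair" additionally rewrites mismatched stripes
echo check > /sys/block/md0/md/sync_action

# Watch progress, then see how many mismatched sectors were found
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt
```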
This will only catch...
Config backup:
I have several Linux servers running, both physical and virtual. Every night the configuration is backed up to the fileserver by an rsync script. Only edited, non-default files are backed up. This also serves documentation and versioning purposes.
Storage backup:
Everyone...
A full NAT table means no room for new connections. In other words, you will find that browsing slows to a halt, because you can only download new content from a page as old connections gradually clear. Opening a modern web page can easily spawn 50 new connections because of...
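On a Linux-based router you can check how close you are to that limit and raise it; a sketch (the sysctl names apply to reasonably recent kernels, and the new limit here is just an example):

```shell
# Current number of tracked connections vs the table limit
sysctl net.netfilter.nf_conntrack_count
sysctl net.netfilter.nf_conntrack_max

# Raise the limit (costs some kernel memory per entry)
sysctl -w net.netfilter.nf_conntrack_max=65536
```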
What does TrueCrypt do if you get a silent bit error on one of the drives holding a large TrueCrypt container? Is the whole container corrupt?
Something to think about before making one huge 40TB TrueCrypt volume.