So I noticed last night that I had a drive failure on Tuesday, oddly enough right after my eth0 card went down and then came right back up. I figured maybe a power spike or a quick flicker, so I decided to hot-remove and hot-add the drive. About 2.5 hours (of 3) into the recovery, a second drive died... dammit. So I was playing with some commands (Linux software RAID 5) and managed to force one of the drives to clear its faulty status, and now the array is in recovery with one drive still marked faulty. How is this possible? I was under the impression that two dead drives in RAID 5 meant loss of all data. Or is the data going to be corrupted once the rebuild finishes?
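For anyone curious, this is roughly the kind of mdadm sequence involved. I'm not claiming these are the exact commands from my session; the device names (/dev/md0, /dev/sdc1, etc.) are made up, and your array will differ:

```shell
# Check which members the array currently considers faulty:
cat /proc/mdstat
mdadm --detail /dev/md0

# Hot-remove and hot-add a member that got kicked out
# (what I tried after the first "failure"):
mdadm /dev/md0 --remove /dev/sdc1
mdadm /dev/md0 --add /dev/sdc1

# After a second member drops out mid-rebuild, the array can
# sometimes be stopped and force-assembled; --force clears the
# faulty flag on the most recently ejected member so the array
# starts degraded instead of dead:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
```

Note these commands need root and operate on real block devices; run them against the wrong array and you can make things much worse.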
rats, guess it's time to get some more drives in the server to replace the supposed dead ones...