Western Digital Ships 24TB & 28TB Hard Disks, Declares "Total Supremacy"

I know nothing about Snapraid or Drivepool, but I am going to have to read up now.

I have always been a huge ZFS fan. It does all of its magic in software on a plain JBOD, and it is reportedly one of the most reliable RAID-like solutions out there due to how well it combats bit rot.

I believe TrueNAS Core (formerly FreeNAS) still has a web-based GUI implementation of ZFS on top of a barebones FreeBSD install. I haven't used it in ages though, instead favoring the DIY approach using ZFS from the command line.

I currently have my main pool configured as follows:
Code:
 state: ONLINE
  scan: scrub repaired 0B in 10:02:09 with 0 errors on Sun Nov 12 10:26:14 2023
config:

    NAME                                               STATE        READ  WRITE CKSUM
    pool                                               ONLINE       0     0     0
      raidz2-0                                         ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
      raidz2-1                                         ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
    special  
      mirror-4                                         ONLINE       0     0     0
        Inland Premium NVMe 2TB                        ONLINE       0     0     0
        Inland Premium NVMe 2TB                        ONLINE       0     0     0
        Inland Premium NVMe 2TB                        ONLINE       0     0     0
    logs  
      mirror-3                                         ONLINE       0     0     0
        Intel 280GB Optane 900p                        ONLINE       0     0     0
        Intel 280GB Optane 900p                        ONLINE       0     0     0
    cache
        Inland Premium NVMe 1TB                        ONLINE       0     0     0
        Inland Premium NVMe 1TB                        ONLINE       0     0     0

errors: No known data errors

So, essentially a ZFS equivalent of a 12-drive RAID 60 on the hard drives, plus a three-way mirror of 2TB Gen 3 MLC NVMe drives for small files and metadata, two mirrored 280GB Optanes for the log device (speeds up sync writes), and two striped 1TB Gen 3 NVMe drives for read cache.

It works pretty well for me. Well, maybe all except the read cache. The hit rate on those is atrocious; despite being 2TB total, that read cache does very little for me.
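
For anyone curious, a layout like that gets built in one shot from the command line; roughly like this, although the device names below are just placeholders rather than my actual disks:
Code:
zpool create pool \
    raidz2 sda sdb sdc sdd sde sdf \
    raidz2 sdg sdh sdi sdj sdk sdl \
    special mirror nvme0n1 nvme1n1 nvme2n1 \
    log mirror nvme3n1 nvme4n1 \
    cache nvme5n1 nvme6n1
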
That's a lot of "documentary" storage
 
That's a lot of "documentary" storage

It's a mix of things: raw photo and video files from my camera hobby (which is now pretty dormant); disk image dumps of multi-TB drives from clients I'm working on, written over the network either for the repair work itself or as a "snapshot" of their current state in case something on the client breaks; the entirety of my media library (mostly self-ripped Blu-rays at native bitrate, just remuxed); TV DVR recordings (once they fill up the dedicated 1TB mirror, the oldest recordings are pushed to the spinning drive pool to free up space for new ones); and then all of my day-to-day files.

The TV recordings take up a huge chunk. A few months back I had to put my foot down. My better half started a recording for a show named "Expedition Unknown" back in like 2014, which she then proceeded to never watch. Every new recording is automatically configured to skip episodes we already have, but in her wisdom (maybe unintentionally?) she changed that setting so it recorded every single airing. It went on that way for almost a decade before I took a closer look.

I forget how many recordings there were when I caught on. I want to say over 3,000 episodes of hour-long, poorly compressed MPEG-2 files at like 4 GB apiece. So that was roughly 12TB right there, in one damn show. I've done some housekeeping since then (including deleting all of Expedition Unknown) and have since brought TV recordings down to about 8.2TB.

The media library (including music, film and TV episodes at as high a bitrate as I can get them) currently sits at about 44TB. Older stuff in 1080p, newer stuff in 4K.

There are also snapshots dating back to the inception of my ZFS use in 2012: a baseline snapshot, followed by daily snapshots that are kept for a week, weekly snapshots that are kept for a month, monthly snapshots that are kept for a year, and annual snapshots that are permanent. (All of these snapshots are taken and deleted through a custom script.)
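
The daily tier of that script boils down to something like this (the pool name and retention here are illustrative, not my actual script):
Code:
#!/bin/sh
# Take today's recursive snapshot across the pool
POOL=pool
zfs snapshot -r "${POOL}@daily-$(date +%Y-%m-%d)"

# Keep the newest 7 daily snapshots on the pool root, destroy older ones
# (-d 1 limits the listing to the pool dataset's own snapshots)
zfs list -H -t snapshot -o name -s creation -d 1 "${POOL}" \
    | grep "@daily-" \
    | head -n -7 \
    | xargs -r -n 1 zfs destroy -r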

It all adds up rather quickly, especially when you are a data hoarder like me who rarely ever takes the time to delete anything. As of right now I am at about 44% capacity.

While you can fill it up if you please, ZFS performance starts suffering somewhere past 80%-85% capacity. That, IMHO, is the biggest downside of ZFS and any other copy-on-write file system.
 
How much time do you spend on the datahoarders sub on Reddit? :D
 
That's a very nice array right there.

Thank you sir. I put a lot of thought into it over the years. It's not fully "enterprise" hardware but big chunks of it are. I'm just a guy. I don't have an "enterprise" budget.

I try to have enough RAM free to give ZFS 128GB of ARC to work with, for best performance. The hard drives are all directly attached to my LSI 9305-24i HBA (through the backplane of course) without any SAS Expander, for best performance and reliability.
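
On Linux that ARC ceiling is just a module parameter; a cap like mine looks something like this (137438953472 bytes = 128GiB):
Code:
# Persistent: /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=137438953472

# Or change it on the fly, no reboot needed
echo 137438953472 > /sys/module/zfs/parameters/zfs_arc_max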

I selected the dual RAIDZ2 configuration for a balance of integrity, storage space efficiency and performance. This way, if I lose one drive on one of the RAIDZ2 vdevs, I can resilver it while still having a disk's worth of parity left, so anything that fails a checksum during the resilver can still be repaired, which prevents bit rot. If I lose two drives on the same RAIDZ2 vdev, I can still resilver, but with no parity left the system can detect corruption during the process without being able to repair it, so there is a small risk of bit rot. More than two drives gone on any one of the disk vdevs and the whole pool is trash, and I'll be restoring from backups. My plan is to never let more than one fail at a time, and to replace it quickly when it does. I don't have hot spares, but I have thought about it.
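
When a drive does die, the swap itself is a one-liner and the pool stays online while it resilvers (device names here are placeholders):
Code:
zpool replace pool sdc sdm   # failed disk, then its replacement
zpool status pool            # watch the resilver progress
# A hot spare, if I ever add one, would just be:
# zpool add pool spare sdn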

Special vdevs (for accelerating small files and metadata) are relatively new in the grand scheme of things, only having been added to ZFS a few years ago. I added three mirrored NVMe drives in this capacity about two years ago. I went with a three-way mirror because if I lose the metadata, the whole pool is trash, so I wanted the same fault tolerance I have on the other vdevs: I can lose two drives and still keep working.
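
Adding one after the fact is just another vdev add, plus a per-dataset knob for what counts as a "small" block (the 32K cutoff and dataset name below are only examples, not my settings):
Code:
# Three-way mirrored special vdev for metadata and small blocks
zpool add pool special mirror nvme0n1 nvme1n1 nvme2n1

# Send blocks of 32K and smaller on this dataset to the special vdev
zfs set special_small_blocks=32K pool/files
As far as I know, you can't remove a special vdev again once the pool has raidz vdevs in it (unlike a cache or log device), so it's worth getting the redundancy right up front.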

The SLOG (separate log device, i.e. a separate device for the ZFS Intent Log, or ZIL) is a stranger one. Usually the ZIL lives inside the main pool unless you specifically add separate drives for it. Many people call it a write cache, but while it does speed up some writes, that isn't strictly accurate. It only comes into play for sync writes. For async writes, as soon as the data is received in RAM it is reported back to the sender as having been received, and it is committed to non-volatile disk later in the background. Sync writes come in when you want to protect that in-flight data in RAM from being lost if there is a power outage or crash before the full write is complete. For most file storage this doesn't matter: if your system goes down mid transfer, you have a partial file anyway, and you are probably going to delete and replace it. For databases and VM drive images, however, having those last writes complete can mean the difference between a corrupted and a readable database/drive image.

The SLOG devices are just separate drives for the log that is otherwise embedded in the main pool. This SIGNIFICANTLY speeds up the log, if you choose the right disk. During normal operation, the log is only written to, never read from. Every write identified as a sync write by the host (unless the dataset you are writing to is configured to override that and treat all writes as either sync or async) is written in quick shorthand to the log, after which it is reported as committed to the sender. The system then continues to commit that write from RAM to the main pool as it would with an async write. The only time you ever read from the SLOG is on startup after a crash or power outage. Then the system will reassemble the shorthand in the log device and integrate it into the pool before mounting it again.
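
Mechanically it's just another vdev, and whether a given dataset's writes are treated as sync is a per-dataset property (device and dataset names below are made up):
Code:
# Mirrored SLOG (the Optanes, in my case)
zpool add pool log mirror nvme3n1 nvme4n1

# Per-dataset override of sync behavior
zfs set sync=standard pool/files      # honor whatever the application asks for (default)
zfs set sync=always   pool/vmimages   # force every write through the ZIL
zfs set sync=disabled pool/scratch    # never wait; fast, but in-flight data dies with a crash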

The only time you'll ever lose data from a failed SLOG is if the system crashes and the SLOG fails at the same time. I used a mirror here as it is best practice, but in truth, I probably didn't have to. This risk is very small.

Ideal disks here have very low write latency, as that determines how quickly the writes are committed to the SLOG, and thus how much sync writes are sped up across the entire pool. Most consumer SSDs are utterly useless in this role, almost as bad as spinning rust. Battery-backed RAM drives (like ZeusRAM), Optane drives and some enterprise SSDs are really good though. Low write latency, as well as either capacitor or battery backing to allow the drive to finish the in-flight writes at crash/power-out time, are the key attributes.

The last one is the read cache (L2ARC). It is just there to cache and speed up reads of data that exists elsewhere; it can be completely lost without any loss of data. So I decided to just stripe two 1TB drives for this role, for maximum size and performance. Though as I mentioned, it is near useless in my application. Anything that is going to get a cache hit is getting it in RAM. While there is 2TB of cached data on those two striped NVMe drives, I am currently seeing a 0% hit percentage. I'm not sure if it is actually 0, or just close enough to 0 that it has been rounded down, but either way, that is pretty bad :p
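
That is also why I didn't bother mirroring them; cache vdevs can be added and yanked at will with no risk to the pool (placeholder names again):
Code:
zpool add pool cache nvme5n1 nvme6n1   # L2ARC devices are striped; no redundancy needed
zpool remove pool nvme6n1              # and they can be dropped at any time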

Edit: Actually, that data on the cache hits and misses is off. I just looked and it is telling me I have a 100% hit rate on the RAM cache which can't be right.

Then I realized the version of the arcstat script I have only seems to be pulling the last second or so of data, and there isn't much going on on the server right now. There is a way to pull all stats since boot, but I can't remember what it is. That said, I remember it being really bad long term. Like under-1% bad. But that was back when I only had 512GB of cache, so maybe it is a little better now. I don't know. I just don't have specifics right now.
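
For reference, on Linux the counters since module load live in /proc/spl/kstat/zfs/arcstats, so something along these lines spits out lifetime hit rates (just one way to do it):
Code:
# Lifetime ARC (RAM) hit rate
awk '/^hits / {h=$3} /^misses / {m=$3} END {printf "ARC: %.1f%%\n", 100*h/(h+m)}' \
    /proc/spl/kstat/zfs/arcstats

# Lifetime L2ARC (cache vdev) hit rate
awk '/^l2_hits / {h=$3} /^l2_misses / {m=$3} END {printf "L2ARC: %.1f%%\n", 100*h/(h+m)}' \
    /proc/spl/kstat/zfs/arcstats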
 
How much time do you spend on the datahoarders sub on Reddit? :D

I try to spend as little time as humanly possible on Reddit. I absolutely hate that site. I find the user interface close to unusably bad, with all the upvoting and downvoting shit leading to a disconnected and incomprehensible mess. And if there is a way to post more than one picture in a post, I haven't figured it out yet. It is so bad.

I prefer various forums and mailing lists. I have spent lots of time on the ZFS on Linux mailing list, the ServeTheHome forums, and these forums, but I learned most of what I know from the FreeNAS forums back when I used FreeNAS.
 
A man of culture, I see. JBOD + SnapRAID + DrivePool is the way for home media. There isn't anything else GUI-based with SnapRAID-like functionality that I'm aware of, but restoring is trivial, and I definitely recommend practicing it a few times.

You could start a thread in the Storage subforum about replacing SnapRAID and you will get plenty of recommendations; they'll boil down to ZFS, TrueNAS, or unRAID. But I'd avoid ZFS, since striping introduces unnecessary risk for home media IMO, never mind the inability to easily expand one disk at a time or mix different-sized disks, plus you lose individual drive spinup/spindown. unRAID is good and probably closest to DrivePool + SnapRAID in functionality.
ZFS, TrueNAS and unRAID were mentioned a few times, but I would like to stay Windows-based, not move to a dedicated OS at this time.
Media does take up the most space on this machine, but it also serves as backup for multiple other machines around the house/family. The most important data is backed up to external/offline drives and offsite storage, but that takes up less than 4TB.
Software RAID is preferred, to save me from having to re-rip all the media. I remember losing a bunch of the bad Seagate drives in a very short time and losing like 300 rips. I'm learning not to hoard as much now and not to waste time ripping as much as I used to. I'm replacing the drives mostly because they are 6 to 7 years old; I don't want to push my luck.
I will look into the newest SnapRAID, learn a little more, and do some practice recoveries.
Thanks.
 
I decided to ditch basically all of the media on my NAS. I moved to the minimal TV service (it didn’t cost much more on top of having gbit), so I don’t need space for DVR. I trashed all the old movie rips - haven’t watched them in 10 years anyway. I trashed most of mp3s as I haven’t listened to any of those in more than 10 years. I’m down to about 1.5TB of stuff I want to keep, and another TB of space for VMs, and 300GB scratch space for Boinc.

Maybe someday I'll want all the space again, but it's really nice to consolidate to an NVMe mirror, a SATA SSD mirror, and the one old VelociRaptor.
 
Wait, they're offering SMR disks at higher capacities? Not that long ago the attempt to cheapen manufacturing despite the limitations of SMR meant people were scrambling to ensure the new disks they were buying and/or pulling out of externals were CMR. I thought, unless things have changed considerably (which is possible, I suppose), that hardly anyone buying an 8TB-16TB, much less a 24TB+, drive would accept the limitations of SMR for their use cases. I am a bit curious about the difference between Ultrastar and WD Gold in this case though.

As long as SMR stays in the data center where it belongs, it's fine. For storing backups, older media on your social network of choice (the most recent stuff probably needs to be on flash or RAM for performance), and the like, you're looking at something that's close to being write-only storage anyway, so having to accept gigabyte-sized blocks isn't a problem. It only ever became an issue because WD's engineers didn't force the management and sales droids who wanted to sneak it into consumer devices to use it on their own work machines first and realize how horrifically unsuited it was for those use cases.
 
I decided to ditch basically all of the media on my NAS. I moved to the minimal TV service (it didn’t cost much more on top of having gbit), so I don’t need space for DVR. I trashed all the old movie rips - haven’t watched them in 10 years anyway. I trashed most of mp3s as I haven’t listened to any of those in more than 10 years. I’m down to about 1.5TB of stuff I want to keep, and another TB of space for VMs, and 300GB scratch space for Boinc.

Maybe someday I'll want all the space again, but it's really nice to consolidate to an NVMe mirror, a SATA SSD mirror, and the one old VelociRaptor.

Farewell fellow data hoarder.
 
So, essentially a ZFS equivalent of a 12-drive RAID 60 on the hard drives, plus a three-way mirror of 2TB Gen 3 MLC NVMe drives for small files and metadata, two mirrored 280GB Optanes for the log device (speeds up sync writes), and two striped 1TB Gen 3 NVMe drives for read cache.

It works pretty well for me. Well, maybe all except the read cache. The hit rate on those is atrocious; despite being 2TB total, that read cache does very little for me.
Isn't RAID 60 known for good read performance in general? The small cache on the drives might be enough. Also, do you really need that many NVMe drives for cache? One big RAID 1/5/6 on the NVMe cache pool might be better optimized for writes if you're not having issues with the read cache.
 

Western Digital made both enterprise storage planners and dedicated porn hoarders swoon today with the availability of 24TB CMR hard disks, while production on 28TB SMR hard disks ramps up during enterprise trials.

The new lineup of 3.5-inch 7200 RPM hard drives includes Western Digital's Ultrastar DC HC580 24 TB and WD Gold 24 TB HDDs, which are based on the company's energy-assisted perpendicular magnetic recording (ePMR) technology. Both of these drives are further enhanced with OptiNAND to improve performance by storing repeatable runout (RRO) metadata on NAND memory (instead of on disks) and improve reliability.

The new drives are slightly faster than predecessors due to higher areal density. Meanwhile, per-TB power efficiency of Western Digital's 24 TB and 28 TB HDDs is around 10% - 12% higher than that of 22 TB and 26 TB drives, respectively, due to higher capacity and more or less the same power consumption.

Source

I'll never go back to that garbage

#1 Noise

#2 Constant Noise

#3 Speed

What happened is that Windows effectively stopped supporting HDDs, so you get that grinding sound 24x7. You disabled Superfetch, then Windows 10 renamed it to something else; you disabled that and something else, and it may still grind all day. Maybe good for long-term surveillance, when you need a loop of a few months.
 
It's pretty quiet if you only use it as a storage drive. Can't hear mine unless something spins it up like installing a new program.
 
It's pretty quiet if you only use it as a storage drive. Can't hear mine unless something spins it up like installing a new program.

Keep it as a secondary drive, and make sure swapping is disabled on it, and it certainly can be. Though I don't trust Windows since Windows 10. It just does whatever the fuck it wants in the background, even creating and resizing partitions...

I honestly don't find hard drive ticking to be disruptive. It's a nice subtle indication that something is going on, much like an HDD LED :p

Though I could totally see how it would be annoying if it were used on an OS drive and going constantly like Windows does on OS drives these days.

I haven't used a hard drive like that in over a decade at this point though. IMHO, 2012 should have been the year of the death of the Hard Drive as a boot/OS/program drive. The fact that they still held on as late as 2017 in some big box brands is just beyond me.
 
That's how it was for me: switched to an SSD for the OS drive around 2012, then several years later all games were on SSD. Had a mech drive just for legacy storage at that point and haven't had any issues with noise. I also don't mind hearing it spin up from time to time; reminds me of the old days.
 
That's how it was for me: switched to an SSD for the OS drive around 2012, then several years later all games were on SSD. Had a mech drive just for legacy storage at that point and haven't had any issues with noise. I also don't mind hearing it spin up from time to time; reminds me of the old days.

Yep.

I picked up my first SSD (an OG OCZ Agility 120GB) back in late 2009 or early 2010 some time (I can't remember now). The thing was like $320 for 120GB, but it was SO worth it.

I actually split that 120GB SSD down the middle, like 35GB for Linux, and 85GB for Windows. I rotated 2-3 games installed at a time on the windows partition of that SSD. There was no going back to running them off a hard drive.

I kept a couple of WD Green 2TB drives (or were they 3TB? Can't remember) in the desktop for files, media library, etc., but a year or two later I transitioned those to a NAS. Haven't had hard drives in client machines since :p

Well, except for that work machine I got issued in 2019, a Dell XPS with a 2.5" laptop hard drive in it. Oh my god was that brutal.
 
I'll never go back to that garbage

#1 Noise

#2 Constant Noise

#3 Speed

What happened is that Windows effectively stopped supporting HDDs, so you get that grinding sound 24x7. You disabled Superfetch, then Windows 10 renamed it to something else; you disabled that and something else, and it may still grind all day. Maybe good for long-term surveillance, when you need a loop of a few months.

Don't hold back! :p

Did you mean spinning HDD in general, or WD drives? If WD, what would you suggest for large spinners?
 
If we're going to talk client storage history, that's a different ball of wax.

In 2008, basically on the release date, I bought 6x 300GB WD VelociRaptors. I used Intel Matrix RAID to make a 180GB RAID 0 volume that I installed the OS and a select game or two on. Then the rest of the space went into a 1TB-ish RAID 5 array.

Those were the last spinners I've ever bought for a desktop. I bought an Intel X25-E 64GB in the summer of 2009 or so. It was crazy expensive and crazy fast, so I put my OS on the X25-E and dropped two of the VelociRaptors from the array to free up a SATA port. Eventually I put an OCZ Vertex 2 in the open SATA slot for games, and then slowly but surely retired the VelociRaptors, eventually moving them into my FreeNAS as a faster-than-7200RPM storage volume.
 
I went a different route. I had very bad luck with early consumer SSDs; I could kill them at will by benchmarking ZFS over a crypto layer on them.

So I went for lots of RAM and kept platters, plus small SSDs for small things like my browser profile. I switched to platforms with registered RAM for my main workstation and it has 384 GB now, with most drives being HDDs in ZFS. Some SSDs for special services.

My gaming machine has transitioned to all SSDs just this year, and it also has 128 GB RAM. The RAM is a godsend for playing DCS even if you have the game on a WD SN850X like I do. It caches the game logic and multiple airplanes and is the only path to bearable load times. Maybe an Intel Optane drive would help, but I don't want to introduce already discontinued technology into my machine circus.
 
Seriously considering some Solidigm 61.44T U.2s. I could completely ditch the NAS thing altogether.
 
Seriously considering some Solidigm 61.44T U.2s. I could completely ditch the NAS thing altogether.
depending on your NAS, just slap a dock into it for U.2 drives,
https://global.icydock.com/products-c5-s47-i0.html

1700763534060.png
 
Seriously considering some Solidigm 61.44T U.2s. I could completely ditch the NAS thing altogether.

I was kind of curious what they run for. I couldn't find them for sale anywhere, only the smaller 15.36TB version.

Weird form factor.

Yes, it is expensive, but at ~ $1400 it's not as expensive as I was expecting.
 
Provantage has them in 2.5” U.2. The 60 TB is a smidge under $4k USD. Marked as a special order. The 30.72 is $2.5k.

I’m actually quite tempted vs buying a bunch of 7.68 TB units.
 

I just don't see the point. As far as I am concerned, mass storage doesn't need the speed of NVMe; it's wasted. I'm not even sure I'd do SATA SSDs for mass storage, as long as hard drives are cheaper.

For my mass storage I look at two things. Reliability, and cost per TB. That's it.

My mass storage currently consists of twelve 16TB 7200rpm Seagate Enterprise drives. When I run out of space and need an upgrade, I'll likely be buying whatever is cheapest per TB for the size of upgrade I think I need, while still being at least a little "enterprisey" (technical term). Performance literally doesn't factor in here. I'll throw in some smaller NVMe drives to help with caching and the like, but that's about as far as I will go.
 
I just don't see the point. As far as I am concerned, mass storage doesn't need the speed of NVMe; it's wasted. I'm not even sure I'd do SATA SSDs for mass storage, as long as hard drives are cheaper.
If they made that capacity as SAS, I’d do it in a heartbeat; I want to get rid of spinning storage for reliability and power consumption reasons. The wattage consumed by my current spinning storage is not insignificant; this density of SSD would let me get rid of two external disk shelves and reduce the internal drive count in the servers proper from 12 to 4. Also, rebuild times would be quite a bit better vs. a spinning disk being capped at ~120-150 MBytes/s linear write.

For NVMe I’ll need new servers, which are on my list.
 
I’ve recently been buying used Enterprise drives. SAS drives are cheap once they come off lease. So this should be great news for me in about 6-7 years.
 
I’ve recently been buying used Enterprise drives. SAS drives are cheap once they come off lease. So this should be great news for me in about 6-7 years.

Interesting.

I never thought of doing that. I just assumed they would be run into the ground at that point.

How cheap are we talking, and how reliable have you found them to be? Do you get to look at SMART data before buying?
 
Interesting.

I never thought of doing that. I just assumed they would be run into the ground at that point.

How cheap are we talking, and how reliable have you found them to be? Do you get to look at SMART data before buying?
In my experience, 90%+ rated life remaining. Reputable sellers will provide data in the listing or on request; however there are plenty of shitty sellers who manipulate the SMART data.

Price-wise, a 50%+ discount from the original price can be had. Cheap enough to buy a cold spare or three, easily.
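
And when they show up, it's worth a quick sanity check of your own with smartctl rather than trusting the listing (device path is a placeholder):
Code:
smartctl -x /dev/sda        # full health dump: check power-on hours, plus
                            # reallocated/pending sectors (SATA) or grown defects (SAS)
smartctl -t long /dev/sda   # kick off a long self-test before trusting it with data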
 
Interesting.

I never thought of doing that. I just assumed they would be run into the ground at that point.

How cheap are we talking, and how reliable have you found them to be? Do you get to look at SMART data before buying?
There were some 18TB ones the other day, I believe, going for $160 apiece.
 
In my experience, 90%+ rated life remaining. Reputable sellers will provide data in the listing or on request; however there are plenty of shitty sellers who manipulate the SMART data.
I worked for a place for a while that kept their ERP on a 20-disk RAID array. The IT guy had about 10 spare disks and bought extra used ones every once in a while. I want to say he had to replace a drive every month or two, but I might not be remembering right; it was 15 years ago. He didn't seem to mind the effort, though.
 
These are amazing feats of engineering.

I've been trying a different use case: I picked up one of the 20TB x16s on its ~$250 sale, put all our photos/videos/scans/family data on it, then placed it in a safety deposit box at the bank.
 
depending on your NAS, just slap a dock into it for U.2 drives,
https://global.icydock.com/products-c5-s47-i0.html
Getting offtopic but be forewarned that U.2 + PCIe Gen4 + DIY is an insane compatibility rabbit hole. Meaning if you don't have the exact right mix of adapter/backplane + cables to support the high signal integrity requirements of PCIe Gen4, to the naked eye the drive will appear to be working, but underneath there could be silent corruption happening as the drive struggles to error-correct and cope with a sea of signal noise. If you're lucky, subpar connectivity may manifest as WHEA-17 errors in Event Viewer, but not always. In that IcyDock product matrix, it's only the ToughArmor MB699VP-B V3 that will properly support Gen4 U.2, and only with the exact right cable. Most of the cheapo shit on Amazon (single U.2 PCIe adapters and cables) is asking for hours of troubleshooting headaches. Some people will resort to just setting the PCIe slot to Gen3, but I've seen instances where the U.2 drive still isn't happy. By and large, enterprise U.2 drives are designed for server-class Gen4 NVMe backplanes.

Enter PCIe redrivers and retimers; a decent 2- or 4-port HBA equipped with a redriver, plus proper cabling, will run hundreds of dollars, which tends to wipe out any savings from having bought a used U.2 enterprise drive. There are some very long threads on this on the Level1Techs forum, and secondarily the ServeTheHome forum. It's all doable, there's just a potential time commitment involved in troubleshooting which won't be worth it to most people.
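
If you do go down that road, at least verify what the drive actually negotiated afterwards; on Linux something like this shows whether you really got Gen4 x4 or silently trained down (the PCIe address is a placeholder):
Code:
# Find the drive's PCIe address, then inspect the negotiated link
lspci | grep -i "non-volatile"
lspci -vv -s 41:00.0 | grep -E "LnkCap:|LnkSta:"
# On the LnkSta line, a healthy Gen4 x4 link reads "Speed 16GT/s, Width x4";
# 8GT/s or a narrower width means the link trained down.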
 
I use the M.2 to U.2 adapters just fine with my Gen4 Optane P5800Xs as my boot drives. Getting full bandwidth/performance and no weird stuff with my OSs.

I plan on using the same adapters with a Highpoint 2x M.2 RAID controller.
 
So to summarize (& what WD is saying with these drives):

Size DOES matter, hehehehe :D
 
I use the M.2 to U.2 adapters just fine with my Gen4 Optane P5800Xs as my boot drives. Getting full bandwidth/performance and no weird stuff with my OSs.

I plan on using the same adapters with a Highpoint 2x M.2 RAID controller.

Yikes. That's some serious dough invested in boot drives.

I'm not too proud to admit I'm a little jelly. I feel like there is an "if I won the lottery I wouldn't tell anyone, but there would be signs" meme in here :p

Optane DC P5800X drives are pretty much the holy grail of high-performance storage right now, and their pricing puts them out of reach for most except those enterprise users that really need the low-queue-depth 4K random read performance.

I boot off a single 960GB Optane 905p, and I thought that was even bordering on silly. The slower sequential speeds (due to it being Gen3) slow some things down a little bit (but not as much as one would think) but the fantastic gains in low queue depth 4k random performance more than make up for it, IMHO.

Windows updates happen in a small fraction of the time they take on even a Samsung 990 Pro. Traditional (non fast-boot/hibernate) boots are super fast as well, as is loading programs and games that are not optimized for sequential access; Starfield benefits a lot, and stutter is greatly reduced. So it is a tradeoff, but a tradeoff I am willing to make.

The Optane DC P5800X, however, is a no-tradeoff solution right now. I just can't justify dropping thousands of dollars on a boot drive. :p
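
If anyone wants to see the difference for themselves, a low-queue-depth random read test is where Optane embarrasses everything else; an fio run something like this (test file path is a placeholder) is what I mean by 4K QD1:
Code:
fio --name=qd1-randread --filename=/mnt/test/fio.dat --size=8G \
    --rw=randread --bs=4k --iodepth=1 --numjobs=1 --direct=1 \
    --runtime=60 --time_based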
 
Interesting.

I never thought of doing that. I just assumed they would be run into the ground at that point.

How cheap are we talking, and how reliable have you found them to be? Do you get to look at SMART data before buying?
My recent batch was 10TB for $66 each. The only real downside is that you have to get a RAID/HBA PCIe controller to use the SAS drives. But those are down to the $20-$30 range if hard drive speeds are all you are worried about. Plus those will connect at least 8 drives (more if you use expanders).

The SMART data shows they have been running almost straight for 5 or 6 years. Very few power cycles, but lots of hours. The 10TB drives I got were manufactured in 2018.

SAS drives are usually rated better than SATA drives too, in terms of error rate and mean time between failures. The error rate might be more relevant when you get into the 10+ terabyte range.

My first foray into SAS was 6TB for $35 each. Bought 8 of them. Working great so far.
 
The Optane DC P5800X, however, is a no-tradeoff solution right now. I just can't justify dropping thousands of dollars on a boot drive. :p
I just have 3 of the $686 400GB P5800Xs from Provantage - that’s plenty for my boot/OS drives. I used to run the 280GB 905p’s fine.

https://www.provantage.com/intel-ssdpf21q400gb01~7ITE93AQ.htm

Once you run Optane for your OS, it’s really hard to use anything else. The 4k QD1 random reads are stupid fast on the P5800Xs.
 