NVMe - why are disks smaller than 1TB not performing as well as their bigger sisters?

tived

Limp Gawd
Joined
Aug 31, 2006
Messages
204
Hi Guys,

Apologies if this has been discussed before, but I wouldn't mind picking your brains again ;-)

Trying to find the best storage solution (cost, benefits, speed, access, etc.). I can fit 12x NVMe in my system via two AIC cards (a Threadripper system, at least that is the plan).

As I look at different disks, I keep seeing that the 500GB version of most of the current PCIe Gen4 x4 drives is slower than its bigger sisters! Why is that?

Second question ;-)
What is the best option for an OS/application boot disk? A single NVMe drive or RAID? (I know there is the Intel Optane 905P, but what else do we have?) I would think that 4x NVMe Gen4 would outrun the Optane 905P.

I know there is no free lunch anywhere, so perhaps a 4x RAID-0 of NVMe (say Samsung 980 PRO) vs an Optane 905P or a single NVMe might read or write faster, or there is something else that causes slowdown: latency. Where is the holy grail? Cost vs best performance (we are probably splitting hairs between 1-3%).

I would like to hear your thoughts. Also, if I have missed anything, please do not hesitate to add to this; I am sure we can all benefit here.

thanks

Henrik
 
Just wondering what you are doing to need so much disk bandwidth.
 
Probably has something to do with the topology/configuration. I imagine the 500GB board has fewer chips than the larger ones. Fewer chips means the data is less spread out, and possibly less bandwidth and fewer IOPS.
 
I do not know enough to comment, but if you are asking what is better, what it would be used for is probably a good thing to add to your message.

As for larger-capacity NVMe drives being higher performance than smaller ones of the same model in general:
https://www.pcworld.com/article/2899351/everything-you-need-to-know-about-nvme.html#toc-4
  • The more NAND chips, the more paths and destinations the controller has to distribute and store data at. The smaller-capacity versions (especially 128GB and 256GB) of the same model drive are quite often slower than the larger capacity flavors.
 
Hi guys,

thanks for the reply

I have a couple of reasons for asking. One: I feel that we often focus on CPU, GPU and RAM speed, which is all well and good, but in the real world we are moving data from point A to point B. In that process the CPU, GPU and RAM do their work, but it is all only as fast as you can move that data through.

In the past, when I was working as a photo retoucher on very large files (2-10GB images), I built 8- and 12-disk SSD RAID-0 arrays (2010-2016 vintage) across multiple controllers, with great success for the time. I was stitching together panoramas from hundreds of images, and sometimes thousands because of HDR blending. Going from a single disk to multiple disks was like night and day.

So my question is whether others are seeing similar trends? I know we all use our computers for different purposes, but in the end we just move data from A to B ;-)

Zepher, thanks for the question - the key word is "can" fit 12 NVMe (4 on the mainboard and 2x 4 on the AIC cards); I am still some way away from that.

Which leads me to the use of smaller NVMe drives due to cost; I don't need 12x 2TB of live storage at lightning speed! At the moment I have 2x 1TB Seagate FireCuda 520 and 4x 2TB (3x Corsair MP600 + 1x Gigabyte Gen4). On the Ryzen I am getting a maximum of 13GB/s, because I can't run the AIC at x16, so I have 2 drives on the AIC and 2 on the motherboard, but this should improve when I build the Threadripper 3960X.
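The AIC lane problem above comes down to simple per-lane arithmetic. A rough sketch (the per-lane figures are nominal PCIe rates after encoding overhead, not measured numbers):

```python
# Rough bandwidth budget: why an AIC negotiating fewer than x16 lanes
# caps aggregate throughput. Per-lane figures are nominal approximations.

PCIE_GBPS_PER_LANE = {3: 0.985, 4: 1.969}  # usable GB/s per lane, one direction

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate usable one-way bandwidth of a PCIe link in GB/s."""
    return PCIE_GBPS_PER_LANE[gen] * lanes

# A single Gen4 x4 drive tops out around:
per_drive = link_bandwidth(4, 4)    # ~7.9 GB/s

# Four Gen4 x4 drives on an AIC share the card's upstream link,
# so an x8 slot halves the ceiling relative to x16:
aic_x16 = link_bandwidth(4, 16)     # ~31.5 GB/s
aic_x8 = link_bandwidth(4, 8)       # ~15.8 GB/s

print(f"per drive: {per_drive:.1f} GB/s")
print(f"AIC @ x16: {aic_x16:.1f} GB/s")
print(f"AIC @ x8:  {aic_x8:.1f} GB/s")
```

This is why four Gen4 drives behind a half-width slot will bottleneck at the slot, no matter how fast each drive is individually.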

Thanks Nobu

Thanks LukeTbk i'll have a read of the link

I am struggling to find good information when googling... but I have missed things before :unsure: ;)
 
I can speak somewhat authoritatively on this. Nobu has it right:
Probably has something to do with the topology/configuration.
Much like memory controllers on CPUs, SSD controllers are multi-channel devices. As a cost-saving measure, vendors will often pair a controller with a single size of NAND package and just use multiple packages to add up to the desired capacity. A common example would be a 1TB SSD on a 4-channel controller with 4x 256GB NAND packages. If you buy the 500GB version of the same drive, often that drive will have the same controller but only 2 packages of 256GB NAND, which means the controller is operating with only 2 of its 4 channels populated. Controllers come with up to 8 channels.

Often the maximum performance is limited by the controller or the interface, so dropping from 4 channels to 2 will not necessarily result in a strict halving of performance, but there will be an impact in any situation where a controller is under-populated.
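The channel-population effect can be sketched as a toy model. The per-channel speed and interface cap below are illustrative assumptions, not specs of any real drive:

```python
# Toy model of the channel explanation above: a controller stripes data
# across its populated NAND channels, so a half-populated controller has
# roughly half the parallelism. Per-channel speed is an assumed figure.

PER_CHANNEL_MBPS = 1000  # assumed sequential throughput of one NAND channel

def drive_throughput(controller_channels: int, nand_packages: int,
                     interface_cap_mbps: int = 7000) -> int:
    """Sequential throughput is limited by whichever is lower:
    populated channels x per-channel speed, or the host interface."""
    populated = min(controller_channels, nand_packages)
    return min(populated * PER_CHANNEL_MBPS, interface_cap_mbps)

# 1TB model: 4 packages on a 4-channel controller -> all channels busy
print(drive_throughput(4, 4))  # 4000 MB/s
# 500GB model: same controller, only 2 packages -> 2 channels idle
print(drive_throughput(4, 2))  # 2000 MB/s
# An 8-channel fully populated controller hits the interface cap instead:
print(drive_throughput(8, 8))  # 7000 MB/s (interface-limited)
```

The last case is why flagship drives with fully populated 8-channel controllers often quote the same headline speed across several capacities: the interface, not the NAND, is the limit.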
 
Thanks sinisterDei - great explanation. That makes sense. Hmm, what to do?

Add 2x Seagate FireCuda 520 1TB (for a total of 4), or 4x WD SN850 or Samsung 980 Pro 500GB, for the OS/app boot drive?

:nailbiting: 🤔🤔🤔
 
So I've reviewed a 500GB Seagate FireCuda 520 and a 500GB Samsung 980 Pro, and am now looking at a 2TB Sabrent Rocket 4 Plus. The 980 Pro, Rocket 4 Plus, and SN850 are all from the 'second wave' of PCIe 4.0 SSDs and all significantly outpace the first-wave drives like the FireCuda 520 and MP600 in terms of raw sequential throughput. Their random read/write performance is less improved.

You *seriously* pay for the second wave PCIe 4.0 privilege though, and for PCIe 4.0 in general.

You said that circa 2010-2016 you were operating 12 drives in RAID0, and I am assuming we're talking about SATA SSDs at that point. Assuming good SATA drives and perfect scaling on a controller, 12 SATA SSDs in RAID0 should net you ~6600 MB/s of read/write. In context, that is matched by a *single* second wave PCIe 4.0 SSD like the SN850, 980 Pro, or Rocket 4 Plus.

I'm a bit confused though. You say you might stitch together "1000s" of images from between 2-10GB each. Assuming an average size of 6GB and having 1000 files, that presents 6TB of data. So, I would think you would need an array large enough to hold 6TB of data. To me that would mean 4x 2TB or 8x 1TB SSDs. If you got 4x Rocket 4 Plus/SN850 drives that would be $1600+, or if you got 8x SK hynix P31 Gold SSDs (PCIe 3.0) you would only be looking at around $1000, for approximately the same performance.
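The RAID0 and cost comparisons above can be checked with back-of-envelope math. Prices and speeds are the rough figures from the post, not current quotes:

```python
# Back-of-envelope check of the RAID0 and cost comparison in the post.
# All speeds and prices are the post's approximate figures.

def raid0_seq(drive_mbps: int, n: int, scaling: float = 1.0) -> float:
    """Ideal RAID0 sequential throughput, with an optional scaling factor
    for real-world controller inefficiency (1.0 = perfect scaling)."""
    return drive_mbps * n * scaling

# 12 good SATA SSDs (~550 MB/s each) with perfect scaling:
sata_array = raid0_seq(550, 12)     # 6600 MB/s
# ...roughly matched by one second-wave Gen4 drive (SN850/980 Pro class):
single_gen4 = 7000

# Cost comparison from the post:
four_gen4 = (1600, raid0_seq(7000, 4))   # 4x 2TB Gen4, ~$1600
eight_gen3 = (1000, raid0_seq(3500, 8))  # 8x 1TB Gen3 (P31 class), ~$1000

for label, (cost, mbps) in [("4x Gen4", four_gen4), ("8x Gen3", eight_gen3)]:
    print(f"{label}: {mbps:.0f} MB/s ideal, ${cost / (mbps / 1000):.0f} per GB/s")
```

Both arrays land at the same ideal ~28 GB/s, which is the point: at matched capacity and aggregate bandwidth, the Gen3 array is meaningfully cheaper per GB/s.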
 
You aren't going to see much, if any, improvement from RAIDing drives for boot. The time difference between a 905P and a SATA SSD on an optimized system was 1 second. Stick to a single drive and it'll be plenty fast. If you've got the money to spare, the Optane drives will give you the best response from programs (and I'll never give mine up!), plus loading all the background stuff you might have, but the 980 and SN850 will do just as well to your perception of overall system response.
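The reason RAID0 doesn't help here can be sketched with a deliberately simplified model: boot and app loads are dominated by low-queue-depth random reads, where per-request latency matters and striping adds nothing. The latency figures below are ballpark class numbers, and real boots overlap I/O with CPU work, which is why observed gaps (like the one second mentioned above) are far smaller than this serial model suggests:

```python
# Illustrative model: at queue depth 1, total time for N small random
# reads is just N x per-request latency. Striping across more drives
# does not reduce the latency of any single request.
# Latency values are rough class figures, not benchmarks.

def load_time_s(n_requests: int, latency_us: float) -> float:
    """Total time for n serial QD1 random reads, in seconds."""
    return n_requests * latency_us / 1e6

requests = 50_000  # assumed count of small random reads during an OS/app load

for drive, lat_us in [("SATA SSD", 90), ("NVMe TLC", 70), ("Optane 905p", 10)]:
    print(f"{drive}: {load_time_s(requests, lat_us):.1f} s of pure I/O wait")
```

This is also why Optane stands out for responsiveness despite modest sequential numbers: its advantage is almost entirely in that per-request latency term.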
 
Very good points, thanks.

Regarding the images: it's a 2-10GB finished image that is made up of hundreds, and sometimes up to a couple of thousand, source images.

An image in the terabytes would be a nightmare to manage, let alone move around and edit.

Better have a bit of a think 💭

thanks guys
 