Next Capacity leap in M.2 or U.2?

Brahmzy

Supreme [H]ardness
Joined
Sep 9, 2004
Messages
5,024
Thoughts / bets on bigger, cheaper M.2s or U.2s? Cannot believe how small and expensive flash is compared to the spinners. STILL. They’re milking this transition for all it’s worth.

I’ve got some NAS systems that really need to go all-flash, and doing that now is still very expensive because the capacities are still so small. I think we’re one generation away from seeing a big jump.

We’ve got a few mainstream 8TB M.2s and a lot of Datacenter 16TB & 32TB U.2s.

We need $300 16TB M.2s / U.2s or better. I’d love to see a jump to 64TB U.2/U.3 which would hopefully make the smaller sizes much cheaper.

We’re close. We need that next halvening/doubling to happen.
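Just to put that target in $/TB terms (the $300-per-16TB figure is the wish above; the spinner price is an assumed ballpark for comparison, not a quote), a quick sketch:

```python
# Quick $/TB math. The $300-per-16TB figure is the target from this post;
# the HDD figure is an assumed ballpark price purely for comparison.
def dollars_per_tb(price_usd, capacity_tb):
    return price_usd / capacity_tb

print(f"Target flash, $300 / 16TB: ${dollars_per_tb(300, 16):.2f}/TB")    # ~$18.75/TB
print(f"Assumed spinner, $300 / 18TB: ${dollars_per_tb(300, 18):.2f}/TB") # ~$16.67/TB
```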
 
What I don't understand is why they don't make 110mm M.2s to place more of the cheaper flash chips.

A possible explanation is that the cheap controllers can't address that many chips, but I have not seen confirmation on this one way or another.
 
why they don't make 110mm M.2s to place more of the cheaper flash chips.
If we look at the price-to-capacity of "big" 2.5-inch SSDs, and at how empty they tend to be inside:

[image: teardown of a 2.5-inch SSD showing the enclosure is mostly empty plastic]


I am not sure how much money they would save by spreading the flash across more packages at the cost of a more complex controller, PCB, etc...

As for the stated need for more than 8TB in an M.2 format, I do wonder how many they would sell (and how many of those 8TB drives they sell currently); the need could be quite niche.

For a NAS where, say, 7 x 8TB M.2 would not be enough, how much value is there in the added per-drive bandwidth versus SATA, or the space saved versus 2.5-inch drives? Once you get to 8+ SSDs you need quite the network for the drives to be the bottleneck; together they can saturate 40GB/s...

I feel there's a bit of "we want" vs "we need" here.
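To put rough numbers on that (per-drive throughput below is an assumed typical sequential figure, not a measurement):

```python
# Back-of-the-envelope: aggregate NVMe bandwidth vs. the network link.
# ~5 GB/s sequential per PCIe 4.0 x4 drive is an assumed typical figure.
drives = 8
per_drive_gb_s = 5.0
aggregate_gb_s = drives * per_drive_gb_s          # ~40 GB/s total

for link_gbit in (10, 25, 40, 100):
    link_gb_s = link_gbit / 8                     # line rate, overhead ignored
    print(f"{link_gbit:>3} GbE ~ {link_gb_s:5.2f} GB/s; "
          f"network is the bottleneck: {link_gb_s < aggregate_gb_s}")
```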
 
For a NAS where, say, 7 x 8TB M.2 would not be enough, how much value is there in the added per-drive bandwidth versus SATA, or the space saved versus 2.5-inch drives? Once you get to 8+ SSDs you need quite the network for the drives to be the bottleneck; together they can saturate 40GB/s...

I feel there's a bit of "we want" vs "we need" here.
First, this is [H], not wimps-are-us.
Second, then we will want faster LANs.
 
What I don't understand is why they don't make 110mm M.2s to place more of the cheaper flash chips.

A possible explanation is that the cheap controllers can't address that many chips, but I have not seen confirmation on this one way or another.
I feel the 110mm (22110) form factor will never be adopted heavily because of laptops; it would require better planning by them, and in some cases space is already tight as it is. Not saying it can't happen, but I doubt it will.

To me, we are going to reach 8/16 TB as the ceiling M.2 can't grow past, and people that need more than that will likely move to formats like U.2/U.3.
 
I'm all for U.2, but I think current case design is holding it back.

Also, hotswap is questionable in most OSes. Of course M.2 NVMe doesn't do hotswap either, but U.2 with working hotswap would be a big draw for me.
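For what it's worth, on Linux you can at least see whether the kernel has registered any hotplug-capable PCIe slots (slots managed by a hotplug driver like pciehp show up under /sys/bus/pci/slots). A minimal sketch of that check, with no promises about how a surprise removal actually behaves:

```python
# Sketch: list PCIe slots the Linux kernel treats as hotplug-capable.
# Slots managed by a hotplug driver (e.g. pciehp) appear under
# /sys/bus/pci/slots/ and expose a "power" attribute.
from pathlib import Path

slots_dir = Path("/sys/bus/pci/slots")
slots = sorted(slots_dir.iterdir()) if slots_dir.exists() else []

hotplug = [s for s in slots if (s / "power").exists()]
if hotplug:
    for slot in hotplug:
        addr = (slot / "address").read_text().strip()   # PCI address of the slot
        print(f"slot {slot.name}: hotplug-capable, at {addr}")
else:
    print("no hotplug-capable PCIe slots registered")
```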
 
I'm all for U.2, but I think current case design is holding it back.

Also, hotswap is questionable in most OSes. Of course M.2 NVMe doesn't do hotswap either, but U.2 with working hotswap would be a big draw for me.
I'm also all for U.2, but it's frankly a bit of a pain to wrap my head around.
  • The connectors are basically SAS, but the protocol isn't SAS and thus you can't just use any old SAS backplane with U.2.
  • Some SAS HBAs are actually tri-mode and support U.2 NVMe drives alongside the expected SAS and SATA, but they generally have PCIe 3.0 x8 interfaces - an obvious bottleneck (quick math after this list).
  • Most servers (the enterprise rackmount kind) seem to have specialized U.2 backplanes alongside the SAS ones, driven by PCIe redriver/retimer cards running OCuLink cables to the backplane. (I'd literally never heard of OCuLink before this, and some server documentation omitting its name entirely did not help.)
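On that x8 bottleneck, here's the quick math (per-lane throughput is the standard PCIe 3.0 rate; the per-drive number is an assumed typical PCIe 3.0 x4 NVMe figure):

```python
# Why a PCIe 3.0 x8 tri-mode HBA runs out of bandwidth fast with NVMe.
# ~0.985 GB/s usable per PCIe 3.0 lane after 128b/130b encoding.
hba_ceiling_gb_s = 8 * 0.985                      # ~7.9 GB/s for the whole HBA
per_drive_gb_s = 3.5                              # assumed PCIe 3.0 x4 NVMe drive

for n in range(1, 5):
    wanted = n * per_drive_gb_s
    print(f"{n} drive(s): {wanted:4.1f} GB/s wanted vs "
          f"{hba_ceiling_gb_s:.1f} GB/s ceiling -> saturated: {wanted > hba_ceiling_gb_s}")
```
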
Meanwhile, consumer cases continue to stagnate on SATA even when they offer hotswap backplanes (Corsair 800D, Lian Li O11D XL, etc.), so I'm not holding out any hope of that situation getting better, especially with EDSFF/U.3 diverging from typical desktop/workstation drive form factors.

First, this is [H], not wimps-are-us.
Second, then we will want faster LANs.
If you were truly [H] with your networking, you'd have abandoned twisted-pair copper a long time ago and embraced OM3/4/5 fiber alongside (Q)SFP28 transceivers.

I'm still getting to that point - got the fiber (part of some old Fibre Channel stuff that was given to me), but not the NICs.
 
So Solidigm IS releasing a 61.44TB U.2 - not available anywhere I can see yet, and it still says “Coming soon” on their website, yet reviewers had reviews out in August...

https://hothardware.com/reviews/solidigm-d5-p5336-review-61tb-data-center-ssd?page=2

https://www.storagereview.com/review/solidigm-p5336-61-44tb-ssd-review

I’m sure that’ll be an expensive sumbish.

A pair of those would do me fine in a RAID 1 mirror. My current target is 60TB. Otherwise I’d have to do four 32TB drives in a RAID 10.
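Usable-capacity math for the two layouts (drive sizes as above; RAID overhead only, no filesystem overhead counted):

```python
# Usable capacity of the two candidate layouts against a 60 TB target.
target_tb = 60

raid1_usable = 61.44             # two 61.44 TB drives mirrored -> one drive's worth
raid10_usable = (4 * 32) / 2     # four 32 TB drives in RAID 10 -> half the raw total

print(f"RAID 1, 2 x 61.44 TB: {raid1_usable:.2f} TB usable, "
      f"meets target: {raid1_usable >= target_tb}")
print(f"RAID 10, 4 x 32 TB: {raid10_usable:.2f} TB usable, "
      f"meets target: {raid10_usable >= target_tb}")
```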

Then get one of these:

https://www.highpoint-tech.com/nvme1/ssd7580b


https://www.youtube.com/watch?v=XIg6VdtnLwQ&t=42s

https://www.highpoint-tech.com/post...ot-plug-hot-swap-capable-nvme-raid-controller

https://www.highpoint-tech.com/post...ividual-nvme-ssds-non-raid-using-the-ssd7580b

It’d be a cable/power challenge in most cases, but doable.

Been on 10GbE for like 10 years at home, don’t really need more than that for what I’m doing.

What I would love is to borrow a pair of 25GbE or 40GbE HBAs temporarily to do a direct-connect between old array and new array for the data migration.
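Rough line-rate math for that migration (ignores protocol overhead and whether the arrays can actually keep up, which is exactly why the faster link matters):

```python
# How long moving ~60 TB takes at different link speeds, at pure line rate.
data_gb = 60 * 1000                      # 60 TB in (decimal) GB

for link_gbit in (10, 25, 40):
    link_gb_s = link_gbit / 8            # GB/s at line rate
    hours = data_gb / link_gb_s / 3600
    print(f"{link_gbit:>2} GbE: ~{hours:.1f} hours")
```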
 