New VNX Flash Drives - FAST or FAST Cache?

We're in the process of setting up/testing/etc. our VNX. Its primary purpose will soon be View desktops, but for now it will be server VMs plus file duties. We have a 5300 with 20 600GB 15K SAS, 5 2TB NL-SAS, and 5 100GB flash drives. Right now the flash (minus a hot spare) is used for FAST Cache, and the rest are thrown into a FAST storage pool. We had some extra dollars to spend and are picking up 4 more flash drives, but I'm at odds over how to use them.

If I throw them in cache, that'll give me 400GB of FAST Cache (freaking awesome!), which is only 100GB shy of the max on the 5300. On the other hand, if I put them in the pool, it'll be easier to guarantee performance for the View master images.
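As a sanity check on that 400GB figure, here's the arithmetic as I understand it: FAST Cache mirrors the drives into RAID 1 pairs, so usable capacity is half of raw. The 100GB drive size and the ~500GB 5300 ceiling come from this thread; the rest is just a sketch:

```python
# Sketch of the FAST Cache sizing math above. Assumes usable capacity is
# half of raw because FAST Cache builds RAID 1 mirrored pairs; the 100GB
# drive size and ~500GB VNX5300 ceiling are the figures from this thread.
DRIVE_GB = 100
MAX_FAST_CACHE_GB = 500  # implied by "only 100GB shy of the max"

def fast_cache_usable_gb(num_drives: int, drive_gb: int = DRIVE_GB) -> int:
    """Usable FAST Cache from RAID 1 pairs; an odd leftover drive is unused."""
    return (num_drives // 2) * drive_gb

print(fast_cache_usable_gb(4))      # today: 4 drives in cache -> 200 GB
print(fast_cache_usable_gb(4 + 4))  # with the 4 new EFDs      -> 400 GB
```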

The flip side is that by the time we're running View, the VMs will reach storage via 10Gb NFS instead of FC, so I'm thinking I wouldn't be able to force them onto the extreme performance (flash) tier unless I set up a separate LUN and share for the master images.

Forgive me for thinking through this out loud; I'm still going through the VNX training, and EMC is all pretty new to me.

But my question is really: how should the flash be used? Is it smarter to max out cache before using any in a storage pool? If so, how do I deal with contention when View I/O is high? Should I just let the cache balance it out? I know that putting it in the pool gives me a chance to guarantee performance, but would I be better off letting all cache-enabled LUNs benefit instead?
 
Put them in their own RAID 5 RAID group and put the VDI replica on it. The replica is almost 100% read, and we try to put replicas on dedicated EFDs.
 
I recently set up a VNX5700 with a little over 212TB of usable space, all 2TB SATA drives plus 5 EFDs.

I'm pretty sure the 5300 only has 2 buses.

Try to spread your EFDs across your buses. There have been some internal documents floating around EMC detailing how spreading EFDs over your buses is better than having them all on one bus. I saw a few of them, but my TC wouldn't hook me up with copies.

I hope you have your 5 flash drives laid out like so:

Bus0_0_5 and Bus0_0_6, the hot spare on Bus0_0_14, and the other two on Bus1_0_0 and Bus1_0_1.

I'm not 100% sure, but EMC might not let you buy only 4 drives; they may make you buy 5 (4+1).

If you do get a few more EFDs, try to keep them on the first tray of each bus (Bus0_0 and/or Bus1_0) so your EFDs have the fastest path to your SPs.

The above only applies if you have the 15-drive trays. I know they have a new higher-density tray out; I just can't remember how many drives it holds.

However, don't panic if you don't have the drives installed the way I detailed above.

As for how to use them?

Depends.

FAST Cache is global to the array. If you have 2 storage pools, both pools have access to that FAST Cache. If you instead throw some EFDs into a storage pool, only that pool has access to that tier of disks.

You could do something like one pool with giant dumb SATA drives, and a second pool with 5% EFDs, 15% FC drives, and 80% SATA drives. Both pools would use FAST Cache, but pool 2 would handle heavy writes better in the long run.

If I were in your shoes, I would set up FAST Cache, then set up Pool 0 with both the 600GB FC and 2TB drives in a single pool. FAST on the array will move the most-accessed blocks to the FC drives during the scheduled relocation.

The neat thing about FAST Cache is that it fills itself up to 100% as soon as you turn it on with what it thinks is the busiest data. There is no scheduled relocation with FAST Cache; it promotes live, on the fly. It's awesome for reads, and that's 90% of what most VMware shops are doing.
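To make the "no scheduled relocation" point concrete, here's a toy model of how I understand promotion to work: hot 64KB chunks get copied into cache inline as I/O arrives. The 64KB granularity comes up again later in this thread; the three-touch promotion threshold is my assumption, so don't treat this as EMC's actual algorithm:

```python
# Toy model of inline FAST Cache promotion. The 64KB chunk size is from the
# EMC docs; PROMOTE_AFTER = 3 touches is an assumed threshold. There is no
# relocation window: promotion happens as the I/O stream arrives.
from collections import defaultdict

CHUNK_KB = 64
PROMOTE_AFTER = 3  # assumed touch count before a chunk is copied to flash

class FastCacheModel:
    def __init__(self, capacity_chunks: int):
        self.capacity = capacity_chunks
        self.touches = defaultdict(int)
        self.cached = set()

    def access(self, lba_kb: int) -> str:
        chunk = lba_kb // CHUNK_KB
        if chunk in self.cached:
            return "flash hit"
        self.touches[chunk] += 1
        if self.touches[chunk] >= PROMOTE_AFTER and len(self.cached) < self.capacity:
            self.cached.add(chunk)  # promoted live, no waiting for a window
            return "promoted"
        return "disk"

cache = FastCacheModel(capacity_chunks=2)
for _ in range(4):
    print(cache.access(128))  # disk, disk, promoted, flash hit
```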

I honestly don't think you'll crush your five-drive FAST Cache doing VMware. I would use the budget for more 2TB drives and grow your footprint.

To put it in perspective: I'm only running 5 EFDs in my 5700; the rest is 212TB of 2TB SATA drives. I'm running a pretty busy development environment, with 90% Oracle on either cooked filesystems or ASM, and I have zero performance issues. My dev environment runs four copies of my production OLTP environment for various development teams.

FYI, if you haven't done so already: there is an issue with FLARE 31 and EFD drives. Something about the array over-volting an EFD, causing it to burn up and take the other EFDs with it, resulting in data loss and an outage. Reach out to your field CE, or use Unisphere Service Manager to download the latest FLARE revision and get updated.

Best of luck.
 
The key here is that he is doing VMware View, which changes some recommendations; that's why I said put the replica on the EFDs as a standalone RAID group. You want that replica on EFDs since it's almost 100% read. No worries about a write penalty, so RAID 5 is fine, and it'll greatly help the VDI clients.
 
Well, thanks for the input, guys. I'll update as soon as I get the time to work on the project some more.

Anyone else here also responsible for VMware, SAN, network, firewall, VPN, wireless, web filtering, mail, VDI, desktops, AD, NetWare, servers, and the kitchen sink?
 
I checked out your blog; it's good soup and good information.

I'm not saying you're wrong by any means. Your recommendation makes perfect sense on any array other than a VNX.

But with FAST Cache, it doesn't make sense to create a second RAID group of nothing but EFDs on a VNX; it would be a complete waste of resources. FAST Cache is already acting as a second RAID group, just used as a global resource. Based on what the original poster described, a single pool with FC and SATA drives that has access to FAST Cache is the way to go. It's also the least amount of work, babysitting, and upkeep: turn on FAST Cache, set your FAST tiering policy, and forget about it. Just add dumb SATA drives to the pool as you go.
 
To answer your question: I do lots of EMC SAN work supporting VMAX, CLARiiON, NS, and VNX with either Brocade or Cisco switches, and I'm about to pile in a bunch of Isilon as I type this. My biggest consumer of storage is Oracle, followed by VMware. I used to do this for a Fortune 500 company; now I work in the financial sector.
 
Then we'll agree to disagree. :) Yes, FAST Cache is global... which is exactly why I don't want to put all of my EFDs there with VDI. I want FAST Cache to soak up the writes coming into the linked clone deltas. I want the replica, which is almost 100% reads, to sit on dedicated EFDs. I want to know the entire image is going to come off those EFDs during a boot storm and that FAST Cache hasn't dropped a lot of it and replaced it with other data.

The thing is, I'm assuming anyone doing this will have BOTH. If you ONLY have a set of EFDs, then yes, absolutely put those in FAST Cache; by far your biggest bang for the buck. But if you already have some FAST Cache and this is for VDI, I'm going to recommend the new EFDs go to the replica volume.

Tiering and FAST Cache work very, very well for varied workloads, but VDI isn't really a varied workload; the different workload types are specifically split out: the replica, which is almost 100% read; the linked clone deltas, which are 80% or more write; and persistent data, which is just user data that can go on slow disk.
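For anyone sizing along at home, a back-of-envelope split using the ratios above. The desktop count, per-desktop IOPS, and workload shares are placeholder assumptions for illustration, not measurements:

```python
# Back-of-envelope split of steady-state VDI I/O using the ratios in this
# post (replica ~100% read, linked clone deltas ~80%+ write). Desktop count,
# per-desktop IOPS, and the workload shares are assumed placeholders.
DESKTOPS = 200
IOPS_PER_DESKTOP = 10    # assumed average for a running desktop
DELTA_SHARE = 0.7        # assume most steady-state I/O lands on the deltas
REPLICA_SHARE = 0.2      # shared replica reads
PERSISTENT_SHARE = 0.1   # user data

total_iops = DESKTOPS * IOPS_PER_DESKTOP
delta_writes = total_iops * DELTA_SHARE * 0.8  # the writes FAST Cache soaks up
replica_reads = total_iops * REPLICA_SHARE     # served by the dedicated EFD group
persistent = total_iops * PERSISTENT_SHARE     # fine on slow disk

print(f"delta writes:  {delta_writes:.0f} IOPS -> FAST Cache")
print(f"replica reads: {replica_reads:.0f} IOPS -> dedicated EFDs")
print(f"persistent:    {persistent:.0f} IOPS -> NL-SAS")
```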
 
No No.. I totally agree.

I can clearly see how you'd want EFDs for the replica volume on a small dedicated array like a 5300. I'm not sure how I completely missed that point a few posts back. It makes total sense when you'd have 200 VDI desktops powering on at 8am all at once, slamming the same volume.

I come from a background where all my partners want the fastest they can get whenever they can get it, hence the FAST Cache. If I saw my partners just running over my FAST Cache, I would buy more FAST Cache so more data could sit in it for longer periods. I'm sure to have all sorts of different apps on my arrays; I'm too big to have small islands of storage for single teams. It doesn't make sense for me.

Either way, good soup and good information.
 
I was doing some digging on Powerlink and found this for the OP.

https://community.emc.com/message/574275

But here is the gist:

Tests that I've done show that FAST Cache gives me better performance.
I can create one RAID group from 2 of my flash drives and put my critical boot LUN there, and use the other 2 flash drives for FAST Cache.
I will try to set a bigger window for relocation and see.
I just created a RAID group from 2x600 SAS drives, and they give me better performance than my pool...
So I think that for this small number of drives, RAID group + FAST Cache gives me better performance.
I have to read this PDF; maybe inside I will find a lot of useful info.

Really, if the OP had 10 flash drives, 5 for FAST Cache and 5 for the replica volume, that would be the perfect way (in my mind anyway) to do it. Why? Set it and forget it: less work on the OP's part in terms of tweaking, and less staring at Analyzer. Buy 1 or 2 more trays of FC drives and then the rest just dumb SATA drives.

Initial cost would be a factor, but dumb SATA drives will only get cheaper over the next few years.

If cost is a factor, see this Powerlink document:

https://powerlink.emc.com/nsepn/web...h8850-oracle-performance-vnx-fastcache-wp.pdf

Page 28 says you have the ability to disable FAST Cache on the LUNs that don't require it. You could disable FAST Cache on everything but your replica LUNs.
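If you wanted to script that, something like the sketch below could work. Big caveat: the SP hostname and LUN numbers are hypothetical, and the -fastcache switch on lun -modify is my recollection of the block CLI; verify the exact syntax against the naviseccli reference for your FLARE release (on some releases FAST Cache is a pool-level property for pool LUNs) before running anything:

```python
# Hypothetical sketch of scripting the whitepaper's per-LUN suggestion.
# The SP hostname and LUN numbers are made up, and the "-fastcache" switch
# on "lun -modify" is an assumption from memory; check the naviseccli
# reference for your FLARE release before using this.
import subprocess

SP = "spa.example.local"       # hypothetical SP address
REPLICA_LUNS = {12, 13}        # hypothetical LUNs holding View replicas
ALL_POOL_LUNS = range(10, 20)  # hypothetical LUN numbers

for lun in ALL_POOL_LUNS:
    state = "on" if lun in REPLICA_LUNS else "off"
    cmd = ["naviseccli", "-h", SP, "lun", "-modify",
           "-l", str(lun), "-fastcache", state, "-o"]
    print(" ".join(cmd))                # dry run: show the commands only
    # subprocess.run(cmd, check=True)   # uncomment to actually apply
```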

To find the document, log into Powerlink, search for "vnx vmware", and then click on the documents tab. It should be the 5th or 6th hit.

Anyway, there are a few ways to skin this cat.

Yeah, I'm pretty much just repeating what Netjunkie already said, a little differently.
 
For now we are going with the 5 EFDs that we ordered for FAST Cache. It will be hands-down better performance than the current CX4 anyway, and I've budgeted for 5 additional SSDs around mid-year. We should be fine. Thanks.
 
Time to dig up an old thread!

So we just installed our second VNX, and in this configuration we only have 10 EFDs.

This started a little debate in my head on how to use them.

The original configuration was four 100GB EFDs for FAST Cache and 5 in a RAID 5 for one of our tiered storage pools.

My thought is that because we only have five 100GB EFDs for tiered storage, maybe it would be better to place them into FAST Cache along with our hot spare, order a new hot spare, and call it a day.

Any thoughts? The rest of the drive configuration is 25 600GB 15K SAS and 10 1TB NL-SAS.
 
Yeah, I think FAST Cache provides more bang for the buck, really. I only have EFDs dedicated to FAST Cache and it's working out quite well, especially for our linked clone pools.

We are, however, migrating some heavy OLTP off of our HP LeftHand over to the VNX shortly, and I may get some more EFDs for tiering in that environment. Prod control is running a crapload of ETL jobs for a conversion, and it's killing the P4000s. Instead of buying more P4000 nodes, I recommended leveraging what we already have and taking advantage of FAST on our VNX 5500.
 
Sounds like a plan. FAST Cache works on small chunks of data (64KB vs. 1GB) and is a global resource, whereas FAST VP SSDs can only be used by their one storage pool and need time to migrate data between tiers (usually a day, depending on your relocation schedule).
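Quick arithmetic on that granularity gap, using the 400GB flash figure from earlier in the thread as the example capacity (purely illustrative):

```python
# The granularity gap in numbers: 400GB of flash tracked as 64KB FAST Cache
# chunks vs. relocated as 1GB FAST VP slices. 400GB is just the example
# capacity from earlier in this thread.
FLASH_GB = 400
chunks_64k = FLASH_GB * 1024 * 1024 // 64  # 6,553,600 independent 64KB chunks
slices_1g = FLASH_GB                       # 400 coarse 1GB slices
print(chunks_64k, slices_1g)
```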
 
This isn't a loaded question at all, but going back to your replica recommendation: would you still dedicate EFDs to a separate volume for the replica with the CBRC stuff in 5.1? I haven't had the opportunity to see it in action just yet, but if it works as advertised, I would think the EFDs might be better utilized globally?
 
Great question. I don't know yet. My first thought is "yes", simply because you don't know what's in CBRC, and during a morning boot storm, until CBRC is warm, the EFDs for the replica would help. Is it worth it? I don't know, as I haven't tested CBRC yet either. Also keep in mind that CBRC is really meant to be 1GB to 2GB in size (going by what I was told at EMC World), so it may not be able to stage as much of the replica as you want.

That's a good thing and a bad thing about VDI right now: it's changing VERY quickly, but for the better, and getting cheaper.
 
I would be really curious to see how much of an impact it makes on reads in a boot storm. I'd dare say there has to be a tipping point: if the number of desktops simultaneously booting is less than some X, your EFDs would serve better in the pool in the grand scheme of performance during and after the gold rush/boot storm.

If I had the gear available to me right now, I would try to find X.
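In lieu of gear, here's a rough way to frame finding X. Every figure below is an assumed placeholder, not a benchmark; real boot-storm IOPS vary wildly by image and storm shape:

```python
# Rough framing of the "find X" question: compare read IOPS available from
# a dedicated EFD replica RAID group against a shared flash tier. All of
# these numbers are assumptions for illustration only.
BOOT_IOPS_PER_DESKTOP = 300     # assumed read load while a desktop boots
EFD_READ_IOPS = 3500            # assumed small-block read rate per EFD
EFDS_IN_REPLICA_RG = 4          # e.g. a 3+1 RAID 5 of EFDs
POOL_FLASH_IOPS_FOR_VDI = 8000  # assumed share of pooled flash VDI can get

dedicated = EFDS_IN_REPLICA_RG * EFD_READ_IOPS
print("dedicated RG: ~%d simultaneous boots" % (dedicated // BOOT_IOPS_PER_DESKTOP))
print("pooled flash: ~%d simultaneous boots" % (POOL_FLASH_IOPS_FOR_VDI // BOOT_IOPS_PER_DESKTOP))
```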
 
I think your point about consolidating those five EFDs into FAST Cache is right on and the way to go, even more so with such a small system.

I'm a big fan of FAST Cache first versus putting drives into a single pool. FAST Cache is global to the array, and as soon as it's turned on, you're using it. When EFDs are added to a pool instead, it takes some time for data to move to where it's got to go.
 
I have a question about FAST Cache: do you really need a hot spare? I mean, it makes sets of RAID 1 after all... Thoughts?


EDIT:

After some Twitter discussion, I learned that pool data really is moved into FAST Cache (dirty pages live there); therefore, if you lose your FAST Cache, you lose your pool.

That also answers some of the tier vs. FAST Cache discussion...
 