Virtualised Storage: Definition?

parityboy

How would "virtualised storage" be defined? What constitutes "virtualised storage"? Would it be one or more of the following?

1. Virtual hard disks (big files), like what VirtualBox and VMware Workstation use?
2. Storage pooled and then partitioned into logical volumes, as used by Linux LVM or ZFS? (See the sketch below.)
3. A LUN exposed by a SAN, which could have an LVM logical volume behind it?

Is there an accepted definition?
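
To make option 2 concrete, here's a rough Python sketch of what I mean by pooling and carving - hypothetical classes, not real LVM or ZFS code, just the shape of the idea:

```python
# Hypothetical sketch of option 2: pool several physical disks, then carve
# logical volumes out of the combined capacity. Not real LVM/ZFS code.

class PhysicalDisk:
    def __init__(self, name, size_gb):
        self.name = name
        self.size_gb = size_gb


class StoragePool:
    """Pools disks into one chunk of capacity (think LVM volume group / ZFS pool)."""

    def __init__(self, disks):
        self.disks = disks
        self.capacity_gb = sum(d.size_gb for d in disks)
        self.allocated_gb = 0
        self.volumes = {}

    def create_volume(self, name, size_gb):
        """Carve a logical volume out of the pooled capacity."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("not enough free space in the pool")
        self.allocated_gb += size_gb
        self.volumes[name] = size_gb


pool = StoragePool([PhysicalDisk("sda", 500), PhysicalDisk("sdb", 500)])
pool.create_volume("vm_datastore", 300)  # a logical volume may span both disks
pool.create_volume("backups", 400)
print(pool.capacity_gb, pool.allocated_gb)  # 1000 700
```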
 
At the end of the day, even though storage is "virtualized" it all goes back to a physical drive somewhere.

I define storage virtualization as any storage that is utilized by a virtual machine - whether that's the actual VMDK, or an iSCSI LUN exposed to the virtual machine as a secondary drive, etc.
 
The industry term "virtualized storage" is for when you have multiple storage arrays behind some sort of appliance/system that makes them appear as one, and the appliance handles data placement, protection, replication, tiering, migration, etc. Examples would be IBM SVC, EMC VPLEX, NetApp vFiler...things like that.
 

No, no, and no - not any of the three options in the OP.

An SVD, or storage virtualization device, is a device that presents LUNs from a DIFFERENT array (or arrays) behind it.

See: IBM SVC, Atlantis Ilio. Hitachi has one too.


Could also be something like the LeftHand VSA, but that term technically isn't accurate for it.
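
To make that concrete, here's a rough Python sketch of the core idea - a front-end virtual LUN whose extents are mapped onto LUNs from different back-end arrays. All names and sizes are made up for illustration; this is not how any particular SVD is implemented:

```python
# Hypothetical illustration only: the host sees one virtual LUN, while each
# extent of it actually lives on a LUN from some back-end array.

EXTENT_MB = 1024  # carve the virtual LUN into 1 GB extents


class BackendLUN:
    def __init__(self, array, lun_id, size_extents):
        self.array = array
        self.lun_id = lun_id
        self.size_extents = size_extents


class VirtualLUN:
    def __init__(self, name):
        self.name = name
        self.extent_map = []  # index = virtual extent, value = (backend LUN, extent on it)

    def add_backend(self, backend):
        """Concatenate a back-end LUN's extents onto the end of the virtual LUN."""
        for offset in range(backend.size_extents):
            self.extent_map.append((backend, offset))

    def locate(self, virtual_mb):
        """Translate a host-side offset into (array, back-end LUN, back-end offset)."""
        backend, extent = self.extent_map[virtual_mb // EXTENT_MB]
        return backend.array, backend.lun_id, extent * EXTENT_MB + virtual_mb % EXTENT_MB


vlun = VirtualLUN("host_datastore_01")
vlun.add_backend(BackendLUN("array_A", "lun_7", size_extents=4))
vlun.add_backend(BackendLUN("array_B", "lun_12", size_extents=4))

# A read 5 GB into the virtual LUN actually lands on the second array:
print(vlun.locate(5 * 1024))  # ('array_B', 'lun_12', 1024)
```

The interesting part is just the extent map - the placement, tiering, migration and replication mentioned above are the appliance manipulating that map behind the host's back.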
 
So if I built a box that exported LUNs via iSCSI, and those LUNs were mounted from a separate box running a front-end filer, is that not considered to be a form of virtualised storage?
 
That could be, yes, but it's rarely done with iSCSI.

E.g. several boxes with storage exporting to a central box, which then carves out and presents different LUNs from those LUNs. Would be kinda pointless for most home environments though, and even many corporate ones.
 
So what sort of things *are* done with iSCSI? I was under the impression that iSCSI was a SAN technology.
 
It is, but you just don't see an iSCSI architecture for SVDs all that often. It's seen mostly in the FC realm (at least for traditional ones).

See some NFS ones too.
 
You're not understanding what I'm saying.

No one really uses SVD devices for iSCSI. They're for far more expensive hardware, generally virtualizing several Fibre Channel arrays and presenting them back out, or converting them to NFS file shares. They're very expensive pieces of kit, so they don't fit most of the general iSCSI market currently, nor is there really a need there. iSCSI-based ones do exist, but they're not common in practice.
 
iSCSI is a storage connectivity protocol, just like Fibre Channel. You don't see it as often for virtual environments for a couple reasons. First, many orgs with SANs started with FC and they'll stay with FC. It's the "easy answer". It's fast. It's reliable. Mature. Well known. Once you build a FC fabric you rarely move away from it.

iSCSI works...and works well for many things, but it often runs over a shared network not owned by the storage team. It's untrusted by them. It's also usually confined to certain arrays and use cases, so there is far less need to put a bunch of iSCSI targets behind a storage virtualization device.

We see storage virtualization devices in places where they buy SANs from different vendors, or where they do a lot of data migration/tiering between SANs and let the device handle it. With EMC's VPLEX, for example, we use it to create true active/active storage across two datacenters.
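
As a rough sketch of the migration/tiering piece (hypothetical, just to show the idea): behind the device, each extent of a virtual LUN is a pointer to a back-end location, so migrating data is "copy, then repoint" and the host never notices:

```python
# Hypothetical sketch of non-disruptive migration/tiering behind a
# virtualization device: each extent of the virtual LUN is a pointer to a
# back-end location, and migration is "copy the data, then repoint".
# The host keeps talking to the same virtual LUN the whole time.

def migrate_extent(extent_map, extent_no, new_location, copy_data):
    """Copy one extent to a new back-end location, then switch the pointer."""
    old_location = extent_map[extent_no]
    copy_data(old_location, new_location)   # background copy while host I/O continues
    extent_map[extent_no] = new_location    # repoint the map; old extent can be freed


# A virtual LUN of four extents, all currently on the old array:
extent_map = [("old_array", "lun_3", i) for i in range(4)]


def copy_data(src, dst):
    print(f"copying {src} -> {dst}")


# Tier extent 2 down to a cheaper array; the host-visible LUN never changes.
migrate_extent(extent_map, 2, ("cheap_array", "lun_9", 0), copy_data)
print(extent_map)
```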
 
Thanks Netjunkie, I was really having trouble figuring out the right words - it's for places with BIG needs, and those places have many arrays.
 
Many thanks for the replies. :)

So, with the coming of 40Gb and 100Gb Ethernet, do you see a slowdown in the demand for FC adapters & switches, or will FC simply change tack and push FCoE?

Also, at what point does a business say, "we need a SAN"? Is it number of users? Is it data bulk? System management headaches? Something else?
 
FCoE, FC, etc all advance - interconnect, especially in a virtualized world, doesn't really matter :)

FC will never go away. It has advantages over any IP-based storage, much as IP storage has advantages over any FC-based storage.

As for when they need it - that's an open ended question :)
 
Virtualization pushes a lot of people to a SAN for shared storage. As your environment grows it's often very advantageous to do a SAN for better utilization of storage...easier provisioning...tiering...and replication (SAN-to-SAN replication is simpler than many hosts replicating to many hosts). Lots of reasons, but eventually DAS across X number of servers becomes a real problem.

We may see more of a shift to a unified fabric...it's happening, but the standards and support matrices have been an absolute mess. No one is doing a true end-to-end deployment of FCoE yet...it's specific pieces of the datacenter. I don't see 40Gb or 100Gb fixing that problem...the standards need to be complete, mature, and tested, and gear needs to be shipping that uses them.

As for iSCSI vs FC: a lot of our iSCSI customers run dedicated switches for that. But with how cheap switches like the Cisco MDS9124 are now, it's really not much more to go FC. The cost comes in with HBAs, but if I need to add a quad-port NIC to a server for dedicated iSCSI, that narrows the gap too.
 