Free ESXi 6 on a MicroServer Gen8 - best HW/logical setup for home lab and NAS

qhash

Hello,

I know that there have been similar threads here. I have searched the forum, but could not find a specific answer that helps in my case.

I have a MicroServer Gen8 with the following parts:
Celeron CPU (to be upgraded to a Xeon E3-1220 or E3-1230, if I can get one),
8 GB ECC RAM,
2 x Crucial M4 256GB drives,
1 x Seagate Constellation ES 2TB drive,
1 x Seagate Constellation CS 2TB drive,
8GB SD card.

I do not have an ODD, so I can fit another SSD there if required.

I want to create an ESXi 6 home lab server that will also handle home NAS duties using one of the VMs. The things I am planning might be overkill for my needs, but I also want to learn in the process.

Looking at the parts I have, how should I use my drives to get the best performance and reliability? With so few drives, is ZFS even worth playing with? The simplest solution would be 2 x RAID1 volumes from the 2 SSDs and 2 HDDs I have: SSDs as the VM datastore and HDDs connected to a VM running Xpenology or another NAS platform, right?

But if sharing the storage back to the hypervisor is considered, how should I set that up? Let us assume I can add another SSD to the system using the ODD SATA port.
 
What will work best:

- Add 8 GB RAM, a Xeon and a SATA DOM like http://www.supermicro.com/products/nfo/SATADOM.cfm
or a 30GB+ SSD to boot ESXi, or at least as a local datastore for the base storage VM

- Add a second disk controller/HBA like an IBM M1015 or similar, dedicated to ZFS storage

- Install ESXi to a USB stick or the SATA DOM/SSD

- Install a ZFS storage VM to the local datastore with pass-through of the storage HBA,
e.g. a mirror of the SSDs for VMs and a mirror of the 2TB disks

You can download and try my ESXi template with a ready-to-use ZFS server based on OmniOS:
http://www.napp-it.org/doc/downloads/napp-in-one.pdf

Share the pools via SMB and NFS, and use the NFS storage for ESXi to place other VMs.
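
A minimal sketch of that pool layout and sharing, as seen from inside the OmniOS storage VM; the pool names and device names (c2t0d0 etc.) are placeholders, so check the output of format for your actual disks:

Code:
# create two mirrored pools: SSDs for the VM datastore, HDDs for NAS data
zpool create ssdpool mirror c2t0d0 c2t1d0
zpool create tank mirror c2t2d0 c2t3d0

# share a NAS filesystem via the Solarish kernel SMB server
zfs create -o sharesmb=on tank/media

# share a filesystem via NFS for ESXi; the root= mapping (the subnet is
# an example) is usually needed so ESXi can write as root
zfs create ssdpool/vmstore
zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 ssdpool/vmstore

Then mount ssdpool/vmstore in ESXi as an NFS datastore and place the other VMs there.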


A cheaper but slower option that works without a Xeon and the dedicated HBA:
- use ESXi physical RDM to offer single disks to your storage VM
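
A minimal sketch of creating such a physical-mode RDM from the ESXi shell; the device identifier and datastore/VM paths below are hypothetical, so substitute your own:

Code:
# list the attached disks to find the device identifier
ls /vmfs/devices/disks/

# create a physical-mode (-z) RDM pointer file on the local datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST2000NM0033_____ /vmfs/volumes/datastore1/storagevm/hdd1-rdm.vmdk

Afterwards, add the resulting .vmdk to the storage VM as an existing disk.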
 
I had to rewrite the whole post. So annoying... I got the "you have been logged out" message.

Thanks a lot for your response.

1. I will definitely buy a Xeon CPU.
2. Why another controller? Isn't the B120i supported by VMware with the dedicated HP ISO?
3. 2 x mirror, got it; SATA DOM, got it. But with the setup you suggested, the main VM hosting the storage for all other VMs is not redundant. I could do a backup to a USB stick or something, but I am not sure that is a good approach. Are the ZFS volumes recoverable without the host system? I have no experience with that; if you would be so kind as to elaborate.
4. I am using RDM now for testing. I want to do something more advanced.
 
2.
You need the B120i to boot ESXi and for a local datastore.
Professional storage requires that your ZFS storage OS has direct access to the storage hardware.
This requires pass-through of a second HBA with the disks, as you cannot use the B120i for boot and pass-through at the same time.
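
To identify the second HBA for pass-through, you can list the host's PCI devices from the ESXi shell (the grep pattern is just an example; the pass-through toggle itself is set in the vSphere Client):

Code:
# list PCI devices on the host and look for the HBA, e.g. an LSI-based card
esxcli hardware pci list | grep -i lsi

After enabling pass-through and rebooting, add the HBA to the storage VM as a PCI device.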

3. After a crash, a storage VM reinstall plus a pool import is done within five minutes.
You may use an rpool mirror, but this is not really needed for basic storage.

4. Physical RDM is ok for testing. You should even be able to import such a pool on a barebone system.
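
A minimal sketch of that recovery path; the pool name tank is a placeholder:

Code:
# if the old VM is still reachable, cleanly detach the pool first
zpool export tank

# on the freshly installed storage VM (or a barebone OmniOS box),
# list importable pools found on the attached disks, then import
zpool import
zpool import -f tank    # -f only if the pool was not exported cleanly

# verify pool health afterwards
zpool status tank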
 
One more quick question, if you do not mind.
Do you know any cheap rack-mount model (new or used) with full ESXi 5.5 support, capable of running your proposed virtualized setup with RAID1 for boot/base storage and an additional controller for ZFS? Maybe you have some preferences for such a unit.
 
Adaptec ASR-3405/128MB TCA-00288-01-B SATA/SAS -
is that RAID card a good replacement for an IBM M1015?
It is almost two times cheaper.
We are still considering a configuration with ZFS and pass-through.
 
No, the Adaptec is a bad choice for ZFS
- like any hardware RAID adapter.

An additional problem is that, at least under Solarish, driver support for Adaptec is poor.

The Adaptec may be a good choice for a local RAID-1 ESXi datastore.

A Dell H200 (an LSI-based HBA, like the M1015) would be the better replacement.
 
Thank you for your advice, again.

Yeah, I have found a Dell T110 with a Xeon E3-1220, 4 GB ECC (which I will sell), a Dell PERC H200 EC71 (so the good one) and a 300GB 15k SAS drive for ~219 USD :) It seems to be the best option. I will either sell that SAS drive and buy 2 x SSD, or buy another 15k SAS drive. The questions are:
Can I have both SAS and SATA drives in the MicroServer Gen8 enclosure?
15k drives are loud, so I would prefer SSDs. Can ZFS stop the spindles of unused drives? E.g. if there were no access to the ZFS Z1 storage based on two SATA drives, could the drives remain spun down until they are accessed? If yes, then going for SSDs is a better option.
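
For what it is worth, spin-down is not a ZFS feature but an OS power-management one. On an illumos-based storage VM such as OmniOS it can be configured in /etc/power.conf; a minimal sketch, with a hypothetical physical device path (find yours via ls -l /dev/dsk):

Code:
# /etc/power.conf: spin the disk down after 30 minutes of inactivity
device-thresholds    /pci@0,0/pci103c,330d@1f,2/disk@1,0    30m

# apply the changed settings
pmconfig

One caveat: as long as a pool is imported, ZFS touches its disks periodically, so drives in an active pool may never stay spun down for long.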
 
Sorry to dig up my own thread, but my T110 has just arrived and my situation has changed: I now have a place to put a full-size tower server, so I have started to think that maybe I will not scrap that T110 after all. It is ESXi 5.5 compatible, with its fakeraid for booting the hypervisor and storing the ZFS pool host VM, and it has more space for drives, so I could even do a mirror using 2 x 240GB SSDs plus 3-4 x 300GB SAS drives in RAIDZ/RAID10. I am just not sure whether I can add SATA drives to one 4-port connector and SAS drives to the other. Or should I just sell this SAS speed demon and buy SATA drives as planned before? I believe a mirror of SATA drives is significantly slower in random reads than a mirror of SAS drives, right? Under the ZFS file system and using the H200 HBA, of course.
 