virtual nas silliness

honegod

I have an ITX machine that I use to data hoard; it currently runs four 14TB spinners, but I want to add another four.
The consensus is that I want a separate NAS box for the drives, but I don't.
So it occurs to me that the main advantage of the NAS box is the different OS, which allows a better filesystem and such on the HDDs.

So could I use a VM to get Unraid-like stuff on my Win10 hardware?
A VM NAS to run the drives, one that Windows can access directly, skipping the network silliness?
 
I'm not sure I understand the point of an ITX box with room for eight 3.5" hard drives, but I'll let that go, because I think I saw your other thread.

Personally, I'd try to flip your plan and run the NAS OS as the host, with Windows as a VM, if you can convince your video card to work with PCI passthrough. That way you can store your Windows hard drive as part of your NAS (it could be a ZFS volume, with snapshots, if you like snapshots, etc.).
 
flip your plan
Just the sort of perspective I am asking for.
I have NO understanding of virtual machines.
This 'plan', more like a nebulous notion, is to get my current operating system to be able to use the HDDs with RAID-y type stuff that Windows does not allow.
One of my 'folders' is too big to fit on a single 14TB drive.
A second is growing rapidly.
So RAIDing them to allow big folders is the main notion here.
All the cool RAID stuff needs a not-Windows OS.
So, a NAS to run the other OS and the drives, with my computer using the drives over a network.
That's not happening.

PCI passthrough
An example of my ignorance.
I assume there would be some problems with Windows not being able to use the HDDs formatted to the virtual NAS RAID system.
This sounds like the same sort of problem for your inverted solution, sort of, kinda.
The motherboard in use is an Asus Strix Z490-I Gaming, so it is most unlikely to be set up for server-style trickery, unless I could somehow use the unused RGB controls it certainly IS fully equipped with for that?
 
PCI passthrough requires some BIOS/CPU support, but is otherwise painless, I think. I'd have to look it up, but it's some virtualization extension, something to do with the IOMMU, and maybe another. It should be enabled on AMD by default, but dunno about the other stuff.
 
Well... if Windows is the host and the other OS is the guest, maybe you run Windows off an M.2 or something and tell it not to touch the spinning drives; then you can use your VM software of choice and tell it to give the guest full access to the drives. You can pass individual drives, or maybe the whole SATA controller (via PCI passthrough, described later).
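
Very rough sketch of that "give the guest the raw drives" idea, in Python on the Windows host. The disk number and path are made up, and the VBoxManage raw-VMDK syntax is from memory and varies by VirtualBox version, so treat this as an outline, not a recipe:

```python
import subprocess

# List the physical disks so you know which \\.\PhysicalDriveN numbers belong
# to the spinners (Index/Model/Size are standard Win32_DiskDrive fields).
print(subprocess.run(
    ["wmic", "diskdrive", "get", "Index,Model,Size"],
    capture_output=True, text=True, check=True,
).stdout)

# Hand one spinner to the guest as a raw-disk VMDK. DISK_NUMBER and the output
# path are placeholders -- check the VBoxManage syntax against your VirtualBox
# version's docs before running anything.
DISK_NUMBER = 1
subprocess.run([
    "VBoxManage", "internalcommands", "createrawvmdk",
    "-filename", r"C:\VMs\spinner1.vmdk",
    "-rawdisk", rf"\\.\PhysicalDrive{DISK_NUMBER}",
], check=True)
```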

If the NAS OS is the host, you'd (maybe) have one big array with a ton of storage, plus a file (more or less) that is the hard drive for Windows; you'd tell your VM software to provide that file to the Windows guest as its only storage, and then use SMB (probably) for your media. The networking would be internal to the machine, so faster/easier (I hope) than a separate machine. The only tricky bit is that VM software doesn't make for good video performance, so the trick is you want the guest to talk to the GPU as directly as possible; enter PCI passthrough.
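
For the flipped version, the Windows "drive" can literally be a ZFS volume on the pool, so it inherits snapshots and replication. A minimal sketch; the pool/dataset names and size are placeholders, not anyone's actual setup:

```python
import subprocess

def zfs(*args):
    # Thin wrapper around the zfs command-line tool.
    subprocess.run(["zfs", *args], check=True)

zfs("create", "-p", "tank/vm")                    # parent dataset for VM disks
zfs("create", "-V", "200G", "tank/vm/win10")      # 200 GB zvol = the Windows guest's "drive"
zfs("snapshot", "tank/vm/win10@clean-install")    # something to roll back to if Windows breaks
```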

PCI passthrough is powered by the IOMMU (Intel calls it VT-d; AMD has some other name for it), which needs to be enabled in the BIOS but I think is supported on your motherboard (the interwebs say only when BIOS boot (CSM) is disabled). It lets the host OS set up direct access to PCI cards from the guest OS; then you can hopefully run the unmodified drivers and get nearly full performance. There are some tricks and traps sometimes, because virtualization is a 'server feature' and GPU makers sometimes put in checks to see if you're running the driver under virtualization and refuse to work... I haven't done any real work with this, but it's a thing; forums should be able to help you. Sometimes you have to tell the guest OS it's a different card (which is easy to do if you know what to set it to; when I briefly used PCI passthrough, there was a box to set what the guest saw for PCI vendor/device and sub-vendor/device IDs), and sometimes you have to convince the guest OS it's not living in a virtual world, which can be trickier, but not too hard either AFAIK.
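
If you end up on a Linux-based host, here's a quick way to sanity-check that the IOMMU is on and see how devices are grouped (a card can only be passed through together with its whole group). Nothing here is specific to this board:

```python
from pathlib import Path

# /sys/kernel/iommu_groups is populated only when the IOMMU (VT-d / AMD-Vi)
# is enabled in firmware and the kernel.
groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir() or not any(groups.iterdir()):
    print("No IOMMU groups found: enable VT-d / AMD-Vi (IOMMU) in the BIOS.")
else:
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        devices = [d.name for d in (group / "devices").iterdir()]
        print(f"group {group.name}: {', '.join(devices)}")
```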
 
if Windows is the host and the other OS is the guest, maybe you run Windows off an M.2 or something and tell it not to touch the spinning drives; then you can use your VM software of choice and tell it to give the guest full access to the drives.
I DO run the OS off an M.2; the spinners are all data drives.
This is pretty much exactly what I had in mind, but I see/saw no way to make the spinners invisible to Windows, although I figured that having them formatted in a non-Windows-supported format would do pretty much that.
I want the drives invisible to Windows to protect them from virus attack, which cost me a couple of drives in the Win98 era.
 
If you don't want Windows to see it, you can just change the partition type. IIRC there are types for virtual machine volumes, but you could just as well call it a Linux data partition or some such, and I'm pretty sure Windows will ignore it.

The way to change the partition type will differ depending on whether it's an MBR or GPT partition scheme.
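
A hedged example of the GPT case using sgdisk from the Linux/NAS side; the device path and partition number are placeholders, and 8300 is sgdisk's "Linux filesystem" typecode. This only rewrites the type tag, not the data:

```python
import subprocess

DEVICE = "/dev/sdb"      # placeholder: point this at the right spinner
PARTITION = 1            # partition number on that disk

# Tag the partition as a Linux filesystem so Windows shows it as unknown/RAW
# and leaves it alone.
subprocess.run(
    ["sgdisk", f"--typecode={PARTITION}:8300", DEVICE],
    check=True,
)
# For an MBR disk the equivalent move is changing the partition type byte
# (e.g. to 0x83) with fdisk or parted instead.
```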
 
If you don't want Windows to see it
Not the main point; that would be a side benefit of having them partitioned for ZFS or the like.
Which is the main benefit of a NAS: better file management than Windows, as I understand it.
I just want to eliminate the second computer and the network.
 
The whole virtual machine thing is to get my Windows install to use stuff like ZFS and Unraid.
Because, though a 20TB drive is available and would fit the whole 'big folder', I have nine of the 14TB drives on hand and want to use them.
Hopefully by the time they need replacement, 50TB M.2 drives will be cheap.
 
Interesting concept. I'm not familiar enough with filesystems like ZFS to know if any configuration is required to bring RAID'd drives back online in the worst-case event you describe, say ransomware encrypting everything (including, presumably, the VM image managing it). Though I guess you'd be making separate backups of the VM anyway.

One downside that pained me when looking into alternative filesystems (e.g. ZFS, ext4) is that they don't support migrating date-created timestamps from NTFS in a native way, from everything I've read. While those (and various other) filesystems support their own form of date-created/birth-time timestamps, they can't be modified the way one can on NTFS, so they can't be made to match arbitrary dates from the original files (at least not without some serious/overly complex inode finagling, or insane workarounds like literally changing the system date before re-writing the file again :p). They only set a fixed creation date: the moment their own filesystem first wrote the file.

Linux now even has built-in support for NTFS, but it maps such created timestamps to extended attributes, which are separate metadata. I wish there were a cross-platform date-created timestamp that was modifiable like NTFS's (it truly baffles me that there isn't, tbh).
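
A tiny Python illustration of that limitation, just to make it concrete: access/modification times are settable through the standard APIs, but there is no portable call that sets the creation ("birth") time, and it isn't even exposed everywhere:

```python
import os
import time

path = "example.bin"                      # throwaway file just for the demo
with open(path, "wb") as f:
    f.write(b"hello")

# atime/mtime can be rewritten to any arbitrary date...
yesterday = time.time() - 86400
os.utime(path, (yesterday, yesterday))

# ...but birth time is read-only, and only present where the platform exposes it.
st = os.stat(path)
print("mtime set to:", time.ctime(st.st_mtime))
print("birth time  :", getattr(st, "st_birthtime", "not exposed on this platform"))
os.remove(path)
```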

In terms of storage for an ITX, have you considered just using an HDD cage externally? There are generic 5-bay 3.5" cages with hotswap-ish glides that have mounts for a 120mm fan, without any backplane to block the airflow (an issue with most pre-built external hotswap drive enclosures). If you had an M.2 slot free and room in the case, you could then use an M.2-to-PCIe adapter with a SAS HBA card and a couple of SAS-to-SATA breakout cables to connect the SATA drives (or, if you had enough SATA ports on the motherboard, just use those instead). It's something I'm considering myself atm.
 
Though I guess you'd be making separate backups of the VM anyway.
That would be a good use for the second M.2: storage of OS backups.
That is how I am using it now; it has a full bootable copy of my C drive, and adding the VM would just make the copy bigger.

3.5" 5x bay cages with hotswap-ish glides that have mounts for a 120mm fan
I have two 4 drive cages like that. with backplanes, only most of the airflow is blocked by the circuit board.
I read several reviews about the drives being smoked by the poor quality control on the board assembly in several of these units.
I looked at the boards and both looked sketchy.
I have some drives I could sacrifice but I want no risk to the computer I plug it into.
so they gather dust.
I COULD pull the backplane and run cables direct to the drives but the fan complicates drive swapping then.

Regarding how to plug the extra four drives in:
https://hardforum.com/threads/drive-adaptor-silliness.2021323/
 
I have two 4-drive cages like that, with backplanes, only most of the airflow is blocked by the circuit board.
I COULD pull the backplane and run cables directly to the drives, but then the fan complicates drive swapping.

This is the variety I was looking at. According to reviews it only actually houses four drives, but the fifth slot is free to put cables/whatever into. The grille at the front is where the fan mounts.

(photo: 5-bay HDD cage with front fan grille)

Then Molex-to-4x/5x SATA power daisy-chain adapters are available where each plug is just an inch or so from the next, for more streamlined cable management. The ones I was looking at are the non-molded kind, so without the risk of shorting that some molded single Molex-to-SATA adapters are criticized for.

Similar to this (SilverStone CP06), except with Molex powering it instead of SATA. Molex outputs from the PSU have a 10-11A max current from what I've read, so they can handle the power spike when HDDs first spin up (around 2-3x their regular draw).
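
Rough back-of-envelope version of that, with assumed per-drive numbers rather than measurements:

```python
# Assumed worst-case 12V spin-up draw per 3.5" drive; typical running draw is
# roughly 0.5-0.8A, and spin-up is around 2-3x that.
DRIVES = 4
SPINUP_AMPS_12V = 1.8
MOLEX_LINE_LIMIT = 10.0      # lower end of the 10-11A figure quoted above

total = DRIVES * SPINUP_AMPS_12V
print(f"{total:.1f}A peak vs {MOLEX_LINE_LIMIT:.0f}A limit ->",
      "fits" if total <= MOLEX_LINE_LIMIT else "too tight")
```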

(photo: SilverStone CP06 SATA power daisy-chain adapter)
 
Storage virtualization: Storage can be virtualized by consolidating multiple physical storage devices to appear as a single storage device. Benefits include increased performance and speed, load balancing and reduced costs. Storage virtualization also helps with disaster recovery planning, as virtual storage data can be duplicated and quickly transferred to another location, reducing downtime.
 
I currently run an all-in-one box with room for eight 3.5-inch drives, running on ESXi 6.7. I pass the HBA directly to my FreeNAS VM, and I run my Plex server from the same box via Windows, passing my Quadro P400 card directly to that VM. Plus I run a few other VMs on the side. Go ZFS with a SAS card, pass the whole set of disks to the VM, and you're golden.
 
You lost me at "ESXi 6.7"; I just spent a bit reading docs.vmware pages.
I got nowhere, very Dilbert: advertising-speak for buying-department suits with a list of catch-phrasey checkboxes.
Something about virtual machines living in a virtual cloud on your system.
It seems to be aimed at several computers all running together to do lots of separate things at once, but coordinated exquisitely.

"provides physical connectivity between a host system (computer or server) and networks and storage devices"
again implying lots of other machines.
how are your drives hooked to the HBA ?
not to sata ports on the motherboard ?
to a
sas drives ? into the motherboard through PCI sas card, THEN to the HBA ?

are these all virtual cards and the drives are actually sata plugged into the motherboard ?
with VMware creating imaginary computers in a virtual cloud that exists only on this machine ?
does VMware allow you to use ZFS without linux so windows, on a virtual machine, can see and use the drives ?

I have a VERY poor visualization of your system.
Instead of Plex I use MKV and MPC-HC x64.

Quite puzzled.
 
The SAS card is an HBA
OK, I see that now.
So is the M.2 5-port SATA adapter also an HBA?
How do the motherboard SATA plugs fit into this?

So, install VMware on C and format all the SATA drives ZFS, then create a VM to install Windows to, and tell VMware to let the Win10 VM see the drives as network storage but operating at direct SATA speeds.
I could then copy the backup files to the freshly formatted ZFS volume from USB-connected NTFS drives in the Win10 VM.

Am I getting warmer?

It sounds scary, because I end up with the data that isn't backed up to NTFS drives sitting in a format that a Windows reinstallation can't use, at least until I get the whole VM system going again first.
This is exactly a big reason I am not looking at RAID in Windows: hard drives full of unusable data.

The Drive Bender trial expires soon, and I will find out if I need to use the USB backups or if the drives can be reverted.
 
A typical AiO setup with ESXi and a storage VM requires two independent disk controllers (SATA, SAS HBA, or M.2).

On one you install ESXi; the other is given to a storage VM in passthrough mode. With a SAS HBA, physical raw disk mapping of single disks to VMs is also a supported option. So with SATA and M.2 available, I would install ESXi on the M.2 disk (as NVMe passthrough often causes more trouble than SAS or SATA passthrough).

You can then boot ESXi, put a storage VM onto the local datastore on the M.2, activate passthrough for the second disk controller, and boot the ZFS storage VM.

The ZFS storage VM now has direct disk access, e.g. to the SATA disks. Create a ZFS pool and share it via NFS 3 or 4.1. Mount the NFS filesystem in ESXi and put all other VMs onto NFS. With a SAS HBA you can pass single raw disks through to VMs, or you can use ESXi virtual disks on NFS/ZFS with all the ZFS security and performance features. Share the NFS filesystem also via SMB; this allows easy VM copy/move/backup and access to ZFS snaps via Windows "previous versions". You can back up the whole NFS filesystem via ZFS replication.
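
A sketch of the "create a pool and share it" step inside the storage VM, with made-up pool/disk names; sharenfs/sharesmb are standard ZFS properties, though the exact share option strings differ between Solaris-family and Linux ZFS, so treat it as an outline:

```python
import subprocess

def run(*cmd):
    # Thin wrapper around the zpool/zfs command-line tools.
    subprocess.run(cmd, check=True)

DISKS = ["c0t0d0", "c0t1d0", "c0t2d0", "c0t3d0"]    # placeholder device names

run("zpool", "create", "tank", "raidz1", *DISKS)    # one-disk parity across the four drives
run("zfs", "create", "tank/vmstore")
run("zfs", "set", "sharenfs=on", "tank/vmstore")    # ESXi mounts this as its NFS datastore
run("zfs", "set", "sharesmb=on", "tank/vmstore")    # Windows sees the same data over SMB
```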

See my AiO howto. It is based on Solaris (native ZFS, or the free Solaris fork OmniOS). This is the most minimalistic full-featured ZFS server, with the lowest resource needs, kernel-based NFS, and multithreaded SMB. I published this idea 12 years ago, see https://napp-it.org/doc/downloads/napp-in-one.pdf
 
Boot into ESXi, then boot into a virtual machine with a "storage VM" OS to see the ZFS drives, then boot into the Windows VM.
Then run both of the other OSes from inside the Windows OS.
I am sure that the rebooting for Windows updates will be graceful.

See my AiO howto.
I got to "11.1 SMB related settings (Solaris CIFS)" and got distracted by a page that said $300 for a 1-year subscription to a program mentioned as being useful in this context.
Which shook me awake from the virtual networking dreams that were getting a little dark and scary.

Does it have a happy ending?
With a little computer dreaming itself into a mighty server cluster, with sufficient storage.
 


Watch this and see if this helps. All the software should be free. Honestly, I would do TrueNAS SCALE if I were building a new AiO NAS/VM/container server.
 
I just built a TrueNAS SCALE box following this build. No HBA is needed and it packs a lot of punch. One change I made was to use this M.2 5-port SATA card:

SilverStone Technology ECS07 5-Port SATA Gen3 6Gbps Non-RAID M.2 PCIe Storage Expansion Card, SST-ECS07

 
SilverStone Technology ECS07 5-Port SATA Gen3 6Gbps Non-RAID M.2 PCIe Storage Expansion Card, SST-ECS07



Looks good!
I like the heatsink on the processor; any idea if it gets warm when all the drives are spinning?
The one I found, at half the price of the SilverStone, has no heatsink, but does have pins for an activity LED.
This has an onboard LED? I like the remote wired light; one on the board will be pretty buried.
 


Looks good!
I like the heatsink on the processor; any idea if it gets warm when all the drives are spinning?
The one I found, at half the price of the SilverStone, has no heatsink, but does have pins for an activity LED.
This has an onboard LED? I like the remote wired light; one on the board will be pretty buried.
It's probably not needed, but it might help in high read-and-write situations if the ambient temperature is high. Plus I got mine from Amazon before it went up $20.
 