Proxmox / LXC file sharing / Quick Sync questions

I am planning to migrate my small home VM server from ESXi to Proxmox. The main objectives are to add a Windows VM with QuickSync to the server to try out BlueIris, and to use the limited amount of RAM on this system more effectively. Many of the services running in VMs now could probably fit better in LXC containers, which is why I want to move away from VMware. I have some experience with KVM / QEMU but none with LXC or Docker. I have a few questions and would like feedback on my migration plan.

Hardware is an X10SLM-F (6 SATA ports onboard), 4 x 8 GB DDR3 ECC RAM (the maximum), and a Haswell E3-1230v3. Right now, ESXi passes an H310 in IT mode through to a nas4free VM, and a 6-drive RAIDZ2 pool is attached to the H310. Three SSDs and one HDD are attached to the motherboard (an old 120 GB Plextor, a 240 GB Seagate Pro, a 400 GB Intel S3700, and a 500 GB 5400 RPM WD). All VMs boot from SSD datastores (most are on the S3700), and one VM gets an additional vmdk from the HDD for slow media storage. Currently nas4free handles NFS, SMB, and AFP shares for other VMs and for other systems elsewhere on the network, which are a mix of W10 and Mac laptops, a Kaby Lake desktop running Arch, and an Nvidia Shield.

The majority of the VMs are Linux, running various services (Plex, MythTV, Unifi, Transmission, etc.), many of which I suspect could run more efficiently in LXC containers. One VM is dedicated to iRedMail - I recognize this is major overkill for home, and I set it up only to have a local IMAP server to store email long term. When I set these VMs up, I couldn't get all of the services to play nicely together, hence the individual VMs. I use Ubuntu for MythTV and Debian for the others. Each has NFS or SMB clients configured for the nas4free shares. Finally, one VM runs xpenology, since it was quick and easy to get our IP cameras running and Synology's mobile camera app is pretty good.

What I want to do is:
  1. Boot Proxmox from a SATA SSD,
  2. Have Proxmox handle the ZFS pool (so that the H310 is no longer passed through to anything else),
  3. Run the Linux services inside individual LXCs (except maybe iRedMail),
  4. Run servers for NFS, Samba, and AFP directly on Proxmox, with shares attached (?) to the LXC containers,
  5. Pass through one SSD and the 500 GB HDD to a W10 VM,
  6. Allow both the Plex LXC and the Windows VM to take advantage of the CPU's QuickSync extension.
For step 4, I am getting confused as to "where" it is best to configure nfs-server and samba, and how best to configure storage for the LXC containers. I am thinking I can just configure them on the Proxmox host like I ordinarily would on Debian, which would provide shared directories to other networked computers. Do I then just point the containers at those shares (via their NFS or SMB clients)? Or do I use bind-mounts? I am not sure what the difference actually is. Also, some of the ZFS datasets are concurrently used by multiple clients on different OSes, in different "places" (for example, a dataset containing ripped FLACs is read-only via SMB to the laptops, read-only via NFS to the Shield, and read-write via NFS to the Plex VM and the Arch desktop where I do the ripping). If Plex is moved to an LXC, what is the best way for Proxmox to share a dataset with it?
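From what I have read so far, a bind mount would look something like this on the Proxmox host - the container ID, pool name, and paths are just made-up examples:

# bind-mount a host directory into container 101
# (assumes the pool is imported on the host and mounted at /tank)
pct set 101 -mp0 /tank/media,mp=/mnt/media

# equivalently, as a line in /etc/pve/lxc/101.conf:
# mp0: /tank/media,mp=/mnt/media

If I understand correctly, the container then sees /mnt/media as a local directory, with no NFS or SMB client involved.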

For step 5, can Proxmox share an entire attached SATA drive with a VM, or does it only share virtual disk files the way ESXi does with vmdk? Do I need another PCIe disk controller for efficient passthrough? I want BlueIris to record direct-to-disk to the 500 GB HDD for now; later I will configure it to store saved clips on the ZFS pool and replace the HDD with a big WD Purple.
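From what I can tell, Proxmox can also hand a whole SATA disk to a VM by its device ID without extra hardware, something along these lines (VM ID and disk ID are made up):

# attach the raw 500 GB HDD to VM 110 as an extra SCSI disk
qm set 110 -scsi1 /dev/disk/by-id/ata-WDC_WD5000AAAA-0000000

As I understand it, that is not true PCIe passthrough - the disk still goes through the emulated/virtio storage layer - but it avoids having a virtual disk file in the middle.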

For step 6, I think Plex in an LXC will be able to use QuickSync, whereas I am not sure Plex in a Linux VM will. For VMs, the CPU has to support GVT-g, which I don't believe Haswell does, even though that generation does support h.264 / AVC. Is there any other way to enable QuickSync in the Windows VM? The IP cameras can use h.264, so having QuickSync will lessen CPU load when viewing live or recorded feeds. QS is not a dealbreaker here since the number of cameras right now is small. Would it be easier to put Plex on the Windows VM as well?
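For the LXC side, my understanding is that getting QuickSync into the container mostly means exposing the host's /dev/dri devices to it, roughly like this in the container config (container ID made up; the exact cgroup key may differ by Proxmox/LXC version):

# /etc/pve/lxc/102.conf - allow and bind the Intel GPU render nodes
lxc.cgroup.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir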

I'd say there is a 90-95% chance I am overthinking this. An alternative plan might be to use KVM and GVT-g on the Kaby Lake desktop for the BlueIris VM and leave the VM server purely Linux. That might require moving drives around, though. TIA!
 
Hm.. I used to use Proxmox but have moved away from it. I didn't really need any VMs, just containers, so I ended up moving over to Docker completely (Plex, Minecraft server, Samba file share server, DNS/DHCP server, remote development server). I can't help with all of this, but I have set up a Samba server for my shares inside of a container. I just pass the mount point from the host OS into the container, then use the container to do the serving. This way, if I ever decide to change anything, I can just pass the same mount point and it works the same. I can easily swap out Samba with any other file server I want with minimal impact too - not sure why you would want this to be a function of the host OS. Or, prior to an update, I can make a copy of the container, so if anything fails I can just revert back.

I would think it'd be something to put into an LXC and just pass the mount into it in a similar fashion; I try not to run things on the host OS. I can swap my host OS from Ubuntu to Red Hat to Arch and everything would still just work as long as Docker functioned properly. Then if your host drive dies, you can import your LXC from your backups and there is minimal config to do on Proxmox / the host. I did have some passthrough stuff set up on Proxmox at one point, but it's been a long time, so probably not much help there honestly.
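Roughly what that looks like for my Samba container, simplified (the image name and paths here are just placeholders, not my exact setup):

# run a Samba container and hand it the host's storage mount point
docker run -d --name fileserver \
  -p 445:445 \
  -v /mnt/data:/shares \
  some-samba-image

Swapping to a different file server later just means pointing a new container at the same -v mount.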

If you are running ESXi and it's working for you, what is the reason for switching to Proxmox? I don't know what versions or whatever, but ESXi in general supports containers as well, so it's not as if you can't do what you want with what you currently have - unless there is a specific reason it won't work?

PS. My Samba server shares 3 separate share points right now, but only because I have different storage for different reasons... SSD, RAID, and a large slow platter just for backups or storage.
 
Ready4Dis, thanks. This is sort of why I liked nas4free up to this point: it is pretty lightweight, and upgrading / backups are easy. I completely agree with the idea of keeping the host OS separate from the services provided by VMs / containers, for exactly the reasons you mentioned. I guess this is why I am posing the question - by moving to Proxmox, in theory I can now leverage the underlying Debian for Samba and NFS servers, which I could not do with ESXi. The question is whether this is a good idea, and where to draw the line between host-based servers and VM / container-based servers.

I just pass the mount point from the host OS into the container, then use the container to do the serving.

How exactly does this work? Say the ZFS pool is mounted at /mnt/tank/ and the datasets lie underneath. Would I just pass /mnt/tank/dataset1 directly to the container? What if I wanted to pass one dataset to more than one container, each of which might be reading and writing simultaneously? I think that traditionally on Linux, if one directory is shared via both NFS and Samba, each can have clients attached that are writing simultaneously, correct?
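For context, by the last part I mean something like the usual standalone Debian setup, where the same directory is exported both ways (subnet and paths are just examples):

# /etc/exports - NFS export of the directory
/mnt/tank/dataset1  192.168.1.0/24(rw,sync,no_subtree_check)

# /etc/samba/smb.conf - Samba share of the same directory
[dataset1]
    path = /mnt/tank/dataset1
    read only = no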

If you are running ESXi and it's working for you, what is the reason for switching to Proxmox? I don't know what versions or whatever, but ESXi in general supports containers as well,

I had no idea ESXi did containers now - I'll have to read about this, thanks! I am running 5.5 U3, and this gets to why I am thinking about switching. It is EOL, and I have a feeling I will keep a Linux-based hypervisor up to date better than this. The other issue is RAM management. Because I pass through the H310, all of the RAM for that VM is reserved, and I am limited to 32 GB on this platform, so by trying to maximize RAM for ZFS, the other VMs are left with little to work with. I fully admit I have not done any sort of testing to know if this is really an issue. I think I allocate 24 or 26 GB to nas4free, 4 to the MythTV VM, and the rest to the others.
 
Ready4Dis, thanks. This is sort of why I liked nas4free up to this point: it is pretty lightweight, and upgrading / backups are easy. I completely agree with the idea of keeping the host OS separate from the services provided by VMs / containers, for exactly the reasons you mentioned. I guess this is why I am posing the question - by moving to Proxmox, in theory I can now leverage the underlying Debian for Samba and NFS servers, which I could not do with ESXi. The question is whether this is a good idea, and where to draw the line between host-based servers and VM / container-based servers.
Yeah, I understand; I'm still not sure what the "best" way is, lol.

How exactly does this work? Say the ZFS pool is mounted at /mnt/tank/ and the datasets lie underneath. Would I just pass /mnt/tank/dataset1 directly to the container? What if I wanted to pass one dataset to more than one container, each of which might be reading and writing simultaneously? I think that traditionally on Linux, if one directory is shared via both NFS and Samba, each can have clients attached that are writing simultaneously, correct?

With Docker (and before that, when I was using Proxmox), I simply pass the volume (/mnt/tank in your case) into however many containers I want to have access to it. So, for example, Plex has access to my /mnt/data/plex folder, while my Samba share has access to /mnt/data. I can upload files straight into the Plex library via file explorer and then just hit rescan and it picks them up. No need to worry about which one is using it; it will lock the files it is currently using and leave the rest. I also have a folder for my remote development, /mnt/data/projects, that is mounted into my remote dev server, where I can edit files within MSVC on my Windows box and then compile from the remote dev server, or I can log in to the web interface and edit the files directly in Visual Studio Code in my browser. I don't have to partition the drive or do anything fancy with it.

I am currently running my DHCP and DNS server on the host OS, mostly because I wanted to get it working first. I will be shifting these into a container at some point, as well as adding an Active Directory server (probably using Samba) in order to get logins and user permissions working on all my Windows boxes.
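Simplified, the run commands look something like this (image names and paths are placeholders, not my real setup):

# the same host folder tree, handed to multiple containers
docker run -d --name plex   -v /mnt/data/plex:/media     some-plex-image
docker run -d --name samba  -v /mnt/data:/shares         some-samba-image
docker run -d --name devbox -v /mnt/data/projects:/work  some-dev-image

Samba sees everything under /mnt/data, while Plex and the dev server only see their own subfolders.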

I had no idea ESXi did containers now - I'll have to read about this, thanks! I am running 5.5 U3, and this gets to why I am thinking about switching. It is EOL, and I have a feeling I will keep a Linux-based hypervisor up to date better than this. The other issue is RAM management. Because I pass through the H310, all of the RAM for that VM is reserved, and I am limited to 32 GB on this platform, so by trying to maximize RAM for ZFS, the other VMs are left with little to work with. I fully admit I have not done any sort of testing to know if this is really an issue. I think I allocate 24 or 26 GB to nas4free, 4 to the MythTV VM, and the rest to the others.
Yeah, I saw that they started supporting it, so I figured I'd mention it as a possibility. I'm not really sure about much else, since I haven't ever really used ESXi, so I can't be of too much more use there. I see what you're saying with RAM. I mostly used Proxmox LXC containers, where you can assign some or all of the RAM and even over-provision if you want (I can't recall all the details; it was a few years back now that I switched). Now I don't even have to assign resources with Docker... I just fire up a container and let it do its thing. I can update them, swap them out, etc. without worry, and I can export them for backups.

The nice thing about using containers this way is that they can share some data. For example, most of my containers are based on Ubuntu, so they all start with a base Ubuntu install. It only stores this once for however many containers use it, so even if the base is (just a made-up number) 300 MB, I don't have that 300 MB for all 5 containers, just the 300 MB once, shared, and then the specific packages for each container are stored separately (Docker images are built in layers like this, so you can share one or more layers between multiple containers to help consolidate space).

I'm not exactly sure how the RAM works in ESXi; I know in Docker it just uses what you need. I remember in Proxmox having to assign resources, but I think you could over-provision RAM and it'd only have an effect if all the containers combined were actually using more than you had and had to use swap space (again, I'm a bit rusty on this part).
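As a made-up illustration of the base-sharing part - two Dockerfiles that start from the same base image only store that base once on disk:

# Dockerfile for a Samba container (illustrative only)
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y samba

# Dockerfile for a Transmission container (illustrative only)
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y transmission-daemon
# the ubuntu:20.04 layers are stored once and shared; only the package layers differ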
 