yummygizzards
n00b
- Joined
- Mar 2, 2014
- Messages
- 49
I am planning to migrate my small home VM server from ESXi to Proxmox. The main objectives are to add a Windows VM with QuickSync to the server to try out BlueIris and use the limited amount of RAM on this system more effectively. In general many of the services on VMs now could probably better fit in LXC containers, which is why I want to move away from VMWare. I have some experience with KVM / Qemu but none with LXC or Docker. I have a few questions and would like feedback on my migration plan.
Hardware is an X10SLM-F (6 SATA ports onboard), 4 x 8 GB DDR3 ECC RAM (the maximum), and a Haswell E3-1230v3. Right now, ESXi passes an H310 in IT mode to a NAS4Free VM. A 6-drive RAIDZ2 pool is attached to the H310. Three SSDs and one HDD are attached to the motherboard (an old 120 GB Plextor, a 240 GB Seagate Pro, a 400 GB Intel S3700, and a 500 GB 5400 RPM WD). All VMs boot from SSD datastores (most are on the S3700), and one VM gets an additional VMDK from the HDD for slow media storage. Currently NAS4Free handles NFS, SMB, and AFP shares for other VMs and for other systems elsewhere on the network, which are a mix of W10 and Mac laptops, a Kaby Lake desktop running Arch, and an Nvidia Shield.
The majority of VMs are Linux, running various services (Plex, MythTV, UniFi, Transmission, etc.), many of which I suspect could run more efficiently in LXC containers. One VM is dedicated to iRedMail - I recognize this is major overkill for home, and I set it up only to have a local IMAP server to store email long term. When I set these VMs up, I couldn't get all of the services to play nicely together, hence individual VMs. I use Ubuntu for MythTV and Debian for the others. Each has NFS or SMB clients configured for the NAS4Free shares. Finally, one VM runs XPEnology, since it was quick and easy to get our IP cameras running and Synology's mobile camera app is pretty good.
What I want to do is:
1. Boot Proxmox from a SATA SSD,
2. Have Proxmox handle the ZFS pool (so that the H310 is no longer passed to anything else),
3. Run the Linux services inside individual LXCs (except maybe iRedMail),
4. Run servers for NFS, Samba, and AFP directly on Proxmox, with shares attached (?) to the LXC containers,
5. Pass through one SSD and the 500 GB HDD to a W10 VM,
6. Allow both the Plex LXC and the Windows VM to take advantage of the CPU's QuickSync extension.
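For step 4, my understanding is that host directories (e.g. ZFS datasets) can be bind-mounted straight into containers, rather than running NFS/SMB clients inside each LXC. A sketch of what I'm picturing, where the container ID 101 and the paths are placeholders, not my real config:

```shell
# On the Proxmox host: bind-mount a host directory into container 101
# as mount point mp0 (ID and paths are examples only)
pct set 101 -mp0 /tank/media,mp=/mnt/media

# The container then sees /mnt/media as an ordinary directory,
# no network share client needed inside the LXC.
```

Corrections welcome if bind mounts are the wrong tool here.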
For step 5, can Proxmox pass an entire attached SATA drive through to a VM, or does it only present virtual disk files the way ESXi does with VMDKs? Do I need another PCIe disk controller for efficient passthrough? I want BlueIris to record direct-to-disk to the 500 GB HDD for now; later I will configure it to store saved clips to the ZFS pool and replace the HDD with a big WD Purple.
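From what I've read so far, Proxmox/QEMU can attach a whole block device to a VM without a dedicated controller card, by referencing its stable /dev/disk/by-id name. A sketch, with the VM ID 100 and the disk ID string as placeholders:

```shell
# On the Proxmox host: attach the whole 500 GB HDD to VM 100 as a
# SCSI disk (VM ID and by-id name below are placeholders)
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD5000_EXAMPLE-SERIAL
```

If that's not considered "efficient" passthrough compared to handing over a PCIe controller, I'd like to hear why.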
For step 6, I think Plex in an LXC will be able to use QuickSync, whereas I am not sure Plex in a Linux VM will. For VMs, the CPU has to support GVT-g, which Haswell may not, even though that generation does support H.264/AVC. Is there any other way to enable QuickSync in the Windows VM? The IP cameras can use H.264, so having QuickSync would lessen CPU load when viewing live or recorded feeds. QS is not a dealbreaker here since the number of cameras is small right now. Would it be easier to put Plex on the Windows VM as well?
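On the LXC side, my understanding is that since containers share the host kernel, Plex should see QuickSync if /dev/dri is exposed in the container config. A sketch of what I think the container config additions would look like (CTID is a placeholder; 226 is, as far as I know, the usual DRM major device number):

```
# /etc/pve/lxc/<CTID>.conf additions (sketch, unverified on my hardware)
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

Happy to be corrected if Proxmox has a cleaner way to do this.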
I'd say there is a 90-95% chance I am overthinking this. An alternative plan would be to use KVM and GVT-g on the Kaby Lake desktop for the BlueIris VM and leave the VM server purely Linux. That might require moving drives around, though. TIA!