2 gamers, 1 box?

travm

SO needs a new system.
Looking at moving her GPU into my machine and using VMs with hardware passthrough to avoid building another box. Less clutter, etc. Also much cheaper to have one honking system than two mid-range.
How-to's on this seem few and far between, focus mostly on different software configs, and are not very thorough.

What software?

Looking at Unraid, but its website is so full of buzzwords and its online documentation is extremely lacking.
 
More research suggests Unraid is not the correct software.
Possibly this is just a bad idea.
 
You're going to spend way more time/money on this than you would ever spend maintaining two separate systems. There are so many gotchas getting consumer hardware to work in a VM with hardware passthrough.

Unless you already bought Xeons plus a Quadro, it's a waste of time.
 
LTT did several VM-based gaming videos. Start with one of those to see what it entails. ^^^ That is probably pretty bang on.
This:
 
Yea, the issue is going to be finding an affordable hypervisor that will work with the hardware you have. Not to mention other needed items... like a USB port card so your GF can have her headset, keyboard, mouse, streaming camera, and other assorted hardware in there. You'll likely want a good Threadripper system to do it right, with a minimum of 16 cores / 32 threads. Oh, and some good solid memory, maybe on the order of 48 or 64 gig. Once all of that is in place you can then look at hardware compatibility with your other devices.
1. What can be shared by your hypervisor. At a minimum:
a. Storage pool. Probably a couple of large SSDs in RAID 0, or if you're feeling spicy RAID 10 (4 disks with redundancy and speed, but only 2 disks of capacity). You'll need an OK RAID controller so you're not stuck trying to shoehorn software drivers into your hypervisor.
b. Network. If you have enterprise-grade NICs it shouldn't be an issue. One NIC can probably suffice, as you're not going to have redundant networking built in.

2. Things you will need assigned to your VMs:
a. USB ports. Plan on a minimum of 4 high-speed ports.
b. Video cards (including video output from said card). A quick way to check what your board can actually hand off to a VM is sketched below.
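Whether those USB controllers and video cards can be assigned per-VM comes down to how the board splits its IOMMU groups, since a device generally has to be passed through along with everything else in its group. A minimal Python sketch, assuming a Linux host with the IOMMU enabled in the BIOS and on the kernel command line (intel_iommu=on or amd_iommu=on):

```python
#!/usr/bin/env python3
# Rough sketch: list every IOMMU group and the PCI devices inside it, so you
# can see whether the GPU / USB controller you want to hand to a VM sits in a
# group by itself.
import glob
import os
import subprocess

groups = glob.glob("/sys/kernel/iommu_groups/*")
for group in sorted(groups, key=lambda p: int(os.path.basename(p))):
    print(f"IOMMU group {os.path.basename(group)}:")
    for addr in sorted(os.listdir(os.path.join(group, "devices"))):
        # `lspci -s <addr>` prints a one-line description of the device
        desc = subprocess.run(["lspci", "-s", addr],
                              capture_output=True, text=True).stdout.strip()
        print(f"  {desc or addr}")
```

If the second GPU shares a group with the chipset or another controller, that's your first gotcha to solve (different slot, ACS workarounds, etc.) before any hypervisor choice even matters.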

The issue isn't whether consumer hardware CAN do this... it's finding actual driver support for said hardware in a hypervisor that will let you do it.

If you want to shell out for something like an MX 100 or better, you can do it off of one card, but then you will need to look into thin clients, and that's a pain in the arse that quickly becomes not worth it.

LTT did a video getting 6 workstations on a single setup... it's not easy.
 
Think I'll give KVM a shot when I get my new drive. Not expecting it to work.
 
Unraid has lots of documentation and support: their own site and docs, YouTubers, forums, and Reddit. If it looked more daunting than KVM and other hypervisors, then you're trying to run a marathon without training. Running a powerful machine that has to support two virtual machines plus some host duties is going to disadvantage both systems against the performance of two mid-range boxes. Your thought that it would be cheaper and easier is a common misconception.

There is absolutely no "it just works" plug & play solution for your request. You would need the same RAM as two mid-range systems but, broadly speaking, only have access to 90% of it. Your OS drives would likely not be available to the VMs as bare metal (a safe assumption unless your board topology is perfect). The processor cores would all, or at least core 0, be handling requests from the hypervisor, hindering performance compared to bare metal. Oh, and I'm not even going to broach the issue of USB passthrough yet.
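The usual mitigation for that last core point is CPU pinning: keep the first core or two for the host and pin the VM's vCPUs to dedicated cores. A rough Python sketch that just prints the virsh commands; the domain name "win10-gamer" and the core split are made-up examples, so check your real topology with `lscpu -e` first:

```python
#!/usr/bin/env python3
# Sketch: pin a gaming VM's vCPUs to dedicated host cores so the hypervisor
# isn't stealing time from the game. Adjust the lists for your own CPU.
HOST_CORES = [0, 1]                 # left alone for the hypervisor / host OS
GUEST_CORES = [2, 3, 4, 5, 6, 7]    # handed to the gaming VM, one per vCPU
DOMAIN = "win10-gamer"              # hypothetical libvirt domain name

print(f"# host keeps cores {HOST_CORES}, VM gets {GUEST_CORES}")
for vcpu, core in enumerate(GUEST_CORES):
    # virsh vcpupin <domain> <vcpu#> <host-cpu>
    print(f"virsh vcpupin {DOMAIN} {vcpu} {core}")
```

It won't make the VM bare metal, but it keeps hypervisor housekeeping off the cores the game is actually using.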

Here’s the thing with virtualization...

Everyone who does it has heard these concerns before and said "Fuck it" before making their first VM. Then they learned how to make their next VM better, and the next, and so on. It won't be as responsive and fast as bare metal, but it's worth trying.
 
I have had multiple PCs running as VMs with graphics cards passed through, using Unraid and a Threadripper 1950X, in the past. It really wasn't too difficult and they ran fine. Not quite bare-metal speeds, but close.

Right now I am running my primary machine as a VM on Unraid with a 3900X/64GB RAM: 6 cores, a GTX 1070, 32GB of RAM, and a 512GB NVMe passed through to the VM, along with one of my board's USB controllers. It runs fine for everything I do.
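For the USB side of a setup like that, the trick is working out which of the board's controllers owns the physical ports you care about, since you pass through the whole controller rather than individual ports. A small Python sketch (any Linux-based host with sysfs, Unraid included) that maps each attached USB device to its controller's PCI address:

```python
#!/usr/bin/env python3
# Sketch: map each attached USB device to the PCI address of the controller it
# hangs off, so you know which controller to hand to the VM.
import glob
import os
import re

PCI_ADDR = re.compile(r"[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]")

for dev in sorted(glob.glob("/sys/bus/usb/devices/*")):
    product_file = os.path.join(dev, "product")
    if not os.path.isfile(product_file):
        continue  # interface nodes etc. have no product string
    with open(product_file) as f:
        product = f.read().strip()
    # the resolved sysfs path runs through the owning controller's PCI address
    matches = PCI_ADDR.findall(os.path.realpath(dev))
    controller = matches[-1] if matches else "unknown"
    print(f"{controller}  {product}")
```

Plug the keyboard/headset into different ports and re-run it to see which controller they land on.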

It is not cheaper or easier than maintaining two systems, but it is a fun thing to play around with. The problem comes when there is an issue and the SO's system goes down along with yours, or when you want to upgrade the hardware and the SO wants to be on their 'machine'.
 
What I was hoping was that a Ryzen 3800 or 3900 might be enough. It's a drop-in for my current mobo, and if I upgrade the RAM to 32GB I can eliminate a box while possibly still affording to upgrade my GPU.

I think, however, I'll treat it as a fun experiment and not start out with "this will replace a machine." That way, if it's not fun and the experiment sucks, it's just swapping some cables and everything is as it was.

Currently the SO is on an FX 8320E, which badly needs replacing. Just working through options really.
 
Lots have tried it; it's not something I would call production-ready. The issue is that when you just want a working computer, this isn't a solution for that. From the many times I've tried it, yes, Unraid is by far the easiest and most likely way to succeed. But it costs money if you want to run it long term. So take that money and just buy another board and spring for another CPU. You probably already have the rest of the components to build two complete systems anyway.
 
Not to mention, every time something f*cks up, she'll be pissed as hell and ask, "Why can't I just have something that works????"
 
I just let my SO have my hand-me-downs.
It's a great excuse to upgrade yourself more often. ;)
Also I like having two boxes to mess with, usually I get hers running as absolutely *stable* as possible and I do all the fun overclocking and tweaking with mine.

She's currently running an R7 1700, a GTX 1070, and 32GB of RAM with a 1TB NVMe drive. She only has 1080p monitors and it's more than enough for us to co-op Monster Hunter: World, Borderlands 3, Divinity 2, etc.

If you wonder why I gave her 32GB of RAM, it's because she usually has over 1000 tabs open at a time in Chrome.
That's not an exaggeration.
It gives me anxiety just looking at her browser.
 
I love the term WAF. Wife Acceptance Factor. She doesn't care about the funky, fun sh*t I do. She just 'wants it to work!'.
 
The main issue with this is that all IT projects are generally conceived for a technical reason.

For virtualisation this would generally be hardware consolidation, power, ease of administration, ease of backup/recovery, etc.

It's a struggle for me to see how a two-gaming-PCs-in-one-chassis setup is worthwhile.

Hardware consolidation - yes, to an extent, although you need to duplicate graphics cards and you will have hypervisor and thermal overheads compared to two separate CPUs in their own chassis.
Power - yes, to an extent, although the rig will consume more power for single-user use than a separate PC would.
Ease of administration - hell no.
Ease of backup/recovery - depends; more likely with a commercial hypervisor.

Oh and the most important decision factor of them all.

<Critical Hit> Wife attacks for 384 Why the hell doesn't my PC work again Damage
 
Not to mention, every time something f*cks up, she'll be pissed as hell and ask, "Why can't I just have something that works????"
Once you manage to set it up and get it working, what are the usual things that make it break afterwards?
 
Are you both playing at the same time? Might be easier to just share the same computer the old-fashioned way: time division.

Otherwise, having gone through the VFIO method with my 3400G + GTX 970, I'd not recommend it for anything that has to "just work." I could simultaneously use the IGP for the base OS and the GPU for a Win10 VM, but it was just a lot of work for something that is somewhat poorly described. Lots of "follow this guide and execute all of these commands" without any real explanation. Sure, it works, but it often changed things that I didn't expect, such as GRUB failing to load several kernel branches for a couple of months, due to some changes made to enable the VFIO VM (specifically to keep the GPU reserved for the VM). I did get to really learn how to chase down documentation and delve into the Arch wiki, at least.
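For reference, the "keep the GPU reserved for the VM" part usually boils down to handing the card's vendor:device IDs to vfio-pci at boot. A hedged Python sketch that only prints the lines you would then add by hand; the "NVIDIA" filter, the intel_iommu/amd_iommu choice, and the GRUB vs modprobe.d location all depend on your card, CPU, and distro:

```python
#!/usr/bin/env python3
# Sketch: pull the GPU's vendor:device IDs out of `lspci -nn` and print the
# usual vfio-pci binding hints. Nothing is changed on the system.
import re
import subprocess

lspci = subprocess.run(["lspci", "-nn"], capture_output=True, text=True).stdout

ids = []
for line in lspci.splitlines():
    if "NVIDIA" in line:  # grabs the GPU and its HDMI audio function
        m = re.search(r"\[([0-9a-f]{4}:[0-9a-f]{4})\]", line)
        if m:
            ids.append(m.group(1))

if ids:
    id_list = ",".join(ids)
    print("# kernel command line (e.g. GRUB_CMDLINE_LINUX_DEFAULT):")
    print(f"intel_iommu=on iommu=pt vfio-pci.ids={id_list}")
    print("# or /etc/modprobe.d/vfio.conf:")
    print(f"options vfio-pci ids={id_list}")
```

It's exactly this kind of boot-time change that broke my GRUB setup for a while, so keep a known-good kernel entry around when you start fiddling.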

Keyboard and mouse were sometimes finicky in passthrough. I learned that n-key-rollover keyboards and the RGB effects controllers on these devices (hard to avoid without excluding a lot of mice) often present themselves as multiple devices, and even directly binding those devices to the passthrough machine was not very effective (it would frequently fail to actually pass through to the VM). Though I tried all of this without using a PCIe USB controller card (ITX build). Some GPUs have built-in USB-C ports, which may make this easier. In the latter two cases, one would just pass through the entire IOMMU group, which apparently vastly simplifies things.
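The "one keyboard is really several devices" problem is easy to see for yourself by grouping the entries in /dev/input/by-id by physical device. A small Python sketch (Linux host assumed; the name-splitting is just a heuristic):

```python
#!/usr/bin/env python3
# Sketch: group /dev/input/by-id entries by physical device, to show how one
# "gaming" keyboard or mouse exposes several event nodes (extra keys,
# media/consumer-control endpoints, etc.).
import os
from collections import defaultdict

by_id = "/dev/input/by-id"
groups = defaultdict(list)

if os.path.isdir(by_id):
    for name in sorted(os.listdir(by_id)):
        # names look like "usb-Logitech_G502-if01-event-mouse"; strip the
        # trailing "-ifNN"/"-event-*" parts to group by the physical device
        base = name.split("-event-")[0].split("-if")[0]
        groups[base].append(name)

for base, nodes in groups.items():
    print(f"{base}: {len(nodes)} input node(s)")
    for node in nodes:
        print(f"  {node}")
```

Anything with macro keys or RGB control tends to expose two or three nodes, and every one of them has to make it into the VM, or you pass the whole USB controller and sidestep the problem.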

It is a lot of work for something that's coming under scrutiny by at least one major anticheat developer. I am only aware of one YouTuber who regularly uses a VFIO VM setup, and that person has stopped playing one of their favorite games due to the anticheat making it very hard to play (SomeOrdinaryGamers; Rainbow 6 Siege). In the end, I went back to running two systems. Do I wish we could easily provision our own "minicloud" to share the resources of a powerful workstation? Yes. But even that is quite expensive from a software POV (for Nvidia) and a hardware POV (for AMD and for Nvidia). Intel might change things in the future, so here's to that. It's basically the only reason I'm even interested in an Intel discrete GPU.
 
When I get around to it I'm going to try it, although I'm going to install Linux and use KVM for both Windows installs.

Not expecting it to work
 
<Critical Hit> Wife attacks for 384 Why the hell doesn't my PC work again Damage

Even further than this - there are a number of hardware issues that can bring down your entire Shared PC.

If you have two separate PCs, the likelihood of both of them being down is incredibly low.

So, would you rather have to share a PC for a week, or would you rather have to wait that week while you troubleshoot a motherboard/RAM/PSU/processor failure on your phone, and then replace the part?

The only case where it makes sense for home is if you have more money than sense (and you are just looking for something crazy to do with your 34th PC).

Any cost savings you see from the reduced hardware count are easily eaten up by the added cost of the hypervisor and the higher premiums to unlock professional hardware features.
 
In case you need it, here's a walkthrough I did. It should still be helpful; it's a little old now, but I cover quite a bit. I probably should do it over with ZFS, which is what I use now.
https://hardforum.com/threads/guide...2-ubuntu-gnome-from-beginning-to-end.1862980/

I've had my all-in-one for quite some time. It works reliably, but there is one thing that's annoying about it: it's always on! A normal PC sleeps, etc., but an all-in-one stays on because it has to be available to anyone who needs it. So keep that in mind. Other than that, the experience is amazing, as you can have one box power an entire home if you do it right, especially now that cores are far cheaper than when I first did this. I wouldn't listen to those who say you'll spend tons of money. I never really did, other than for hard drives, but that came later and was a totally separate project.
 