I can play VR in Qubes OS!
Qubes OS is pretty cool and succeeds at making virtual machines on the desktop unexpectedly seamless. But I'm a massive virtual reality fanboy and I need my Beat Saber! VR needs a powerful GPU and passing a GPU to a VM is harder than it should be. Thus began my quest to make my virtual reality machine a virtual reality virtual machine with Qubes.
My machine has two GPUs, one for the Qubes UI and one to pass through to VMs: An AMD Ryzen 3 3200G CPU with an integrated Vega 8 GPU and an AMD RX 5500 XT GPU. I use a Mini-ITX case so I needed a CPU with an iGPU instead of two dedicated GPUs.
Installing Qubes OS
Installing Qubes was already a small challenge because the default LTS kernel of Qubes dom0 is too old and does not support the GPUs.
I had to fix UEFI boot by commenting out
The installation stick booted but couldn't start the graphical installer because of the old kernel.
The text-based installer is broken, too.
But running the installer over VNC worked!
To cite the very helpful Github comment about VNC installation:
The text based installer is also broken (tickets #2113 #1161), but if you plug in a network adaptor, alt+tab to the terminal, use dhclient to get an IP address, use ip a to see the IP, remove /var/run/anaconda.pid, and run anaconda --vnc --vncpassword pass1234 and connect to your IP with a VNC client on port 5901, the install will work fine...
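Condensed into commands, the quoted procedure looks roughly like this (run from the installer's terminal after switching to it with alt+tab; the VNC password is of course a placeholder):

```shell
# Get an IP address on the plugged-in network adaptor
dhclient

# See which IP address we got (you'll connect your VNC client to it)
ip a

# Anaconda refuses to start while its old PID file exists
rm /var/run/anaconda.pid

# Start the installer in VNC mode; connect to <your-ip>:5901 with a VNC client
anaconda --vnc --vncpassword pass1234
```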
And it did! Apart from the LTS kernel that also breaks the GUI in the installed OS.
I needed to upgrade the dom0 kernel.
But this needs the dom0-upgrade-VMs set up, which normally happens in the GUI on first boot.
Luckily there's a text-based initial setup script at
I could then upgrade dom0 to kernel-latest and make it the default kernel.
After restarting either Qubes or lightdm, I had a fully working Qubes OS!
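Assuming the standard Qubes tooling, the upgrade itself looks roughly like this (the package names are the ones the Qubes repos use; how you select the default kernel depends on whether you boot via UEFI or GRUB):

```shell
# In dom0: install the latest kernel for dom0 and for the VMs
sudo qubes-dom0-update kernel-latest kernel-latest-qubes-vm

# On a UEFI install, the default kernel is then selected
# in /boot/efi/EFI/qubes/xen.cfg (the same file edited later
# in this post for the PCI hiding).
```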
Creating a Windows VM was a lot more straightforward than I expected; I just followed the Qubes Windows documentation: download the Windows ISO into a VM, create a new standalone VM, increase its RAM and storage to 3.7GB+ and 50GB+, and boot the ISO from the other VM. Then wait forever until the installer starts. Because it will. Eventually.
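With the qvm-* tools, those steps look roughly like this sketch (the VM names, label, sizes and the ISO path are placeholders, not what the documentation mandates):

```shell
# Create a standalone HVM for Windows
qvm-create --class StandaloneVM --property virt_mode=hvm --label red windows

# Give it enough RAM and disk for the installer
qvm-prefs windows memory 4000
qvm-prefs windows maxmem 4000
qvm-volume extend windows:root 50GiB

# Boot the installer ISO that was downloaded in another VM
qvm-start windows --cdrom=work:/home/user/windows.iso
```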
Passing through the GPU
A passed through GPU roughly goes through the following stages:
- It's added to a VM
- The GPU driver inside the VM runs stuff on the GPU
- After VM shutdown, the GPU is reset to be ready for another passthrough
The biggest, most important secret to pass through a GPU to a Windows VM is to not use an Nvidia GPU. Their drivers can detect they're running inside a VM and will throw Error 43, halt and catch fire. KVM supports workarounds, Xen and thus Qubes do not. I even tried to patch the Nvidia driver, but the error persisted.
Thus I got myself an AMD RX 5500 XT and upgraded my partner's desktop PC with my "old" Nvidia GTX 1060.
Passing through PCI devices in Qubes is pretty easy.
If they're not used by another VM, e.g. by dom0!
To prevent dom0 from grabbing the GPU, I needed to hide its PCI device at boot.
lspci in dom0 gave me the PCI addresses for my GPU: 03:00.0 for the graphics part and 03:00.1 for the HDMI audio part.
I added rd.qubes.hide_pci=03:00.0,03:00.1 to my kernel command line in /boot/efi/EFI/qubes/xen.cfg and rebooted my machine.
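The address discovery and the resulting kernel parameter can be sketched like this (the lspci output below is a mock-up with the addresses from this machine):

```shell
# Mock lspci output for the GPU's two PCI functions
lspci_output="03:00.0 VGA compatible controller: AMD Radeon RX 5500 XT
03:00.1 Audio device: AMD Navi 10 HDMI Audio"

# Take the first column (the PCI address) of each line and join with commas
addrs=$(echo "$lspci_output" | awk '{print $1}' | paste -sd, -)

# The parameter to add to the kernel command line in xen.cfg
echo "rd.qubes.hide_pci=$addrs"
# → rd.qubes.hide_pci=03:00.0,03:00.1
```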
Booting the VM afterwards needs work, too, because VMs with more than 3.5GB of RAM and passed-through devices crash.
A gaming VM with 3.5GB of RAM is rather useless these days, but luckily there is a fix!
I couldn't get the sed command from the Github comment to work and used a different modification of the

# $dm_args and $kernel are separated with \x1b to allow for spaces in arguments.
dm_args=$(echo "$dm_args" | sed 's/xenfv/xenfv,max-ram-below-4g=3.5G/g')
It's actually important that this is 3.5G and not any other value for some reason!
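You can sanity-check what that sed expression does to a device-model argument string outside of Qubes; the sample arguments here are made up for illustration:

```shell
# Sample device-model arguments containing the xenfv machine type
dm_args="-machine xenfv -m 8192"

# The same substitution the modification applies
dm_args=$(echo "$dm_args" | sed 's/xenfv/xenfv,max-ram-below-4g=3.5G/g')

printf '%s\n' "$dm_args"
# → -machine xenfv,max-ram-below-4g=3.5G -m 8192
```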
With the VM now booting with usable amounts of RAM and a passed-through GPU, drivers are the next step.
Sadly AMD drivers currently have a problem with VMs, too: The AMD Adrenalin drivers after a certain version prevent a VM from booting. Installing the AMD Pro drivers worked fine, though!
The final catch is a bug in AMD GPUs below the RX 6000 series. They don't reset correctly and can't be passed through to a VM a second time without rebooting the full system.
The vendor-reset kernel module implements the correct reset procedures for these GPUs and needs to be installed into dom0.
This is very, very much a no-no for Qubes security because a vulnerability or backdoor in this code (or any code on dom0) allows full access to all VMs on your system.
On the other hand: Beat Saber.
So I downloaded the zip from Github, copied it over to dom0, installed it and made it load on boot.
(Create a file called /etc/modules-load.d/00-vendor-reset.conf containing only the line
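The load-on-boot part can be sketched like this; note that the module name written into the file is my assumption based on the project's name, so verify it with modinfo on your system:

```shell
# Assumed module name; verify with: modinfo vendor-reset
echo "vendor-reset" | sudo tee /etc/modules-load.d/00-vendor-reset.conf
```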
And that's it! I have a VR capable virtual machine!
VR VM Performance
Of course there is another catch: The virtualized network is too slow and makes my tracking lag!
How in the world could my network connection screw with my VR experience? Because I have an original HTC Vive with a TPCast wireless adapter. A TPCast has two wireless connections, one very fast 60GHz connection for the HDMI signal and one standard WiFi connection running USB over IP. The USB connection carries the positional tracking information, controller input and microphone signal, so pretty important stuff. And the virtual network device can't provide the low latency this USB over IP connection needs.
The final piece of the puzzle was to pass through an entire USB controller to the VM and attach a USB 3 to gigabit ethernet adapter.
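Attaching a whole USB controller uses the same PCI passthrough mechanism as the GPU; the controller address below is a placeholder, find yours with lspci:

```shell
# Find the USB controller's PCI address in dom0
lspci | grep -i usb

# Attach it persistently to the Windows VM (01_00.0 is a placeholder)
qvm-pci attach --persistent windows dom0:01_00.0
```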
My tracking lag was solved and I could finally enjoy virtualized Beat Saber.