Newer VR Adventures in Qubes OS

A lot has changed since I first played VR in Qubes OS. All of it for the better.

My current VR setup is a surprisingly seamless and out-of-the-box experience: Install Qubes, Windows, GPU drivers and games the normal way. That's it.

No driver patching, no TOLUD workaround, no GPU reset hacks.

What changed?

Qubes fixed the TOLUD problem and I sold my AMD GPU to get an Nvidia.

I was fed up with AMD's buggy drivers in my VMs, and a bit after my first post in 2021, Nvidia finally dropped their artificial restriction on GPU passthrough from their Windows drivers. And from their Linux drivers too, even though I can't find anybody mentioning it…

Virtualized Windows now has the same installation experience as bare-metal Windows: Download and run Nvidia's driver installer. Done.

I love the Windows template VM setup scripts; they're an amazing way to run Windows. Even if I screw up the template VM beyond reverting, I can just install a new one and keep all the data, downloaded games and installed software in my Windows App-VMs.
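
That survivability is just the normal Qubes template/App-VM split: everything in the App-VM's private volume outlives a template swap. A minimal sketch with hypothetical names:

qvm-create --class AppVM --template windows-10 --label orange vr-games
qvm-volume extend vr-games:private 100g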

Linux is a bit more annoying because I'm having trouble getting the Nvidia driver to install on Fedora, the default Linux distro for Qubes VMs. It just works in Debian, but the drivers there are ancient. So for my GPU Linux VMs I use Arch btw, built with the Qubes builder. Once it's up and running, it works great.
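
On the Arch side the driver install itself is refreshingly boring. A sketch, assuming the VM boots its own kernel so the prebuilt module matches; otherwise the dkms variant with matching headers:

sudo pacman -S nvidia nvidia-utils
# or: sudo pacman -S nvidia-dkms linux-headers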

What stayed the same?

My VR VM still has a USB controller passed through. USB Ethernet still helps a lot with wireless VR performance, USB audio fixes the missing Qubes audio support on Windows, and game controllers feel like they have less latency than with Qubes' usual USB passthrough.

What happened to your HTC Vive?

Wait, how did you know?!

I stopped using my HTC Vive and replaced it with a Meta Quest 2 with Steam Link streaming. I just couldn't stand the low resolution anymore! I kept getting sniped in Population: One by players that took up less than a pixel. Now I'm finally getting sniped by players taking up four pixels.

I did keep everything but the headset though. I hate Meta's controllers and wanted to keep using my Index controllers.

OpenVR Space Calibrator makes it work! I put an HTC Vive Tracker on my Quest 2 and run continuous calibration. Starting up is a bit finicky: when starting the Steam Link app, the Quest controllers and hand tracking need to be turned off, so I use the old-school gaze pointer with volume up as the confirm button. SteamVR starts up automatically, I turn on both controllers and the tracker, and continuous calibration syncs their positions to my Quest's playspace.

What are your closing thoughts?

I'm happy virtualized VR gaming works better than ever before for me. 5/7, can recommend, would futz with this for hours and days on end again.

VR Adventures in Qubes OS

I can play VR in Qubes OS!

Qubes OS is pretty cool and succeeds at making virtual machines on the desktop unexpectedly seamless. But I'm a massive virtual reality fanboy and I need my Beat Saber! VR needs a powerful GPU and passing a GPU to a VM is harder than it should be. Thus began my quest to make my virtual reality machine a virtual reality virtual machine with Qubes.

My machine has two GPUs, one for the Qubes UI and one to pass through to VMs: an AMD Ryzen 3 3200G CPU with an integrated Vega 8 GPU, and an AMD RX 5500 XT. I use a Mini-ITX case, so I needed a CPU with an iGPU instead of two dedicated GPUs.

Installing Qubes OS

Installing Qubes was already a small challenge because the default LTS kernel of Qubes dom0 is too old and does not support the GPUs. I had to fix UEFI boot by commenting out noexitboot=1 and mapbs=1. The installation stick then booted but couldn't start the graphical installer because of the old kernel. The text-based installer is broken, too. But running the installer over VNC worked! To cite the very helpful GitHub comment about VNC installation:

The text based installer is also broken (tickets #2113 #1161), but if you plug in a network adaptor, alt+tab to the terminal, use dhclient to get an IP address, use ip a to see the IP, remove /var/run/anaconda.pid, and run anaconda --vnc --vncpassword pass1234 and connect to your IP with a VNC client on port 5901, the install will work fine...
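
Spelled out, the dance in the installer's terminal is just this (the VNC password is the one from the comment, pick your own):

dhclient                  # get an IP on the plugged-in network adaptor
ip a                      # look up the address it got
rm /var/run/anaconda.pid
anaconda --vnc --vncpassword pass1234
# now connect a VNC client to that IP on port 5901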

And it did! Apart from the LTS kernel, which also breaks the GUI in the installed OS.

I needed to upgrade the dom0 kernel. But that requires the update VMs for dom0 to be set up, which normally happens in the GUI on first boot. Luckily there's a text-based initial setup script at /usr/libexec/initial-setup/initial-setup-text. I could then upgrade dom0 to kernel-latest and make it the default kernel.
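
The upgrade itself boils down to something like this; the default= edit reflects my EFI setup and may look different on yours:

sudo qubes-dom0-update kernel-latest
# then point the default= entry in /boot/efi/EFI/qubes/xen.cfg
# at the freshly installed kernel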

After restarting either Qubes or lightdm, I had a fully working Qubes OS!

Installing Windows

Creating a Windows VM was a lot more straightforward than I expected; I just followed the Qubes Windows documentation: download the Windows ISO into a VM, create a new standalone VM, increase its RAM and storage to 3.7GB+ and 50GB+, and boot the ISO from the other VM. Then wait forever until the installer starts. Because it will. Eventually.
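
In qvm-* commands the whole setup looks roughly like this; the VM names, sizes and ISO path are examples, and kernel='' keeps dom0 from injecting its own kernel:

qvm-create --class StandaloneVM --label red --property virt_mode=hvm --property kernel='' windows-vm
qvm-prefs windows-vm memory 4000
qvm-volume extend windows-vm:root 60g
qvm-start windows-vm --cdrom=untrusted:/home/user/Downloads/windows.iso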

Passing through the GPU

A passed through GPU roughly goes through the following stages:

  1. It's added to a VM
  2. The GPU driver inside the VM runs stuff on the GPU
  3. After VM shutdown, the GPU is reset to be ready for another passthrough

And while every one of them had some catch for me, neowutran's article and the Qubes community documentation on GPU passthrough led me to success.

GPU Passthrough

The biggest, most important secret to passing a GPU through to a Windows VM is to not use an Nvidia GPU. Their drivers can detect they're running inside a VM and will throw Error 43, halt and catch fire. KVM supports workarounds; Xen, and thus Qubes, does not. I even tried to patch the Nvidia driver, but the error persisted.

Thus I got myself an AMD RX 5500 XT and upgraded my partner's desktop PC with my "old" Nvidia GTX 1060.

Passing through PCI devices in Qubes is pretty easy, as long as they're not used by another VM or by dom0! To prevent dom0 from grabbing the GPU, I needed to hide its PCI device at boot. Running lspci in dom0 gave me the PCI addresses of my GPU: 03:00.0 for the graphics part and 03:00.1 for the HDMI audio part. I added rd.qubes.hide_pci=03:00.0,03:00.1 to my kernel command line in /boot/efi/EFI/qubes/xen.cfg and rebooted my machine.
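
The whole discovery-and-hiding step, with my addresses standing in for yours:

lspci | grep -i -e vga -e audio
# 03:00.0 VGA compatible controller: ... RX 5500 XT
# 03:00.1 Audio device: ...
# then append to the kernel line in /boot/efi/EFI/qubes/xen.cfg:
#   rd.qubes.hide_pci=03:00.0,03:00.1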

Booting the VM afterwards needs work, too, because VMs with more than 3.5GB of RAM and passed-through devices crash.

A gaming VM with 3.5GB of RAM is rather useless these days, but luckily there is a fix! I couldn't get the sed command from the GitHub comment to work and used a different modification of the init file:

# $dm_args and $kernel are separated with \x1b to allow for spaces in arguments.
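# Cap the guest's RAM below 4G so the passed-through GPU's MMIO fits (the TOLUD issue).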
dm_args=$(echo "$dm_args" | sed 's/xenfv/xenfv,max-ram-below-4g=3.5G/g')

It's actually important that this is 3.5G and not any other value for some reason!

GPU Drivers

With the VM now booting with usable amounts of RAM and a passed-through GPU, drivers are the next step.

Sadly AMD drivers currently have a problem with VMs, too: the AMD Adrenalin drivers after a certain version prevent a VM from booting. Installing the AMD Pro drivers worked fine, though!

GPU Reset

The final catch is a bug in AMD GPUs below the RX 6000 series. They don't reset correctly and can't be passed through to a VM a second time without rebooting the full system.

The vendor-reset kernel module implements the correct reset procedures for these GPUs and needs to be installed into dom0. This is very, very much a no-no for Qubes security, because a vulnerability or backdoor in this code (or any code in dom0) allows full access to all VMs on your system. On the other hand: Beat Saber. So I downloaded the zip from GitHub, copied it over to dom0, installed it and made it load on boot. (Create a file called /etc/modules-load.d/00-vendor-reset.conf containing only the line vendor-reset.)
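
The dance, roughly; the VM name and paths are examples, and since vendor-reset builds as a dkms module, dom0 needs dkms and kernel headers:

# in dom0 - copy the zip in from a VM, since dom0 has no network:
qvm-run --pass-io some-vm 'cat /home/user/Downloads/vendor-reset.zip' > vendor-reset.zip
unzip vendor-reset.zip && cd vendor-reset-*
sudo dkms install .
# and load it on every boot:
echo vendor-reset | sudo tee /etc/modules-load.d/00-vendor-reset.conf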

And that's it! I have a VR capable virtual machine!

VR VM Performance

Of course there is another catch: The virtualized network is too slow and makes my tracking lag!

How in the world could my network connection screw with my VR experience? Because I have an original HTC Vive with a TPCast wireless adapter. A TPCast has two wireless connections: a very fast 60GHz connection for the HDMI signal and a standard WiFi connection running USB over IP. The USB connection carries the positional tracking information, controller input and microphone signal, so pretty important stuff. And the virtual network device can't provide the low latency this USB-over-IP connection needs.

The final piece of the puzzle was to pass through an entire USB controller to the VM and attach a USB 3 to gigabit ethernet adapter.
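
Attaching the controller is the same qvm-pci dance as for the GPU. A sketch with a made-up address; USB controllers usually also want the no-strict-reset option:

lspci | grep -i usb    # find the controller's address, e.g. 03:00.3
qvm-pci attach --persistent -o no-strict-reset=True windows-vm dom0:03_00.3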

My tracking lag was solved and I could finally enjoy virtualized Beat Saber.

Renaming i3 workspaces while keeping navigation prefixes

I navigate my i3 workspaces using named workspaces of the format <number>:<key>, for example 1:s. The number keeps them in the same order in my workspace list and the key is what I press to navigate to them.

Sometimes I have a lot of windows open on many workspaces and begin to lose track of what's where. So I want to give workspaces appropriate names when needed while keeping the <number>:<key> prefix for navigation.

Out of the box, renaming in i3 isn't as convenient as I'd like, so I wrote this small Python script using i3ipc-python and added bindsym Mod4+q exec "python ~/.i3/renameworkspace.py" to my config.
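
The gist fits in a few lines. A sketch using i3ipc-python, with rofi as one possible prompt and everything after the first space treated as the replaceable label:

#!/usr/bin/env python3
# Sketch of renameworkspace.py: keep the "<number>:<key>" prefix,
# replace everything after it with a freshly prompted label.
import subprocess
from i3ipc import Connection

i3 = Connection()
workspace = i3.get_tree().find_focused().workspace()
old_name = workspace.name            # e.g. "1:s" or "1:s mail"
prefix = old_name.split(" ", 1)[0]   # the "<number>:<key>" navigation part

# ask for a new label; rofi in dmenu mode is just one way to prompt
label = subprocess.run(
    ["rofi", "-dmenu", "-p", "workspace name"],
    capture_output=True, text=True,
).stdout.strip()

new_name = f"{prefix} {label}" if label else prefix
i3.command(f'rename workspace "{old_name}" to "{new_name}"')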

This increases my productivity by at least .7% 🙌

New Year's Update:

Keep in mind to use workspace number instead of just workspace when moving between workspaces. Using strip_workspace_numbers yes in the i3bar config removes the <number>: prefix and looks better.

bindsym $mod+1 workspace number 1:chat
bindsym $mod+Shift+1 move container to workspace number 1:chat 
bindsym $mod+q exec "python ~/.i3/renameworkspace.py"
bar {
    strip_workspace_numbers yes
    status_command python ~/.i3/i3status.py
}