{"id":1091,"date":"2022-07-22T12:41:55","date_gmt":"2022-07-22T20:41:55","guid":{"rendered":"https:\/\/angrysysadmins.tech\/?p=1091"},"modified":"2023-09-14T11:23:50","modified_gmt":"2023-09-14T19:23:50","slug":"vfio-tuning-your-windows-gaming-vm-for-optimal-performance","status":"publish","type":"post","link":"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/","title":{"rendered":"VFIO: Tuning your Windows gaming VM for optimal performance"},"content":{"rendered":"<p>This will be a guide on advanced tuning for a VFIO gaming VM. If you&#8217;re starting from scratch, read through the Arch Wiki guide on <a href=\"https:\/\/wiki.archlinux.org\/title\/PCI_passthrough_via_OVMF#Setting_up_an_OVMF-based_guest_virtual_machine\">PCI passthrough via OVMF<\/a>. It is a great starting point and covers all of the basics. I&#8217;d recommend using <code>libvirt<\/code> instead of straight QEMU.<\/p>\n<h2>Host hardware configuration<\/h2>\n<p>Before we begin, it&#8217;s best I show and explain my host&#8217;s specs. If you only want configuration info, jump to <a href=\"#host-config\">Host Configurations<\/a>.<\/p>\n<pre>Distro: Arch Linux\r\nMotherboard: X399 AORUS Gaming 7\r\nDE: Plasma on X11\r\nCPU: AMD Ryzen Threadripper 2950X overclocked to 4.0GHz\r\nGPUs: Radeon RX 480, Radeon RX 6900 XT, Radeon RX 550X\r\nNIC: Intel X520-DA2\r\nSSDs: Samsung 970 Pro 512GB (LUKS-encrypted, BTRFS), Team Group MP34 1TB\r\nHDDs: 4x 4TB HGST Deskstar NAS drives in a ZFS RAID-Z1\r\nMemory: 32GB (4x8GB) G.Skill Samsung B-die kit running at 3200 MT\/s XMP<\/pre>\n<p>For the distro, I originally chose Manjaro Linux because it&#8217;s basically a turn-key Arch Linux that trails it by a few months. This mattered because when my rig was built, Threadripper had issues with older Linux kernels, and Manjaro lets the user easily pick a kernel from a nice, easy-to-use GUI, very important for new-to-desktop-Linux me. 
Since it&#8217;s based on Arch, it&#8217;s also running the latest packages for QEMU and OVMF, which makes new features easy to pick up, and with the AUR it&#8217;s really easy to get custom kernels and packages.<\/p>\n<p>&nbsp;<\/p>\n<p>For the platform, I went with Threadripper because I wanted 64 PCIe lanes for lots of add-in cards, with the added benefit of extra memory slots and NVMe slots. Threadripper is simultaneously a gift with all of its cores and a downside because of the NUMA architecture of the 1st and 2nd gen parts. NUMA adds extra complexity to the VM, but these days it&#8217;s not too hard to handle, with tons of guides working around it. I chose the X399 AORUS Gaming 7 motherboard because I found it for 100 bucks on Amazon; the integrated sound card was broken.<\/p>\n<p>&nbsp;<\/p>\n<p>For the GPUs, I initially had a GTX 1080 but swapped it out for an RX 6900 XT because I wanted to use it in both Linux and Windows. Despite Arch making Nvidia drivers easy, they&#8217;re still a pain to work with in Linux. This can be ignored if you only have the GPU for passthrough and nothing else. The other 2 GPUs are there for display outputs to my array of 6 monitors.<\/p>\n<p>&nbsp;<\/p>\n<p>The NIC was chosen to take advantage of SR-IOV and Virtual Function NICs for use with the VM. A virtual function NIC completely bypasses the <code>virtio<\/code> network stack, giving the VM what is essentially a real PCIe NIC. This decreases latency and avoids the weird kernel lag the virtio drivers in Windows showed at 10Gbit speeds. I think that issue could have been fixed, but I had the NIC anyway, so I wanted to try it out, and it works great. The way SR-IOV works, you can even set a virtual function device to a specific VLAN, either in the host or the VM.<\/p>\n<p>&nbsp;<\/p>\n<p>The storage for the host is the Samsung 970. It&#8217;s fast and reliable. My only regret was not getting the 1TB version. For the guest, I have a Team Group SSD passed through. 
The HDD array uses ZFS because it&#8217;s just the best file system in the world, but more importantly it has zvols, encryption, compression, and a really good cache system in ARC.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h1 id=\"host-config\">Host Configurations<\/h1>\n<h3>BIOS\/UEFI<\/h3>\n<p>Here are the settings I&#8217;d turn on if possible:<\/p>\n<ul>\n<li>Resizable BAR set to on. This lets the CPU map the GPU&#8217;s entire VRAM at once instead of through a small 256MB window.<\/li>\n<li>Above 4G decoding set to on. This lets PCIe devices map their BARs above the 4GB address boundary, which Resizable BAR depends on and which helps when passing through devices with large BARs.<\/li>\n<li>IOMMU set to on. This turns on IOMMU grouping, a must have.<\/li>\n<li>Only turn on &#8220;ACS override&#8221; if your motherboard&#8217;s IOMMU groups are not usable. It does work, but it sometimes causes lag.<\/li>\n<li>Make sure you boot via UEFI and disable CSM. Graphics cards, storage, and networking should all be set to UEFI only.<\/li>\n<li>Make sure the initial display is not set to the GPU you are passing through.<\/li>\n<li>If you&#8217;re overclocking, make sure it is very stable; micro stutters and unstable voltages can cause crashes in the VM but not the host.<\/li>\n<\/ul>\n<h3>Boot and Kernel Parameters<\/h3>\n<p>Here are the kernel parameters that I use. Append what I have to the end of your boot line in <code>\/etc\/default\/grub<\/code>.<\/p>\n<pre>GRUB_CMDLINE_LINUX_DEFAULT=\"amd_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 iommu=pt\"<\/pre>\n<p>The ones that are different are <code>allow_unsafe_interrupts<\/code> and <code>ignore_msrs<\/code>. The first allows more leniency with interrupt remapping; the second tells KVM to ignore model-specific-register accesses it doesn&#8217;t implement instead of faulting, which works around a Windows bug seen on EPYC. 
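<\/p>\n<p>Before leaning on these parameters, it&#8217;s worth a quick sanity check that the IOMMU actually came up. This is a generic probe (not specific to my board) that lists every IOMMU group and the devices in it; if it prints nothing, the IOMMU is off or the kernel parameters didn&#8217;t apply:<\/p>\n<pre>#!\/bin\/sh\r\n# List each IOMMU group and the PCI devices inside it.\r\n# The guard makes the script a no-op on hosts with no IOMMU groups.\r\nfor g in \/sys\/kernel\/iommu_groups\/*; do\r\n    [ -d \"$g\" ] || continue\r\n    echo \"IOMMU group ${g##*\/}:\"\r\n    for d in \"$g\"\/devices\/*; do\r\n        lspci -nns \"${d##*\/}\"\r\n    done\r\ndone<\/pre>\n<p>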
<code>iommu=pt<\/code> sets the IOMMU to passthrough mode, which skips DMA remapping for host-owned devices and makes passthrough behave much better.<\/p>\n<p>&nbsp;<\/p>\n<p>For <code>mkinitcpio<\/code> options, I added the following to <code>\/etc\/mkinitcpio.conf<\/code>:<\/p>\n<pre>MODULES=\"amdgpu vfio_pci vfio vfio_iommu_type1 vfio_virqfd\"<\/pre>\n<p>Loading <code>vfio_pci<\/code> in the initramfs lets it claim any passed-through device early, before the normal driver grabs it; this is important for GPUs if you don&#8217;t want issues. The other VFIO modules are its dependencies, so I load them alongside it.<\/p>\n<p>&nbsp;<\/p>\n<p>For <code>modprobe<\/code>, I added an SR-IOV option to make virtual NICs on my Intel X520 in <code>\/etc\/modprobe.d\/ixgbe.conf<\/code>:<\/p>\n<pre>options ixgbe max_vfs=2<\/pre>\n<p>This just makes 2 virtual function NICs per physical NIC on the card.<\/p>\n<p>&nbsp;<\/p>\n<h3>ZFS config<\/h3>\n<p>For my array, I have a VMs dataset with ZSTD-fast compression turned on, access time off, a 16KiB record size (since I have 4x 4KiB-sector drives), and AES-256-GCM encryption. I also have a zvol in this dataset for the VM to use as a storage drive, inheriting the dataset config above. ARC is capped at 4GB to avoid having ZFS eat all of my host&#8217;s memory.<\/p>\n<p>&nbsp;<\/p>\n<h3>Start VM script<\/h3>\n<p>I have a long start script for my virtual machine. 
There are many ways to automate this, but I just run it with <code>sh<\/code> every time I want to use the VM.<\/p>\n<details>\n<summary>vfio-start script (Click to Expand)<\/summary>\n<pre>#!\/bin\/sh \r\n#==========Sudo Check========= \r\nsudo cat \/etc\/resolv.conf \r\n\r\n#========================================================================\r\n#=============== Pre Commands =========================================== \r\n#========================================================================\r\necho 'set frequency governor' \r\nsudo cpupower frequency-set -g performance \r\n\r\necho Setting up monitors \r\n\r\nxrandr --output HDMI-A-0 --primary \r\nsleep 2 \r\nxrandr --output DisplayPort-2-3 --off \r\nsleep 2 \r\n\r\necho waiting for x to catch up \r\nsleep 15 \r\necho Done! \r\nsleep 1 \r\n\r\n\r\n#=============== SR-IOV Functions ================= \r\n#set virtual nic to VLAN 69 (DMZ) \r\nsudo ip link set enp7s0f1 vf 1 vlan 69 \r\n\r\n#================ PCIe Crap ======================================= \r\n#run UnBind script for RX 6900xt \r\nsudo sh \/root\/remove_6900xt.sh \r\n\r\n#================ interrupts ========================================= \r\n#grep vfio \/proc\/interrupts | cut -b 3-4 | while read -r i ; do \r\n#   echo \"set mask fcfc to irq $i\" \r\n#   echo fcfc &gt;\/proc\/irq\/$i\/smp_affinity \r\n#done \r\n\r\n#============ Barrier ========================================= \r\n#Start Barrier \r\necho Starting Barrier \r\nbarrier --config \/home\/grassyloki\/barrierconfig.conf &lt;\/dev\/null &amp;&gt;\/dev\/null &amp; \r\n\r\n#============= VFIO-Isolate ============== \r\nsudo vfio-isolate cpuset-create --cpus N0 --mems N0 -mm \/host.slice move-tasks \/ \/host.slice \r\nsudo vfio-isolate -u \/tmp\/undo_irq irq-affinity mask C8-15,24-31 \r\n\r\n#=============================================================\r\n#====================== Start VM ============================= 
\r\n#=============================================================\r\n#echo Allocating Huge Pages! \r\n#sudo sh \/lib\/systemd\/hugetlb-reserve-pages.sh \r\n\r\necho Starting Gaming VM \r\nsudo virsh start VFIO-NoHide \r\n\r\necho \r\necho Verify Affinity of CPUs \r\nsudo virsh vcpuinfo VFIO-NoHide | grep Affinity \r\n\r\necho Press any key to end VM \r\nread \r\n\r\nclear \r\n\r\n#======================================================================== \r\n#=========================Stop procedure================================= \r\n#======================================================================== \r\necho Shutting down Gaming VM \r\n\r\nsudo virsh shutdown VFIO-NoHide \r\n\r\n#### Undo VFIO-Isolate \r\nsudo vfio-isolate cpuset-delete \/host.slice \r\nsudo vfio-isolate restore \/tmp\/undo_irq \r\n\r\necho 'Setting CPU governor' \r\nsudo cpupower frequency-set -g ondemand \r\n\r\necho Killing barrier \r\n#kill $(ps -e | grep barrier | awk '{print $1}') \r\nps -ef | grep barrier | grep -v grep | awk '{print $2}' | xargs kill \r\n\r\necho \"Re-init'ing 6900xt\" \r\nsudo sh \/root\/reinit_6900xt.sh \r\n\r\necho 'initdisplays running....' \r\nsh \/home\/grassyloki\/initdisplays.sh \r\n\r\necho 'script done!' \r\nsleep .5\r\n\r\n<\/pre>\n<\/details>\n<h4>Pre-Commands<\/h4>\n<p>These are the commands that run before the VM is started. The first one sets the CPU frequency governor to performance. This is important for max FPS, since the guest can&#8217;t really control the CPU frequency. The next one disconnects my main monitor and sets another monitor as the primary in X. Since my GPU is getting passed through, I need to do this so X does not crash when I yoink the GPU from the host. Next, I set one of the virtual function NICs to use a specific VLAN instead of trunking all of them to the host. The next one is the fun one: removing the 6900XT and preparing it for use in the VM. I&#8217;ll talk about that more in another section. 
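<\/p>\n<p>If you want to confirm that the VLAN tag actually stuck to the virtual function, <code>ip link<\/code> lists the VFs under the parent interface. The interface name here is from my X520; substitute your own:<\/p>\n<pre># Show the physical port and its virtual functions;\r\n# each \"vf N\" line lists that VF's MAC address and any VLAN tag.\r\nip link show enp7s0f1<\/pre>\n<p>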
Below that is an inline script to map interrupts to different cores and NUMA nodes. This is now handled by vfio-isolate, so it&#8217;s commented out. Next is the command that starts Barrier, the software I use to send my mouse and keyboard to the VM; it is basically a FOSS version of Synergy with some KVM enhancements. Finally, vfio-isolate, which I&#8217;ll cover in its own section below.<\/p>\n<p>&nbsp;<\/p>\n<h4>Removing the GPU for use in the VM<\/h4>\n<p>For dynamic loading and unloading of a GPU that is in use by the system, some configuration needs to change. Make sure the GPU you are passing through to the VM is NOT the primary. This can be checked with <code>xrandr --listproviders<\/code>; provider 0 is the primary GPU. You can change which GPU is primary by physically swapping slots on your motherboard, by changing settings in the UEFI, or, worst case, with kernel command-line options in your bootloader.<\/p>\n<pre>Providers: number : 3 \r\nProvider 0: id: 0x56 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 5 outputs: 3 associated providers: 2 name:AMD Radeon RX 550 \/ 550 Series @ pci:0000:09:00.0 \r\nProvider 1: id: 0xce cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 6 outputs: 4 associated providers: 1 name:AMD Radeon RX 6900 XT @ pci:0000:45:00.0 \r\nProvider 2: id: 0x8e cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 6 outputs: 5 associated providers: 1 name:AMD Radeon RX 480 Graphics @ pci:0000:0a:00.0<\/pre>\n<p>From my output, you can see that my primary is the RX 550. This means I can dynamically rip the 6900 XT out of X without any issues.<\/p>\n<p>The script below is a very sketchy method for removing the GPU from the host: it switches the kernel driver to vfio_pci while praying X does not crash. 
I&#8217;ll go through it line by line.<\/p>\n<pre>#!\/bin\/sh \r\necho \"unbind 6900xt gpu from amdgpu (1002:73bf)\" \r\necho 0000:45:00.0 &gt; \/sys\/bus\/pci\/drivers\/amdgpu\/unbind<\/pre>\n<p>This writes the GPU&#8217;s PCIe address to amdgpu&#8217;s unbind node, detaching the driver from the card. I&#8217;m still unsure if it is a good idea to do this for all of the PCIe devices in the IOMMU group, but it seems to work this way.<\/p>\n<pre>sleep 2 \r\necho 1002 73bf &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/new_id || echo -n \"0000:45:00.0\" &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/bind\r\necho done<\/pre>\n<p>This section binds the card to the vfio_pci kernel driver, either by teaching vfio-pci the new PCI device ID or, if the ID is already known, by writing the address to its bind node. Now do this for the rest of the GPU&#8217;s functions.<\/p>\n<pre>echo \"unbind gpu sound card (1002:ab28)\" \r\necho 0000:45:00.1 &gt; \/sys\/bus\/pci\/drivers\/snd_hda_intel\/unbind \r\nsleep 2 \r\necho 1002 ab28 &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/new_id || echo -n \"0000:45:00.1\" &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/bind\r\necho done \r\nsleep 1 \r\n\r\necho \"unbind gpu usb card (1002:73a6)\" \r\necho 0000:45:00.2 &gt; \/sys\/bus\/pci\/drivers\/xhci_hcd\/unbind \r\nsleep 2 \r\necho 1002 73a6 &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/new_id || echo -n \"0000:45:00.2\" &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/bind\r\necho done \r\nsleep 1 \r\n\r\necho \"unbind gpu serial card (1002:73a4)\" \r\necho 0000:45:00.3 &gt; \/sys\/bus\/pci\/drivers\/i2c-designware-pci\/unbind \r\nsleep 2 \r\necho 1002 73a4 &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/new_id || echo -n \"0000:45:00.3\" &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/bind\r\nsleep 1\r\n\r\necho \"script done\"<\/pre>\n<p>After this script finishes, the GPU should be ready for passthrough. Either that, or X crashed. Sometimes it likes to do that. Xorg does not support hot-removal of GPUs, so it kind of panics. The key thing is to make sure the GPU being removed is not the primary render GPU for X. 
You can check that with <code>xrandr --listproviders<\/code>, where provider 0 is the primary. If it is the primary, I think it will just crash when the card is yanked regardless. Wayland supports both hot-add and hot-remove, so if you can use Wayland, use it for a better experience. If you&#8217;ve got a way to make X happy, please post below.<\/p>\n<p>&nbsp;<\/p>\n<h4>vfio-isolate<\/h4>\n<p>vfio-isolate is a crazy good project for moving interrupts and host CPU priorities onto other CPUs. This is important because host interrupts and CPU usage will cause high latency, stutters, or even crashes in the VM. For my setup, I have 2 NUMA nodes, with basically 1 dedicated to the VM. Use tools like <code>lstopo<\/code> to make sure that A) your GPU is on the NUMA node of the CPUs the VM is using, and B) the CPUs for the host are all on the same node; do not mix physical cores with SMT\/hyperthreaded siblings from other nodes. In my setup, node 0 is CPUs 0-7 and 16-23, and node 1 is CPUs 8-15 and 24-31. The first vfio-isolate command makes a CPU &#8220;slice&#8221; and moves all host tasks to it. The second sets the IRQ affinity mask so these CPUs are not used for host interrupts. This really helps with micro stutters and weird latency issues \/ game crashes.<\/p>\n<p>&nbsp;<\/p>\n<h4>Start VM<\/h4>\n<p>This part just starts the VM. I originally had static huge pages, but I&#8217;ve since moved to dynamic pages, so the allocation script is no longer needed and is commented out. Next the VM starts, and if there is any error it shows. Then I dump the host CPU affinity of each vCPU. It&#8217;s important that each core is pinned correctly so that there are proper L1, L2, and L3 cache hits. It improves performance and decreases latency and stutters.<\/p>\n<p>&nbsp;<\/p>\n<h4>Stop procedure<\/h4>\n<p>To start off, we issue the shutdown command to the VM. 
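<\/p>\n<p>One gotcha: <code>virsh shutdown<\/code> only sends an ACPI shutdown request and returns immediately. Since the teardown steps (especially re-binding the GPU) must not run while the guest is still alive, a safer variant is to poll the domain state first. This is a minimal sketch using my domain name; adapt it to yours:<\/p>\n<pre>#!\/bin\/sh\r\n# Request shutdown, then wait up to ~60s for libvirt to report\r\n# the domain as \"shut off\" before touching the GPU again.\r\nsudo virsh shutdown VFIO-NoHide\r\nfor i in $(seq 1 60); do\r\n    state=$(sudo virsh domstate VFIO-NoHide)\r\n    [ \"$state\" = \"shut off\" ] &amp;&amp; break\r\n    sleep 1\r\ndone<\/pre>\n<p>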
Next we remove the restrictions on the host CPU set and memory nodes so that all programs can use all cores and all memory again. The next line removes the interrupt mappings. After that, I kill the Barrier program. Next is the fun one: re-adding the GPU to the host. That has its own section below. Lastly, sleep for 5 seconds while X finds the GPU, then turn the displays back on.<\/p>\n<p>&nbsp;<\/p>\n<h4>Re-Init RX 6900XT<\/h4>\n<p>This script is still kind of a work in progress and does not fully work. It is basically the remove-GPU script, reversed.<\/p>\n<pre>echo \"unbind gpu serial card from vfio-pci to i2c (1002:73a4)\" \r\necho 0000:45:00.3 &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/unbind<\/pre>\n<p>This unbinds the device from the vfio_pci driver. This will work <span style=\"text-decoration: underline;\"><strong>ONLY AFTER<\/strong><\/span> the VM has fully turned off and a grace period of 5 seconds has passed.<\/p>\n<pre>sleep 2 \r\necho 0000:45:00.3 &gt; \/sys\/bus\/pci\/drivers\/i2c-designware-pci\/bind\r\necho done<\/pre>\n<p>This binds the GPU&#8217;s serial function back to its original driver. We are not using <code>new_id<\/code> because that driver already accepts this device ID; we just send the bind command. 
Now, repeat this for all of the other non-GPU functions of the card.<\/p>\n<pre>echo \"unbind gpu usb card from vfio-pci to xhci_hcd (1002:73a6)\" \r\necho 0000:45:00.2 &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/unbind \r\nsleep 2 \r\necho 0000:45:00.2 &gt; \/sys\/bus\/pci\/drivers\/xhci_hcd\/bind \r\necho done \r\n\r\necho \"unbind gpu sound card from vfio-pci to snd_hda_intel (1002:ab28)\" \r\necho 0000:45:00.1 &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/unbind \r\nsleep 2 \r\necho 0000:45:00.1 &gt; \/sys\/bus\/pci\/drivers\/snd_hda_intel\/bind \r\necho done<\/pre>\n<p>Now that all of the other parts of the GPU have been re-added to the host with the proper drivers, you can attempt to add the GPU itself back to the system.<\/p>\n<pre>echo \"unbind gpu from vfio-pci to amdgpu (1002:73bf)\" \r\necho 0000:45:00.0 &gt; \/sys\/bus\/pci\/drivers\/vfio-pci\/unbind \r\nsleep 2 \r\necho 0000:45:00.0 &gt; \/sys\/bus\/pci\/drivers\/amdgpu\/bind \r\necho script done<\/pre>\n<p>Unfortunately, mine does not fully re-attach after this&#8230; I think it might have something to do with the amdgpu reset bug, but I&#8217;m not sure. This should &#8220;just werk&#8221;, but it doesn&#8217;t. If you manage to get it working with an AMD card, please post how below so I can update this part of the script.<\/p>\n<h1>Virtual Machine Configuration<\/h1>\n<p>My virtual machine is a bit crazy. It comes from 4 years of learning, tuning, and figuring out what seems to work best. Some of the things I&#8217;ve done seem a bit overkill (because they are), but doing VFIO was the whole purpose of this rig, so some design decisions were made with this in mind. 
That does NOT mean that these optimizations can&#8217;t be used say on a laptop or a normal desktop.<\/p>\n<p>&nbsp;<\/p>\n<h3>Libvirt Configuration<\/h3>\n<details>\n<summary>Here is my Libvirt XML (Click to Expand)<\/summary>\n<pre>&lt;domain type=\"kvm\"&gt;\r\n&lt;name&gt;VFIO-NoHide&lt;\/name&gt;\r\n&lt;uuid&gt;9694362e-5fd3-4add-876e-e28a2e509bb6&lt;\/uuid&gt;\r\n&lt;metadata&gt;\r\n&lt;libosinfo:libosinfo xmlns:libosinfo=\"http:\/\/libosinfo.org\/xmlns\/libvirt\/domain\/1.0\"&gt;\r\n&lt;libosinfo:os id=\"http:\/\/microsoft.com\/win\/10\"\/&gt;\r\n&lt;\/libosinfo:libosinfo&gt;\r\n&lt;\/metadata&gt;\r\n&lt;memory unit=\"KiB\"&gt;16777216&lt;\/memory&gt;\r\n&lt;currentMemory unit=\"KiB\"&gt;16777216&lt;\/currentMemory&gt;\r\n&lt;vcpu placement=\"static\"&gt;16&lt;\/vcpu&gt;\r\n&lt;iothreads&gt;2&lt;\/iothreads&gt;\r\n&lt;iothreadids&gt;\r\n&lt;iothread id=\"1\"\/&gt;\r\n&lt;iothread id=\"2\"\/&gt;\r\n&lt;\/iothreadids&gt;\r\n&lt;cputune&gt;\r\n&lt;vcpupin vcpu=\"0\" cpuset=\"8\"\/&gt;\r\n&lt;vcpupin vcpu=\"1\" cpuset=\"9\"\/&gt;\r\n&lt;vcpupin vcpu=\"2\" cpuset=\"10\"\/&gt;\r\n&lt;vcpupin vcpu=\"3\" cpuset=\"11\"\/&gt;\r\n&lt;vcpupin vcpu=\"4\" cpuset=\"12\"\/&gt;\r\n&lt;vcpupin vcpu=\"5\" cpuset=\"13\"\/&gt;\r\n&lt;vcpupin vcpu=\"6\" cpuset=\"14\"\/&gt;\r\n&lt;vcpupin vcpu=\"7\" cpuset=\"15\"\/&gt;\r\n&lt;vcpupin vcpu=\"8\" cpuset=\"24\"\/&gt;\r\n&lt;vcpupin vcpu=\"9\" cpuset=\"25\"\/&gt;\r\n&lt;vcpupin vcpu=\"10\" cpuset=\"26\"\/&gt;\r\n&lt;vcpupin vcpu=\"11\" cpuset=\"27\"\/&gt;\r\n&lt;vcpupin vcpu=\"12\" cpuset=\"28\"\/&gt;\r\n&lt;vcpupin vcpu=\"13\" cpuset=\"29\"\/&gt;\r\n&lt;vcpupin vcpu=\"14\" cpuset=\"30\"\/&gt;\r\n&lt;vcpupin vcpu=\"15\" cpuset=\"31\"\/&gt;\r\n&lt;emulatorpin cpuset=\"2\"\/&gt;\r\n&lt;iothreadpin iothread=\"1\" cpuset=\"4\"\/&gt;\r\n&lt;iothreadpin iothread=\"2\" cpuset=\"5\"\/&gt;\r\n&lt;\/cputune&gt;\r\n&lt;os&gt;\r\n&lt;type arch=\"x86_64\" machine=\"pc-q35-7.0\"&gt;hvm&lt;\/type&gt;\r\n&lt;loader readonly=\"yes\" 
type=\"pflash\"&gt;\/usr\/share\/edk2-ovmf\/x64\/OVMF_CODE.fd&lt;\/loader&gt;\r\n&lt;nvram&gt;\/var\/lib\/libvirt\/qemu\/nvram\/VFIO_VARS.fd&lt;\/nvram&gt;\r\n&lt;smbios mode=\"host\"\/&gt;\r\n&lt;\/os&gt;\r\n&lt;features&gt;\r\n&lt;acpi\/&gt;\r\n&lt;apic\/&gt;\r\n&lt;hyperv mode=\"passthrough\"&gt;\r\n&lt;relaxed state=\"on\"\/&gt;\r\n&lt;vapic state=\"on\"\/&gt;\r\n&lt;spinlocks state=\"on\" retries=\"8191\"\/&gt;\r\n&lt;vpindex state=\"on\"\/&gt;\r\n&lt;runtime state=\"on\"\/&gt;\r\n&lt;synic state=\"on\"\/&gt;\r\n&lt;stimer state=\"on\"\/&gt;\r\n&lt;reset state=\"off\"\/&gt;\r\n&lt;vendor_id state=\"on\" value=\"7ba845ec2647\"\/&gt;\r\n&lt;frequencies state=\"on\"\/&gt;\r\n&lt;reenlightenment state=\"off\"\/&gt;\r\n&lt;tlbflush state=\"on\"\/&gt;\r\n&lt;ipi state=\"on\"\/&gt;\r\n&lt;evmcs state=\"off\"\/&gt;\r\n&lt;\/hyperv&gt;\r\n&lt;kvm&gt;\r\n&lt;hidden state=\"off\"\/&gt;\r\n&lt;\/kvm&gt;\r\n&lt;vmport state=\"off\"\/&gt;\r\n&lt;ioapic driver=\"kvm\"\/&gt;\r\n&lt;\/features&gt;\r\n&lt;cpu mode=\"host-passthrough\" check=\"none\" migratable=\"on\"&gt;\r\n&lt;topology sockets=\"1\" dies=\"1\" cores=\"8\" threads=\"2\"\/&gt;\r\n&lt;feature policy=\"require\" name=\"topoext\"\/&gt;\r\n&lt;\/cpu&gt;\r\n&lt;clock offset=\"localtime\"&gt;\r\n&lt;timer name=\"rtc\" tickpolicy=\"catchup\"\/&gt;\r\n&lt;timer name=\"pit\" tickpolicy=\"delay\"\/&gt;\r\n&lt;timer name=\"hypervclock\" present=\"yes\"\/&gt;\r\n&lt;timer name=\"hpet\" present=\"yes\"\/&gt;\r\n&lt;timer name=\"tsc\" present=\"yes\" mode=\"native\"\/&gt;\r\n&lt;\/clock&gt;\r\n&lt;on_poweroff&gt;destroy&lt;\/on_poweroff&gt;\r\n&lt;on_reboot&gt;restart&lt;\/on_reboot&gt;\r\n&lt;on_crash&gt;destroy&lt;\/on_crash&gt;\r\n&lt;pm&gt;\r\n&lt;suspend-to-mem enabled=\"no\"\/&gt;\r\n&lt;suspend-to-disk enabled=\"no\"\/&gt;\r\n&lt;\/pm&gt;\r\n&lt;devices&gt;\r\n&lt;emulator&gt;\/usr\/bin\/qemu-system-x86_64&lt;\/emulator&gt;\r\n&lt;disk type=\"file\" device=\"disk\"&gt;\r\n&lt;driver name=\"qemu\" type=\"qcow2\" 
io=\"threads\" iothread=\"1\"\/&gt;\r\n&lt;source file=\"\/var\/lib\/libvirt\/images\/vfio.qcow2\"\/&gt;\r\n&lt;target dev=\"vda\" bus=\"virtio\"\/&gt;\r\n&lt;serial&gt;HUS6588D984332&lt;\/serial&gt;\r\n&lt;boot order=\"1\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x04\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/disk&gt;\r\n&lt;disk type=\"block\" device=\"disk\"&gt;\r\n&lt;driver name=\"qemu\" type=\"raw\" cache=\"none\" io=\"threads\" discard=\"unmap\" iothread=\"2\"\/&gt;\r\n&lt;source dev=\"\/dev\/RustTank\/VirtualMachines\/VFIO_Games_Drive\"\/&gt;\r\n&lt;target dev=\"vdc\" bus=\"virtio\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x0d\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/disk&gt;\r\n&lt;disk type=\"file\" device=\"disk\"&gt;\r\n&lt;driver name=\"qemu\" type=\"qcow2\"\/&gt;\r\n&lt;source file=\"\/var\/lib\/libvirt\/images\/win10.qcow2\"\/&gt;\r\n&lt;target dev=\"vdd\" bus=\"virtio\"\/&gt;\r\n&lt;readonly\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x01\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/disk&gt;\r\n&lt;disk type=\"file\" device=\"cdrom\"&gt;\r\n&lt;driver name=\"qemu\" type=\"raw\"\/&gt;\r\n&lt;source file=\"\/var\/lib\/libvirt\/images\/virtio-win.iso\"\/&gt;\r\n&lt;target dev=\"sda\" bus=\"sata\"\/&gt;\r\n&lt;readonly\/&gt;\r\n&lt;address type=\"drive\" controller=\"0\" bus=\"0\" target=\"0\" unit=\"0\"\/&gt;\r\n&lt;\/disk&gt;\r\n&lt;controller type=\"usb\" index=\"0\" model=\"qemu-xhci\" ports=\"15\"&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x02\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"sata\" index=\"0\"&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x1f\" function=\"0x2\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"0\" model=\"pcie-root\"\/&gt;\r\n&lt;controller type=\"pci\" index=\"1\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target 
chassis=\"1\" port=\"0x10\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x02\" function=\"0x0\" multifunction=\"on\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"2\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"2\" port=\"0x11\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x02\" function=\"0x1\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"3\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"3\" port=\"0x12\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x02\" function=\"0x2\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"4\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"4\" port=\"0x13\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x02\" function=\"0x3\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"5\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"5\" port=\"0x14\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x02\" function=\"0x4\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"6\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"6\" port=\"0x15\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x02\" function=\"0x5\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"7\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"7\" port=\"0x16\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x02\" function=\"0x6\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"8\" model=\"pcie-root-port\"&gt;\r\n&lt;model 
name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"8\" port=\"0x17\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x02\" function=\"0x7\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"9\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"9\" port=\"0x18\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x03\" function=\"0x0\" multifunction=\"on\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"10\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"10\" port=\"0x19\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x03\" function=\"0x1\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"11\" model=\"pcie-to-pci-bridge\"&gt;\r\n&lt;model name=\"pcie-pci-bridge\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x0a\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"12\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"12\" port=\"0x1a\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x03\" function=\"0x2\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"13\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"13\" port=\"0x1b\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x03\" function=\"0x3\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"14\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"14\" port=\"0x1c\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x03\" function=\"0x4\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"15\" model=\"pcie-root-port\"&gt;\r\n&lt;model 
name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"15\" port=\"0x1d\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x03\" function=\"0x5\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"16\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"16\" port=\"0x1e\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x03\" function=\"0x6\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"17\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"17\" port=\"0x1f\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x03\" function=\"0x7\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"18\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"18\" port=\"0x20\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x04\" function=\"0x0\" multifunction=\"on\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"pci\" index=\"19\" model=\"pcie-root-port\"&gt;\r\n&lt;model name=\"pcie-root-port\"\/&gt;\r\n&lt;target chassis=\"19\" port=\"0x21\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x04\" function=\"0x1\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;controller type=\"virtio-serial\" index=\"0\"&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x03\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/controller&gt;\r\n&lt;interface type=\"network\"&gt;\r\n&lt;mac address=\"52:54:00:dd:82:2e\"\/&gt;\r\n&lt;source network=\"vnet_internal0\"\/&gt;\r\n&lt;model type=\"virtio\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x0c\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/interface&gt;\r\n&lt;serial type=\"pty\"&gt;\r\n&lt;target type=\"isa-serial\" port=\"0\"&gt;\r\n&lt;model name=\"isa-serial\"\/&gt;\r\n&lt;\/target&gt;\r\n&lt;\/serial&gt;\r\n&lt;console 
type=\"pty\"&gt;\r\n&lt;target type=\"serial\" port=\"0\"\/&gt;\r\n&lt;\/console&gt;\r\n&lt;channel type=\"spicevmc\"&gt;\r\n&lt;target type=\"virtio\" name=\"com.redhat.spice.0\"\/&gt;\r\n&lt;address type=\"virtio-serial\" controller=\"0\" bus=\"0\" port=\"1\"\/&gt;\r\n&lt;\/channel&gt;\r\n&lt;input type=\"tablet\" bus=\"usb\"&gt;\r\n&lt;address type=\"usb\" bus=\"0\" port=\"1\"\/&gt;\r\n&lt;\/input&gt;\r\n&lt;input type=\"mouse\" bus=\"ps2\"\/&gt;\r\n&lt;input type=\"keyboard\" bus=\"ps2\"\/&gt;\r\n&lt;tpm model=\"tpm-crb\"&gt;\r\n&lt;backend type=\"emulator\" version=\"2.0\"\/&gt;\r\n&lt;\/tpm&gt;\r\n&lt;graphics type=\"spice\" autoport=\"yes\"&gt;\r\n&lt;listen type=\"address\"\/&gt;\r\n&lt;image compression=\"off\"\/&gt;\r\n&lt;gl enable=\"no\"\/&gt;\r\n&lt;\/graphics&gt;\r\n&lt;sound model=\"ich9\"&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x1b\" function=\"0x0\"\/&gt;\r\n&lt;\/sound&gt;\r\n&lt;audio id=\"1\" type=\"spice\"\/&gt;\r\n&lt;video&gt;\r\n&lt;model type=\"qxl\" ram=\"65536\" vram=\"65536\" vgamem=\"16384\" heads=\"1\" primary=\"yes\"\/&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x00\" slot=\"0x01\" function=\"0x0\"\/&gt;\r\n&lt;\/video&gt;\r\n&lt;hostdev mode=\"subsystem\" type=\"pci\" managed=\"yes\"&gt;\r\n&lt;source&gt;\r\n&lt;address domain=\"0x0000\" bus=\"0x0b\" slot=\"0x00\" function=\"0x3\"\/&gt;\r\n&lt;\/source&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x06\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/hostdev&gt;\r\n&lt;hostdev mode=\"subsystem\" type=\"pci\" managed=\"yes\"&gt;\r\n&lt;source&gt;\r\n&lt;address domain=\"0x0000\" bus=\"0x07\" slot=\"0x10\" function=\"0x1\"\/&gt;\r\n&lt;\/source&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x07\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/hostdev&gt;\r\n&lt;hostdev mode=\"subsystem\" type=\"pci\" managed=\"yes\"&gt;\r\n&lt;source&gt;\r\n&lt;address domain=\"0x0000\" bus=\"0x42\" slot=\"0x00\" 
function=\"0x0\"\/&gt;\r\n&lt;\/source&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x08\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/hostdev&gt;\r\n&lt;hostdev mode=\"subsystem\" type=\"pci\" managed=\"yes\"&gt;\r\n&lt;source&gt;\r\n&lt;address domain=\"0x0000\" bus=\"0x45\" slot=\"0x00\" function=\"0x3\"\/&gt;\r\n&lt;\/source&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x09\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/hostdev&gt;\r\n&lt;hostdev mode=\"subsystem\" type=\"pci\" managed=\"yes\"&gt;\r\n&lt;source&gt;\r\n&lt;address domain=\"0x0000\" bus=\"0x45\" slot=\"0x00\" function=\"0x2\"\/&gt;\r\n&lt;\/source&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x0e\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/hostdev&gt;\r\n&lt;hostdev mode=\"subsystem\" type=\"pci\" managed=\"yes\"&gt;\r\n&lt;source&gt;\r\n&lt;address domain=\"0x0000\" bus=\"0x45\" slot=\"0x00\" function=\"0x1\"\/&gt;\r\n&lt;\/source&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x10\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/hostdev&gt;\r\n&lt;hostdev mode=\"subsystem\" type=\"pci\" managed=\"yes\"&gt;\r\n&lt;source&gt;\r\n&lt;address domain=\"0x0000\" bus=\"0x45\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/source&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x11\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/hostdev&gt;\r\n&lt;hostdev mode=\"subsystem\" type=\"pci\" managed=\"yes\"&gt;\r\n&lt;source&gt;\r\n&lt;address domain=\"0x0000\" bus=\"0x07\" slot=\"0x10\" function=\"0x3\"\/&gt;\r\n&lt;\/source&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x12\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/hostdev&gt;\r\n&lt;memballoon model=\"virtio\"&gt;\r\n&lt;address type=\"pci\" domain=\"0x0000\" bus=\"0x05\" slot=\"0x00\" function=\"0x0\"\/&gt;\r\n&lt;\/memballoon&gt;\r\n&lt;\/devices&gt;\r\n&lt;\/domain&gt;<\/pre>\n<\/details>\n<p>I&#8217;m only going to go over the things I think are relevant&#8230; so here we 
go:<\/p>\n<p>&nbsp;<\/p>\n<h3>CPU Pinning<\/h3>\n<p>Here I make sure to pin each physical core together with its hyperthreaded sibling so that the L1 cache Windows thinks is there is accurate. From my testing, Windows expects a 1-to-1 match with what Linux is reporting. Make sure to double check with lstopo and other tools that this is correct!<\/p>\n<p>&nbsp;<\/p>\n<h3>CPU mode<\/h3>\n<p>I&#8217;m using host-passthrough, so Windows thinks it is the same CPU as my host. I do this so that all the CPU extensions are properly utilized; also, some games complain about the generic KVM CPU or the generic EPYC one. I have Spectre and Meltdown mitigations enabled with &#8220;migratable=&#8221;on&#8221;&#8221;. This does make performance slightly worse, but for my use case I can&#8217;t take any chances. The topology matches that of NUMA node 1 on my system, the node that is pinned: 1 NUMA node, 8 cores with 2 threads each. The only feature policy I have force-enabled is topoext. I could add more, but this works. If I should add more, please post what and why.<\/p>\n<p>&nbsp;<\/p>\n<h3>Clock offset<\/h3>\n<p>This depends on how hidden you want your system to be from anti-cheat programs. One of the ways they detect whether a machine is a VM is by the RTC differing from the machine boot time. For mine, I don&#8217;t care to be hidden (at least in this VFIO VM), so I have options enabled to make the RTC more forgiving of lag, stutters, and whatnot.<\/p>\n<p>&nbsp;<\/p>\n<h3>IO Threads and IO Thread Pin<\/h3>\n<p>This is critical if you have fast storage. In my case, I have the boot drive on my Linux root SSD and a zvol mapped to another drive. Since these speeds can get crazy fast, they can cause an interrupt or a compute task to land on a random core. To fix this, I made 2 IO threads, each statically assigned to a single device and given a core to minimize cross-core tasks.<\/p>\n<p>&nbsp;<\/p>\n<h3>Emulator Pin<\/h3>\n<p>This sets the QEMU emulation tasks to be done on core #2. 
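<\/p>\n<p>Putting the pinning pieces together, here is a sketch of what the &lt;cputune&gt; block can look like. The core numbers are illustrative only; map them to your own topology with lstopo, and this assumes an &lt;iothreads&gt;2&lt;\/iothreads&gt; element is set elsewhere in the domain:<\/p>\n<pre>&lt;cputune&gt;\r\n  &lt;vcpupin vcpu=\"0\" cpuset=\"8\"\/&gt;\r\n  &lt;vcpupin vcpu=\"1\" cpuset=\"24\"\/&gt;\r\n  &lt;!-- ...one pair per physical core and its SMT sibling... --&gt;\r\n  &lt;emulatorpin cpuset=\"2\"\/&gt;\r\n  &lt;iothreadpin iothread=\"1\" cpuset=\"3\"\/&gt;\r\n  &lt;iothreadpin iothread=\"2\" cpuset=\"4\"\/&gt;\r\n&lt;\/cputune&gt;<\/pre>\n<p>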
This just keeps it from getting in the way of other processes and minimizes latency so it can utilize its L1 cache properly.<\/p>\n<p>&nbsp;<\/p>\n<h3>OS Section<\/h3>\n<p>In the first line, I have my machine type set to Q35 version 7.0, which at the time of writing is the latest machine type and latest QEMU version. I believe this pulls in the latest defaults for everything. The next 2 lines are just OVMF UEFI stuff. Last is &#8220;smbios mode=&#8221;host&#8221;&#8221;, which passes the host&#8217;s DMI\/SMBIOS data through to the VM. It&#8217;s no-cost obfuscation and bypasses some basic anti-cheat VM detection. You can manually set these strings if you want, but I just pass through my host values.<\/p>\n<p>&nbsp;<\/p>\n<h3>Hyper-V stuff<\/h3>\n<p>These Hyper-V tunables are critical for decreasing lag by turning on Hyper-V guest features that make the VM easier to emulate. The first line, &#8220;hyperv mode=&#8221;passthrough&#8221;&#8221;, makes it try to enable all of the features. I&#8217;m not going to go through all of them, but you can <a href=\"https:\/\/libvirt.org\/formatdomain.html#hypervisor-features\">read more about them here<\/a>. Generally the more enabled the better, but not always. If you want to hide the fact that you are running a VM, you will want to disable some of these and other things in the features section. This part gets updated regularly, so check back on the official libvirt documentation for things to enable\/disable.<\/p>\n<p>&nbsp;<\/p>\n<h3>Virtio-Block tuning<\/h3>\n<p>For each block device, I have manual IO threading on with &#8220;io=&#8221;threads&#8221; iothread=&#8221;1&#8243;&#8221;. I also have a fake serial number enabled to somewhat hide that this is a VM, with &#8220;&lt;serial&gt;HUS6588D984332&lt;\/serial&gt;&#8221;, where the serial is some random real-ish sounding string. For my ZFS array, I disabled caching for the virtual disk and set the discard mode to unmap. 
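<\/p>\n<p>As a rough sketch, a tuned virtio disk entry ends up looking something like this (the zvol path is a made-up example, and the serial is the fake one mentioned above):<\/p>\n<pre>&lt;disk type=\"block\" device=\"disk\"&gt;\r\n  &lt;driver name=\"qemu\" type=\"raw\" cache=\"none\" io=\"threads\" iothread=\"1\" discard=\"unmap\"\/&gt;\r\n  &lt;source dev=\"\/dev\/zvol\/tank\/win10\"\/&gt;\r\n  &lt;target dev=\"vda\" bus=\"virtio\"\/&gt;\r\n  &lt;serial&gt;HUS6588D984332&lt;\/serial&gt;\r\n&lt;\/disk&gt;<\/pre>\n<p>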
This helps with weird speed-spiking issues. It is slower, but much more stable.<\/p>\n<p>&nbsp;<\/p>\n<h3>Audio<\/h3>\n<p>Audio on my setup is handled by a passed-through USB controller that has a Sound Blaster X3 on it. I dump this into a mixer to combine the audio from the host and guest. Audio was a pain for me with this project, but I was lucky to find out that my USB controllers are in unique IOMMU groups, so I passed one through.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h1>Windows Settings<\/h1>\n<h3>Page file<\/h3>\n<p>For some reason, some games freak out when the page file is small. I had a long-running issue with Call of Duty Warzone where the game would crash with a memory error. It turned out to be an issue with the page file not being equal to the system memory. I set mine to a 16384 MB static size on my passed-through NVMe SSD.<\/p>\n<h3>PCIe Device interrupts<\/h3>\n<p>There are many ways for a PCIe device to signal that it needs attention. The old way was line-based interrupts, where all devices share the same IRQ. The newer method is MSI (Message Signaled Interrupts). <a href=\"https:\/\/forums.guru3d.com\/threads\/windows-line-based-vs-message-signaled-based-interrupts-msi-tool.378044\/\">More info can be found in this thread on guru3d<\/a>. The short version is that the old line-based interrupts can cause unnecessary latency, so devices should be set to MSI or, even better, MSI-X interrupts. There is a tool in that thread called <a href=\"http:\/\/www.mediafire.com\/file\/ewpy1p0rr132thk\/MSI_util_v3.zip\/file\">MSI utility v3<\/a> that makes enabling these options easy in a nice GUI. I have my GPU forced to MSI-X and its priority set to high. Make sure every device that can support it has it enabled, as it will greatly help with small stutters.<\/p>\n<h3>Resizable BAR and Above 4G decoding<\/h3>\n<p>Resizable BAR is a feature that lets the CPU address the GPU&#8217;s entire VRAM at once instead of through a small 256 MB window. Helps dramatically with latency. 
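<\/p>\n<p>On the Linux host, newer versions of lspci can show whether the card exposes the capability and how big the current BAR mapping is (the bus address here is an example; substitute your own card&#8217;s):<\/p>\n<pre>lspci -vvs 42:00.0 | grep -iA3 \"Resizable BAR\"<\/pre>\n<p>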
You need to make sure the feature is enabled in both your OVMF firmware and your host UEFI for this to work. Check with GPU-Z in the guest to see whether these features are active.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h1>Diagnosing issues<\/h1>\n<p>You will run across weird issues. These are the tools I&#8217;ve used to track them down.<\/p>\n<h3>Latencymon<\/h3>\n<p>LatencyMon is a tool for seeing what is causing stuttering in Windows. It is great for finding which driver is causing lag. Load the system with something like a Steam download, an iperf test, or a benchmark and see which driver is causing issues.<\/p>\n<h3>dmesg<\/h3>\n<p>In Linux, run dmesg as root and see what the latest errors are, if there are any.<\/p>\n<h3>journalctl<\/h3>\n<p>Look at the logs for libvirt or other system services.<\/p>\n<h3>numastat<\/h3>\n<p>Shows NUMA hits and misses for allocations, interrupts, and other things. Useful on a NUMA system.<\/p>\n<h3>glances, bashtop, htop<\/h3>\n<p>All of these are terminal tools that show what is using CPU, memory, disk, etc. Very useful for seeing whether something else is eating your resources.<\/p>\n<p>&nbsp;<\/p>\n<h1>Conclusion<\/h1>\n<p>I hope you were able to improve your VM gaming experience with these tweaks. If you have any suggested tweaks of your own, please post them! Congrats if you got to the end of this. If you have questions, I might be able to help, but no guarantees&#8230;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This will be a guide on advanced tuning for a VFIO gaming VM. If you&#8217;re starting from scratch, read through the Arch Wiki guide on PCI passtrhough via OVMF. It is a great starting point and covers all of the basics. I&#8217;d recommend using libvirt instead of straight QEMU. 
Host hardware configuration Before we begin, <br \/><a class=\"read-more-button\" href=\"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/\">Read More &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10,166,8,22,3],"tags":[],"coauthors":[39],"class_list":["post-1091","post","type-post","status-publish","format-standard","hentry","category-kernel","category-libvirt","category-linux","category-virtualization","category-windows"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>VFIO: Tuning your Windows gaming VM for optimal performance - Angry Sysadmins<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"VFIO: Tuning your Windows gaming VM for optimal performance - Angry Sysadmins\" \/>\n<meta property=\"og:description\" content=\"This will be a guide on advanced tuning for a VFIO gaming VM. If you&#8217;re starting from scratch, read through the Arch Wiki guide on PCI passtrhough via OVMF. It is a great starting point and covers all of the basics. I&#8217;d recommend using libvirt instead of straight QEMU. 
Host hardware configuration Before we begin, Read More &raquo;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/\" \/>\n<meta property=\"og:site_name\" content=\"Angry Sysadmins\" \/>\n<meta property=\"article:published_time\" content=\"2022-07-22T20:41:55+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-09-14T19:23:50+00:00\" \/>\n<meta name=\"author\" content=\"Ryan Parker\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Ryan Parker\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"15 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/angrysysadmins.tech\\\/index.php\\\/2022\\\/07\\\/grassyloki\\\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/angrysysadmins.tech\\\/index.php\\\/2022\\\/07\\\/grassyloki\\\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\\\/\"},\"author\":{\"name\":\"Ryan Parker\",\"@id\":\"https:\\\/\\\/angrysysadmins.tech\\\/#\\\/schema\\\/person\\\/651321cd35645fb6a4d8a75b7bc7c199\"},\"headline\":\"VFIO: Tuning your Windows gaming VM for optimal 
performance\",\"datePublished\":\"2022-07-22T20:41:55+00:00\",\"dateModified\":\"2023-09-14T19:23:50+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/angrysysadmins.tech\\\/index.php\\\/2022\\\/07\\\/grassyloki\\\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\\\/\"},\"wordCount\":3270,\"commentCount\":2,\"articleSection\":[\"Kernel\",\"libvirt\",\"Linux\",\"Virtualization\",\"Windows\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/angrysysadmins.tech\\\/index.php\\\/2022\\\/07\\\/grassyloki\\\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/angrysysadmins.tech\\\/index.php\\\/2022\\\/07\\\/grassyloki\\\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\\\/\",\"url\":\"https:\\\/\\\/angrysysadmins.tech\\\/index.php\\\/2022\\\/07\\\/grassyloki\\\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\\\/\",\"name\":\"VFIO: Tuning your Windows gaming VM for optimal performance - Angry 
Sysadmins\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/angrysysadmins.tech\\\/#website\"},\"datePublished\":\"2022-07-22T20:41:55+00:00\",\"dateModified\":\"2023-09-14T19:23:50+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/angrysysadmins.tech\\\/#\\\/schema\\\/person\\\/651321cd35645fb6a4d8a75b7bc7c199\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/angrysysadmins.tech\\\/index.php\\\/2022\\\/07\\\/grassyloki\\\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/angrysysadmins.tech\\\/index.php\\\/2022\\\/07\\\/grassyloki\\\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/angrysysadmins.tech\\\/index.php\\\/2022\\\/07\\\/grassyloki\\\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/angrysysadmins.tech\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"VFIO: Tuning your Windows gaming VM for optimal performance\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/angrysysadmins.tech\\\/#website\",\"url\":\"https:\\\/\\\/angrysysadmins.tech\\\/\",\"name\":\"Angry Sysadmins\",\"description\":\"A site full of angry sysadmins here to vent and help\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/angrysysadmins.tech\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/angrysysadmins.tech\\\/#\\\/schema\\\/person\\\/651321cd35645fb6a4d8a75b7bc7c199\",\"name\":\"Ryan 
Parker\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/fc12b1a02765c8017062ee6f41eb34a7b14575bcd8acd7da40e176fe8f12b10f?s=96&d=mm&r=g664d0e05248e51cb1d71b3f66c6f929d\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/fc12b1a02765c8017062ee6f41eb34a7b14575bcd8acd7da40e176fe8f12b10f?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/fc12b1a02765c8017062ee6f41eb34a7b14575bcd8acd7da40e176fe8f12b10f?s=96&d=mm&r=g\",\"caption\":\"Ryan Parker\"},\"description\":\"Professionally im a Infrastructure Security Specialist. I current maintain a homelab with about 3TB of RAM, 240+ TB of storage, tons of CPU cores, and 100gbit networking backbone in the garage running up my electricity bill.\",\"url\":\"https:\\\/\\\/angrysysadmins.tech\\\/index.php\\\/author\\\/grassyloki\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"VFIO: Tuning your Windows gaming VM for optimal performance - Angry Sysadmins","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/","og_locale":"en_US","og_type":"article","og_title":"VFIO: Tuning your Windows gaming VM for optimal performance - Angry Sysadmins","og_description":"This will be a guide on advanced tuning for a VFIO gaming VM. If you&#8217;re starting from scratch, read through the Arch Wiki guide on PCI passtrhough via OVMF. It is a great starting point and covers all of the basics. I&#8217;d recommend using libvirt instead of straight QEMU. 
Host hardware configuration Before we begin, Read More &raquo;","og_url":"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/","og_site_name":"Angry Sysadmins","article_published_time":"2022-07-22T20:41:55+00:00","article_modified_time":"2023-09-14T19:23:50+00:00","author":"Ryan Parker","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Ryan Parker","Est. reading time":"15 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/#article","isPartOf":{"@id":"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/"},"author":{"name":"Ryan Parker","@id":"https:\/\/angrysysadmins.tech\/#\/schema\/person\/651321cd35645fb6a4d8a75b7bc7c199"},"headline":"VFIO: Tuning your Windows gaming VM for optimal performance","datePublished":"2022-07-22T20:41:55+00:00","dateModified":"2023-09-14T19:23:50+00:00","mainEntityOfPage":{"@id":"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/"},"wordCount":3270,"commentCount":2,"articleSection":["Kernel","libvirt","Linux","Virtualization","Windows"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/","url":"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/","name":"VFIO: Tuning your Windows gaming VM for optimal performance - Angry 
Sysadmins","isPartOf":{"@id":"https:\/\/angrysysadmins.tech\/#website"},"datePublished":"2022-07-22T20:41:55+00:00","dateModified":"2023-09-14T19:23:50+00:00","author":{"@id":"https:\/\/angrysysadmins.tech\/#\/schema\/person\/651321cd35645fb6a4d8a75b7bc7c199"},"breadcrumb":{"@id":"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/angrysysadmins.tech\/index.php\/2022\/07\/grassyloki\/vfio-tuning-your-windows-gaming-vm-for-optimal-performance\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/angrysysadmins.tech\/"},{"@type":"ListItem","position":2,"name":"VFIO: Tuning your Windows gaming VM for optimal performance"}]},{"@type":"WebSite","@id":"https:\/\/angrysysadmins.tech\/#website","url":"https:\/\/angrysysadmins.tech\/","name":"Angry Sysadmins","description":"A site full of angry sysadmins here to vent and help","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/angrysysadmins.tech\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/angrysysadmins.tech\/#\/schema\/person\/651321cd35645fb6a4d8a75b7bc7c199","name":"Ryan 
Parker","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/fc12b1a02765c8017062ee6f41eb34a7b14575bcd8acd7da40e176fe8f12b10f?s=96&d=mm&r=g664d0e05248e51cb1d71b3f66c6f929d","url":"https:\/\/secure.gravatar.com\/avatar\/fc12b1a02765c8017062ee6f41eb34a7b14575bcd8acd7da40e176fe8f12b10f?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/fc12b1a02765c8017062ee6f41eb34a7b14575bcd8acd7da40e176fe8f12b10f?s=96&d=mm&r=g","caption":"Ryan Parker"},"description":"Professionally im a Infrastructure Security Specialist. I current maintain a homelab with about 3TB of RAM, 240+ TB of storage, tons of CPU cores, and 100gbit networking backbone in the garage running up my electricity bill.","url":"https:\/\/angrysysadmins.tech\/index.php\/author\/grassyloki\/"}]}},"_links":{"self":[{"href":"https:\/\/angrysysadmins.tech\/index.php\/wp-json\/wp\/v2\/posts\/1091","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/angrysysadmins.tech\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/angrysysadmins.tech\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/angrysysadmins.tech\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/angrysysadmins.tech\/index.php\/wp-json\/wp\/v2\/comments?post=1091"}],"version-history":[{"count":31,"href":"https:\/\/angrysysadmins.tech\/index.php\/wp-json\/wp\/v2\/posts\/1091\/revisions"}],"predecessor-version":[{"id":1195,"href":"https:\/\/angrysysadmins.tech\/index.php\/wp-json\/wp\/v2\/posts\/1091\/revisions\/1195"}],"wp:attachment":[{"href":"https:\/\/angrysysadmins.tech\/index.php\/wp-json\/wp\/v2\/media?parent=1091"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/angrysysadmins.tech\/index.php\/wp-json\/wp\/v2\/categories?post=1091"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/angrysysadmins.tech\/index.php\/wp-json\/wp\/v2\/tags?post=1091"},{"taxonom
y":"author","embeddable":true,"href":"https:\/\/angrysysadmins.tech\/index.php\/wp-json\/wp\/v2\/coauthors?post=1091"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}