
Disclaimer: pretty new to Linux

Relevant system specs:

Motherboard: MSI B450 Tomahawk Max
CPU: AMD Ryzen 9 3950X (no iGPU)
PCIe slot 0: Nvidia 2070 Super (1x monitor connected)
PCIe slot 1: AMD RX 550X (2x monitor connected)
OS: Ubuntu 20.04.2 LTS

I'm following a noob's guide on how to set up a Windows virtual machine with a passthrough GPU for playing games, using Xubuntu as the host/hypervisor instead of the Debian used in the guide. All steps up to isolating the Nvidia GPU work fine, but when I actually isolate that GPU with vfio-pci, my AMD GPU seems to be disabled, or at least not used, and I'm left with a black screen on all monitors (they're powered on, just black). To get the screens to display again I have to disable IOMMU in my BIOS settings; then I can disable vfio and re-enable IOMMU.
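
For context, the isolation step boils down to handing the Nvidia card to vfio-pci by its vendor:device ID. Roughly what that looks like in my setup, using the ID from my lspci output below (the guide's exact file name and method may differ, and the card's HDMI audio function ID would need to be listed as well; it doesn't show up in my grep):

# /etc/modprobe.d/vfio.conf -- claim the 2070 Super for vfio-pci at boot
options vfio-pci ids=10de:1e84
# make sure vfio-pci grabs the card before the graphics drivers do
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci

followed by sudo update-initramfs -u and a reboot.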

I tried swapping the graphics cards, but for some reason that messed up the ACS/IOMMU grouping so that neither GPU could be properly isolated, so swapping them is not an option.

I tried following the answers to this similar question; however, when I generated the xorg config I ended up with three separate GPU sections, one for each connected screen. Furthermore, the AMD GPU's sections are at the top, which I assumed would give it priority.

Section "Device"
    Identifier  "Card0"
    Driver      "amdgpu"
    BusID       "PCI:37:0:0"
EndSection
Section "Device"
    Identifier  "Card1"
    Driver      "amdgpu"
    BusID       "PCI:37:0:1"
EndSection
Section "Device"
    Identifier  "Card2"
    Driver      "nouveau"
    BusID       "PCI:38:0:0"
EndSection

I foolishly tried deleting Card2 and its connected display section, which broke my system and forced me to reinstall.

Since I'm pretty new to Linux and this is my first time diving into xorg.conf, I'm stumped. How do I change the default GPU used by the OS from slot 0 to slot 1?

Possibly relevant: the output of lspci -nn | grep -i vga is

25:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Lexa PRO [Radeon 540/540X/550/550X / RX 540X/550/550X] [1002:699f] (rev c7)
26:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2070 SUPER] [10de:1e84] (rev a1)

and find /sys/kernel/iommu_groups/ -type l confirms that the Nvidia GPU is the only device in its IOMMU group.
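
(For completeness, I double-checked the grouping with the usual shell loop that lists every device per group; this is a standard snippet from various passthrough guides, not something specific to my setup:)

for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done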

First time asking, so let me know if I've missed anything or made any mistakes. Sorry for the lack of understanding; feel free to explain it to me like I'm 5.

  • This hardware configuration is impractical and will only cause you problems. The Nvidia drivers and AMD drivers will be in conflict if you try to use them simultaneously. Pick the GPU you want to use for this PC and take the other one out of the PC – Nmath Jul 19 '21 at 04:55
  • The point is that the AMD graphics card will be used exclusively by the host and the Nvidia graphics card will be used exclusively by the VM. They shouldn't interact at all, just like the intel iGPU and Nvidia card used in the linked guide, right? – qwertaii Jul 19 '21 at 05:01
  • No, it's not the same thing. This is an unviable build. If you want to use multiple GPUs for some reason, it's best that they are identical. – Nmath Jul 19 '21 at 05:31

1 Answer


I agree with the earlier comments about not mixing Nvidia and AMD cards. But to answer your original question: the GPU that has a monitor connected during boot will be picked as the default. So an inelegant but simple solution is to disconnect the monitor from the card you want to be secondary. You can connect it back after boot and it will stay secondary.
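
If you'd rather pin the primary card in configuration instead of re-plugging cables, Xorg can also mark a driver's card as primary via an OutputClass section. A minimal sketch, assuming your host card uses the amdgpu kernel driver (the file name below is arbitrary, and I haven't tested this on your exact hardware):

# /etc/X11/xorg.conf.d/10-amdgpu-primary.conf
Section "OutputClass"
    Identifier  "AMD primary"
    MatchDriver "amdgpu"
    Driver      "amdgpu"
    Option      "PrimaryGPU" "yes"
EndSection

After a reboot you can check which card X actually picked with xrandr --listproviders.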