Bug #3310


NVMM+QEMU fail to boot with UEFI: Mem Assist Failed [gpa=0xfffffff0]

Added by liweitianux about 4 years ago. Updated 9 days ago.

Status:
In Progress
Priority:
Normal
Assignee:
-
Category:
nvmm
Target version:
Start date:
01/09/2022
Due date:
% Done:

0%

Estimated time:

Description

NVMM+QEMU fails to boot with UEFI, for example:

% qemu-system-x86_64 \
      -boot menu=on -display sdl -accel nvmm \
      -drive file=OVMF_CODE.fd,if=pflash,format=raw,readonly=on \
      -drive file=OVMF_VARS.fd,if=pflash,format=raw
NetBSD Virtual Machine Monitor accelerator is operational
qemu-system-x86_64: NVMM: Mem Assist Failed [gpa=0xfffff000]
qemu-system-x86_64: NVMM: Failed to execute a VCPU.

The UEFI firmware can be obtained by installing the uefi-edk2-qemu-x86_64 package
or by downloading from: https://leaf.dragonflybsd.org/~aly/uefi/

First reported by Mario Marietto and confirmed by me, see:
https://lists.dragonflybsd.org/pipermail/users/2022-January/404898.html


Files

qemu-system-x86.7z (3.5 MB) qemu-system-x86.7z marietto, 07/15/2022 07:34 AM
freebsd-boot.jpg (68.1 KB) freebsd-boot.jpg FreeBSD hangs right after EFI framebuffer information mneumann, 11/26/2025 11:14 AM
Actions #1

Updated by liweitianux about 4 years ago

  • Category set to nvmm
  • Status changed from New to In Progress

A temporary workaround to boot with UEFI in NVMM+QEMU is:
specify the UEFI code with the -bios option instead of the -drive if=pflash (or -pflash) option.

However, this is not recommended because the UEFI variables are partially emulated and aren't persistent.
See: https://lists.gnu.org/archive/html/qemu-discuss/2018-04/msg00045.html

Actions #2

Updated by liweitianux about 4 years ago

After investigation, the issue is caused by the UEFI firmware not being mapped into the guest's memory.

The UEFI firmware is mapped by QEMU as a ROM device in ROMD mode, and such regions are excluded from guest memory mappings in the QEMU NVMM code.

I figured out the following patch that makes NVMM+QEMU boot with UEFI:

diff --git target/i386/nvmm/nvmm-all.c target/i386/nvmm/nvmm-all.c
index 290077f62..e3c948b31 100644
--- target/i386/nvmm/nvmm-all.c
+++ target/i386/nvmm/nvmm-all.c
@@ -1082,7 +1082,11 @@ nvmm_process_section(MemoryRegionSection *section, int add)
     unsigned int delta;
     uintptr_t hva;

-    if (!memory_region_is_ram(mr)) {
+    /*
+     * Don't exclude ROMD memory; for example, it's used to map UEFI firmware
+     * (if=pflash) and should be mapped for guest.
+     */
+    if (!memory_region_is_ram(mr) && !memory_region_is_romd(mr)) {
         return;
     }

However, NVMM+QEMU is extremely slow and uses 100% CPU; it's even much slower than TCG (i.e., without -accel nvmm).

For example, booting QEMU into the UEFI shell on my desktop takes ~12 seconds with TCG, but ~170 seconds with NVMM!

Test command:

qemu-system-x86_64 -boot menu=on \
  -drive file=OVMF_CODE.fd,if=pflash,format=raw,readonly=on \
  -drive file=OVMF_VARS.fd,if=pflash,format=raw \
  -display sdl [-accel nvmm]
Actions #3

Updated by marietto about 4 years ago

Hello. I tried using the -bios parameter to add the EFI code, like this:

qemu-system-x86_64 \
-machine type=q35,accel=nvmm \
-smp cpus=1 -m 8G \
-bios /home/marietto/Desktop/Files/Virt/OVMF/OVMF.fd \
-drive file=/mnt/dk26/bhyve/impish-cuda-11-4-nvidia-470.img,if=none,id=disk0 \
-device virtio-blk-pci,drive=disk0 \
-netdev user,id=net0,hostfwd=tcp:127.0.0.1:6022-:22 \
-device virtio-net-pci,netdev=net0 \
-object rng-random,id=rng0,filename=/dev/urandom \
-device virtio-rng-pci,rng=rng0 \
-display curses \
-vga qxl \
-spice addr=127.0.0.1,port=5900,ipv4=on,disable-ticketing=on,seamless-migration=on

but it didn't work:

qemu-system-x86_64: NVMM: Unexpected RDMSR 0x3a, ignored
qemu-system-x86_64: NVMM: Unexpected WRMSR 0x3a [val=0x1], ignored
qemu-system-x86_64: NVMM: Unexpected RDMSR 0x140, ignored
qemu-system-x86_64: NVMM: Unexpected RDMSR 0xce, ignored
qemu-system-x86_64: NVMM: Unexpected WRMSR 0x140 [val=0x0], ignored
qemu-system-x86_64: NVMM: Unexpected RDMSR 0x64e, ignored
qemu-system-x86_64: NVMM: Unexpected RDMSR 0x34, ignored

and this:

qemu-system-x86_64 \
-machine type=q35,accel=nvmm \
-smp cpus=1 -m 8G \
-bios /home/marietto/Desktop/Files/Virt/OVMF/QEMU_UEFI_CODE-x86_64.fd \
-drive file=/mnt/dk26/bhyve/impish-cuda-11-4-nvidia-470.img,if=none,id=disk0 \
-device virtio-blk-pci,drive=disk0 \
-netdev user,id=net0,hostfwd=tcp:127.0.0.1:6022-:22 \
-device virtio-net-pci,netdev=net0 \
-object rng-random,id=rng0,filename=/dev/urandom \
-device virtio-rng-pci,rng=rng0 \
-display curses \
-vga qxl \
-spice addr=127.0.0.1,port=5900,ipv4=on,disable-ticketing=on,seamless-migration=on

qemu: could not load PC BIOS '/home/marietto/Desktop/Files/Virt/OVMF/QEMU_UEFI_CODE-x86_64.fd'

and with OVMF_CODE.fd:

qemu-system-x86_64: NVMM: Unexpected RDMSR 0x3a, ignored
qemu-system-x86_64: NVMM: Unexpected WRMSR 0x3a [val=0x1], ignored
qemu-system-x86_64: NVMM: Unexpected RDMSR 0x140, ignored
qemu-system-x86_64: NVMM: Unexpected RDMSR 0xce, ignored
qemu-system-x86_64: NVMM: Unexpected WRMSR 0x140 [val=0x0], ignored
qemu-system-x86_64: NVMM: Unexpected RDMSR 0x64e, ignored
qemu-system-x86_64: NVMM: Unexpected RDMSR 0x34, ignored        


Actions #4

Updated by marietto about 4 years ago

With -bios /usr/pkg/share/qemu/edk2-x86_64-code.fd:

qemu: could not load PC BIOS '/usr/pkg/share/qemu/edk2-x86_64-code.fd'
Actions #5

Updated by tuxillo over 3 years ago

marietto wrote in #note-3:

[...]

Is there any specific reason why you need UEFI?

Also, what's that "impish cuda" image? Some custom Linux? Please remember we do not have hardware device passthrough.

Actions #6

Updated by marietto over 3 years ago

1) Is there any specific reason why you need UEFI?

Yes, because today every modern hypervisor uses UEFI rather than BIOS. In addition, I'm trying to start a collaboration to implement passthrough. I'm not interested in using a hypervisor that relies on the old BIOS bootloader.

2) Also, what's that "impish cuda" image? Some custom linux?

It is just a Linux VM (Ubuntu Impish + CUDA 470) that I had previously created for bhyve. I also tried with Windows 11 (again, a VM created for bhyve) and it gave the same error message.

3) I tried another experiment. I created a fresh img file with the command:

qemu-img create -f raw jammy.img 200G

and then I launched the VM with these parameters:

qemu-system-x86_64 \
-machine type=q35,accel=nvmm \
-smp cpus=4 -m 8G \
-drive if=pflash,format=raw,readonly=on,file=/usr/local/share/uefi-edk2-qemu/QEMU_UEFI_CODE-x86_64.fd \
-drive if=pflash,format=raw,file=/usr/local/share/uefi-edk2-qemu/QEMU_UEFI_VARS-x86_64.fd \
-drive id=cdrom,if=none,media=cdrom,file="ubuntu-22.04-desktop-amd64.iso" \
-drive file=/mnt/da16s1d/home/marietto/Desktop/VMS/jammy.img,if=none,id=disk0 \
-device virtio-blk-pci,drive=disk0 \
-netdev user,id=net0,hostfwd=tcp:127.0.0.1:6022-:22 \
-device virtio-net-pci,netdev=net0 \
-object rng-random,id=rng0,filename=/dev/urandom \
-device virtio-rng-pci,rng=rng0 \
-display curses \
-vga qxl \
-spice addr=127.0.0.1,port=5900,ipv4=on,disable-ticketing=on,seamless-migration=on

the error messages are :

root@marietto:/mnt/da16s1d/home/marietto/Desktop/VMS # ./vm2.sh

NetBSD Virtual Machine Monitor accelerator is operational
libGL error: MESA-LOADER: failed to open iris: Cannot open "/usr/local/lib/dri/iris_dri.so" (search paths /usr/local/lib/dri, suffix _dri)
libGL error: failed to load driver: iris
qemu-system-x86_64: NVMM: Mem Assist Failed [gpa=0xfffffff0]
qemu-system-x86_64: NVMM: Failed to execute a VCPU.

Does someone want to debug the core file produced by the bug? I've attached it.

Actions #7

Updated by mneumann 2 months ago

FYI, NetBSD pkgsrc qemu has the above patch (plus one additional line):

https://github.com/NetBSD/pkgsrc/blob/fa4b5df66cf974be6c49e8727a9b3006d70938ad/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-all.c

I am trying this out now...

Actions #8

Updated by mneumann 2 months ago

And the package for the UEFI BIOS is now named: edk2-qemu-x64-g202202_1

Actions #9

Updated by mneumann 2 months ago

Here is some more information:

letsnote$ groups
mneumann wheel operator video nvmm

letsnote$ nvmmctl identify
nvmm: Kernel API version 3
nvmm: State size 1008
nvmm: Comm size 4096
nvmm: Max machines 128
nvmm: Max VCPUs per machine 128
nvmm: Max RAM per machine 127T
nvmm: Arch Mach conf 0
nvmm: Arch VCPU conf 0x3<CPUID,TPR>
nvmm: Guest FPU states 0x3<x87,SSE>

pkg ins edk2-qemu-x64-g202202_1

letsnote$ qemu-system-x86_64 --version
QEMU emulator version 6.0.0

qemu-system-x86_64 \
-machine type=q35,accel=nvmm \
-smp cpus=1 -m 1G \
-cdrom FreeBSD-14.3-RELEASE-amd64-bootonly.iso \
-boot d \
-bios /usr/local/share/edk2-qemu/QEMU_UEFI-x86_64.fd \
-vga qxl \
-spice addr=127.0.0.1,port=5900,ipv4=on,disable-ticketing=on,seamless-migration=on

This does not report the "UEFI: Mem Assist Failed" error. Using "spicy" from the "spice-gtk" package, I can follow the boot process. But it hangs early in the FreeBSD kernel when the EFI framebuffer is attached (see screenshot freebsd-boot.jpg).

If I remove -bios ... from the above call, I get the "UEFI: Mem Assist Failed" error.

Now let's try with pflash:

qemu-system-x86_64 \
-machine type=q35,accel=nvmm \
-smp cpus=1 -m 1G \
-cdrom FreeBSD-14.3-RELEASE-amd64-bootonly.iso \
-display sdl \
-boot d \
-drive file=/tmp/QEMU_UEFI_CODE-x86_64.fd,if=pflash,format=raw,readonly=on \
-drive file=/tmp/QEMU_UEFI_VARS-x86_64.fd,if=pflash,format=raw

Here, I am getting the "UEFI: Mem Assist Failed" error.

Now let's try with QEMU 9:

letsnote$ qemu-system-x86_64 --version
QEMU emulator version 9.0.1

Exactly the same problems.

Actions #10

Updated by tuxillo 13 days ago

Does it even boot in BIOS mode?

Actions #11

Updated by tuxillo 13 days ago

Bug 3310: NVMM+QEMU UEFI Boot Failure

Problem
QEMU with NVMM fails to boot UEFI when firmware is provided via if=pflash, stopping at the reset vector (0xfffffff0) with:
NVMM: Mem Assist Failed [gpa=0xfffffff0].

Root Cause
- UEFI pflash is a ROM device running in ROMD mode.
- NVMM’s nvmm_process_section() only maps RAM regions, so ROMD regions are skipped.
- The reset vector is never mapped, so the first instruction fetch fails.

Correct ROMD Handling (Complete Fix)
- If region is non-RAM but a ROM device, allow mapping when romd_mode=true.
- If romd_mode=false, force unmap so accesses trap for MMIO emulation.
- Map ROMD read-only; use memory_region_is_romd() instead of memory_region_is_rom().
- This mirrors KVM’s behavior and avoids stale mappings on ROMD mode switches.
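
The decision logic described above can be sketched as a small, self-contained C function. This is an illustration, not QEMU's actual API: the struct, enum, and function names here are stand-ins for QEMU's MemoryRegion flags and nvmm_process_section():

```c
#include <stdbool.h>

/* Illustrative stand-in for the relevant MemoryRegion flags. */
struct region {
    bool is_ram;        /* ordinary guest RAM */
    bool is_rom_device; /* ROM device, e.g. pflash-backed UEFI firmware */
    bool romd_mode;     /* true: reads served from memory; false: trap as MMIO */
};

enum action { SKIP, MAP_RW, MAP_RO, UNMAP };

/*
 * Decide what the hypervisor should do with a memory section,
 * mirroring the ROMD handling described above:
 *  - plain RAM is mapped read/write;
 *  - a ROM device in ROMD mode is mapped read-only, so instruction
 *    fetches (e.g. from the reset vector at 0xfffffff0) hit memory;
 *  - a ROM device that left ROMD mode must be unmapped, so accesses
 *    trap and are emulated as MMIO (e.g. flash programming commands);
 *  - anything else (pure MMIO) is skipped.
 */
static enum action process_section(const struct region *r)
{
    if (r->is_ram)
        return MAP_RW;
    if (r->is_rom_device)
        return r->romd_mode ? MAP_RO : UNMAP;
    return SKIP;
}
```

The pkgsrc patch covers only the MAP_RO branch; the UNMAP branch is what it misses when a ROM device switches out of ROMD mode.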

Incomplete Fix in pkgsrc
The simple check !memory_region_is_romd(mr) drops ROMD regions when the device switches to MMIO mode, but never unmaps them, causing heavy VMEXIT overhead and ~14x slowdown.

Workaround
Use -bios instead of -pflash. This loads firmware into RAM but loses persistent UEFI variables.

Performance Findings
UEFI with the ROMD fix boots but remains slow due to heavy I/O exits.
- MMIO exits are costly but relatively infrequent.
- Port I/O dominates wall time in tests (over 1.3M PIO operations, ~76% of wall time).
- Intel VMX always decodes instructions in userspace for MMIO, adding overhead.

Test Configuration (Jan 2026)
Command used (abridged):
sudo -E qemu-system-x86_64 \
-machine type=q35,accel=nvmm -smp cpus=1 -m 1G -boot d \
-drive file=QEMU_UEFI_CODE-x86_64.fd,if=pflash,format=raw,readonly=on \
-drive file=QEMU_UEFI_VARS-x86_64.fd,if=pflash,format=raw \
-device virtio-scsi-pci -device scsi-cd,drive=cd0 \
-drive id=cd0,if=none,format=raw,media=cdrom,file=FreeBSD-14.3-RELEASE-amd64-bootonly.iso \
-device virtio-net-pci,netdev=net0 -netdev user,id=net0 -vga virtio

Key Metrics
Wall time: 440.7 s
VCPU run time: 4.2 s (0.96%)
PIO time: 335.7 s (76.2%)
MMIO time: 11.0 s (2.5%)
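
As a back-of-envelope check on these metrics (a sketch; the 1.3M operation count is approximate, taken from the findings above), the average handling cost per port-I/O exit can be computed as:

```c
/* Average handling cost per port-I/O exit, in microseconds,
 * given total PIO time in seconds and the number of PIO operations. */
static double avg_us_per_pio(double pio_seconds, double pio_ops)
{
    return pio_seconds / pio_ops * 1e6;
}

/* Fraction of wall time spent handling PIO, as a percentage. */
static double pio_share_pct(double pio_seconds, double wall_seconds)
{
    return pio_seconds / wall_seconds * 100.0;
}
```

With the numbers above (335.7 s over ~1.3M operations, 440.7 s wall time), this gives roughly 260 us per PIO exit and a ~76% share of wall time, i.e., each port access costs hundreds of microseconds of host-side handling.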

Next Steps
- Add port tracking to identify the hottest PIO ports (likely serial, PIT, PS/2, RTC, PCI config).
- Consider kernel-level optimizations (coalesced MMIO, ioeventfd) for long-term performance.

--

From Claude Opus 4.5

Actions #12

Updated by liweitianux 9 days ago

mneumann wrote in #note-9:

[...]

This does not report the "UEFI: Mem Assist Failed" error. Using "spicy" from the "spice-gtk" package, I can follow the boot process. But it hangs early in the FreeBSD kernel when the EFI framebuffer is attached (see screenshot freebsd-boot.jpg).

[...]

This case (the hang when the EFI framebuffer is attached) is unrelated to UEFI booting, and it happened only on Intel systems.

It has been fixed in commit 0087a1d163488a57787a9a6431dd94070b1988d4.
