
This document is a WORK IN PROGRESS.
This is just a quick personal cheat sheet: treat its contents with caution!


QEMU is a generic and open-source machine emulator and virtualizer. When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your x86 PC), achieving very good performance through dynamic translation. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU; it can use other hypervisors like Xen or KVM to access CPU virtualization extensions (HVM).

KVM is a hypervisor built into the Linux kernel. It is similar to Xen in purpose but much simpler to get running. Unlike native QEMU, which uses emulation, KVM is a special operating mode of QEMU that uses CPU virtualization extensions (HVM) via a kernel module.



You might need to enable CPU virtualization in your BIOS settings.

Gentoo kernel

A correct kernel config is needed:

$ cd /usr/src/linux
# make nconfig # or `# make menuconfig`

    # Enable KVM support
    # Double check here: <>
    > [*] Virtualization  ---> # Symbol: VIRTUALIZATION [=y]
    >     <*>   Kernel-based Virtual Machine (KVM) support # Symbol: KVM [=y]
    #     Enable KVM support for Intel processors:
    >     <M>   KVM for Intel processors support # Symbol: KVM_INTEL [=m]
    #     Enable KVM support for AMD processors:
    >     <M>   KVM for AMD processors support # Symbol: KVM_AMD [=m]
    #     [Recommended] vhost-net USE flag support:
    >     <*>   Host kernel accelerator for virtio net # Symbol: VHOST_NET [=y]
    # [Optional] advanced networking support:
    > Device Drivers  --->
    >     [*] Network device support  ---> # Symbol: NETDEVICES [=y]
    >         [*]   Network core driver support # Symbol: NET_CORE [=y]
    >         <*>   Universal TUN/TAP device driver support # Symbol: TUN [=y]
    # [Optional] Enabling 802.1d Ethernet Bridging support:
    > [*] Networking support  ---> # Symbol: NET [=y]
    >         Networking options  --->
    >             <*> The IPv6 protocol #  Symbol: IPV6 [=y]
    >             <*> 802.1d Ethernet Bridging # Symbol: BRIDGE [=y]
    # [Optional] python USE flag for file capabilities support:
    > Kernel hacking  --->
    >         Compile-time checks and compiler options  --->
    >             [*] Debug Filesystem # Symbol: DEBUG_FS [=y]
    # [Optional] Ext4 kvm_stat support (if using ext4):
    > File systems  --->
    >     <*> The Extended 4 (ext4) filesystem # Symbol: EXT4_FS [=y]
    >     [*]   Ext4 Security Labels # Symbol: EXT4_FS_SECURITY [=y]

After configuring the kernel, don't forget to rebuild and reinstall it!
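
A typical rebuild looks like the sketch below. This is a hedged example: the source-tree path is the Gentoo convention, and the last step assumes GRUB as the bootloader; adapt both to your setup.

```shell
# Sketch of a typical kernel rebuild on Gentoo after changing the config;
# the GRUB step is an assumption and depends on your bootloader.
cd /usr/src/linux || exit 0           # enter the kernel source tree (skip if absent)
make -j"$(nproc)"                     # build the kernel image and modules
make modules_install                  # install modules under /lib/modules/
make install                          # copy the kernel image to /boot
grub-mkconfig -o /boot/grub/grub.cfg  # regenerate the GRUB config
```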

Specify the targets; a QEMU executable will be built for each target:

# vi /etc/portage/make.conf

    > ...
    > QEMU_SOFTMMU_TARGETS="x86_64"
    > QEMU_USER_TARGETS="x86_64"
    > ...
See for more targets

And install qemu:
# emerge --ask app-emulation/qemu

# pacman -S qemu-headless # or `qemu` to also get a GUI interface
# apt install qemu qemu-kvm





In order to run a KVM-accelerated virtual machine without logging in as root, add your user name to the kvm group:

# gpasswd -a <username> kvm
Then log out and log back in.
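
You can then sanity-check the result from your normal account. This sketch assumes a typical udev setup where /dev/kvm is owned by the kvm group:

```shell
# Verify KVM device access after logging back in; prints a diagnosis either way.
if [ -e /dev/kvm ]; then
    ls -l /dev/kvm    # expect something like: crw-rw---- 1 root kvm ... /dev/kvm
else
    echo "/dev/kvm is missing: KVM module not loaded or no hardware support"
fi
if id -nG | grep -qw kvm; then
    echo "current user is in the kvm group"
else
    echo "current user is NOT in the kvm group (did you log back in?)"
fi
```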



  • Resize qcow ?
  • Try with vdi, vmdk, vhd and hdd
  • USB forwarding
  • Full screen
  • Forward HTTP
  • natnetwork? In order for multiple VMs to share the same network? With or without internet access.
  • Choose IP address
  • Allow ping (ICMP protocol)



By default, QEMU's user-mode networking does not support the ICMP protocol (used by ping, for example)! So do not assume that you have no internet connection just because you can't ping anything!

To disable networking completely, use -nic none.

Redirect an unused host port (e.g. 60022) to the guest default SSH port (22) by adding the following option: -nic user,hostfwd=tcp::60022-:22

This way, you can SSH from host to guest with: $ ssh -p 60022 user_name@localhost

Image creation


  • qcow image: an optimized file format for disk image files used by QEMU:
    $ qemu-img create -f qcow2 image_name.img 20G
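
qemu-img can also inspect and grow an existing image. This sketch reuses the image_name.img file from above and skips itself if qemu-img is not installed; note that growing the virtual disk does not grow the guest's partitions or filesystem, which must be enlarged separately:

```shell
# Inspect and grow a qcow2 image (growing is safe; shrinking needs --shrink
# and can destroy data).
command -v qemu-img >/dev/null || { echo "qemu-img not installed"; exit 0; }
qemu-img create -f qcow2 image_name.img 20G   # sparse: tiny on disk until used
qemu-img info image_name.img                  # shows format and virtual size
qemu-img resize image_name.img +5G            # virtual size is now 25G
```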

Start script


Use the -cpu host option to make QEMU emulate the host's exact CPU rather than a more generic CPU.
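
To see which CPU models your QEMU build knows about (the exact list varies with the QEMU version; the snippet skips itself if QEMU is not installed):

```shell
# List the CPU models known to this QEMU build; "host" requires KVM.
command -v qemu-system-x86_64 >/dev/null || { echo "qemu not installed"; exit 0; }
qemu-system-x86_64 -cpu help | head -n 15
```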


By default, QEMU doesn't use UEFI. But if wanted, it's possible to install and use UEFI firmware for 64-bit x86 virtual machines:

# emerge -a sys-firmware/edk2-ovmf
# pacman -S ovmf

(UEFI firmware for 64-bit x86 virtual machines should already be installed)

$ apt install ovmf

# yum install ovmf # (or edk2-ovmf?)
# dnf install ovmf # (or edk2-ovmf?)

Now, those startup options should be added in order to use UEFI:

$ qemu-system-x86_64 \
    -bios /usr/share/OVMF/OVMF_CODE.fd `# Path to UEFI firmware` \
    -net none `# disable iPXE` \


MAC address

In order to specify a MAC address (e.g. 00:00:00:11:11:11):

$ qemu-system-x86_64 \
    -device virtio-net,netdev=vmnic,mac=00:00:00:11:11:11 \

Check it from inside the guest with:

$ cat /sys/class/net/eth0/address # e.g. for eth0
$ cat /sys/class/net/*/address # e.g. for all devices



By giving the -net nic argument to QEMU, it will, by default, assign a virtual machine a network interface with the link-level address 52:54:00:12:34:56. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.

Make sure that each virtual machine has a unique link-level address that always starts with 52:54:. Use the following option, replacing each X with an arbitrary hexadecimal digit:

$ qemu-system-x86_64 -net nic,macaddr=52:54:XX:XX:XX:XX -net vde disk_image

Generating unique link-level addresses can be done in several ways:

1. Manually specify unique link-level address for each NIC. The benefit is that the DHCP server
   will assign the same IP address each time the virtual machine is run, but it is unusable for
   large number of virtual machines.

2. Generate random link-level address each time the virtual machine is run. Practically zero
   probability of collisions, but the downside is that the DHCP server will assign a different
   IP address each time. You can use the following command in a script to generate random
   link-level address in a `macaddr` variable:

    $ printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
    $ qemu-system-x86_64 -net nic,macaddr="$macaddr" -net vde disk_image

3. Use the following script `` to generate the link-level address from the
   virtual machine name using a hashing function. Given that the names of virtual machines are
   unique, this method combines the benefits of the aforementioned methods: it generates the
   same link-level address each time the script is run, yet it preserves the practically zero
   probability of collisions.


    #!/usr/bin/env python
    # usage: <VMName>

    import sys
    import zlib

    # zero-pad to 8 hex digits so the address always has 6 octets
    crc = format(zlib.crc32(sys.argv[1].encode("utf-8")) & 0xffffffff, "08x")
    print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))

    In a script, you can use for example:

    vm_name="VM Name"
    qemu-system-x86_64 -name "$vm_name" -net nic,macaddr=$( "$vm_name") -net vde disk_image



QEMU offers guests the ability to use paravirtualized block and network devices using the virtio drivers, which provide better performance and lower overhead.

A virtio block device requires the -drive option for passing a disk image, with the parameter if=virtio:

$ qemu-system-x86_64 -drive file=disk_image,if=virtio

Almost the same goes for the network:

$ qemu-system-x86_64 -nic user,model=virtio-net-pci

Note: This will only work if the guest machine has drivers for virtio devices. Linux does, and most distributions include the required drivers, but there is no guarantee that virtio devices will work with other operating systems.
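
From inside a Linux guest, you can verify that the virtio devices were detected. The sysfs path below is standard on Linux, but the exact output depends on the guest kernel:

```shell
# Inside the guest: list virtio devices and any loaded virtio modules.
ls /sys/bus/virtio/devices 2>/dev/null || echo "no virtio bus found"
lsmod | grep virtio || echo "no virtio modules listed (may be built-in)"
```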

Example with Artix (on Gentoo host with UEFI)

  • Download e.g. the 20200413 Artix, based on Runit, ISO image:

    $ wget

  • Create, e.g. a 20 GB qcow image called artix_base_runit.img for Artix:

    $ qemu-img create -f qcow2 artix_base_runit.img 20G

  • Since QEMU requires a lot of options, it's nice to put them into a shell script, e.g.:

    $ vi
        > #!/bin/sh
        > exec qemu-system-x86_64 \
        >   -name "artix_base_runit" `# Sets the name of the guest` \
        >   -enable-kvm `# enable KVM full virtualization support` \
        >   -cpu host `# [recommended] emulate the host processor` \
        >   -drive file=artix_base_runit.img,if=virtio `# Set virtio hdd with specified img` \
        >   -netdev user,id=vmnic,hostname=artix_base_runit `# [recommended] user-mode networking` \
        >   -device virtio-net,netdev=vmnic `# [recommended] virtio support` \
        >   -m 4G `# amount of memory (default: 128 MB) ram the guest is permitted to use` \
        >   -smp 2 `# number of cores the guest is permitted to use` \
        >   -monitor stdio `# Redirect the monitor to host device in non graphical mode` \
        >   -bios /usr/share/edk2-ovmf/OVMF_CODE.fd `# Path to UEFI firmware` \
        >   -net none `# disable iPXE, required for UEFI` \
        >   "$@"
    $ chmod +x

  • Boot the disk image:

    ./ -boot d -cdrom artix-base-runit-20200413-x86_64.iso

  • After the disk image installation, subsequent boots can be done by simply running the script without the -boot and -cdrom options.


Example with NixOS (on Gentoo host with UEFI) without KVM

  • Download e.g. the 21.11 NixOS minimal ISO image:

    $ wget

  • Create e.g. a 20 GB qcow image called nixos_vm.img for NixOS:

    $ qemu-img create -f qcow2 nixos_vm.img 20G

  • Since QEMU requires a lot of options, it's nice to put them into a shell script, e.g.:

    $ vi
        > #!/bin/sh
        > exec qemu-system-x86_64 \
        >   -name nixos_vm \
        >   -drive file=nixos_vm.img,if=virtio \
        >   -m 4G \
        >   -smp 2 \
        >   -nic user,hostname=nixos_vm,hostfwd=tcp::60022-:22,model=virtio-net-pci \
        >   -bios /usr/share/edk2-ovmf/OVMF_CODE.fd `# Path to UEFI firmware` \
        >   -net none `# disable iPXE, required for UEFI` \
        >   "$@"
    $ chmod +x

  • Boot the disk image:

    ./ -boot d -cdrom nixos-minimal-21.11.337100.7b38b03d76a-x86_64-linux.iso

  • After the disk image installation, subsequent boots can be done by simply running the script without the -boot and -cdrom options.



Cannot release cursor focus from QEMU/Cannot get cursor back

Press Ctrl+Alt+G


Could not access KVM kernel module error

If you get the below error message:

Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: failed to initialize kvm: No such file or directory

Then you might want to follow the troubleshooting steps below:

  • If using Gentoo, make sure you followed the associated installation instructions (for the Gentoo kernel and /etc/portage/make.conf configuration).

  • Make sure you followed the configuration instructions.

  • Make sure your CPU has virtualization support:

    $ lscpu | grep Virtualization

    This should return either AMD-V (for AMD CPUs) or VT-x (for Intel CPUs).

    If there is no virtualization support for your processor, then KVM will not work at all on that machine.
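
The same information can be read directly from /proc/cpuinfo by counting the relevant CPU flags (vmx for Intel VT-x, svm for AMD-V); a count of 0 means no usable hardware support:

```shell
# Count vmx/svm flags across all CPU threads; 0 means no hardware support.
# `|| true` is needed because `grep -c` exits non-zero when the count is 0.
grep -E -c '(vmx|svm)' /proc/cpuinfo || true
```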

  • On most distributions, KVM will be included as a module in the kernel (i.e. not built-in)

    # modprobe kvm
    # modprobe kvm_amd # or `# modprobe kvm_intel`
    $ lsmod | grep kvm

    Note that if you modified your Linux kernel yourself (e.g. as usual on Gentoo), then you might have a built-in KVM. In this case, ignore this step.

  • Even when the CPU supports virtualization, it may be disabled in the firmware by default. In such cases, the modprobe kvm command will work but modprobe kvm_amd (or modprobe kvm_intel) will give an error.

    If you check the dmesg output after running those modprobe commands, you might see kvm: disabled by bios and kvm: no hardware support. In this case you will have to enable virtualization in your BIOS. E.g. in the "Advanced" settings tab -> "CPU configuration" (or "Chipset Control") -> enable "Secure Virtual Machine Mode" (or "Intel Virtualization Technology", or "Intel VT-x", or "Virtualization Extensions").

After running the QEMU start shell script, you might see the following message:

QEMU 7.0.0 monitor - type 'help' for more information
(qemu) VNC server running on ::1:5900
This is because QEMU falls back to the VNC protocol for graphics output, typically because the GTK/SDL libraries needed by QEMU are not present.

In this case, you can either install the QEMU dependencies for GTK/SDL and restart the QEMU shell script, or you can install VNC (in this case, just execute vncviewer in order to get the graphics output of your QEMU virtual machine).

If this cheat sheet has been useful to you, then please consider leaving a star here.