
KVM Administration Guide

GRUB

virt-host-validate is a tool that checks whether the host has been properly prepared for virtualization:

[root@rhel-tiger-14-6 ~]# virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)
Note

The QEMU: Checking for secure guest support check applies only to AMD or IBM processors and is of no concern in this case.

IOMMU is required to give guests direct access to physical devices, but it is missing from the kernel command line:

[root@rhel-tiger-14-6 ~]# cat /proc/cmdline
BOOT_IMAGE=(hd1,gpt2)/vmlinuz-4.18.0-305.25.1.el8_4.x86_64 root=/dev/mapper/vg1-root ro crashkernel=auto resume=/dev/mapper/vg1-swap rd.lvm.lv=vg1/root rd.lvm.lv=vg1/swap rhgb quiet
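A quick way to confirm whether the flag is already set is to test the current command line; a minimal sketch (check_iommu is an illustrative helper name, not a standard command):

```shell
# Illustrative helper: report whether intel_iommu=on appears in a
# kernel command line string (simple whitespace-token match).
check_iommu() {
  case " $1 " in
    *" intel_iommu=on "*) echo present ;;
    *) echo missing ;;
  esac
}

# Feed it the live command line:
check_iommu "$(cat /proc/cmdline)"
```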

For a performant system, you must consider both CPU and memory configuration. You can use tuned to handle the CPU side and configure memory directly through kernel boot parameters.

Direct update

[root@rhel-tiger-14-6 ~]# grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt hugepagesz=1G default_hugepagesz=1G hugepages=160 transparent_hugepage=never selinux=0"
[root@rhel-tiger-14-6 ~]# reboot

Command: intel_iommu=on iommu=pt

Description: Enables the Intel IOMMU driver and puts it into passthrough mode (the adapter does not need DMA translation). This is required for SR-IOV.

Command: hugepagesz=1G default_hugepagesz=1G hugepages=160 transparent_hugepage=never

Description: Reserves 1 GiB hugepages at boot and disables transparent hugepages, which improves memory performance on hosts with large amounts of RAM. The reserved hugepages are distributed equally across the NUMA nodes, and the count depends on the amount of memory installed.

In this case, 160 hugepages of 1 GiB each are reserved for use by the FortiGate-VMs. The important point is to avoid starving the hypervisor/OS: leave enough non-hugepage memory per NUMA node.
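The even split of hugepages across NUMA nodes can be sketched with shell arithmetic (values taken from this example; variable names are ours):

```shell
# 160 x 1 GiB hugepages spread evenly over 2 NUMA nodes
hugepages=160
numa_nodes=2
echo "$(( hugepages / numa_nodes )) hugepages per NUMA node"
# prints: 80 hugepages per NUMA node
```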

[root@rhel-tiger-14-6 ~]# virsh nodeinfo
CPU model:           x86_64
CPU(s):              72
CPU frequency:       3258 MHz
CPU socket(s):       1
Core(s) per socket:  18
Thread(s) per core:  2
NUMA cell(s):        2
Memory size:         196373424 KiB

[root@rhel-tiger-14-6 ~]# expr 196373424 / 1024 / 1024
187

187 GiB total leaves 27 GiB outside of the hugepage reservation for the hypervisor/OS. Leaving approximately 16 GiB per NUMA node is recommended, although it is not an exact science. In this case, 12 x 16 GiB DIMMs (192 GiB) are known to be physically installed and 160 GiB has been configured as hugepages.
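The headroom arithmetic above can be reproduced directly in the shell (variable names are illustrative):

```shell
total_gib=187      # usable memory reported by virsh nodeinfo
hugepage_gib=160   # hugepagesz=1G x hugepages=160
numa_nodes=2
headroom=$(( total_gib - hugepage_gib ))
echo "$headroom GiB left for the hypervisor/OS, about $(( headroom / numa_nodes )) GiB per NUMA node"
# prints: 27 GiB left for the hypervisor/OS, about 13 GiB per NUMA node
```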

CPU sockets in virsh nodeinfo are counted per NUMA cell, as documented in Red Hat Bugzilla.

Command: selinux=0

Description: Disables SELinux.

If your build standard requires SELinux, you can leave it enabled. However, this may impact overall performance.

On Ubuntu systems, AppArmor is used instead of SELinux; you can disable it with apparmor=0.

[root@rhel-tiger-14-6 ~]# cat /proc/cmdline
BOOT_IMAGE=(hd1,gpt2)/vmlinuz-4.18.0-305.25.1.el8_4.x86_64 root=/dev/mapper/vg1-root ro crashkernel=auto resume=/dev/mapper/vg1-swap rd.lvm.lv=vg1/root rd.lvm.lv=vg1/swap rhgb quiet intel_iommu=on iommu=pt hugepagesz=1G default_hugepagesz=1G hugepages=160 transparent_hugepage=never selinux=0

[root@rhel-tiger-14-6 ~]# virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : PASS
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)
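After the reboot you can also confirm that the hugepage reservation took effect from /proc/meminfo; a minimal sketch (hugepages_total is an illustrative helper, not a standard command):

```shell
# Extract the HugePages_Total count from /proc/meminfo-style input.
hugepages_total() {
  awk '/^HugePages_Total:/ {print $2}'
}

# On the host: hugepages_total < /proc/meminfo
# Demo with sample input:
printf 'HugePages_Total:     160\n' | hugepages_total   # prints 160
```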
