
vmsilo

A NixOS VM compartmentalization system inspired by Qubes OS. It runs programs in isolated VMs using crosvm (the Chrome OS VMM) and displays their windows natively on the host desktop:

[screenshot: vmsilo desktop]

Thanks to Thomas Leonard (@talex5), who wrote the wayland proxy and made qubes-lite, which made this project possible. https://gitlab.com/talex5/qubes-lite

Warning: this is a vibecoded prototype made for fun. If you need a serious and secure operating system, use Qubes.

The built VMs are full-fat NixOS systems (a bit over 2GB for a VM with Firefox). You can reuse the same image for multiple VMs by giving them the same NixOS config and package set. The per-VM configuration under programs.vmsilo.nixosVms is passed entirely through the kernel command line, so it doesn't affect image reuse.
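
For example, two VMs built from the same package set can share one rootfs image. A sketch (VM names and values illustrative):

```nix
# Identical package set and no per-VM guestConfig means one shared rootfs
# image; memory/color/etc. travel on the kernel command line instead.
programs.vmsilo.nixosVms = {
  work     = { color = "blue";  memory = 4096; guestPrograms = with pkgs; [ firefox ]; };
  personal = { color = "green"; memory = 2048; guestPrograms = with pkgs; [ firefox ]; };
};
```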

Features

  • Qubes-style colored window decorations enforced by patched kwin
  • Two-level clipboard like Qubes: each VM gets independent clipboard and primary selection buffers
  • Fast guest graphics, including hardware Vulkan and OpenGL rendering
  • Supports wayland protocols for things like HDR, fractional scaling and smooth video playback
  • Each VM gets a folder in the host menu, automatically populated with its programs
  • VMs are launched on demand when apps are started through the menu (uses systemd socket activation)
  • Sound playback and capture (capture disabled by default for VMs)
  • VMs can be configured fully disposable with no state kept between restarts
  • Shared directories over virtiofs for easily sharing files between VMs
  • PCI passthrough
  • System tray integration (VM tray applets appear in host system tray, with VM color border)
  • Dynamic memory control through vmsilo-balloond: when host memory is low, it reclaims memory from VMs
  • Auto shutdown idle VMs (optional, can be enabled in VM settings)

Comparison to Qubes

The main benefits compared to Qubes are:

  • Fast, modern graphics. Wayland calls are proxied to the host.
  • Better power management. Qubes is based on Xen, whose support for modern laptop power management is significantly worse than Linux's.
  • NixOS-based declarative VM config.

The cost for that is security. Qubes is laser-focused on security and hard compartmentalization. This makes it by far the most secure general-purpose operating system there is.

Ways in which we are less secure than Qubes (list is not even remotely exhaustive):

  • The host system is not isolated from the network or USB at all by default. The user needs to explicitly configure a netvm/usbvm if desired.
  • VM network connections go through host tap interfaces, so the host kernel needs to handle VM packets. If setting up VM networking, use tap.bridge to reduce attack surface by limiting host involvement to bridging layer 2 packets.
  • Proxying wayland calls means the attack surface from VM to host is way larger than Qubes' raw framebuffer copy approach. We use a whitelist of allowed wayland protocols to mitigate this somewhat.
  • Probably a million other things.

If you are trying to defend against a determined, well-resourced attacker targeting you specifically then you should be running Qubes.

Quick Start

Example flake.nix:

{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  inputs.vmsilo.url = "git+https://git.dsg.is/dsg/vmsilo.git";

  outputs = { self, nixpkgs, vmsilo, ... }: {
    nixosConfigurations.stofa = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        vmsilo.nixosModules.default
        ./configuration.nix
      ];
    };
  };
}

Configure VMs in your NixOS configuration:

{ config, pkgs, ... }: {

  # User must have explicit UID for vmsilo
  users.users.david.uid = 1000;

  programs.vmsilo = {
    enable = true;
    user = "david";

    nixosVms = {
      banking = {
        color = "darkgreen";
        memory = 4096;
        cpus = 4;
        autoShutdown = { enable = true; after = 120; };
        network = {
          nameservers = [ "9.9.9.9" ];
          netvm = "netvm";
        };
        guestPrograms = with pkgs; [ firefox konsole ];
      };

      netvm = {
        color = "darkred";
        memory = 1024;
        cpus = 2;
        network = {
          isNetvm = true;
          netvm = "host"; # Autoconfigure NAT through host networking
          nameservers = [ "9.9.9.9" ];
        };
        guestPrograms = with pkgs; [ konsole ];
      };

      vault = {
        # Offline VM with no networking
        color = "green";
        memory = 4096;
        cpus = 4;
        guestPrograms = with pkgs; [ libreoffice ];
      };
    };
  };
}

Usage

Copy/paste

Like Qubes, each VM has its own independent clipboard (and primary selection) buffer. Copying in a VM window only copies to the VM buffer. To copy between VMs, two new shortcuts are defined:

  • Copy to Global Clipboard (default ctrl+shift+c): Copies the VM or host clipboard buffer (depending on selected window) to the global clipboard buffer
  • Paste from Global Clipboard (default ctrl+shift+v): Copies the global clipboard buffer to the clipboard buffer for the current window (VM or host)

These can be reassigned under Settings -> Shortcuts -> Window Management.

Video playback

Video playback in VMs works very well. We support the fifo-v1 and presentation-time wayland protocols, so frame timing should be accurate.

For mpv, make sure you use --vo=wlshm. Other backends probably won't work.
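
Inside a VM, that is the usual mpv invocation (filename illustrative):

```shell
mpv --vo=wlshm video.mkv
```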

Configuration Options

There are a lot of configuration options but you don't really need to touch most of them. Check the examples for what a basic configuration looks like.

programs.vmsilo

Option Type Default Description
enable bool false Enable vmsilo VM management
user string required User who owns TAP interfaces and runs VMs (must have explicit UID)
nixosVms attrsOf VM config {} NixOS-based VMs to create (keys are VM names)
enableBashIntegration bool true Enable bash completion for vm-* commands
nvidiaWeakenSandbox bool false Use crosvm-nvidia package with relaxed W+X memory policy for nvidia GPU support
schedulerIsolation "full", "vm", or "off" "full" Mitigate hyperthreading attacks using scheduler thread isolation. "full": vCPU threads may not share a core with any other thread. "vm": vCPU threads may share a core with other vCPUs from the same VM only. "off": no mitigations.
crosvm.logLevel string "info" Log level for crosvm (error, warn, info, debug, trace)
crosvm.extraArgs list of strings [] Extra args passed to crosvm before "run" subcommand
crosvm.extraRunArgs list of strings [] Extra args passed to crosvm after "run" subcommand
vmsilo-balloond.logLevel string "info" Log level for vmsilo-balloond daemon (error, warn, info, debug, trace)
vmsilo-balloond.pollInterval string "2s" Max policy evaluation interval (at 0% PSI pressure)
vmsilo-balloond.minPollInterval string "250ms" Min poll interval under memory pressure
vmsilo-balloond.psiCeiling int 25 PSI avg10 % that maps to minimum poll interval
vmsilo-balloond.criticalHostPercent int 5 Host critical threshold as percentage of total RAM
vmsilo-balloond.criticalGuestAvailable string "256m" Guest critical threshold — hard floor for guest free memory
vmsilo-balloond.guestAvailableBias string "400m" Guest bias term — soft cushion above floor that scales with balloon fullness
vmsilo-balloond.extraArgs list of strings [] Extra command line arguments for vmsilo-balloond daemon
vmsilo-tray.logLevel string "info" Log level for tray proxy host and guest daemons (error, warn, info, debug, trace)
isolatedPciDevices list of strings [] PCI devices to isolate with vfio-pci
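
A sketch combining a few of the global options above (values illustrative, not recommendations):

```nix
programs.vmsilo = {
  enable = true;
  user = "david";
  schedulerIsolation = "vm";            # vCPUs may share a core within one VM
  crosvm.logLevel = "warn";
  vmsilo-balloond.pollInterval = "5s";  # relax polling on a quiet host
};
```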

VM Configuration (nixosVms.<name>)

Option Type Default Description
memory int 1024 Memory allocation in MB
cpus int 2 Number of virtual CPUs
color string "darkred" Window decoration color (named color or hex, e.g., "#2ecc71")
network.nameservers list of strings [] DNS nameservers for this VM
network.interfaces attrset of interface configs {} Network interfaces (keys are guest-visible names)
autoShutdown.enable bool false Auto-shutdown when idle (after autoShutdown.after seconds)
autoShutdown.after int 60 Seconds to wait before shutdown
autoStart bool false Start VM automatically (GPU VMs: on session start; non-GPU VMs: at boot)
dependsOn list of strings [] VM names to also start when this VM starts
additionalDisks list of disk configs [] Additional disks to attach (see Disk Configuration)
rootDisk disk config or null null Custom root disk (defaults to built rootfs)
kernel path or null null Custom kernel image
initramfs path or null null Custom initramfs
rootDiskReadonly bool true Whether root disk is read-only
sharedHome bool or string true Share host dir as /home/user via virtiofs (true=/shared/<vmname>, string=custom path, false=disabled)
copyChannel bool false Include NixOS channel in rootfs (same nixpkgs rev used to build the VM)
kernelParams list of strings [] Extra kernel command line parameters
gpu bool or attrset true GPU config (false=disabled, true=default wayland+opengl, attrset=feature selection)
sound.playback bool true Enable sound playback
sound.capture bool false Enable sound capture (implies playback)
sharedDirectories attrsOf submodule {} Shared directories via virtiofsd (keys are fs tags, see below)
pciDevices list of attrsets [] PCI devices to passthrough (path + optional kv pairs)
usbDevices list of attrsets [] USB devices to passthrough (vendorId, productId, optional serial)
guestPrograms list of packages [] VM-specific packages
guestConfig NixOS module(s) [] VM-specific NixOS configuration (module, list of modules, or path)
vhostUser list of attrsets [] Manual vhost-user devices
crosvm.logLevel string or null null Per-VM log level override (uses global if null)
crosvm.extraArgs list of strings [] Per-VM extra args (appended to global crosvm.extraArgs)
crosvm.extraRunArgs list of strings [] Per-VM extra run args (appended to global crosvm.extraRunArgs)
rootOverlay.type "raw" or "tmpfs" "raw" Overlay upper layer: disk-backed (raw) or RAM-backed (tmpfs)
rootOverlay.size string "10G" Max ephemeral disk size (raw only). Parsed by truncate
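
As a sketch, the options above combine into a fully disposable VM (name and values illustrative):

```nix
nixosVms.disposable = {
  color = "orange";
  memory = 2048;
  rootOverlay.type = "tmpfs";                     # RAM-backed upper layer: nothing persists
  sharedHome = false;                             # no persistent home either
  autoShutdown = { enable = true; after = 60; };  # shut down once idle
  guestPrograms = with pkgs; [ firefox ];
};
```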

For simple VM routing topologies, use network.netvm / network.isNetvm instead of manually configuring interfaces, bridges, and IPs.

programs.vmsilo = {
  netvmRange = "10.200.0.0/16";  # default; pool for auto /31 allocation

  nixosVms = {
    router = {
      network.isNetvm = true;
    };

    client1.network.netvm = "router";
    client2.network.netvm = "router";
  };
};

You can also set both network.isNetvm=true and network.netvm for the same VM. This is handy for creating VPN VMs.

This automatically creates:

  • A host bridge and TAP interfaces connecting each client to the router VM
  • Interface upstream on each client VM with a /31 IP and default route via the router
  • Interface client1 / client2 on the router VM
  • Masquerade NAT on the router: client traffic going out any non-loopback interface
  • Forward firewall on the router: clients cannot reach each other, only external interfaces

Host as netvm

Set network.netvm = "host" to route a VM's traffic through the host machine directly:

programs.vmsilo.nixosVms = {
  browsing = {
    network = {
      netvm = "host";
      nameservers = [ "9.9.9.9" ];
    };
    guestPrograms = with pkgs; [ firefox ];
  };
};

This creates a direct TAP interface between the VM and host (no bridge), assigns /31 IPs, and configures nftables masquerade NAT and forward firewall rules on the host — the same rules that a netvm guest would get.

A common pattern is a netvm VM that itself routes through the host:

netvm = {
  network = {
    isNetvm = true;
    netvm = "host";
  };
};
client1.network.netvm = "netvm";

IP allocation

IPs are allocated deterministically from netvmRange by hashing the (netvmName, clientName) pair. To pin a specific address or resolve a collision, set:

client1.network.netvmSubnet = "10.200.5.2/31";  # client gets .2, router gets .3

Constraints:

  • A client VM with network.netvm set cannot also define network.interfaces.upstream
  • network.isNetvm and network.netvm are independent; a VM can be both (e.g., VPN tunnel netvm)
  • The named netvm VM must have network.isNetvm = true (does not apply to "host")
  • The VM name "host" is reserved and cannot be used

Network Interface Configuration (network.interfaces.<name>)

For advanced or non-standard network configuration, you can manually configure interfaces. The network.interfaces option is an attrset where keys become guest-visible interface names (e.g., wan, internal).

Option Type Default Description
type "tap" "tap" Interface type
macAddress string or null null MAC address (auto-generated from vmName-ifName hash if null)
tap.name string or null null TAP interface name on host (default: <vmname>-<ifIndex>)
tap.hostAddress string or null null Host-side IP with prefix (e.g., "10.0.0.254/24"). Mutually exclusive with tap.bridge.
tap.bridge string or null null Bridge name to add TAP to (via networking.bridges). Mutually exclusive with tap.hostAddress.
dhcp bool false Enable DHCP for this interface
addresses list of strings [] Static IPv4 addresses with prefix
routes attrs {} IPv4 routes (destination -> { via = gateway; })
v6Addresses list of strings [] Static IPv6 addresses with prefix
v6Routes attrs {} IPv6 routes
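
A manually configured interface might look like this sketch (addresses illustrative):

```nix
network.interfaces.wan = {
  type = "tap";
  tap.hostAddress = "10.0.0.254/24";                    # host end of the link
  addresses = [ "10.0.0.1/24" ];                        # guest end
  routes = { "0.0.0.0/0" = { via = "10.0.0.254"; }; };  # default route via host
};
```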

Shared directories

Shared directories use virtiofsd (vhost-user virtio-fs daemon). Each shared directory runs a dedicated virtiofsd process that is automatically started before the VM and stopped when the VM stops. The attrset keys are used as virtiofs tags.

sharedDirectories = {
  data = {
    path = "/shared/personal";
    uidMap = ":1000:1000:1:";
    gidMap = ":1000:1000:1:";
  };
};

Shared Directory Options (sharedDirectories.<name>)

Option Type Default Description
path string required Host directory path to share
threadPoolSize int 0 Thread pool size for virtiofsd
xattr bool true Enable extended attributes
posixAcl bool true Enable POSIX ACLs (incompatible with translateUid/translateGid)
readonly bool false Share as read-only
inodeFileHandles "never", "prefer", "mandatory" "prefer" Inode file handles mode
cache "auto", "always", "never", "metadata" "auto" Cache policy
allowMmap bool false Allow memory-mapped I/O
enableReaddirplus bool true Enable readdirplus (false passes --no-readdirplus)
writeback bool false Enable writeback caching
allowDirectIo bool false Allow direct I/O
logLevel string "info" virtiofsd log level (error, warn, info, debug, trace, off)
killprivV2 bool true Enable FUSE_HANDLE_KILLPRIV_V2
uidMap string or null null Map UIDs via user namespace (format: :namespace_uid:host_uid:count:)
gidMap string or null null Map GIDs via user namespace (format: :namespace_gid:host_gid:count:)
translateUid string or null null Translate UIDs internally (format: <type>:<source>:<target>:<count>). Incompatible with posixAcl
translateGid string or null null Translate GIDs internally (format: <type>:<source>:<target>:<count>). Incompatible with posixAcl
preserveNoatime bool false Preserve O_NOATIME flag on files
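
For instance, a read-only share could look like this (path illustrative):

```nix
sharedDirectories.media = {
  path = "/shared/media";  # host directory to share
  readonly = true;         # guest sees the tree read-only
  cache = "always";        # contents change rarely on the host
};
```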

Shared Home

By default, each VM's /home/user is shared from the host via virtiofs (sharedHome = true). The host directory is /shared/<vmname>. On first VM start, if the directory doesn't exist, it is initialized by copying /var/lib/vmsilo/home-template. You can seed that template with dotfiles, configs, etc.

  • sharedHome = true — use default path /shared/<vmname> (default)
  • sharedHome = "/custom/path" — use a custom host path
  • sharedHome = false — disable, guest /home/user lives on the root overlay

Both /shared and /var/lib/vmsilo/home-template are owned by the configured user.

Disk Configuration (additionalDisks items)

Free-form attrsets passed directly to crosvm --block. The path attribute is required and used as a positional argument.

additionalDisks = [{
  path = "/tmp/data.qcow2";  # required, positional
  ro = false;                # read-only
  sparse = true;             # enable discard/trim
  block-size = 4096;         # reported block size
  id = "data";               # device identifier
  direct = false;            # O_DIRECT mode
}];
# Results in: --block /tmp/data.qcow2,ro=false,sparse=true,block-size=4096,id=data,direct=false

Wayland Proxy

waylandProxy = "wayland-proxy-virtwl";  # Default: wayland-proxy-virtwl by Thomas Leonard
waylandProxy = "sommelier";             # ChromeOS sommelier (experimental, does not currently work)

GPU Configuration

gpu = false;  # Disabled
gpu = true;   # Default: wayland + opengl (context-types=cross-domain:virgl2)
gpu = { wayland = true; opengl = true; vulkan = true; };  # Enable Vulkan (Venus) support

Available GPU features:

  • wayland (default: true) — cross-domain context type for Wayland passthrough
  • opengl (default: true) — virgl2 context type for OpenGL acceleration
  • vulkan (default: false) — venus context type for Vulkan acceleration (adds gpu-render-server)

GPU-enabled VMs (the default) are automatically stopped when the desktop session ends (logout), since crosvm's Wayland connection becomes invalid after the compositor restarts.

VMs with gpu = false do not connect to the host Wayland socket and are unaffected by session changes.

Sound Configuration

sound.playback = false;  # Disabled
sound.capture = true;    # Playback + capture

PCI Passthrough Configuration

programs.vmsilo = {
  # Devices to isolate from host (claimed by vfio-pci)
  isolatedPciDevices = [ "01:00.0" "02:00.0" ];

  nixosVms = {
    sys-usb = {
      memory = 1024;
      pciDevices = [{ path = "01:00.0"; }];  # USB controller
    };
    sys-net = {
      memory = 1024;
      pciDevices = [{ path = "02:00.0"; }];  # Network card
    };
  };
};

# Recommended: blacklist native drivers for reliability
boot.blacklistedKernelModules = [ "xhci_hcd" ];  # for USB controllers

USB Passthrough

USB devices can be hot-attached to running VMs individually, without passing through an entire USB controller via PCI passthrough.

Configuration

Use the usbDevices per-VM option to declare persistent device assignments. Devices are matched by vendor/product ID, optionally narrowed by serial number. All matching physical devices are attached when the VM starts and detached when it stops.

banking = {
  usbDevices = [
    { vendorId = "17ef"; productId = "60e0"; }
    { vendorId = "046d"; productId = "c52b"; serial = "A02019100900"; }
  ];
};

Runtime CLI

The vmsilo-usb command manages USB device assignments at runtime:

vmsilo-usb                                  # List all USB devices and which VM they're attached to
vmsilo-usb attach <vm> <vid:pid|devpath>    # Attach a device (detaches from current VM if needed)
vmsilo-usb detach <vm> <vid:pid|devpath>    # Detach a device from a VM

Devices can be identified by vid:pid (e.g., 17ef:60e0) or by sysfs devpath (e.g., 1-2.3).

Persistent devices configured via usbDevices are auto-attached at VM start. All USB state is cleaned up when the VM stops.
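
A typical session, reusing the example device IDs from the configuration above:

```shell
vmsilo-usb                           # locate the device and note its vid:pid
vmsilo-usb attach banking 17ef:60e0  # hot-attach it to the banking VM
vmsilo-usb detach banking 17ef:60e0  # detach it from the VM again
```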

vhost-user Devices

vhostUser = [{
  type = "net";
  socket = "/path/to/socket";
}];
# Results in: --vhost-user type=net,socket=/path/to/socket

Each attrset is formatted as key=value pairs for crosvm --vhost-user.

Window Decoration Colors

Each VM's color option controls its KDE window decoration color, providing a visual indicator of which security domain a window belongs to:

nixosVms = {
  banking   = { color = "#2ecc71"; ... };  # Green
  shopping  = { color = "#3498db"; ... };  # Blue
  untrusted = { color = "red";     ... };  # Red (default)
};

The color is passed to KWin via the wayland security context. A KWin patch (included in the module) reads the color and applies it to the window's title bar and frame. Server-side decorations are forced for VM windows so the color is always visible. Text color is automatically chosen (black or white) based on the background luminance.

Supported formats: named colors ("red", "green"), hex ("#FF0000"), RGB ("rgb(255,0,0)").
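
The black-or-white choice is the usual relative-luminance test; a rough sketch of the idea (illustrative, not the actual patch):

```nix
# Illustrative: Rec. 709 luminance of normalized RGB (0..1), thresholded at 0.5
textColorFor = { r, g, b }:
  if (0.2126 * r + 0.7152 * g + 0.0722 * b) > 0.5
  then "black"   # light background, dark text
  else "white";  # dark background, light text
```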

Commands

After rebuilding NixOS, the following commands are available:

vm-run <name> <command>

Example: vm-run banking firefox

This is the primary way to interact with VMs. The command:

  1. Connects to the VM's socket at /run/vmsilo/<name>-command.socket
  2. Triggers socket activation to start the VM if not running
  3. Sends the command to the guest

Start/Stop VMs

vm-start <name>    # Start VM via systemd (uses polkit, no sudo needed)
vm-stop <name>     # Stop VM via systemd (uses polkit, no sudo needed)

Start VM for debugging

vm-start-debug <name>

Starts crosvm directly in the foreground (requires sudo), bypassing socket activation. Useful for debugging VM boot issues since crosvm output is visible.

Shell access

vm-shell <name>              # Connect to serial console (default)
vm-shell --ssh <name>        # SSH into VM as user
vm-shell --ssh --root <name> # SSH into VM as root

The default serial console mode requires no configuration. Press CTRL+] to escape.

SSH mode requires SSH keys configured in per-VM guestConfig (see Advanced Configuration).
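
A hedged sketch of what such a guestConfig might contain, using standard NixOS options (the key is illustrative; the guest user is user, per the shared-home docs):

```nix
guestConfig = {
  services.openssh.enable = true;
  users.users.user.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAAC3Nza... david@host"
  ];
};
```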

Socket activation

VMs run as system services (for PCI passthrough and sandboxing) and start automatically on first access via systemd socket activation:

# Check socket status
systemctl status vmsilo-banking.socket

# Check VM service status
systemctl status vmsilo-banking-vm.service

Sockets are enabled by default and start on boot.

Network Architecture

Interface Types

TAP interfaces (type = "tap"): For host networking and NAT internet access.

  • Creates a TAP interface on the host with tap.hostAddress or adds it to a bridge with tap.bridge
  • tap.hostAddress and tap.bridge are mutually exclusive
  • Guest uses addresses from addresses option
  • Routes configured via routes option

Interface Naming

Interface names are user-specified via network.interfaces attrset keys. Names are passed to the guest via vmsilo.ifname=<name>,<mac> kernel parameters and applied at early boot via udev rules.
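
Concretely, a VM with interfaces wan and internal would receive parameters of this shape (MACs illustrative):

```
vmsilo.ifname=wan,52:54:00:12:34:56
vmsilo.ifname=internal,52:54:00:ab:cd:ef
```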

How It Works

  1. Early boot: vfio-pci claims isolated devices before other drivers load
  2. Activation: If devices are already bound, they're rebound to vfio-pci
  3. VM start: IOMMU groups are validated, then devices are passed via --vfio

Architecture

Each NixOS VM gets:

  • An erofs rootfs image with packages baked in (compressed, read-only)
  • Overlayfs root (read-only erofs lower + ephemeral raw disk upper by default, tmpfs fallback)
  • Wayland proxy for GPU passthrough (wayland-proxy-virtwl or sommelier)
  • Session setup via vmsilo-session-setup (imports display variables into user manager, starts graphical-session.target)
  • Socket-activated command listener (vsock-cmd.socket + vsock-cmd@.service, user services gated on graphical-session.target)
  • Optional idle watchdog for auto-shutdown VMs (queries user service instances)
  • Systemd-based init

The host provides:

  • Persistent TAP interfaces via NixOS networking
  • NAT for internet access (optional)
  • Socket activation for commands (/run/vmsilo/<name>-command.socket)
  • Console PTY for serial access (/run/vmsilo/<name>-console)
  • VM services run as root for PCI passthrough and sandboxing (crosvm drops privileges)
  • Polkit rules for the configured user to manage VM services without sudo
  • CLI tools: vm-run, vm-start, vm-stop, vm-start-debug, vm-shell, vmsilo-usb
  • Desktop integration with .desktop files for guest applications

Tray Integration

VM system tray applets (StatusNotifierItems like nm-applet, bluetooth, etc.) automatically appear in the host KDE system tray. No configuration needed — tray proxying is always enabled.

  • Tray items are prefixed with the VM name (e.g., [banking] NetworkManager)
  • User interactions (clicks, scrolls, context menus) are forwarded back to the guest app
  • Host-side whitelist sanitization ensures no untrusted data reaches the host D-Bus unchecked (pixmap dimensions, string lengths, menu depth/count limits, markup stripping)
  • One host service per VM (vmsilo-<name>-tray.service), bound to VM lifecycle