vmsilo
A NixOS VM compartmentalization system inspired by Qubes OS. It runs programs in isolated VMs using cloud-hypervisor (default) or crosvm, displaying their windows natively on the host desktop.

Thanks to Thomas Leonard (@talex5), who wrote the wayland proxy and made qubes-lite, which made this project possible. https://gitlab.com/talex5/qubes-lite
Warning: this is a vibecoded prototype made for fun. If you need a serious and secure operating system, use Qubes.
The built VMs are full-fat NixOS systems (a bit over 2GB for a VM with firefox). You can reuse the same image for multiple VMs by giving them the same NixOS config and package set. The per-VM configuration under programs.vmsilo.nixosVms is passed entirely via the kernel command line, so it doesn't affect image reuse.
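For instance, two VMs that differ only in per-VM settings can share a single image. A sketch (the VM names and package set are illustrative, and the `//` merge is just one way to share the guest definition):

```nix
# Hypothetical: "work" and "personal" build one shared rootfs image because
# their guest package set is identical; the per-VM settings below travel
# via the kernel command line and do not create a second image.
programs.vmsilo.nixosVms = let
  browserVm = { guestPrograms = with pkgs; [ firefox konsole ]; };
in {
  work = browserVm // { color = "blue"; memory = 4096; };
  personal = browserVm // { color = "purple"; memory = 2048; };
};
```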
Features
- Qubes-style colored window decorations enforced by patched kwin
- Two-level clipboard like qubes, each VM gets independent clipboard and primary selection buffers
- Fast guest graphics with wayland cross-domain, isolated with per-VM security context
- Supports wayland protocols for things like HDR, fractional scaling and smooth video playback
- Each VM gets a folder in the host menu, automatically populated with its programs
- VMs are launched on demand when apps are started through the menu (uses systemd socket activation)
- Sound playback and capture (capture disabled by default for VMs)
- VMs can be configured fully disposable with no state kept between restarts
- Shared directories over virtiofs for easily sharing files between VMs
- PCI passthrough
- System tray integration (VM tray applets appear in host system tray, with VM color border)
- Desktop notification proxying (VM notifications appear on host, prefixed with [VMName])
- Dynamic memory control through vmsilo-balloond: when host memory is low, it reclaims memory from VMs
- Auto shutdown of idle VMs (optional, can be enabled in VM settings)
Comparison to Qubes
The main benefits compared to Qubes are:
- Fast, modern graphics. Wayland calls are proxied to the host.
- Better power management. Qubes is based on Xen, whose support for modern laptop power management is significantly worse than Linux's.
- NixOS-based declarative VM config.
The cost for that is security. Qubes is laser-focused on security and hard compartmentalisation. This makes it by far the most secure general-purpose operating system there is.
Ways in which we are less secure than Qubes (list is not even remotely exhaustive):
- The host system is not isolated from the network or USB at all by default. The user needs to explicitly configure a netvm/usbvm if desired.
- VM network connections go through host tap interfaces, so the host kernel needs to handle VM packets. If setting up VM networking, use `tap.bridge` to reduce attack surface by limiting host involvement to bridging layer 2 packets.
- Proxying wayland calls means the attack surface from VM to host is far larger than Qubes' raw framebuffer copy approach. We use a whitelist of allowed wayland protocols to mitigate this somewhat.
- Probably a million other things.
If you are trying to defend against a determined, well-resourced attacker targeting you specifically then you should be running Qubes.
Quick Start
Example flake.nix:
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  inputs.vmsilo.url = "git+https://git.dsg.is/dsg/vmsilo.git";

  outputs = { self, nixpkgs, vmsilo, ... }: {
    nixosConfigurations.stofa = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        vmsilo.nixosModules.default
        ./configuration.nix
      ];
    };
  };
}
Configure VMs in your NixOS configuration:
{ config, pkgs, ... }: {
  # User must have explicit UID for vmsilo
  users.users.david.uid = 1000;

  programs.vmsilo = {
    enable = true;
    user = "david";
    nixosVms = {
      banking = {
        color = "darkgreen";
        memory = 4096;
        cpus = 4;
        autoShutdown = { enable = true; after = 120; };
        network = {
          netvm = "netvm";
        };
        guestPrograms = with pkgs; [ firefox konsole ];
      };
      netvm = {
        color = "darkred";
        memory = 1024;
        cpus = 2;
        network = {
          isNetvm = true;
          netvm = "host"; # Autoconfigure NAT through host networking
          nameservers = [ "9.9.9.9" ];
        };
        guestPrograms = with pkgs; [ konsole ];
      };
      vault = {
        # Offline VM with no networking
        color = "green";
        memory = 4096;
        cpus = 4;
        guestPrograms = with pkgs; [ libreoffice ];
      };
    };
  };
}
Usage
Copy/paste
Like Qubes, each VM has its own independent clipboard (and primary selection) buffer. Copying in a VM window only copies to the VM buffer. To copy between VMs, two new shortcuts are defined:
- Copy to Global Clipboard (default ctrl+shift+c): Copies the VM or host clipboard buffer (depending on selected window) to the global clipboard buffer
- Paste from Global Clipboard (default ctrl+shift+v): Copies the global clipboard buffer to the clipboard buffer for the current window (VM or host)
These can be reassigned under Settings -> Shortcuts -> Window Management.
Video playback
Video playback in VMs works very well. We support the fifo-v1 and presentation-time wayland protocols, so frame timing should be accurate.
For mpv, make sure you use --vo=wlshm. Other backends probably won't work.
Configuration Options
There are a lot of configuration options but you don't really need to touch most of them. Check the examples for what a basic configuration looks like.
programs.vmsilo
| Option | Type | Default | Description |
|---|---|---|---|
| `enable` | bool | `false` | Enable vmsilo VM management |
| `user` | string | required | User who owns TAP interfaces and runs VMs (must have explicit UID) |
| `nixosVms` | attrsOf VM config | `{}` | NixOS-based VMs to create (keys are VM names) |
| `enableBashIntegration` | bool | `true` | Enable bash completion for vm-* commands |
| `gpu.allowWX` | bool | `false` | Allow W+X memory in the GPU device backend. Set to true for NVIDIA drivers that require it. Replaces the old nvidiaWeakenSandbox option. |
| `schedulerIsolation` | `"full"`, `"vm"`, or `"off"` | `"vm"` | Mitigate hyperthreading attacks using scheduler thread isolation. `"full"`: vCPU threads may not share a core with any other thread. `"vm"`: vCPU threads may share a core with other vCPUs from the same VM only. `"off"`: no mitigations. |
| `netvmRange` | string | `"10.200.0.0/16"` | IP range for auto-allocating /31 subnets for netvm links |
| `vmsilo-balloond.logLevel` | string | `"info"` | Log level for the vmsilo-balloond daemon (error, warn, info, debug, trace) |
| `vmsilo-balloond.pollInterval` | string | `"2s"` | Max policy evaluation interval (at 0% PSI pressure) |
| `vmsilo-balloond.minPollInterval` | string | `"250ms"` | Min poll interval under memory pressure |
| `vmsilo-balloond.psiCeiling` | int | `25` | PSI avg10 % that maps to the minimum poll interval |
| `vmsilo-balloond.criticalHostPercent` | int | `5` | Host critical threshold as a percentage of total RAM |
| `vmsilo-balloond.criticalGuestAvailable` | string | `"256m"` | Guest critical threshold: hard floor for guest free memory |
| `vmsilo-balloond.guestAvailableBias` | string | `"400m"` | Guest bias term: soft cushion above the floor that scales with balloon fullness |
| `vmsilo-balloond.extraArgs` | list of strings | `[]` | Extra command line arguments for the vmsilo-balloond daemon |
| `isolatedPciDevices` | list of strings | `[]` | PCI devices to isolate with vfio-pci |
VM Configuration (nixosVms.<name>)
| Option | Type | Default | Description |
|---|---|---|---|
| `memory` | int | `1024` | Memory allocation in MB |
| `cpus` | int | `2` | Number of virtual CPUs |
| `color` | string | `"darkred"` | Window decoration color (named color or hex, e.g., `"#2ecc71"`) |
| `network.nameservers` | list of strings | `[]` | DNS nameservers for this VM |
| `network.interfaces` | attrset of interface configs | `{}` | Network interfaces (keys are guest-visible names) |
| `tray.enable` | bool | `false` | Enable tray proxy for this VM (proxies guest SNI tray items to host system tray) |
| `autoShutdown.enable` | bool | `false` | Auto-shutdown when idle (after autoShutdown.after seconds) |
| `autoShutdown.after` | int | `60` | Seconds to wait before shutdown |
| `dbus.notifications` | bool | `true` | Proxy desktop notifications from this VM to the host |
| `dbus.tray` | bool | `false` | Proxy system tray items from this VM to the host |
| `dbus.logLevel` | string | `"info"` | Log level for vmsilo-dbus-proxy host and guest daemons (error, warn, info, debug, trace) |
| `autoStart` | bool | `false` | Start VM automatically (GPU VMs: on session start; non-GPU VMs: at boot) |
| `dependsOn` | list of strings | `[]` | VM names to also start when this VM starts |
| `additionalDisks` | list of disk configs | `[]` | Additional disks to attach (see Disk Configuration) |
| `rootDisk` | disk config or null | `null` | Custom root disk (defaults to built rootfs) |
| `kernel` | path or null | `null` | Custom kernel image |
| `initramfs` | path or null | `null` | Custom initramfs |
| `rootDiskReadonly` | bool | `true` | Whether root disk is read-only |
| `sharedHome` | bool or string | `true` | Share host dir as /home/user via virtiofs (true = /shared/&lt;vmname&gt;/home, string = custom path, false = disabled) |
| `copyChannel` | bool | `false` | Include NixOS channel in rootfs (same nixpkgs rev used to build the VM) |
| `kernelParams` | list of strings | `[]` | Extra kernel command line parameters |
| `gpu` | submodule | `{}` | GPU config. GPU is enabled when any capability (wayland, opengl, vulkan) is true. wayland defaults true, so GPU is on by default. Set gpu.wayland = false to disable. See GPU Configuration below. |
| `gpu.backend` | `"crosvm"` or `"vhost-device-gpu"` | `"vhost-device-gpu"` | GPU device backend. `"crosvm"` uses crosvm's built-in GPU device; `"vhost-device-gpu"` uses vhost-device-gpu. |
| `gpu.vulkan` | bool | `false` | Enable venus capset for Vulkan passthrough |
| `gpu.allowWX` | null or bool | `null` | Override global gpu.allowWX for this VM (null = inherit global) |
| `gpu.logLevel` | string | `"info"` | Log level for this VM's GPU device service |
| `gpu.seccompPolicy` | `"enforcing"` or `"log"` | `"enforcing"` | Seccomp policy for the GPU device service. `"enforcing"` blocks unlisted syscalls; `"log"` only logs them. |
| `gpu.disableSandbox` | bool | `false` | Disable non-seccomp sandboxing for the GPU device service. Useful for debugging. |
| `sound.playback` | bool | `true` | Enable sound playback |
| `sound.capture` | bool | `false` | Enable sound capture |
| `sound.logLevel` | string | `"info"` | RUST_LOG level for the sound device service |
| `sound.seccompPolicy` | `"enforcing"` or `"log"` | `"enforcing"` | Seccomp policy for the sound device service. `"enforcing"` blocks unlisted syscalls; `"log"` only logs them. |
| `sharedDirectories` | attrsOf submodule | `{}` | Shared directories via virtiofsd (keys are fs tags, see below) |
| `virtiofs.seccompPolicy` | `"enforcing"` or `"log"` | `"enforcing"` | Seccomp policy for virtiofsd instances. `"enforcing"` blocks unlisted syscalls; `"log"` only logs them. |
| `virtiofs.disableSandbox` | bool | `false` | Disable non-seccomp sandboxing for virtiofsd instances. Useful for debugging. |
| `pciDevices` | list of attrsets | `[]` | PCI devices to pass through (path + optional kv pairs) |
| `usb.logLevel` | string | `"info"` | RUST_LOG level for the USB passthrough service |
| `usbDevices` | list of attrsets | `[]` | USB devices to pass through (vendorId, productId, optional serial) |
| `guestPrograms` | list of packages | `[]` | VM-specific packages |
| `guestConfig` | NixOS module(s) | `[]` | VM-specific NixOS configuration (module, list of modules, or path) |
| `vhostUser` | list of attrsets | `[]` | Manual vhost-user devices |
| `hypervisor` | string | `"cloud-hypervisor"` | Select VMM: `"cloud-hypervisor"` or `"crosvm"` |
| `crosvm.logLevel` | string | `"info"` | Log level for crosvm (error, warn, info, debug, trace) |
| `crosvm.extraArgs` | list of strings | `[]` | Extra args passed to crosvm before the "run" subcommand |
| `crosvm.extraRunArgs` | list of strings | `[]` | Extra args passed to crosvm after the "run" subcommand |
| `cloud-hypervisor.logLevel` | string | `"info"` | Log level for cloud-hypervisor (error, warn, info, debug, trace) |
| `cloud-hypervisor.hugepages` | bool | `false` | Use hugetlbfs-backed memory for this VM. Requires pre-allocated hugepages (vm.nr_hugepages). |
| `cloud-hypervisor.seccompPolicy` | `"enforcing"` or `"log"` | `"enforcing"` | Seccomp policy for this VM's cloud-hypervisor instance |
| `cloud-hypervisor.disableSandbox` | bool | `false` | Disable Landlock and systemd hardening. Seccomp is controlled separately by seccompPolicy. |
| `cloud-hypervisor.extraArgs` | list of strings | `[]` | Extra args passed to cloud-hypervisor |
| `cloud-hypervisor.extraConfig` | attrs | `{}` | Merged into the JSON VM config passed to cloud-hypervisor |
| `rootOverlay.type` | `"raw"` or `"tmpfs"` | `"raw"` | Overlay upper layer: disk-backed (raw) or RAM-backed (tmpfs) |
| `rootOverlay.size` | string | `"10G"` | Max ephemeral disk size (raw only). Parsed by truncate |
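As a worked example of combining the options above, a fully disposable VM pairs a RAM-backed root overlay with a disabled shared home, so nothing persists across restarts. A sketch (the VM name and sizes are illustrative):

```nix
programs.vmsilo.nixosVms.disposable = {
  color = "orange";
  memory = 2048;
  rootOverlay.type = "tmpfs";  # RAM-backed upper layer, discarded on shutdown
  sharedHome = false;          # guest /home/user lives on the overlay, so it is ephemeral too
  autoShutdown = { enable = true; after = 60; };
  guestPrograms = with pkgs; [ firefox ];
};
```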
netvm (auto VM-to-VM links)
For simple VM routing topologies, use network.netvm / network.isNetvm instead of
manually configuring interfaces, bridges, and IPs.
programs.vmsilo = {
  netvmRange = "10.200.0.0/16"; # default; pool for auto /31 allocation
  nixosVms = {
    router = {
      network.isNetvm = true;
    };
    client1.network.netvm = "router";
    client2.network.netvm = "router";
  };
};
You can also set both network.isNetvm=true and network.netvm for the same VM. This is handy for creating VPN VMs.
This automatically creates:
- A host bridge and TAP interfaces connecting each client to the router VM
- Interface `upstream` on each client VM with a /31 IP and default route via the router
- Interface `client1`/`client2` on the router VM
- Masquerade NAT on the router: client traffic going out any non-loopback interface
- Forward firewall on the router: clients cannot reach each other, only external interfaces
Host as netvm
Set network.netvm = "host" to route a VM's traffic through the host machine directly:
programs.vmsilo.nixosVms = {
  browsing = {
    network = {
      netvm = "host";
      nameservers = [ "9.9.9.9" ];
    };
    guestPrograms = with pkgs; [ firefox ];
  };
};
This creates a direct TAP interface between the VM and host (no bridge), assigns /31 IPs, and configures nftables masquerade NAT and forward firewall rules on the host — the same rules that a netvm guest would get.
A common pattern is a netvm VM that itself routes through the host:
netvm = {
  network = {
    isNetvm = true;
    netvm = "host";
  };
};
client1.network.netvm = "netvm";
IP allocation
IPs are allocated deterministically from netvmRange by hashing the (netvmName, clientName) pair. To pin a specific address or resolve a collision, set:
client1.network.netvmSubnet = "10.200.5.2/31"; # client gets .2, router gets .3
Constraints:
- A client VM with `network.netvm` set cannot also define `network.interfaces.upstream`
- `network.isNetvm` and `network.netvm` are independent; a VM can be both (e.g., VPN tunnel netvm)
- The named netvm VM must have `network.isNetvm = true` (does not apply to `"host"`)
- The VM name `"host"` is reserved and cannot be used
DNS
All VMs have systemd-resolved enabled by default.
Netvm VMs (isNetvm = true) automatically run unbound as a full recursive DNS resolver, listening on localhost and all downstream VM interfaces. resolved is configured to use the local unbound instance.
Downstream VMs (with netvm set to another VM) automatically use their netvm's IP as the DNS nameserver. resolved is configured with no fallback DNS to prevent leaking queries to compiled-in defaults.
VMs with netvm = "host" or no netvm get resolved enabled but no automatic DNS configuration — configure nameservers manually via network.nameservers or guestConfig.
All DNS settings use lib.mkDefault and can be overridden in guestConfig.
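Because the defaults use lib.mkDefault, a host-routed VM can get DNS either from network.nameservers or by overriding resolved directly. A sketch, assuming standard NixOS resolved options (the VM name is illustrative):

```nix
browsing = {
  network = {
    netvm = "host";
    nameservers = [ "9.9.9.9" ];  # simplest: plain nameserver list
  };
  # Alternatively, override the mkDefault resolved settings in the guest:
  guestConfig = {
    services.resolved.fallbackDns = [ ];  # no fallback, so queries cannot leak to built-in defaults
  };
};
```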
Network Interface Configuration (network.interfaces.<name>)
For advanced or non-standard network configuration, you can manually configure interfaces. The network.interfaces option is an attrset where keys become guest-visible interface names (e.g., wan, internal).
| Option | Type | Default | Description |
|---|---|---|---|
| `type` | `"tap"` | `"tap"` | Interface type |
| `macAddress` | string or null | `null` | MAC address (auto-generated from vmName-ifName hash if null) |
| `tap.name` | string or null | `null` | TAP interface name on host (default: &lt;vmname&gt;-&lt;ifIndex&gt;) |
| `tap.hostAddress` | string or null | `null` | Host-side IP with prefix (e.g., `"10.0.0.254/24"`). Mutually exclusive with tap.bridge. |
| `tap.bridge` | string or null | `null` | Bridge name to add TAP to (via networking.bridges). Mutually exclusive with tap.hostAddress. |
| `dhcp` | bool | `false` | Enable DHCP for this interface |
| `addresses` | list of strings | `[]` | Static IPv4 addresses with prefix |
| `routes` | attrs | `{}` | IPv4 routes (destination -> { via = gateway; }) |
| `v6Addresses` | list of strings | `[]` | Static IPv6 addresses with prefix |
| `v6Routes` | attrs | `{}` | IPv6 routes |
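The options above combine like this for a manually configured host-side interface. A sketch (the interface name and addresses are illustrative):

```nix
network.interfaces.wan = {
  tap.hostAddress = "10.0.0.254/24";  # host side of the TAP link
  addresses = [ "10.0.0.1/24" ];      # guest-side static address
  routes = {
    "0.0.0.0/0" = { via = "10.0.0.254"; };  # default route via the host
  };
};
```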
Shared directories
Shared directories use virtiofsd (vhost-user virtio-fs daemon). Each shared directory runs a dedicated virtiofsd process that is automatically started before the VM and stopped when the VM stops. The attrset keys are used as virtiofs tags.
sharedDirectories = {
  data = {
    path = "/shared/personal";
    mountPath = "/mnt/personal"; # auto-mount inside guest
    uidMap = ":1000:1000:1:";
    gidMap = ":1000:1000:1:";
  };
};
Shared Directory Options (sharedDirectories.<name>)
| Option | Type | Default | Description |
|---|---|---|---|
| `path` | string | required | Host directory path to share |
| `threadPoolSize` | int | `0` | Thread pool size for virtiofsd |
| `xattr` | bool | `true` | Enable extended attributes |
| `posixAcl` | bool | `true` | Enable POSIX ACLs (incompatible with translateUid/translateGid) |
| `readonly` | bool | `false` | Share as read-only |
| `inodeFileHandles` | `"never"`, `"prefer"`, `"mandatory"` | `"prefer"` | Inode file handles mode |
| `cache` | `"auto"`, `"always"`, `"never"`, `"metadata"` | `"auto"` | Cache policy |
| `allowMmap` | bool | `false` | Allow memory-mapped I/O |
| `enableReaddirplus` | bool | `true` | Enable readdirplus (false passes --no-readdirplus) |
| `writeback` | bool | `false` | Enable writeback caching |
| `allowDirectIo` | bool | `false` | Allow direct I/O |
| `logLevel` | string | `"info"` | virtiofsd log level (error, warn, info, debug, trace, off) |
| `killprivV2` | bool | `true` | Enable FUSE_HANDLE_KILLPRIV_V2 |
| `uidMap` | string or null | `null` | Map UIDs via user namespace (format: `:namespace_uid:host_uid:count:`) |
| `gidMap` | string or null | `null` | Map GIDs via user namespace (format: `:namespace_gid:host_gid:count:`) |
| `translateUid` | string or null | `null` | Translate UIDs internally (format: `<type>:<source>:<target>:<count>`). Incompatible with posixAcl |
| `translateGid` | string or null | `null` | Translate GIDs internally (format: `<type>:<source>:<target>:<count>`). Incompatible with posixAcl |
| `preserveNoatime` | bool | `false` | Preserve O_NOATIME flag on files |
| `mountPath` | string or null | `null` | Guest mount path. When set, auto-mounts at this path inside the guest via the systemd.mount-extra kernel parameter |
Shared Home
By default, each VM's /home/user is shared from the host via virtiofs (sharedHome = true). The host directory is /shared/<vmname>/home. On first VM start, if the directory doesn't exist, it is initialized by copying /var/lib/vmsilo/home-template. You can seed that template with dotfiles, configs, etc.
- `sharedHome = true`: use the default path `/shared/<vmname>/home` (default)
- `sharedHome = "/custom/path"`: use a custom host path
- `sharedHome = false`: disable; guest `/home/user` lives on the root overlay
Both /shared and /var/lib/vmsilo/home-template are owned by the configured user.
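The three forms look like this in a VM config. A sketch (VM names and the custom path are illustrative):

```nix
banking.sharedHome = true;             # default: /shared/banking/home on the host
vault.sharedHome = "/srv/vault-home";  # custom host path
untrusted.sharedHome = false;          # guest /home/user stays on the root overlay
```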
Disk Configuration (additionalDisks items)
Free-form attrsets passed directly to crosvm --block. The path attribute is required and used as a positional argument.
additionalDisks = [{
  path = "/tmp/data.qcow2"; # required, positional
  ro = false;               # read-only
  sparse = true;            # enable discard/trim
  block-size = 4096;        # reported block size
  id = "data";              # device identifier
  direct = false;           # O_DIRECT mode
}];
# Results in: --block /tmp/data.qcow2,ro=false,sparse=true,block-size=4096,id=data,direct=false
Wayland Proxy
waylandProxy.logLevel = "debug"; # Log level for wayland-proxy-virtwl (default: info)
GPU Configuration
GPU is enabled when any capability (wayland, opengl, vulkan) is true. Since wayland defaults to true, GPU is on by default.
gpu.wayland = false; # Disable GPU entirely (no capabilities enabled)
gpu = { wayland = true; opengl = true; vulkan = true; }; # Full feature selection
gpu = { backend = "crosvm"; }; # Use crosvm GPU backend instead of default vhost-device-gpu
gpu = { seccompPolicy = "log"; }; # With seccomp logging instead of enforcing
gpu = { seccompPolicy = "log"; disableSandbox = true; }; # Full debug: no sandbox
Available GPU features:
- `backend` (default: `"vhost-device-gpu"`): GPU device backend, `"vhost-device-gpu"` (vhost-device-gpu in rutabaga mode) or `"crosvm"` (crosvm's built-in GPU device)
- `wayland` (default: `true`): cross-domain capset for Wayland passthrough
- `opengl` (default: `false`): virgl2 capset for OpenGL acceleration
- `vulkan` (default: `false`): venus capset for Vulkan passthrough
- `allowWX` (default: `null`): override global `gpu.allowWX` for this VM (null = inherit global)
- `logLevel` (default: `"info"`): log level for this VM's GPU device service
- `seccompPolicy` (default: `"enforcing"`): seccomp policy for the GPU device service; `"enforcing"` blocks unlisted syscalls, `"log"` only logs them
- `disableSandbox` (default: `false`): disable non-seccomp sandboxing (filesystem isolation, capabilities, private network, etc.) for debugging. Use with `seccompPolicy = "log"` to fully unsandbox.
The GPU device runs as a separate sandboxed service (vmsilo-<name>-gpu). With backend = "vhost-device-gpu" (default), it runs vhost-device-gpu in rutabaga mode; with backend = "crosvm", it runs crosvm device gpu. Both use the same vhost-user socket, sandboxing, and wayland-seccontext service. gpu.allowWX controls W+X memory permission.
GPU-enabled VMs (the default) are automatically stopped when the desktop session ends (logout), since the Wayland connection becomes invalid after the compositor restarts.
OpenGL and Vulkan don't work properly in my tests. I think this is because of missing dmabuf support.
VMs with GPU disabled (gpu.wayland = false and no other GPU capabilities enabled) do not connect to the host Wayland socket and are unaffected by session changes.
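A headless VM that should survive logout therefore disables all GPU capabilities; combined with autoStart it comes up at boot rather than at session start. A sketch (the VM name is illustrative):

```nix
programs.vmsilo.nixosVms.builder = {
  gpu.wayland = false;  # no capability enabled => no GPU, no host Wayland socket
  autoStart = true;     # non-GPU VMs autostart at boot, not session start
  memory = 8192;
};
```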
Sound Configuration
Sound uses vhost-device-sound with the PipeWire backend. Each VM with sound enabled runs a dedicated vhost-device-sound process. The sound socket is located at /run/vmsilo/<name>/sound/sound.socket. Guest PipeWire gets realtime scheduling via PAM limits (the @audio group gets rtprio=95, nice=-19, memlock=unlimited).
sound.playback = true; # Playback only (default)
sound.capture = true; # Enable capture
sound.playback = false; # Disable all sound
sound.logLevel = "debug"; # Set RUST_LOG for the sound service
sound.seccompPolicy = "log"; # Log blocked syscalls instead of enforcing
Runtime Sound Control
The device tray shows Sound Output and Microphone Input top-level menu items for running VMs that have sound enabled. Each submenu lists running VMs; a checkmark indicates that direction is currently enabled. Clicking a VM name toggles it on or off.
The sound.playback and sound.capture options set the initial enabled state when the VM starts (playback defaults to true, capture to false). The tray communicates with vhost-device-sound via a control socket at /run/vmsilo/<name>/sound/control.socket. VMs without an active control socket (e.g., sound not enabled, or service not yet started) are omitted from the sound menus.
PCI Passthrough Configuration
programs.vmsilo = {
  # Devices to isolate from host (claimed by vfio-pci)
  isolatedPciDevices = [ "01:00.0" "02:00.0" ];

  nixosVms = {
    sys-usb = {
      memory = 1024;
      pciDevices = [{ path = "01:00.0"; }]; # USB controller
    };
    sys-net = {
      memory = 1024;
      pciDevices = [{ path = "02:00.0"; }]; # Network card
    };
  };
};
# Recommended: blacklist native drivers for reliability
boot.blacklistedKernelModules = [ "xhci_hcd" ]; # for USB controllers
USB Passthrough
USB devices can be hot-attached to running VMs individually, without passing through an entire USB controller via PCI passthrough.
Configuration
Use the usbDevices per-VM option to declare persistent device assignments. Devices are matched by vendor/product ID, optionally narrowed by serial number. All matching physical devices are attached when the VM starts and detached when it stops.
banking = {
  usbDevices = [
    { vendorId = "17ef"; productId = "60e0"; }
    { vendorId = "046d"; productId = "c52b"; serial = "A02019100900"; }
  ];
};
Runtime CLI
The vm-usb command manages USB device assignments at runtime:
vm-usb # List all USB devices and which VM they're attached to
vm-usb attach <vm> <vid:pid|devpath> # Attach a device (detaches from current VM if needed)
vm-usb detach <vm> <devpath> # Detach a device from a VM (devpath only, not VID:PID)
Devices can be identified by vid:pid (e.g., 17ef:60e0) or by sysfs devpath (e.g., 1-2.3) for attach. Detach requires the devpath.
USB passthrough works with both crosvm and cloud-hypervisor VMs via usbip-over-vsock. Persistent devices configured via usbDevices are auto-attached at VM start. All USB state is cleaned up when the VM stops.
vhost-user Devices
vhostUser = [{
  type = "net";
  socket = "/path/to/socket";
}];
# Results in: --vhost-user type=net,socket=/path/to/socket
Each attrset is formatted as key=value pairs for crosvm --vhost-user.
Window Decoration Colors
Each VM's color option controls its KDE window decoration color, providing a visual indicator of which security domain a window belongs to:
nixosVms = {
  banking = { color = "#2ecc71"; ... };  # Green
  shopping = { color = "#3498db"; ... }; # Blue
  untrusted = { color = "red"; ... };    # Red (default)
};
The color is passed to KWin via the wayland security context. A KWin patch (included in the module) reads the color and applies it to the window's title bar and frame. Serverside decorations are forced for VM windows so the color is always visible. Text color is automatically chosen (black or white) based on the background luminance.
Supported formats: named colors ("red", "green"), hex ("#FF0000"), RGB ("rgb(255,0,0)").
Commands
After rebuilding NixOS, the following commands are available:
Run command in VM (recommended)
vm-run <name> <command>
Example: vm-run banking firefox
This is the primary way to interact with VMs. The command:
- Connects to the VM's socket at `/run/vmsilo/<name>/command.socket`
- Triggers socket activation to start the VM if not running
- Sends the command to the guest
Start/Stop VMs
vm-start <name> # Start VM via systemd (uses polkit, no sudo needed)
vm-stop <name> # Stop VM via systemd (uses polkit, no sudo needed)
vm-stop --all # Stop all running VMs
Shell access
vm-shell <name> # Connect to serial console (default)
vm-shell --ssh <name> # SSH into VM as user
vm-shell --ssh --root <name> # SSH into VM as root
The default serial console mode connects via a screen session. Press Ctrl+A, D to detach. No configuration required.
SSH mode requires SSH keys configured in per-VM guestConfig (see Advanced Configuration). For cloud-hypervisor VMs, --ssh automatically connects via vsock-connect — no IP routing required.
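Wiring up keys for vm-shell --ssh might look like the sketch below, assuming standard NixOS openssh options (the doc states the guest login user is named user; the key string is a placeholder):

```nix
guestConfig = {
  services.openssh.enable = true;
  users.users.user.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAA... you@host"  # placeholder: substitute your own public key
  ];
};
```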
Socket activation
VMs run as system services (for PCI passthrough and sandboxing) and start automatically on first access via systemd socket activation:
# Check socket status
systemctl status vmsilo-banking.socket
# Check VM service status
systemctl status vmsilo-banking-vm.service
Sockets are enabled by default and start on boot.
Network Architecture
Interface Types
TAP interfaces (type = "tap"): For host networking and NAT internet access.
- Creates a TAP interface on the host with `tap.hostAddress`, or adds it to a bridge with `tap.bridge`
- `tap.hostAddress` and `tap.bridge` are mutually exclusive
- Guest uses addresses from the `addresses` option
- Routes are configured via the `routes` option
Interface Naming
Interface names are user-specified via network.interfaces attrset keys. Names are passed to the guest via vmsilo.ifname=<name>,<mac> kernel parameters and applied at early boot via udev rules.
How It Works
- Early boot: vfio-pci claims isolated devices before other drivers load
- Activation: If devices are already bound, they're rebound to vfio-pci
- VM start: IOMMU groups are validated, then devices are passed via `--vfio`
Architecture
Each NixOS VM gets:
- An erofs rootfs image with packages baked in (compressed, read-only)
- Overlayfs root (read-only erofs lower + ephemeral raw disk upper by default, tmpfs fallback)
- Wayland proxy for GPU passthrough (wayland-proxy-virtwl)
- Session setup via `vmsilo-session-setup` (imports display variables into the user manager, starts `graphical-session.target`)
- Socket-activated command listener (`vsock-cmd.socket` + `vsock-cmd@.service`, user services gated on `graphical-session.target`)
- Optional idle watchdog for auto-shutdown VMs (queries user service instances)
- Systemd-based init
The host provides:
- Persistent TAP interfaces via NixOS networking
- NAT for internet access (optional)
- Socket activation for commands (`/run/vmsilo/<name>/command.socket`)
- Console PTY for serial access (`/run/vmsilo/<name>/console`)
- VM services run as root for PCI passthrough and sandboxing (crosvm drops privileges)
- Polkit rules so the configured user can manage VM services without sudo
- CLI tools: `vm-run`, `vm-start`, `vm-stop`, `vm-shell`, `vm-usb`, `vsock-connect`
- Desktop integration with .desktop files for guest applications
Note: Runtime sockets use per-VM subdirectories (/run/vmsilo/<name>/*.socket) rather than the older flat layout (/run/vmsilo/<name>-*.socket).
D-Bus Proxy (Tray and Notifications)
The vmsilo-dbus-proxy handles forwarding D-Bus services (system tray and desktop notifications) from guest VMs to the host over vsock:5001.
Notifications (dbus.notifications = true, default): VM notifications appear on the host desktop, prefixed with [VMName]. Notification icons get a colored border matching the VM's window decoration color. Hints x-vmsilo-unit and x-vmsilo-color are set for KDE integration (e.g., colored close button in notification popup).
System tray (dbus.tray = false, default off): VM StatusNotifierItems (nm-applet, bluetooth, etc.) appear in the host KDE system tray, prefixed with the VM name.
- Host-side whitelist sanitization ensures no untrusted data reaches the host D-Bus unchecked (pixmap dimensions, string lengths, menu depth/count limits, markup stripping)
- One host service per VM (`vmsilo-<name>-dbus-proxy.service`), bound to the VM lifecycle
Fuzzing
Coverage-guided fuzzing for vmsilo-dbus-proxy using two engines: cargo-fuzz (libFuzzer) and AFL++ with SymCC concolic execution.
Quick start
# List available fuzz targets
nix run .#fuzz-cargo-dbus-proxy
nix run .#fuzz-afl-dbus-proxy
# Run a specific target (cargo-fuzz / libFuzzer)
nix run .#fuzz-cargo-dbus-proxy -- fuzz_deserialize
# Run a specific target (AFL++ with SymCC)
nix run .#fuzz-afl-dbus-proxy -- fuzz_deserialize
# Interactive cargo-fuzz (enter devShell first)
nix develop .#fuzz
cd vmsilo-dbus-proxy
cargo fuzz run fuzz_sanitize_snapshot
Parallel fuzzing
cargo-fuzz: Use --fork=N to run N parallel workers. The wrapper automatically restarts the fuzzer when it exits (e.g., after finding a crash), so artifacts accumulate:
nix run .#fuzz-cargo-dbus-proxy -- fuzz_sanitize_snapshot --fork=4
AFL++: Use --jobs=N for 1 main + (N-1) secondary instances plus a SymCC companion. All background processes run as systemd transient units and are cleaned up on exit:
nix run .#fuzz-afl-dbus-proxy -- fuzz_sanitize_snapshot --jobs=4
To manually stop all AFL++ processes for a target:
systemctl --user stop fuzz_afl_dbus_proxy_fuzz_sanitize_snapshot.slice
Cleaning fixed artifacts
After fixing a bug, re-test saved crash artifacts and delete those that no longer reproduce:
# cargo-fuzz artifacts
nix run .#fuzz-clean-cargo-dbus-proxy -- fuzz_sanitize_snapshot
# AFL++ crash files
nix run .#fuzz-clean-afl-dbus-proxy -- fuzz_sanitize_snapshot
Seed corpus
The fuzz_deserialize and fuzz_read_message targets include hand-crafted seed messages (valid GuestToHost variants). To regenerate seeds for both engines after protocol changes:
nix run .#fuzz-gen-corpus
Targets
| Target | Input | What it tests |
|---|---|---|
| `fuzz_deserialize` | Raw bytes | postcard deserialization of GuestToHost messages |
| `fuzz_read_message` | Raw bytes | Length-prefixed framing + deserialization |
| `fuzz_sanitize_snapshot` | Structured Snapshot | Sanitization invariants (string lengths, pixmap bounds, menu depth/count, property whitelist) |
| `fuzz_sanitize_notification` | Structured notification | Sanitization invariants (truncation, markup stripping, action limits) |
| `fuzz_tint_pixmap` | Structured pixmap + color | Pixel manipulation with mismatched dimensions |