- Remove vmsilo-start-* user-facing symlinks from package.nix (the internal
VM launcher scripts are used only by systemd ExecStart, not by users)
- Rename vmsilo-usb to vm-usb to match the vm-* naming convention
- Increase socat -t timeout in vm-run from default 0.5s to 5s to fix
missing output from console commands (cloud-hypervisor proxy startup
latency exceeded the default timeout window)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace crosvm xhci-based USB passthrough with usbip-rs over vsock,
enabling USB passthrough for both crosvm and cloud-hypervisor VMs.
Guest runs a persistent usbip-rs client listener on vsock port 5002.
Host runs one sandboxed usbip-rs host connect process per attached
device as a systemd template service (vmsilo-<vm>-usb@<devpath>).
Eliminates the JSON state file, file locking, and crosvm-specific
shell helper library in favor of systemd as the source of truth.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
screen -dmS forks to background (daemon mode), which should work
without a controlling terminal. Type=forking tells systemd to expect
the fork.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
screen requires a controlling terminal which systemd services don't
provide. tmux works without a terminal via new-session without -d.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Set RUST_BACKTRACE=full on VM, GPU, sound, dbus-proxy, balloond, and
wayland-seccontext services for better crash diagnostics. Add per-VM
sound.logLevel option (default "info") that sets RUST_LOG on the
vhost-device-sound service.
Also document previously undocumented options in README: cloud-hypervisor
hugepages, netvmRange, sound.logLevel, sound.seccompPolicy,
cloudHypervisor.hugepages, cloudHypervisor.seccompPolicy.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Tray proxy was unconditionally enabled for all crosvm VMs. Add
tray.enable so it must be opted into per VM. When disabled, neither
the host-side tray service nor the guest-side tray daemon is created.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove vulkan/venus references (option was removed in cloud-hypervisor
refactor) and document gpu.allowWX, gpu.logLevel, gpu.seccompPolicy
per-VM options that were missing from the options table and GPU section.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds a `mountPath` option to `sharedDirectories.<name>` that, when set,
automatically mounts the virtiofs share at the specified path inside the
guest via systemd.mount-extra kernel parameter. The implicit sharedHome
entry now uses this same code path (mountPath = "/home/user") instead of
a hardcoded special case.
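As a sketch, a share with mountPath set might surface in the guest as a kernel parameter like the following (the tag and path here are illustrative; systemd's fstab generator accepts `systemd.mount-extra=WHAT:WHERE[:FSTYPE[:OPTIONS]]`):

```
systemd.mount-extra=sharedHome:/home/user:virtiofs
```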
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
xdg-desktop-portal 1.20.3's Realtime portal intercepts PipeWire's RT
scheduling requests but fails silently: it calls fstatat(pidfd, "ns/pid")
to check the caller's PID namespace, which returns ENOTDIR because pidfds
don't support being used as directory FDs on current kernels (6.18/6.19).
PipeWire uses fire-and-forget D-Bus and never sees the error.
Fix by granting the @audio group PAM limits (rtprio=95, nice=-19,
memlock=unlimited) so PipeWire's module-rt can call sched_setscheduler
directly, bypassing both the broken portal and rtkit.
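A minimal sketch of the granted limits in pam_limits syntax (the concrete NixOS option used to set them is not shown in this commit):

```
@audio  -  rtprio   95
@audio  -  nice     -19
@audio  -  memlock  unlimited
```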
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Full isolation has too much impact to be a default. Even on an almost
unloaded machine with a couple of VMs running, it results in audio buffer
underruns due to the significant scheduling latency.
This change is fine because with vmsilo, the trust domain is the VM. There
isn't much reason to protect apps from other apps running in the same VM.
Better to run those apps in separate VMs in that case.
qcow2 causes O_DIRECT failures on ext4 due to crosvm doing unaligned
access when parsing the qcow2 header. Since we don't use any qcow2
features (the disk is created fresh and deleted on stop), a raw sparse
file via truncate works just as well and also removes the qemu package
dependency from the VM service.
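The raw sparse file can be illustrated with a short Python sketch (the path is illustrative); like `truncate -s`, `os.truncate` extends the file's logical size without writing any data blocks:

```python
import os

path = "/tmp/demo-disk.raw"       # illustrative path
with open(path, "wb"):
    pass                          # create the empty file
os.truncate(path, 1 << 30)        # 1 GiB logical size, nothing allocated yet
size = os.path.getsize(path)
print(size)                       # 1073741824
# On ext4 the file stays sparse: blocks are only allocated
# as the guest actually writes to the disk.
os.remove(path)
```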
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Route VM traffic through the host directly instead of requiring a
separate netvm VM. Uses the same nftables NAT and forward firewall
rules as VM-based netvms, applied on the host using TAP interface
names. Removes the hostNetworking.nat options in favor of the
unified netvm approach.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds network.netvm / network.isNetvm options that auto-configure
point-to-point VM networking (host bridge, TAP interfaces, guest IPs,
default routes, masquerade NAT, and forward firewall rules) without
manual interface configuration.
New options:
programs.vmsilo.netvmRange — IP pool for /31 auto-allocation (default 10.200.0.0/16)
vm.network.isNetvm — mark VM as a network gateway
vm.network.netvm — route this VM through a named netvm
vm.network.netvmSubnet — override auto-allocated /31 (pin specific address)
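A hypothetical configuration using these options (the VM names and the exact nesting under programs.vmsilo are assumptions, not taken from this commit):

```nix
programs.vmsilo = {
  netvmRange = "10.200.0.0/16";        # IP pool for /31 auto-allocation
  vms.net.network.isNetvm = true;      # "net" acts as the gateway VM
  vms.browser.network.netvm = "net";   # route "browser" through it
  # Optionally pin an address instead of auto-allocating:
  # vms.browser.network.netvmSubnet = "10.200.0.4/31";
};
```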
Architecture:
modules/netvm.nix computes all (netvm, client) pairs and writes to
_internal.netvmInjections to avoid infinite recursion in the module
system. networking.nix, scripts.nix, and services.nix each have a
getEffectiveInterfaces helper that merges user-configured and
injected interfaces transparently.
Guest nftables config (masquerade NAT, forward isolation between
clients, ip_forward sysctl) is injected via _generatedGuestConfig
and merged into the rootfs build in scripts.nix.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
GPU VMs need a Wayland socket, so starting them at multi-user.target
(boot) fails. The session-bind user service now also starts autoStart
GPU VMs when the graphical session begins. Non-GPU VMs still start at
boot.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add libvulkan.so to crosvm LD_LIBRARY_PATH. Along with the earlier fix
commits, this now enables full guest-side vulkan (tested with vkcube
and vulkaninfo - they're using the host GPU). Huzzah!
Was hoping enabling vulkan and not opengl would make mesa fall back to
zink, so we could skip virgl2 and just translate opengl to vulkan. But
it still ends up using llvmpipe. Forcing zink gives errors, getting that
working will be the next goal.
Replace the low-level gpu attrset (mapped directly to --gpu args) with a
submodule of supported features: wayland (cross-domain), opengl (virgl2),
and vulkan (venus). Vulkan automatically adds --gpu-render-server.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Nvidia GPU drivers require RWX memory pages, which crosvm's seccomp
sandbox blocks. This option switches to a crosvm-nvidia build with
a relaxed W+X memory policy, keeping the default secure.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Instead of passing raw crosvm attrsets, sound is now configured
with two booleans: sound.playback (default true) and sound.capture
(default false, implies playback).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Switch shared directories from crosvm's built-in --shared-dir to
external virtiofsd processes connected via --vhost-user. Each shared
directory gets a dedicated virtiofsd systemd service that starts
before the VM and stops when the VM stops.
The sharedDirectories option is now an attrsOf submodule (keyed by
fs tag) with typed options for all virtiofsd flags instead of the
previous freeform list passed directly to crosvm.
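An illustrative shape of the new option (the inner option names below are assumptions for illustration, not taken from this commit):

```nix
# Keyed by fs tag; each entry drives one dedicated virtiofsd service.
sharedDirectories.projects = {
  path = "/home/user/projects";   # host directory (hypothetical option name)
  cache = "auto";                 # maps to a virtiofsd flag (hypothetical)
};
```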
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Also add crosvm options that hint the scheduler to place the VM on the
performance cores, enable vCPU stop on guest suspend, and disable some
legacy peripheral emulation.
Instead of using the host buffer, use a global vmsilo buffer. The host
buffer is treated just like VM buffers: copying to/from it requires explicit
action. This matches the Qubes behavior; the only difference is that
copy/paste to/from the host clipboard is allowed.
Now that critical_guest_available is a hard floor, lower it from 400m
to 256m (guests can safely operate with 256 MiB free). Increase
guest_available_bias from 300m to 400m for stronger graduated
resistance as the balloon fills, keeping the comfortable equilibrium
point around 656 MiB.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The critical_guest_available threshold was not actually a floor —
when both host and guest were below their critical thresholds, the
clamp bounds inverted and were skipped entirely, letting the
equalization formula push guests to dangerously low memory (34 MiB
observed with a 400 MiB "critical" threshold).
Add a hard floor: after computing the balloon delta, clamp positive
deltas so inflation never pushes guest free memory below
critical_guest_available, regardless of host pressure.
Also add --critical-guest-available CLI arg (default 400m) and the
corresponding NixOS option (criticalGuestAvailable) so both knobs
are tunable. To target e.g. 300 MiB as a floor:
--critical-guest-available 250m --guest-available-bias 50m
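The hard floor can be sketched in a few lines (the function and parameter names are mine, not the balloond implementation):

```python
def clamp_inflation(delta_bytes, guest_available, critical_guest_available):
    """Clamp a positive (inflate) balloon delta so inflation never pushes
    guest free memory below the critical floor. Deflation (negative delta)
    is never restricted by the floor."""
    if delta_bytes <= 0:
        return delta_bytes
    headroom = max(guest_available - critical_guest_available, 0)
    return min(delta_bytes, headroom)

MIB = 1024 * 1024
# Host pressure asks for 500 MiB, but the guest has only 600 MiB free
# against a 256 MiB floor: inflation is capped at 344 MiB.
print(clamp_inflation(500 * MIB, 600 * MIB, 256 * MIB) // MIB)  # 344
# Guest already below the floor: no inflation at all.
print(clamp_inflation(500 * MIB, 200 * MIB, 256 * MIB))         # 0
```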
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The rootfs is read-only, used only as an overlayfs lower layer — ext4
with journaling was a poor fit. erofs is purpose-built for this: compressed
(lz4hc), compact metadata, faster random reads.
The new builder (make-erofs-image.nix) runs nixos-install and mkfs.erofs
under a single fakeroot session, eliminating the QEMU VM previously needed
during the build.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
GPU VMs connect to the host Wayland socket, which is destroyed on logout.
Add per-VM systemd user services that bind to graphical-session.target and
stop the VM system service when the session deactivates. Also make
--wayland-security-context conditional on GPU being enabled.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The vm-switch experiment for VM-to-VM networking via vhost-user-net
didn't work out — it performs poorly under load, with busy connections
saturating the buffer and causing high latency for others.
Removes the vm-switch Rust crate, bufferbloat-test suite, all NixOS
module integration (options, services, networking, assertions, scripts),
and documentation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>