docs: fix syntax issues and struct description in devices

Some descriptions in the device documents were inconsistent with the
source code. Also fix some syntax issues to make the sentences more
fluent.

Signed-off-by: Yi Wang <foxywang@tencent.com>
Commit 7b92a36c4c (parent 34ee973ee0), authored by Yi Wang on 2026-01-06 19:46:25 +08:00 and committed by Rob Bradford.
8 changed files with 59 additions and 59 deletions


@@ -26,8 +26,8 @@ struct BalloonConfig {
 Size of the balloon device. It is subtracted from the VM's total size. For
 instance, if creating a VM with 4GiB of RAM, along with a balloon of 1GiB, the
-guest will be able to use 3GiB of accessible memory. The guest sees all the RAM
-and unless it is balloon enlightened is entitled to all of it.
+guest will be able to use 3GiB of accessible memory. The guest sees all the RAM,
+and unless it is balloon enlightened, it is entitled to all of it.
 This parameter is mandatory.
@@ -42,7 +42,7 @@ _Example_
 ### `deflate_on_oom`
-Allow the guest to deflate the balloon if running Out Of Memory (OOM). Assuming
+Allow the guest to deflate the balloon when running Out Of Memory (OOM). Assuming
 the balloon size is greater than 0, this means the guest is allowed to reduce
 the balloon size all the way down to 0 if this can help recover from the OOM
 event.


@@ -11,8 +11,8 @@ to set vCPUs options for Cloud Hypervisor.
 ```rust
 struct CpusConfig {
-    boot_vcpus: u8,
-    max_vcpus: u8,
+    boot_vcpus: u32,
+    max_vcpus: u32,
     topology: Option<CpuTopology>,
     kvm_hyperv: bool,
     max_phys_bits: u8,
@@ -30,12 +30,12 @@ struct CpusConfig {
 Number of vCPUs present at boot time.
-This option allows to define a specific number of vCPUs to be present at the
+This option allows defining a specific number of vCPUs to be present at the
 time the VM is started. This option is mandatory when using the `--cpus`
 parameter. If `--cpus` is not specified, this option takes the default value
 of `1`, starting the VM with a single vCPU.
-Value is an unsigned integer of 8 bits.
+Value is an unsigned integer of 32 bits.
 _Example_
@@ -48,14 +48,14 @@ _Example_
 Maximum number of vCPUs.
 This option defines the maximum number of vCPUs that can be assigned to the VM.
-In particular, this option is used when looking for CPU hotplug as it lets the
-provide an indication about how many vCPUs might be needed later during the
-runtime of the VM.
+In particular, this option is used when looking for CPU hotplug as it provides
+an indication about how many vCPUs might be needed later during the runtime of
+the VM.
 For instance, if booting the VM with 2 vCPUs and a maximum of 6 vCPUs, it means
 up to 4 vCPUs can be added later at runtime by resizing the VM.
 The value must be greater than or equal to the number of boot vCPUs.
-The value is an unsigned integer of 8 bits.
+The value is an unsigned integer of 32 bits.
 By default this option takes the value of `boot`, meaning vCPU hotplug is not
 expected and can't be performed.
@@ -73,16 +73,16 @@ Topology of the guest platform.
 This option gives the user a way to describe the exact topology that should be
 exposed to the guest. It can be useful to describe to the guest the same
 topology found on the host as it allows for proper usage of the resources and
-is a way to achieve better performances.
+is a way to achieve better performance.
 The topology is described through the following structure:
 ```rust
 struct CpuTopology {
-    threads_per_core: u8,
-    cores_per_die: u8,
-    dies_per_package: u8,
-    packages: u8,
+    threads_per_core: u16,
+    cores_per_die: u16,
+    dies_per_package: u16,
+    packages: u16,
 }
 ```
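As a quick sanity check on the `CpuTopology` hunk above: the product of the four fields is the total number of vCPUs the guest sees, so it should match the configured vCPU count. A minimal Rust sketch (`total_vcpus` is an illustrative helper, not part of Cloud Hypervisor):

```rust
// Mirror of the documented CpuTopology struct (u16 fields, per the diff).
struct CpuTopology {
    threads_per_core: u16,
    cores_per_die: u16,
    dies_per_package: u16,
    packages: u16,
}

// Illustrative helper: total vCPUs implied by a topology.
fn total_vcpus(t: &CpuTopology) -> u32 {
    u32::from(t.threads_per_core)
        * u32::from(t.cores_per_die)
        * u32::from(t.dies_per_package)
        * u32::from(t.packages)
}

fn main() {
    // 2 threads/core, 2 cores/die, 1 die/package, 2 packages -> 8 vCPUs
    let t = CpuTopology {
        threads_per_core: 2,
        cores_per_die: 2,
        dies_per_package: 1,
        packages: 2,
    };
    assert_eq!(total_vcpus(&t), 8);
}
```

With `u16` fields, each dimension can now exceed the old `u8` limit of 255, which is why the widening in this hunk matters for large hosts.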
@@ -124,7 +124,7 @@ Maximum size for guest's addressable space.
 This option defines the maximum number of physical bits for all vCPUs, which
 sets a limit for the size of the guest's addressable space. This is mainly
-useful for debug purpose.
+useful for debugging purposes.
 The value is an unsigned integer of 8 bits.
@@ -141,16 +141,16 @@ Affinity of each vCPU.
 This option gives the user a way to provide the host CPU set associated with
 each vCPU. It is useful for achieving CPU pinning, ensuring multiple VMs won't
 affect the performance of each other. It might also be used in the context of
-NUMA as it is way of making sure the VM can run on a specific host NUMA node.
-In general, this option is used to increase the performances of a VM depending
+NUMA as it is a way of making sure the VM can run on a specific host NUMA node.
+In general, this option is used to increase the performance of a VM depending
 on the host platform and the type of workload running in the guest.
 The affinity is described through the following structure:
 ```rust
 struct CpuAffinity {
-    vcpu: u8,
-    host_cpus: Vec<u8>,
+    vcpu: u32,
+    host_cpus: Vec<usize>,
 }
 ```
@@ -164,8 +164,8 @@ The outer brackets define the list of vCPUs. And for each vCPU, the inner
 brackets attached to `@` define the list of host CPUs the vCPU is allowed to
 run onto.
-Multiple values can be provided to define each list. Each value is an unsigned
-integer of 8 bits.
+Multiple values can be provided to define each list. Each value is a
+platform-native unsigned integer (`usize`).
 For instance, if one needs to run vCPU 0 on host CPUs from 0 to 4, the syntax
 using `-` will help define a contiguous range with `affinity=0@[0-4]`. The
@@ -220,4 +220,4 @@ _Example_
 ```
 --cpus nested=on
-```
+```
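To make the `affinity=0@[0-4]` range syntax from the `affinity` section concrete, here is a small Rust sketch that expands one `vcpu@[lo-hi]` entry into a `(vcpu, host_cpus)` pair matching the new `u32`/`Vec<usize>` field types. `parse_affinity` is purely illustrative, not Cloud Hypervisor's actual parser (which also accepts comma-separated values):

```rust
// Illustrative parser for a single "vcpu@[lo-hi]" affinity entry.
// Returns (vcpu, host_cpus), matching CpuAffinity's u32 / Vec<usize> types.
fn parse_affinity(entry: &str) -> Option<(u32, Vec<usize>)> {
    let (vcpu, range) = entry.split_once('@')?;
    let range = range.strip_prefix('[')?.strip_suffix(']')?;
    let (lo, hi) = range.split_once('-')?;
    let lo: usize = lo.parse().ok()?;
    let hi: usize = hi.parse().ok()?;
    Some((vcpu.parse().ok()?, (lo..=hi).collect()))
}

fn main() {
    // vCPU 0 is allowed to run on host CPUs 0 through 4.
    assert_eq!(parse_affinity("0@[0-4]"), Some((0, vec![0, 1, 2, 3, 4])));
}
```

Using `usize` for host CPUs means the upper bound follows the host platform rather than being capped at 255 as with the old `u8` type.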


@@ -31,7 +31,7 @@ Simple emulation of a serial port by reading and writing to specific port I/O
 addresses. The serial port can be very useful to gather early logs from the
 operating system booted inside the VM.
-For x86_64, The default serial port is from an emulated 16550A device. It can
+For x86_64, the default serial port is from an emulated 16550A device. It can
 be used as the default console for Linux when booting with the option
 `console=ttyS0`. For AArch64, the default serial port is from an emulated
 PL011 UART device. The related command line for AArch64 is `console=ttyAMA0`.
@@ -48,7 +48,7 @@ This device is built-in by default, but it can be compiled out with Rust
 features. When compiled in, it is always enabled, and cannot be disabled
 from the command line.
-For AArch64 machines, an ARM PrimeCell Real Time Clock(PL031) is implemented.
+For AArch64 machines, an ARM PrimeCell Real Time Clock (PL031) is implemented.
 This device is built-in by default for the AArch64 platform, and it is always
 enabled, and cannot be disabled from the command line.
@@ -136,7 +136,7 @@ flag `--net`.
 The `virtio-pmem` implementation emulates a virtual persistent memory device
 that `cloud-hypervisor` can e.g. boot from. Booting from a `virtio-pmem` device
-allows to bypass the guest page cache and improve the guest memory footprint.
+allows bypassing the guest page cache and improve the guest memory footprint.
 This device is always built-in, and it is enabled based on the presence of the
 flag `--pmem`.


@@ -5,7 +5,7 @@ region between a guest and the host. In order for all guests to be able to
 pick up the shared memory area, it is modeled as a PCI device exposing said
 memory to the guest as a PCI BAR.
-Device Specification is
+Device Specification is available
 at https://www.qemu.org/docs/master/specs/ivshmem-spec.html.
 Now we support setting a backend file to share data between host and guest.
@@ -16,9 +16,10 @@ supported yet.
 `--ivshmem`, an optional argument, can be passed to enable ivshmem device.
 This argument takes a file as a `path` value and a file size as a `size` value.
+The `size` value must be 2^n.
 ```
---ivshmem <ivshmem> device backend file "path=</path/to/a/file>,size=<file_size/must=2^n>";
+--ivshmem <ivshmem> device backend file "path=</path/to/a/file>,size=<file_size>"
 ```
 ## Example
@@ -41,11 +42,11 @@ Start application to mmap the file data to a memory region:
 --ivshmem path=/tmp/ivshmem.data,size=1M
 ```
-Insmod a ivshmem device driver to enable the device. The file data will be
+Insmod an ivshmem device driver to enable the device. The file data will be
 mmapped to the PCI `bar2` of ivshmem device,
 guest can r/w data by accessing this memory.
-A simple example of ivshmem driver can get from:
+A simple example of ivshmem driver can be obtained from:
 https://github.com/lisongqian/clh-linux/commits/ch-6.12.8-ivshmem
-The host process can r/w this data by remmaping the `/tmp/ivshmem.data`.
+The host process can r/w this data by remapping the `/tmp/ivshmem.data`.
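Regarding the `size` constraint in the ivshmem hunks above (the value must be 2^n): a power-of-two check is trivial with Rust's built-in `is_power_of_two`. A hedged sketch, not the VMM's actual validation code:

```rust
// Illustrative check for the documented 2^n ivshmem size constraint.
fn valid_ivshmem_size(size: u64) -> bool {
    size.is_power_of_two() // false for 0, true for 1M = 1 << 20, etc.
}

fn main() {
    assert!(valid_ivshmem_size(1 << 20));  // size=1M is accepted
    assert!(!valid_ivshmem_size(3 << 20)); // 3M is not a power of two
}
```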


@@ -1,6 +1,6 @@
 # Using MACVTAP to Bridge onto Host Network
-Cloud Hypervisor supports using a MACVTAP device which is derived from a MACVLAN. Full details of configuring MACVLAN or MACVTAP is out of scope of this document. However the example below indicates how to bridge the guest directly onto the network the host is on. Due to the lack of hairpin mode it not usually possible to reach the guest directly from the host.
+Cloud Hypervisor supports using a MACVTAP device which is derived from a MACVLAN. Full details of configuring MACVLAN or MACVTAP are out of scope of this document. However the example below indicates how to bridge the guest directly onto the network the host is on. Due to the lack of hairpin mode it is not usually possible to reach the guest directly from the host.
 ```bash
 # The MAC address must be attached to the macvtap and be used inside the guest
@@ -26,7 +26,7 @@ target/debug/cloud-hypervisor \
 --disk path=~/workloads/focal.raw \
 --cpus boot=1 --memory size=512M \
 --cmdline "root=/dev/vda1 console=hvc0" \
---net fd=3,mac=$mac 3<>$"$tapdevice"
+--net fd=3,mac=$mac 3<>"$tapdevice"
 ```
-As the guest is now connected to the same L2 network as the host you can obtain an IP address based on your host network (potentially including via DHCP)
+As the guest is now connected to the same L2 network as the host, you can obtain an IP address based on your host network (potentially including via DHCP)


@@ -20,7 +20,7 @@ struct MemoryConfig {
     hugepages: bool,
     hugepage_size: Option<u64>,
     prefault: bool,
-    thp: bool
+    thp: bool,
     zones: Option<Vec<MemoryZoneConfig>>,
 }
 ```
@@ -119,7 +119,7 @@ By default this option is turned off, which results in performing `mmap(2)`
 with `MAP_PRIVATE` flag.
 If `hugepages=on` then the value of this field is ignored as huge pages always
-requires `MAP_SHARED`.
+require `MAP_SHARED`.
 _Example_
@@ -135,8 +135,7 @@ If no huge page size is supplied the system's default huge page size is used.
 By using hugepages, one can improve the overall performance of the VM, assuming
 the guest will allocate hugepages as well. Another interesting use case is VFIO
-as it speeds up the VM's boot time since the amount of IOMMU mappings are
-reduced.
+as it speeds up the VM's boot time since the amount of IOMMU mappings is reduced.
 The user is responsible for ensuring there are sufficient huge pages of the
 specified size for the VMM to use. Failure to do so may result in strange VMM
@@ -185,7 +184,7 @@ backing file) should be labelled `MADV_HUGEPAGE` with `madvise(2)` indicating
 to the kernel that this memory may be backed with huge pages transparently.
 The use of transparent huge pages can improve the performance of the guest as
-there will fewer virtualisation related page faults. Unlike using
+there will be fewer virtualisation related page faults. Unlike using
 `hugepages=on` a specific number of huge pages do not need to be allocated by
 the kernel.
@@ -295,9 +294,9 @@ vhost-user devices as part of the VM device model, as they will be driven
 by standalone daemons needing access to the guest RAM content.
 If `hugepages=on` then the value of this field is ignored as huge pages always
-requires `MAP_SHARED`.
+require `MAP_SHARED`.
-By default this option is turned off, which result in performing `mmap(2)`
+By default this option is turned off, which results in performing `mmap(2)`
 with `MAP_PRIVATE` flag.
 _Example_
@@ -315,8 +314,7 @@ If no huge page size is supplied the system's default huge page size is used.
 By using hugepages, one can improve the overall performance of the VM, assuming
 the guest will allocate hugepages as well. Another interesting use case is VFIO
-as it speeds up the VM's boot time since the amount of IOMMU mappings are
-reduced.
+as it speeds up the VM's boot time since the amount of IOMMU mappings is reduced.
 The user is responsible for ensuring there are sufficient huge pages of the
 specified size for the VMM to use. Failure to do so may result in strange VMM
@@ -325,7 +323,7 @@ error with `hugepages` enabled, just disable it or check whether there are enoug
 huge pages.
 If `hugepages=on` then the value of `shared` is ignored as huge pages always
-requires `MAP_SHARED`.
+require `MAP_SHARED`.
 By default this option is turned off.
@@ -434,7 +432,7 @@ it allows for specifying the distance between each NUMA node.
 ```rust
 struct NumaConfig {
     guest_numa_id: u32,
-    cpus: Option<Vec<u8>>,
+    cpus: Option<Vec<u32>>,
     distances: Option<Vec<NumaDistance>>,
     memory_zones: Option<Vec<String>>,
 }
@@ -470,7 +468,7 @@ regarding the CPUs associated with it, which might help the guest run more
 efficiently.
 Multiple values can be provided to define the list. Each value is an unsigned
-integer of 8 bits.
+integer of 32 bits.
 For instance, if one needs to attach all CPUs from 0 to 4 to a specific node,
 the syntax using `-` will help define a contiguous range with `cpus=0-4`. The
@@ -493,7 +491,7 @@ _Example_
 ### `distances`
 List of distances between the current NUMA node referred by `guest_numa_id`
-and the destination NUMA nodes listed along with distances. This option let
+and the destination NUMA nodes listed along with distances. This option lets
 the user choose the distances between guest NUMA nodes. This is important to
 provide an accurate description of the way non uniform memory accesses will
 perform in the guest.
@@ -552,7 +550,7 @@ _Example_
 ### PCI bus
 Cloud Hypervisor supports guests with one or more PCI segments. The default PCI segment always
-has affinity to NUMA node 0. Be default, all other PCI segments have affinity to NUMA node 0.
+has affinity to NUMA node 0. By default, all other PCI segments have affinity to NUMA node 0.
 The user may configure the NUMA affinity for any additional PCI segments.
 _Example_


@@ -29,6 +29,7 @@ parameters available for the vDPA device.
 struct VdpaConfig {
     path: PathBuf,
     num_queues: usize,
+    iommu: bool,
     id: Option<String>,
     pci_segment: u16,
 }
@@ -83,7 +84,7 @@ _Example_
 ### `pci_segment`
-PCI segment number to which the vDPA device should be attached to.
+PCI segment number to which the vDPA device should be attached.
 This parameter is optional.


@@ -37,20 +37,20 @@ sudo sed -i '/vt100/a \n# paravirt console\nhvc0::respawn:/sbin/getty -L hvc0 11
 # any sort of production setup
 sudo sed -i 's/root:!::0:::::/root:::0:::::/' etc/shadow
 # set up init scripts
-for i in acpid crond
+for i in acpid crond; do
 sudo ln -sf /etc/init.d/$i etc/runlevels/default/$i
-end
-for i in bootmisc hostname hwclock loadkmap modules networking swap sysctl syslog urandom
+done
+for i in bootmisc hostname hwclock loadkmap modules networking swap sysctl syslog urandom; do
 sudo ln -sf /etc/init.d/$i etc/runlevels/boot/$i
-end
+done
-for i in killprocs mount-ro savecache
+for i in killprocs mount-ro savecache; do
 sudo ln -sf /etc/init.d/$i etc/runlevels/shutdown/$i
-end
+done
-for i in devfs dmesg hwdrivers mdev
+for i in devfs dmesg hwdrivers mdev; do
 sudo ln -sf /etc/init.d/$i etc/runlevels/sysinit/$i
-end
+done
 # setup network config
 echo 'auto lo
 iface lo inet loopback
@@ -89,4 +89,4 @@ virtiofs
 If you find any issues or have suggestions, feel free to reach out to @iggy on
 the cloud-hypervisor slack. Also if this works for you, I'd like to know as
 well. It would also be nice to get steps for preparing other distribution root
-filesystems.
+filesystems.