docs: fix some syntax and bash usage

Minor modifications were made to make the sentences sound more natural.
Also fixed some parameter usage issues in a bash code block.

Signed-off-by: Yi Wang <foxywang@tencent.com>
Yi Wang 2026-01-07 10:23:33 +08:00 committed by Rob Bradford
parent 34b8aed662
commit 03252f5851
5 changed files with 24 additions and 24 deletions


@@ -110,7 +110,7 @@ Mem: 3.0Gi 71Mi 2.8Gi 0.0Ki 47Mi 2.8Gi
Swap: 32Mi 0B 32Mi
```
-Due to guest OS limitations is is necessary to ensure that amount of memory added (between currently assigned RAM and that which is desired) is a multiple of 128MiB.
+Due to guest OS limitations it is necessary to ensure that the amount of memory added (the difference between the currently assigned RAM and the desired amount) is a multiple of 128MiB.
The same API can also be used to reduce the desired RAM for a VM but the change will not be applied until the VM is rebooted.
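The RAM change is requested through `ch-remote`; a minimal sketch, assuming the `resize` subcommand and an illustrative socket path:

```shell
# Ask the VMM to resize guest RAM to 3 GiB (a multiple of 128 MiB)
./ch-remote --api-socket=/tmp/ch-socket resize --memory 3G
```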
@@ -179,7 +179,7 @@ Notice the addition of `--api-socket=/tmp/ch-socket`.
### Add VFIO Device
-To ask the VMM to add additional VFIO device then use the `add-device` API.
+To ask the VMM to add an additional VFIO device, use the `add-device` API.
```shell
./ch-remote --api-socket=/tmp/ch-socket add-device path=/sys/bus/pci/devices/0000:01:00.0/
@@ -187,7 +187,7 @@ To ask the VMM to add additional VFIO device then use the `add-device` API.
### Add Disk Device
-To ask the VMM to add additional disk device then use the `add-disk` API.
+To ask the VMM to add an additional disk device, use the `add-disk` API.
```shell
./ch-remote --api-socket=/tmp/ch-socket add-disk path=/foo/bar/cloud.img
@@ -195,7 +195,7 @@ To ask the VMM to add additional disk device then use the `add-disk` API.
### Add Fs Device
-To ask the VMM to add additional fs device then use the `add-fs` API.
+To ask the VMM to add an additional fs device, use the `add-fs` API.
```shell
./ch-remote --api-socket=/tmp/ch-socket add-fs tag=myfs,socket=/foo/bar/virtiofs.sock
@@ -203,7 +203,7 @@ To ask the VMM to add additional fs device then use the `add-fs` API.
### Add Net Device
-To ask the VMM to add additional network device then use the `add-net` API.
+To ask the VMM to add an additional network device, use the `add-net` API.
```shell
./ch-remote --api-socket=/tmp/ch-socket add-net tap=chtap0
@@ -211,7 +211,7 @@ To ask the VMM to add additional network device then use the `add-net` API.
### Add Pmem Device
-To ask the VMM to add additional PMEM device then use the `add-pmem` API.
+To ask the VMM to add an additional PMEM device, use the `add-pmem` API.
```shell
./ch-remote --api-socket=/tmp/ch-socket add-pmem file=/foo/bar.cloud.img
@@ -219,7 +219,7 @@ To ask the VMM to add additional PMEM device then use the `add-pmem` API.
### Add Vsock Device
-To ask the VMM to add additional vsock device then use the `add-vsock` API.
+To ask the VMM to add an additional vsock device, use the `add-vsock` API.
```shell
./ch-remote --api-socket=/tmp/ch-socket add-vsock cid=3,socket=/foo/bar/vsock.sock
@@ -241,7 +241,7 @@ After a reboot the added PCI device will remain.
### Remove PCI device
-Removing a PCI device works the same way for all kind of PCI devices. The unique identifier related to the device must be provided. This identifier can be provided by the user when adding the new device, or by default Cloud Hypervisor will assign one.
+Removing a PCI device works the same way for all kinds of PCI devices. The unique identifier related to the device must be provided. This identifier can be provided by the user when adding the new device, or by default Cloud Hypervisor will assign one.
```shell
./ch-remote --api-socket=/tmp/ch-socket remove-device _disk0
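# An explicit identifier can also be chosen when a device is hotplugged,
# so the same name can be passed to remove-device later; the `id`
# parameter and the "_disk0" name below are illustrative:
./ch-remote --api-socket=/tmp/ch-socket add-disk path=/foo/bar/cloud.img,id=_disk0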


@@ -27,11 +27,11 @@ Hypervisor provides another three options for limiting I/O operations,
i.e., `ops_size` (I/O operations), `ops_one_time_burst` (I/O operations),
and `ops_refill_time` (ms).
-One caveat in the I/O throttling is that every-time the bucket gets
+One caveat in the I/O throttling is that every time the bucket gets
empty, it will stop I/O operations for a fixed amount of time
(`cool_down_time`). The `cool_down_time` now is fixed at `100 ms`, it
-can have big implications to the actual rate limit (which can be a lot
-different the expected "refill-rate" derived from user inputs). For
+can have big implications for the actual rate limit (which can be quite
+different from the expected "refill-rate" derived from user inputs). For
example, to have a 1000 IOPS limit on a virtio-blk device, users should
be able to provide either of the following two options:
`ops_size=1000,ops_refill_time=1000` or
@@ -53,5 +53,5 @@ demonstrates how to throttle the aggregate bandwidth of two disks to 10 MiB/s.
```
--disk path=disk0.raw,rate_limit_group=group0 \
path=disk1.raw,rate_limit_group=group0 \
-    --rate-limit-group bw_size=1048576,bw_refill_time,bw_refill_time=100
+    --rate-limit-group bw_size=1048576,bw_refill_time=100
```
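The relation between bucket parameters and the resulting limit can be sanity-checked with plain shell arithmetic (no Cloud Hypervisor involved); this just restates the refill-rate formula implied above:

```shell
# Sustained rate = size * 1000 / refill_time(ms), per second
echo $(( 1000 * 1000 / 1000 ))    # ops_size=1000, ops_refill_time=1000 -> 1000 IOPS
echo $(( 1048576 * 1000 / 100 ))  # bw_size=1048576, bw_refill_time=100 -> 10485760 B/s (10 MiB/s)
```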


@@ -15,7 +15,7 @@ to increase the security regarding the memory accesses performed by the virtual
devices (VIRTIO devices), on behalf of the guest drivers.
With a virtual IOMMU, the VMM stands between the guest driver and its device
-counterpart, validating and translating every address before to try accessing
+counterpart, validating and translating every address before trying to access
the guest memory. This is standard interposition that is performed here by the
VMM.
@@ -75,8 +75,8 @@ Not all devices support this extra option, and the default value will always
be `off` since we want to avoid the performance impact for most users who don't
need this.
-Refer to the command line `--help` to find out which device support to be
-attached to the virtual IOMMU.
+Refer to the command line `--help` to find out which devices can be
+attached to the virtual IOMMU.
Below is a simple example exposing the `virtio-blk` device as attached to the
virtual IOMMU:
@@ -128,7 +128,7 @@ When ACPI is disabled, virtual IOMMU is supported through Flattened Device Tree
IOMMU-attached and which should not. No matter how many devices you attached to
the virtual IOMMU by setting `iommu=on` option, all the devices on the PCI bus
will be attached to the virtual IOMMU (except the IOMMU itself). Each of the
-devices will be added into a IOMMU group.
+devices will be added into an IOMMU group.
As a result, the directory content of `/sys/kernel/iommu_groups` would be:
@@ -151,7 +151,7 @@ of requests need to be issued in order to create large mappings.
One use case is even more impacted by the slowdown, the nested VFIO case. When
passing a device through a L2 guest, the VFIO driver running in L1 will update
the DMAR entries for the specific device. Because VFIO pins the entire guest
-memory, this means the entire mapping of the L2 guest need to be stored into
+memory, this means the entire mapping of the L2 guest needs to be stored into
multiple 4k mappings. Obviously, the bigger the L2 guest RAM is, the longer the
update of the mappings will last. There is an additional problem happening in
this case, if the L2 guest RAM is quite large, it will require a large number
@@ -194,7 +194,7 @@ be consumed.
### Nested usage
Let's now look at the specific example of nested virtualization. In order to
-reach optimized performances, the L2 guest also need to be mapped based on
+reach optimal performance, the L2 guest also needs to be mapped based on
huge pages. Here is how to achieve this, assuming the physical device you are
passing through is `0000:00:01.0`.
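As a rough sketch of the L1 command line this section builds towards (all paths and sizes are illustrative, and `hugepages=on` is the assumed `--memory` knob for huge-page backing):

```shell
# Back guest RAM with huge pages and expose the passed-through device
# behind the virtual IOMMU
./cloud-hypervisor \
    --api-socket /tmp/ch-socket \
    --kernel ./vmlinux \
    --cmdline "console=hvc0 root=/dev/vda1 rw" \
    --memory size=8G,hugepages=on \
    --device path=/sys/bus/pci/devices/0000:00:01.0/,iommu=on
```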


@@ -16,11 +16,11 @@ permissions.
## Host Setup
-Landlock should be enabled in Host kernel to use it with cloud-hypervisor.
-Please following [Kernel-Support](https://docs.kernel.org/userspace-api/landlock.html#kernel-support) link to enable Landlock on Host kernel.
+Landlock should be enabled in the host kernel to use it with cloud-hypervisor.
+Please follow the [Kernel-Support](https://docs.kernel.org/userspace-api/landlock.html#kernel-support) link to enable Landlock on the host kernel.
-Landlock support can be checked with following command:
+Landlock support can be checked with the following command:
```
$ sudo dmesg | grep -w landlock
[ 0.000000] landlock: Up and running.
@@ -30,8 +30,8 @@ Linux kernel confirms Landlock support with above message in dmesg.
## Enable Landlock
At the time of enabling Landlock, Cloud-Hypervisor process needs the complete
-list of files it accesses over its lifetime. So, Landlock is enabled `vm_create`
-stage of guest boot.
+list of files it accesses over its lifetime. So, Landlock is enabled at the
+`vm_create` stage of guest boot.
### Command Line
Append `--landlock` to Cloud-Hypervisor's command line to enable Landlock
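A minimal sketch of such a command line (kernel, disk, and cmdline values are illustrative):

```shell
./cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=./focal.raw \
    --cmdline "console=hvc0 root=/dev/vda1 rw" \
    --landlock
```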


@@ -6,7 +6,7 @@ in Cloud Hypervisor:
1. local migration - migrating a VM from one Cloud Hypervisor instance to another on the same machine;
1. remote migration - migrating a VM between two machines;
-> :warning: These examples place sockets /tmp. This is done for
+> :warning: These examples place sockets in /tmp. This is done for
> simplicity and should not be done in production.
## Local Migration (Suitable for Live Upgrade of VMM)
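A minimal local-migration sketch, assuming a source VMM is already serving `/tmp/ch-src.sock` (all socket paths are illustrative):

```shell
# 1. Start a destination VMM with no VM configured yet
./cloud-hypervisor --api-socket /tmp/ch-dst.sock &

# 2. Have the destination wait for the incoming migration
./ch-remote --api-socket=/tmp/ch-dst.sock receive-migration unix:/tmp/migration.sock &

# 3. Ask the source to send the VM over the same socket
./ch-remote --api-socket=/tmp/ch-src.sock send-migration --local unix:/tmp/migration.sock
```

The `--local` flag is the assumed option for same-host migration, where guest memory can be shared with the new instance rather than copied.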