misc: Fix spelling issues

Misspellings were identified by:
  https://github.com/marketplace/actions/check-spelling

* Initial corrections based on forbidden patterns from the action
* Additional corrections by Google Chrome auto-suggest
* Some manual corrections
* Adding markdown bullets to readme credits section

Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
This commit is contained in:
Josh Soref 2024-06-07 11:48:53 -04:00 committed by Liu Wei
parent 46c5fb5f2c
commit 42e9632c53
40 changed files with 89 additions and 89 deletions

@@ -270,7 +270,7 @@ definition in XML format:
### Command Line Interface
The Cloud Hypervisor Command Line Interface (CLI) can only be used for launching
-the Cloud Hypervisor binary, i.e. it can not be used for controlling the VMM or
+the Cloud Hypervisor binary, i.e. it cannot be used for controlling the VMM or
the launched VM once they're up and running.
If you want to inspect the VMM, or control the VM after launching Cloud

@@ -159,10 +159,10 @@ as we might need to update the direct kernel boot command line, replacing
Update all references to the previous image name to the new one.
-## NVIDIA image for VFIO baremetal CI
+## NVIDIA image for VFIO bare-metal CI
Here we are going to describe how to create a cloud image that contains the
-necessary NVIDIA drivers for our VFIO baremetal CI.
+necessary NVIDIA drivers for our VFIO bare-metal CI.
### Download base image

@@ -2,7 +2,7 @@
Cloud Hypervisor uses [cargo-fuzz](https://github.com/rust-fuzz/cargo-fuzz) for fuzzing individual components.
-The fuzzers are are in the `fuzz/fuzz_targets` directory
+The fuzzers are in the `fuzz/fuzz_targets` directory
## Preparation

@@ -53,7 +53,7 @@ On-line CPU(s) list: 0-7
After a reboot the added CPUs will remain.
-Removing CPUs works similarly by reducing the number in the "desired_vcpus" field of the reisze API. The CPUs will be automatically offlined inside the guest so there is no need to run any commands inside the guest:
+Removing CPUs works similarly by reducing the number in the "desired_vcpus" field of the resize API. The CPUs will be automatically offlined inside the guest so there is no need to run any commands inside the guest:
```shell
./ch-remote --api-socket=/tmp/ch-socket resize --cpus 2

@@ -39,7 +39,7 @@ Another reason for having a virtual IOMMU is to allow passing physical devices
from the host through multiple layers of virtualization. Let's take as example
a system with a physical IOMMU running a VM with a virtual IOMMU. The
implementation of the virtual IOMMU is responsible for updating the physical
-DMA Remapping table (DMAR) everytime the DMA mapping changes. This must happen
+DMA Remapping table (DMAR) every time the DMA mapping changes. This must happen
through the VFIO framework on the host as this is the only userspace interface
to interact with a physical IOMMU.
@@ -124,7 +124,7 @@ On AArch64 architecture, the virtual IOMMU can still be used even if ACPI is not
enabled. But the effect is different with what the aforementioned test showed.
When ACPI is disabled, virtual IOMMU is supported through Flattened Device Tree
-(FDT). In this case, the guest kernel can not tell which device should be
+(FDT). In this case, the guest kernel cannot tell which device should be
IOMMU-attached and which should not. No matter how many devices you attached to
the virtual IOMMU by setting `iommu=on` option, all the devices on the PCI bus
will be attached to the virtual IOMMU (except the IOMMU itself). Each of the

@@ -466,7 +466,7 @@ List of virtual CPUs attached to the guest NUMA node identified by the
`guest_numa_id` option. This allows for describing a list of CPUs which
must be seen by the guest as belonging to the NUMA node `guest_numa_id`.
-One can use this option for a fine grained description of the NUMA topology
+One can use this option for a fine-grained description of the NUMA topology
regarding the CPUs associated with it, which might help the guest run more
efficiently.
@@ -573,7 +573,7 @@ _Example_
### PCI bus
Cloud Hypervisor supports guests with one or more PCI segments. The default PCI segment always
-has affinity to NUMA node 0. Be default, all other PCI segments have afffinity to NUMA node 0.
+has affinity to NUMA node 0. Be default, all other PCI segments have affinity to NUMA node 0.
The user may configure the NUMA affinity for any additional PCI segments.
_Example_

@@ -95,7 +95,7 @@ E - EOL
```
-### LTS Stablity Considerations
+### LTS Stability Considerations
An LTS release is just a `MAJOR` release for which point releases are made for
longer following the same rules for what can be backported to a `POINT` release.

@@ -37,6 +37,6 @@ generating traces of the boot. These can be relocated for focus tracing on a
narrow part of the code base.
A `tracer::trace_point!()` macro is also provided for an instantaneous trace
-point however this is not in use in the code base currently nor is handled by
+point however this is neither in use in the code base currently nor is handled by
the visualisation script due to the difficulty in representation in the SVG.