From d711ccd4896d3c8ca2fac6b4d018a32536bdde91 Mon Sep 17 00:00:00 2001 From: Frank Sundermeyer Date: Thu, 16 Nov 2023 12:44:46 +0100 Subject: [PATCH 1/3] Merge pull request #1614 from SUSE/fs/nfs_improvements --- xml/storage_nfs.xml | 401 ++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 368 insertions(+), 33 deletions(-) diff --git a/xml/storage_nfs.xml b/xml/storage_nfs.xml index 2ddea99c91..36bdaf8824 100644 --- a/xml/storage_nfs.xml +++ b/xml/storage_nfs.xml @@ -251,16 +251,10 @@ If &firewalld; is active on your system, configure it separately - for NFS (see ). + for NFS (see ). &yast; does not yet have complete support for &firewalld;, so ignore the "Firewall not configurable" message and continue. - - When configuring &firewalld; rules, add the nfs or - nfs service with the port value of 2049 for both - TCP and UDP. Also add the mountd service with - the port value of 20048 for both TCP and UDP. - @@ -287,6 +281,17 @@ see . + + + NFSv4 Domain Name + + Note that the domain name needs to be configured on all NFSv4 + clients as well. Only clients that share the same domain name + as the server can access the server. The default domain name + for server and clients is localdomain. + + + @@ -354,12 +359,12 @@ To start or restart the services, run the command systemctl - restart nfsserver. This also restarts the RPC port mapper + restart nfs-server. This also restarts the RPC port mapper that is required by the NFS server. To make sure the NFS server always starts at boot time, run - sudo systemctl enable nfsserver. + sudo systemctl enable nfs-server. NFSv4 @@ -390,15 +395,43 @@ For example: -/export/data 192.168.1.2(rw,sync) + /nfs_exports/public *(rw,sync,root_squash,wdelay) +/nfs_exports/department1 *.department1.&exampledomain;(rw,sync,root_squash,wdelay) +/nfs_exports/team1 &subnetI;.0/24(rw,sync,root_squash,wdelay) +/nfs_exports/&exampleuser_plain; &subnetI;.2(rw,sync,root_squash) - Here the IP address 192.168.1.2 is used to - identify the allowed client. You can also use the name of the - host, a wild card indicating a set of hosts - (*.abc.com, *, etc.), or - netgroups (@my-hosts). + In this example, the following values for + HOST are used: + + + + *: exports to all clients on the network + + + + + *.department1.&exampledomain;: only exports + to clients on the *.department1.&exampledomain; domain + + + + + &subnetI;.0/24: only exports + to clients with IP adresses in the range of &subnetI;.0/24 + + + + + &subnetI;.2: only exports + to the machine with the IP address &subnetI;.2 + + + + In addition to the examples above, you can also restrict exports + to netgroups (@my-hosts) defined in + /etc/netgroup. For a detailed explanation of all options and their meanings, refer to the man page of /etc/exports: (man @@ -408,7 +441,7 @@ In case you have modified /etc/exports while the NFS server was running, you need to restart it for the changes to become active: sudo systemctl restart - nfsserver. + nfs-server. @@ -793,8 +828,10 @@ nfs4mount -fstype=nfs4 server2:/ If you do not enter the noauto option, the init scripts of the system will handle the mount of those file systems - at start-up. - + at start-up. In that case you may consider adding the option + which prevents scripts from trying to + mount the share before the network is available. + @@ -872,11 +909,11 @@ nfs4mount -fstype=nfs4 server2:/ Refer to to start. Most of the configuration is done by the NFSv4 server. 
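For comparison, a plain NFSv4 mount without pNFS looks like this (a sketch only; SERVER and MOUNTPOINT are placeholders):

&prompt.sudo;mount -t nfs4 SERVER:/ MOUNTPOINT

You can check which NFS version was actually negotiated for an existing mount with nfsstat -m, which lists the effective mount options including vers=.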
For pNFS, the only - difference is to add the option and the + difference is to add the option and the metadata server MDS_SERVER to your mount command: -&prompt.sudo;mount -t nfs4 -o minorversion=1 MDS_SERVER MOUNTPOINT +&prompt.sudo;mount -t nfs4 -o nfsvers=4.2 MDS_SERVER MOUNTPOINT To help with debugging, change the value in the /proc file system: @@ -886,6 +923,304 @@ nfs4mount -fstype=nfs4 server2:/ + + Operating an NFS server and clients behind a firewall + + Communication between an NFS server and its clients happens via Remote + Procedure Calls (RPC). Several RPC services, such as the mount daemon or the + file locking service, are part of the Linux NFS implementation. If + the server and the clients run behind a firewall, these services and the + firewall(s) need to be configured to not block the client-server + communication. + + + An NFS 4 server is backwards-compatible with NFS version 3, and firewall + configurations vary for both versions. If any of your clients use + NFS 3 to mount shares, configure your firewall to allow + both, NFS 4 and NFS 3. + + + NFS 4.<replaceable>x</replaceable> + + NFS 4 requires TCP port 2049 to be open on the server side only. To + open this port on the firewall, enable the nfs + service in firewalld on the NFS server: + + &prompt.sudo;firewall-cmd --permanent --add-service=nfs --zone=ACTIVE_ZONE +firewall-cmd --reload + + Replace ACTIVE_ZONE with the firewall zone + used on the NFS server. + + + No additional firewall configuration on the client side is needed when + using NFSv4. By default mount defaults to the highest supported + NFS version, so if your client supports NFSv4, shares will + automatically be mounted as version 4.2. + + + + NFS 3 + + NFS 3 requires the following services: + + + + portmapper + + + nfsd + + + mountd + + + lockd + + + statd + + + + These services are operated by rpcbind, which, + by default, dynamically assigns ports. To allow access to these + services behind a firewall, they need to be configured to run on a + static port first. These ports need to be opened in the firewall(s) afterwards. + + + + portmapper + + + On &productname;, portmapper is already configured to + run on a static port. + + + + + + + + Port + 111 + + + Protocol(s) + TCP, UDP + + + Runs on + Client, Server + + + + &prompt.sudo;firewall-cmd --add-service=rpc-bind --permanent --zone=ACTIVE_ZONE + + + + + + + + + nfsd + + + On &productname;, nfsd is already configured to + run on a static port. + + + + + + + + Port + 2049 + + + Protocol(s) + TCP, UDP + + + Runs on + Server + + + + &prompt.sudo;firewall-cmd --add-service=nfs3 --permanent --zone=ACTIVE_ZONE + + + + + + + + + mountd + + + On &productname;, mountd is already configured to + run on a static port. + + + + + + + + Port + 20048 + + + Protocol(s) + TCP, UDP + + + Runs on + Server + + + + &prompt.sudo;firewall-cmd --add-service=mountd --permanent --zone=ACTIVE_ZONE + + + + + + + + + lockd + + + To set a static port for lockd: + + + + + Edit/etc/sysconfig/nfs on the server and + find and set + + LOCKD_TCPPORT=NNNNN +LOCKD_UDPPORT=NNNN + + Replace NNNNN with an unused port of + your choice. Use the same port for both protocols. 
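For illustration only (44444 is an arbitrary unused port, not a value from the original documentation), the two lines could then read:

LOCKD_TCPPORT=44444
LOCKD_UDPPORT=44444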
+ + + + + Restart the NFS server: + + &prompt.sudo;systemctl restart nfs-server + + + + + + + + + Port + NNNNN + + + Protocol(s) + TCP, UDP + + + Runs on + Client, Server + + + + &prompt.sudo;firewall-cmd --add-port=NNNNN/{tcp,udp} --permanent --zone=ACTIVE_ZONE + + + + + + + + + statd + + + To set a static port for statd: + + + + + Edit/etc/sysconfig/nfs on the server and + find and set + + STATD_PORT=NNNNN + + Replace NNNNN with an unused port of + your choice. + + + + + Restart the NFS server: + + &prompt.sudo;systemctl restart nfs-server + + + + + + + + + Port + NNNNN + + + Protocol(s) + TCP, UDP + + + Runs on + Client, Server + + + + &prompt.sudo;firewall-cmd --add-port=NNNNN/{tcp,udp} --permanent --zone=ACTIVE_ZONE + + + + + + + + + + + Loading a changed <systemitem + class="daemon">firewalld</systemitem> configuration + + Whenever you change the firewalld configuration, you need to reload + the daemon to activate the changes: + + &prompt.sudo;firewall-cmd --reload + + + + Firewall zone + + Make sure to replace ACTIVE_ZONE with the firewall zone + used on the respective machine. Note that, depending on the firewall + configuration, the active zone can differ from machine to machine. + + + + Managing Access Control Lists over NFSv4 @@ -1251,7 +1586,7 @@ nfs4mount -fstype=nfs4 server2:/ After changing /etc/sysconfig/nfs, services need to be restarted: -systemctl restart nfsserver # for nfs server related changes +systemctl restart nfs-server # for nfs server related changes systemctl restart nfs # for nfs client related changes From 0e0da5c8706bf8e654960a66c909ca72c2a2ff07 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tom=C3=A1=C5=A1=20Ba=C5=BEant?= Date: Thu, 16 Nov 2023 12:58:47 +0100 Subject: [PATCH 2/3] Imported KubeVirt into doc-sle (bsc#1209579) (#1611) --- DC-SLE-kubevirt | 19 + images/src/svg/suse.svg | 10 + xml/MAIN.SLEDS.xml | 1 + xml/art-kubevirt.xml | 767 ++++++++++++++++++++++++++++++++++++++++ 4 files changed, 797 insertions(+) create mode 100644 DC-SLE-kubevirt create mode 100644 images/src/svg/suse.svg create mode 100644 xml/art-kubevirt.xml diff --git a/DC-SLE-kubevirt b/DC-SLE-kubevirt new file mode 100644 index 0000000000..68ee0ceb16 --- /dev/null +++ b/DC-SLE-kubevirt @@ -0,0 +1,19 @@ +## --------------------------------- +## KubeVirt +## --------------------------------- +## +## Basics +MAIN="MAIN.SLEDS.xml" +ROOTID=article-kubevirt + +## Profiling +PROFOS="sles" +PROFARCH="x86_64;zseries;power;aarch64" +PROFCONDITION="suse-product" + +## stylesheet location +STYLEROOT="/usr/share/xml/docbook/stylesheet/suse2022-ns" +FALLBACK_STYLEROOT="/usr/share/xml/docbook/stylesheet/suse-ns" + +# Setting the TOC depth to sect 2 +XSLTPARAM="--param toc.section.depth=2" diff --git a/images/src/svg/suse.svg b/images/src/svg/suse.svg new file mode 100644 index 0000000000..7b8bbf809e --- /dev/null +++ b/images/src/svg/suse.svg @@ -0,0 +1,10 @@ + + + + + + + + + + diff --git a/xml/MAIN.SLEDS.xml b/xml/MAIN.SLEDS.xml index ce513378a8..385bb0c1b0 100644 --- a/xml/MAIN.SLEDS.xml +++ b/xml/MAIN.SLEDS.xml @@ -120,6 +120,7 @@ + diff --git a/xml/art-kubevirt.xml b/xml/art-kubevirt.xml new file mode 100644 index 0000000000..4d4247417d --- /dev/null +++ b/xml/art-kubevirt.xml @@ -0,0 +1,767 @@ + + + %entities; +]> +
+ Using &kubevirt; on &sle; + + + + + https://bugzilla.suse.com/enter_bug.cgi + Documentation + KubeVirt + jfehlig@suse.com + + https://github.com/SUSE/doc-sle/blob/main/xml/ + + + JimFehlig + + Software Engineer + SUSE + + + VasilyUlyanov + + Software Engineer + SUSE + + + + + + &kubevirt; is a virtual machine management add-on for Kubernetes. + &kubevirt; extends Kubernetes by adding additional virtualization + resource types through Kubernetes' Custom Resource Definitions (CRD) + API. Along with the Custom Resources, &kubevirt; includes controllers + and agents that provide virtual machine management capabilities on the + cluster. By using this mechanism, the Kubernetes API can be used to + manage virtual machine resources similar to other Kubernetes resources. + + + + + &kubevirt; components + + + &kubevirt; consists of two RPM-based packages and six container images + that provide the Kubernetes virtual machine management extension. The RPM + packages include kubevirt-virtctl and + kubevirt-manifests. The container images include + virt-api, virt-controller, + virt-handler, virt-launcher, and + virt-operator, libguestfs-tools. + + + + kubevirt-virtctl can be installed on any machine with + administrator access to the cluster. It contains the + virtctl tool, which provides syntactic sugar on top of + the kubectl tool for virtual machine resources. + Although the kubectl tool can be used to manage + virtual machines, it is a bit awkward since, unlike standard Kubernetes + resources, virtual machines maintain state. Migration is also unique to + virtual machines. If a standard Kubernetes resource needs to be evacuated + from a cluster node, it is destroyed and started again on an alternate + node. Since virtual machines are stateful, they cannot be destroyed and + must be live-migrated away if a node is under evacuation. The + virtctl tool abstracts the complexity of managing + virtual machines with kubectl. It can be used to stop, + start, pause, unpause and migrate virtual machines. + virtclt also provides access to the virtual machine's + serial console and graphics server. + + + + kubevirt-manifests contains the manifests, or recipes, + for installing &kubevirt;. The most interesting files are + /usr/share/kube-virt/manifests/release/kubevirt-cr.yaml + and + /usr/share/kube-virt/manifests/release/kubevirt-operator.yaml. + kubevirt-cr.yaml contains the &kubevirt; Custom + Resource definition that represents the &kubevirt; service. + kubevirt-operator.yaml is the recipe for deploying + the &kubevirt; operator, which deploys the &kubevirt; service to the + cluster and manages its' lifecycle. + + + + virt-api is a cluster component that provides the + Kubernetes API extension for virtual machine resources. Like + virt-api, virt-controller is a + cluster component that watches for new objects created via + virt-api, or updates to existing objects, and takes + action to ensure the object state matches the requested state. + virt-handler is a DaemonSet and a node component that + has the job of keeping the cluster-level virtual machine object in sync + with the libvirtd domain running in + virt-launcher. virt-handler can + also perform node-centric operations like configuring networking and/or + storage on the node per the virtual machine specification. + virt-launcher is also a node component and has the job + of running libvirt plus qemu to + provide the virtual machine environment. virt-launcher + is a lowly pod resource. 
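Because every running virtual machine instance is backed by exactly one virt-launcher pod, these pods can be listed like any other pods. An illustrative check, assuming the default kubevirt.io=virt-launcher label that upstream &kubevirt; applies to launcher pods:

&prompt.user;kubectl get pods -l kubevirt.io=virt-launcher --all-namespaces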
libguestfs-tools is a + component providing a set of utilities for accessing and modifying VM + disk images. + + + + virt-operator implements the Kubernetes operator + pattern. Operators encode the human knowledge required to deploy, run + and maintain an application. Operators are a Kubernetes Deployment + resource type and are often used to manage the custom resources and + custom controllers that together provide a more complex Kubernetes + application such as &kubevirt;. + + + + Installing &kubevirt; on Kubernetes + + + &kubevirt; can be installed on a Kubernetes cluster by installing the + kubevirt-manifests package on an admin node, applying + the virt-operator manifest, and creating the + &kubevirt; custom resource. For example, on a cluster admin node execute + the following: + + +&prompt.sudo;zypper install kubevirt-manifests +&prompt.user;kubectl apply -f /usr/share/kube-virt/manifests/release/kubevirt-operator.yaml +&prompt.user;kubectl apply -f /usr/share/kube-virt/manifests/release/kubevirt-cr.yaml + + + After creating the &kubevirt; custom resource, + virt-operator deploys the remaining &kubevirt; + components. Progress can be monitored by viewing the status of the + resources in the kubevirt namespace: + + +&prompt.user;kubectl get all -n kubevirt + + + The cluster is ready to deploy virtual machines once + virt-api, virt-controller, and + virt-handler are READY with STATUS Running. + + + + Alternatively it is possible to wait until &kubevirt; custom resource + becomes available: + + +&prompt.user;kubectl -n kubevirt wait kv kubevirt --for condition=Available + + + Some &kubevirt; functionality is disabled by default and must be enabled + via feature gates. For example, live migration and the use of HostDisk for + virtual machine disk images are disabled. Enabling &kubevirt; feature + gates can be done by altering an existing &kubevirt; custom resource and + specifying the list of features to enable. For example, you can enable + live migration and the use of HostDisks: + + +&prompt.user;kubectl edit kubevirt kubevirt -n kubevirt + ... + spec: + configuration: + developerConfiguration: + featureGates: + - HostDisk + - LiveMigration + + + + + The names of feature gates are case-sensitive. + + + + + Updating the &kubevirt; deployment + + + Updating &kubevirt; is similar to the initial installation. The updated + operator manifest from the kubevirt-manifests package + is applied to the cluster. + + +&prompt.sudo;zypper update kubevirt-manifests +&prompt.user;kubectl apply -f /usr/share/kube-virt/manifests/release/kubevirt-operator.yaml + + + + Deleting &kubevirt; from a cluster + + + &kubevirt; can be deleted from a cluster by deleting the custom resource + and operator: + + +&prompt.user;kubectl delete -n kubevirt kubevirt kubevirt # or alternatively: kubectl delete -f /usr/share/kube-virt/manifests/release/kubevirt-cr.yaml +&prompt.user;kubectl delete -f /usr/share/kube-virt/manifests/release/kubevirt-operator.yaml + + + + It is important to delete the custom resource first otherwise it + gets stuck in the Terminating state. 
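You can confirm this state by listing the finalizers that are still set on the resource; a quick check using the same kv short name as in the command below:

&prompt.user;kubectl -n kubevirt get kv kubevirt -o jsonpath='{.metadata.finalizers}'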
To fix that the + resource finalizer needs to be manually deleted: + +&prompt.user;kubectl -n kubevirt patch kv kubevirt --type=json -p '[{ "op": "remove", "path": "/metadata/finalizers" }]' + + + + After deleting the resources from Kubernetes cluster the installed + &kubevirt; RPMs can be removed from the system: + + +&prompt.sudo;zypper rm kubevirt-manifests kubevirt-virtctl + + + Containerized Data Importer + + + Containerized Data Importer (CDI) is an add-on for Kubernetes focused on + persistent storage management. It is primarily used for building and + importing Virtual Machine Disks for &kubevirt;. + + + + Installing CDI + + CDI can be installed on a Kubernetes cluster in a way similar to + &kubevirt; by installing the RPMs and applying the operator and custom + resource manifests using kubectl: + +&prompt.sudo;zypper in containerized-data-importer-manifests +&prompt.user;kubectl apply -f /usr/share/cdi/manifests/release/cdi-operator.yaml +&prompt.user;kubectl apply -f /usr/share/cdi/manifests/release/cdi-cr.yaml + + + + Updating and deleting CDI: + + To update CDI: + +&prompt.sudo;zypper update containerized-data-importer-manifests +&prompt.user;kubectl apply -f /usr/share/cdi/manifests/release/cdi-operator.yaml + + To delete CDI: + +&prompt.user;kubectl delete -f /usr/share/cdi/manifests/release/cdi-cr.yaml +&prompt.user;kubectl delete -f /usr/share/cdi/manifests/release/cdi-operator.yaml +&prompt.sudo;zypper rm containerized-data-importer-manifests + + + + Running virtual machines + + + Two of the most interesting custom resources provided by &kubevirt; are + VirtualMachine (VM) and + VirtualMachineInstance (VMI). As + the names imply, a VMI is a running instance of a VM. The lifecycle of a + VMI can be managed independently from a VM, but long-lived, stateful + virtual machines are managed as a VM. The VM is deployed to the cluster + in a shutoff state, then activated by changing the desired state to + running. Changing a VM resource state can be done with the standard + Kubernetes client tool kubectl or with the client + virtctl provided by &kubevirt;. + + + + The VM and VMI custom resources make up part of the &kubevirt; API. To + create a virtual machine, a VM or VMI manifest must be created that + adheres to the API. The API supports setting a wide variety of the common + virtual machine attributes, for example, model of vCPU, number of vCPUs, + amount of memory, disks, network ports, etc. Below is a simple example of + a VMI manifest for a virtual machine with one Nehalem CPU, 2G of memory, + one disk, and one network interface: + + +apiVersion: kubevirt.io/v1 +kind: VirtualMachineInstance +metadata: + labels: + special: vmi-host-disk + name: sles15sp2 +spec: + domain: + cpu: + model: Nehalem-IBRS + devices: + disks: + - disk: + bus: virtio + name: host-disk + interfaces: + - name: green + masquerade: {} + ports: + - port: 80 + machine: + type: "" + resources: + requests: + memory: 2048M + terminationGracePeriodSeconds: 0 + networks: + - name: green + pod: {} + volumes: + - hostDisk: + path: /hostDisks/sles15sp2/disk.raw + type: Disk + shared: true + name: host-disk + + + Applying this VMI manifest to the cluster creates a virt-launcher + container running libvirt and qemu, + providing the familiar KVM virtual machine environment. + + +&prompt.user;kubectl apply -f sles15sp2vmi.yaml +&prompt.user;kubectl get vmis + + + Similar to other Kubernetes resources, VMs and VMIs can be managed with + the kubectl client tool. 
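For a long-lived virtual machine, the VMI specification shown above is typically wrapped in a VirtualMachine object whose template section carries the same domain definition and whose running field holds the desired state. The following is a minimal sketch derived from the VMI example above, not a manifest taken from the original documentation:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: sles15sp2
spec:
  running: false            # the VM is created in a shutoff state
  template:
    metadata:
      labels:
        special: vmi-host-disk
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: host-disk
        resources:
          requests:
            memory: 2048M
      terminationGracePeriodSeconds: 0
      volumes:
      - hostDisk:
          path: /hostDisks/sles15sp2/disk.raw
          type: Disk
          shared: true
        name: host-disk

Applying such a manifest creates the VM in a shutoff state; it is then activated by setting running to true, for example with the patch command shown below.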
Any + kubectl operation that works with resource types + works with the &kubevirt; custom resources, for example, describe, delete, + get, log, patch, etc. VM resources are a bit more awkward to manage with + kubectl. Since a VM resource can be in a shutoff + state, turning it on requires patching the manifest to change the desired + state to running. Find an example below: + + +&prompt.user;kubectl patch vm sles15sp2 --type merge -p '{"spec":{"running":true}}' + + + The virtctl tool included in the + kubevirt-virtclt package provides syntactic sugar on + top of kubectl for VM and VMI resources, allowing them + to be stopped, started, paused, unpaused and migrated. + virtctl also provides access to the virtual machine's + serial console and graphics server. Find an example below: + + +&prompt.user;virtctl start VM +&prompt.user;virtctl console VMI +&prompt.user;virtctl stop VM +&prompt.user;virtctl pause VM|VMI +&prompt.user;virtctl unpause VM|VMI +&prompt.user;virtctl vnc VMI +&prompt.user;virtctl migrate VM + + + Live migration + + + &kubevirt; supports live migration of VMs. Though this functionality must + first be activated by adding LiveMigration to the list + of feature gates in the &kubevirt; custom resource. + + +&prompt.user;kubectl edit kubevirt kubevirt -n kubevirt + + +spec: + configuration: + developerConfiguration: + featureGates: + - LiveMigration + + + Prerequisites + + + + All the Persistent Volume Claims (PVCs) used by a VM must have + `ReadWriteMany` (RWX) access mode. + + + + + VM pod network binding must be of type + masquerade: + + +spec: + domain: + devices: + interfaces: + - name: green + masquerade: {} + + + + Whether live migration is possible or not can be checked via the + VMI.status.conditions field of a running VM spec: + +&prompt.user;kubectl describe vmi sles15sp2 + +Status: + Conditions: + Status: True + Type: LiveMigratable + Migration Method: BlockMigration + + + + Initiating live migration + + Live migration of a VMI can be initiated by applying the following yaml + file: + + +apiVersion: kubevirt.io/v1 +kind: VirtualMachineInstanceMigration +metadata: + name: migration-job +spec: + vmiName: sles15sp2 +&prompt.user;kubectl apply -f migration-job.yaml + + Alternatively it is possible to migrate a VM using + virtctl tool: + +&prompt.user;virtctl migrate VM + + + + Cancelling live migration + + Live migration can be canceled by deleting the existing migration + object: + +&prompt.user;kubectl delete VirtualMachineInstanceMigration migration-job + + + + Volume hotplugging + + + &kubevirt; allows hotplugging additional storage into a running VM. Both + block and file system volume types are supported. The hotplug volumes + feature can be activated via the HotplugVolumes + feature gate: + + +&prompt.user;kubectl edit kubevirt kubevirt -n kubevirt + + +spec: + configuration: + developerConfiguration: + featureGates: + - HotplugVolumes + + + Assuming that hp-volume is an existing DataVolume or + PVC, virtctl can be used to operate with the volume on + a runnig VM: + + +&prompt.user;virtctl addvolume sles15sp2 --volume-name=hp-volume +&prompt.user;virtctl removevolume sles15sp2 --volume-name=hp-volume + + + Running Windows VMs with VMDP ISO + + + The VMDP ISO is provided in the form of a container image which can be + consumed by &kubevirt;. 
To run a Windows VM with VMDP ISO attached, the + corresponding containerDisk needs to be added to the + VM definition: + + + +spec: + domain: + devices: + disks: + - name: vmdp + cdrom: + bus: sata +volumes: + - containerDisk: + image: registry.suse.com/suse/vmdp/vmdp:latest + name: vmdp + + + + The sequence in which the disks are defined affects the boot order. It + is possible to specify the bootOrder explicitly or + otherwise sort the disk items as needed. + + + + + Supported features + + + + + Guest Agent Information + + + + + Live migration + + + + + Hotplug volumes + + + + + VMI Dedicated CPU resource + + + + + + VMI virtual hardware + + + + machine type + + + + + BIOS/UEFI/SMBIOS + + + + + cpu + + + + + clock + + + + + RNG + + + + + CPU/Memory limits and requirements + + + + + tablet input + + + + + hugepage + + + + + + + VMI disks and volumes + + Disk types: + + + + + lun + + + + + disk + + + + + cdrom + + + + + Volume sources: + + + + + cloudInitNoCloud + + + + + cloudInitConfigDrive + + + + + persistentVolumeClaim + + + + + dataVolume + + + + + ephemeral + + + + + containerDisk + + + + + emptyDisk + + + + + hostDisk + + + + + configMap + + + + + secret + + + + + serviceAccount + + + + + downwardMetrics + + + + + High performance features: + + + + + IO threads + + + + + Virtio Block Multi-Queue + + + + + Disk cache + + + + + + + VMI interfaces and networks + + Network (back-end) types: + + + + + pod + + + + + multus + + + + + Interface (front-end) types: + + + + + bridge + + + + + masquerade + + + + + + + Debugging + + + If issues are encountered the following debug resources are available to + help identify the problem. + + + + The status of all &kubevirt; resources can be examined with the + kubectl get command: + + +&prompt.user;kubectl get all -n kubevirt + + + Resources with failed status can be further queried by examining their + definition and expanded status information. + + +&prompt.user;kubectl describe deployment virt-operator +&prompt.user;kubectl get deployment virt-operator -o yaml -n kubevirt +&prompt.user;kubectl describe pod virt-handler-xbjkg -n kubevirt +&prompt.user;kubectl get pod virt-handler-xbjkg -o yaml -n kubevirt + + + Logs from the problematic &kubevirt; pod can contain a wealth of + information since stderr and service logging from + within the pod is generally available via the Kubernetes log service: + + +&prompt.user;kubectl logs virt-operator-558c57bc4-mg68w -n kubevirt + &prompt.user;kubectl logs virt-handler-xbjkg -n kubevirt + + + If the underlying pod is running but there are problems with the service + running in it, a shell can be accessed to inspect the pod environment and + poke at its service: + + +&prompt.user;kubectl -n kubevirt exec -it virt-handler-xbjkg -- /bin/bash + +
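Beyond pod logs, cluster events often point at scheduling, image pull or resource problems. Listing recent events in the kubevirt namespace is plain kubectl usage, not specific to &kubevirt;:

&prompt.user;kubectl get events -n kubevirt --sort-by=.metadata.creationTimestamp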
From 1c887b84f81bb4637e8bc932f88e9db3a554fca7 Mon Sep 17 00:00:00 2001
From: Jana Halackova
Date: Tue, 21 Nov 2023 16:04:40 +0100
Subject: [PATCH 3/3] Rewritten shebang usage in Combustion.

---
 xml/deployment_images_combustion.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xml/deployment_images_combustion.xml b/xml/deployment_images_combustion.xml
index 6191147801..981288c430 100644
--- a/xml/deployment_images_combustion.xml
+++ b/xml/deployment_images_combustion.xml
@@ -106,8 +106,8 @@
     Include interpreter declaration
-    As the script file is interpreted by bash, make sure
-    to start the file with the interpreter declaration at the first line:
+    As the script file is interpreted by a shell, make sure
+    to start the file with the interpreter declaration on the first line, for example, for Bash:
 #!/bin/bash
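A minimal Combustion script following this advice could start as shown below. This is a sketch only: the combustion: network flag and the sshd example are illustrative additions, not part of the patch above.

#!/bin/bash
# combustion: network
set -euo pipefail
# Example first-boot task: enable SSH access
systemctl enable sshd.service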