@@ -13,7 +13,6 @@ Perform advanced migration operations, such as changing precopy snapshot intervals

include::../modules/changing-precopy-intervals.adoc[leveloffset=+1]


== Creating custom rules for the Validation service

The `Validation` service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The `Validation` service generates a list of _concerns_ for each VM, which are stored in the `Provider Inventory` service as VM attributes. The web console displays the concerns for each VM in the provider inventory.
@@ -40,7 +39,21 @@ include::../modules/adding-hook-using-ui.adoc[leveloffset=+2]

include::../modules/adding-hook-using-cli.adoc[leveloffset=+2]

-include::../modules/about-udn.adoc[leveloffset=+2]
+include::../modules/about-udn.adoc[leveloffset=+1]

== Scheduling target VMs

By default, {virt} assigns the destination nodes during VM migration. However, you can use the target VM scheduling feature to define the destination nodes and to apply specific conditions that control when the VMs are switched from `pending` to `on`.

include::../modules/about-configuring-target-vm-scheduling.adoc[leveloffset=+2]

include::../modules/target-vm-scheduling-prerequisites.adoc[leveloffset=+2]

include::../modules/target-vm-scheduling-options.adoc[leveloffset=+2]

include::../modules/configuring-target-vm-scheduling-cli.adoc[leveloffset=+2]

include::../modules/configuring-target-vm-scheduling-ui.adoc[leveloffset=+2]

ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]
19 changes: 19 additions & 0 deletions documentation/doc-Migration_Toolkit_for_Virtualization/master.adoc
@@ -426,6 +426,21 @@
:ova:

include::modules/proc_migrating-virtual-machines-cli.adoc[leveloffset=+2]

include::modules/canceling-migration-cli.adoc[leveloffset=+3]

include::modules/canceling-migration-cli-entire.adoc[leveloffset=+4]

include::modules/canceling-migration-cli-specific.adoc[leveloffset=+4]

:ova!:
:context: cnv
:cnv:

include::modules/proc_migrating-virtual-machines-cli.adoc[leveloffset=+2]

include::modules/canceling-migration-cli.adoc[leveloffset=+3]

@@ -462,6 +477,10 @@

include::modules/about-udn.adoc[leveloffset=+2]

include::modules/about-configuring-target-vm-scheduling.adoc[leveloffset=+2]

include::modules/target-vm-scheduling-prerequisites.adoc[leveloffset=+3]

include::modules/upgrading-mtv-ui.adoc[leveloffset=+1]

[id="uninstalling-mtv_{context}"]
22 changes: 22 additions & 0 deletions documentation/modules/about-configuring-target-vm-scheduling.adoc
@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// * documentation/doc-Migration_Toolkit_for_Virtualization/master.adoc

:_content-type: CONCEPT
[id="about-configuring-target-vm-scheduling_{context}"]
= About scheduling target VMs

[role="_abstract"]
Starting with {project-first} 2.10, you can use the _target VM scheduling_ feature to direct {project-short} to migrate virtual machines (VMs) to specific nodes of {virt} and to schedule when the VMs are powered on. You define and enforce these scheduling rules by using either the UI or the command-line interface (CLI).

Previously, when you migrated VMs to {virt}, {virt} automatically determined the destination node for each VM. Although this serves many customers' needs, there are certain situations in which it is useful to be able to specify the target node of a VM or the conditions under which the VM is powered on, regardless of the type of migration involved.


== Use cases

Target VM scheduling is designed to help you with the following use cases, among others:

* *Prioritizing critical workloads*: In many migrations, certain VMs must be among the first migrated and powered on. Node selector rules let you ensure that those VMs are migrated first so that they can support the VMs that are migrated afterward.

* *Business continuity and disaster recovery*: You can use scheduling rules to migrate critical VMs to several sites, in different time zones or otherwise geographically separated by significant distances, and to power them on there. This lets you deploy these VMs as strategic assets for business continuity purposes, such as disaster recovery.

* *Working with fluctuating demands*: In situations where demand for a service varies significantly, rules that schedule when to spin up VMs on-demand allow you to use your resources more efficiently.

91 changes: 91 additions & 0 deletions documentation/modules/configuring-target-vm-scheduling-cli.adoc
@@ -0,0 +1,91 @@
// Module included in the following assemblies:
//
// * documentation/doc-Migration_Toolkit_for_Virtualization/master.adoc

:_content-type: PROCEDURE
[id="configuring-target-vm-scheduling-cli_{context}"]
= Scheduling target VMs from the command-line interface

[role="_abstract"]
You can use the command-line interface (CLI) to tell {project-first} to migrate virtual machines (VMs) to specific nodes or workloads (pods) of {virt} as well as to schedule when the VMs are powered on.

The {project-short} CLI supports the following scheduling-related fields, all of which are added to the `Plan` CR:

`targetAffinity`:: Implements placement policies such as co-locating related workloads or, for disaster recovery, ensuring that specific VMs are migrated to different nodes. This field uses hard (requirement) and soft (preference) conditions combined with logical operators, such as `and`, `or`, and `not`, to provide greater flexibility than the `targetNodeSelector` field described below.
`targetLabels`:: Applies organizational or operational labels to migrated VMs for identification and management.
`targetNodeSelector`:: Ensures VMs are scheduled on nodes that are an exact match for key-value pairs you create. This field is often used for nodes with special capabilities, such as GPU nodes or storage nodes.

[IMPORTANT]
====
System-managed labels, such as migration, plan, VM ID, or application labels, override any user-defined labels.
====
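
For example, a minimal sketch of the two simpler fields. The label keys and values are illustrative (`nvidia.com/gpu.present` is a label commonly applied to GPU nodes; use whatever labels your nodes actually carry):

[source,yaml]
----
targetLabels:
  migration-wave: wave-1           # illustrative organizational label applied to the migrated VMs
targetNodeSelector:
  nvidia.com/gpu.present: "true"   # schedule VMs only on nodes carrying this exact label
----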

.Prerequisites

Migrations that use target VM scheduling require the following prerequisites, in addition to the prerequisites for your source provider:

* {project-first} 2.10 or later.
* A version of {virt} that is compatible with your version of {project-short}. For {project-short} 2.10, the compatible versions of {virt} are 4.18, 4.19, and 4.20 only.
* `cluster-admin` or equivalent security privileges that allow managing `VirtualMachineInstance` objects and associated Kubernetes scheduling primitives.

.Procedure

. Create custom resources (CRs) for the migration according to the procedure for your provider.
. In the `Plan` CR, add the following fields before `spec:targetNamespace`. All of these fields are optional.
+
[source,yaml,subs="attributes+"]
----
targetAffinity: <affinity_rule> # Affinity rules can be complex and span several lines. See the example that follows.
targetLabels:
  label: <label>
targetNodeSelector:
  <key>: <value>
targetNamespace: <target_namespace>
----

.Example

The following scheduling rule migrates the VMs in the plan to different nodes for disaster recovery:
[source,yaml,subs="attributes+"]
----
targetLabels:
label: test1
targetAffinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: label
operator: In
values:
- test1
topologyKey: kubernetes.io/hostname
----

As a result of the preceding rule, the VMs are migrated with the following resulting `spec`:
[source,yaml,subs="attributes+"]
----
spec:
runStrategy: Always
template:
metadata:
creationTimestamp: null
labels:
app: mtv-rhel8-sanity-ceph-rbd
label: test1
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: label
operator: In
values:
- test1
topologyKey: kubernetes.io/hostname
----
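
By contrast, a sketch of a soft (preference) version of the same rule would use `preferredDuringSchedulingIgnoredDuringExecution` with a weight, so that the scheduler spreads the VMs across nodes when it can but still schedules them when it cannot:

[source,yaml]
----
targetAffinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100                    # 1-100; a higher weight means a stronger preference
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: label
            operator: In
            values:
            - test1
        topologyKey: kubernetes.io/hostname
----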


28 changes: 28 additions & 0 deletions documentation/modules/configuring-target-vm-scheduling-ui.adoc
@@ -0,0 +1,28 @@
// Module included in the following assemblies:
//
// * documentation/doc-Migration_Toolkit_for_Virtualization/master.adoc

:_content-type: PROCEDURE
[id="configuring-target-vm-scheduling-ui_{context}"]
= Scheduling target VMs from the user interface

[role="_abstract"]
You can use the {project-first} user interface, which is located in the {ocp} web console, to direct {project-short} to migrate virtual machines (VMs) to specific nodes or workloads (pods) of {virt} and to schedule when the VMs are powered on.

The *Virtualization* section of the {ocp} web console supports the following options for scheduling target VMs:

* *VM target node selector*: Ensures VMs are scheduled on nodes that are an exact match for key-value pairs you create. This type of label is often used for nodes with special capabilities, such as GPU nodes or storage nodes.
* *VM target labels*: Applies organizational or operational labels to migrated VMs for identification and management.
* *VM target affinity rules*: Implements placement policies such as co-locating related workloads or, for disaster recovery, ensuring that specific VMs are migrated to different nodes. This type of rule uses hard (requirement) and soft (preference) conditions combined with logical operators, such as `Exists` or `DoesNotExist`, instead of the rigid key-value pairs used by a VM target node selector. As a result, target affinity rules are more flexible than target node selector rules.
+
{project-short} supports the following affinity rules:
+
** Node affinity rules
** Workload (pod) affinity and anti-affinity rules

You configure target VM scheduling options on the *Plan details* page of the relevant migration plan. The options apply to all VMs that are included in that migration.

Instructions for the VM target scheduling options are included in the procedures for creating migration plans. The same options are supported for all source providers (VMware vSphere, {rhv-full}, {osp}, Open Virtual Appliance (OVA), and {virt}).
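
For orientation, a soft node affinity rule that you define in the UI ultimately lands in the `Plan` CR as a `targetAffinity` stanza. A sketch, with illustrative label keys and values:

[source,yaml]
----
targetAffinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 50                 # soft condition: a preference, not a requirement
      preference:
        matchExpressions:
        - key: disktype
          operator: In
          values:
          - ssd
----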



52 changes: 47 additions & 5 deletions documentation/modules/creating-plan-wizard-vmware.adoc
Original file line number Diff line number Diff line change
@@ -192,10 +192,10 @@ The wizard opens to the page where you defined the item.
When your plan is validated, the *Plan details* page for your plan opens in the *Details* tab.
+
The *Plan settings* section of the page includes settings that you specified on the *Other settings (optional)* page and some additional optional settings. The steps below describe the additional optional settings; you can edit any of these settings by clicking the {kebab}, making the change, and then clicking *Save*.

// Plan settings page
. Check the following items in the *Plan settings* section of the page:

-* *Volume name template*: Specifies a template for the volume interface name for the VMs in your plan.
+.. *Volume name template*: Specifies a template for the volume interface name for the VMs in your plan.
+
The template follows the Go template syntax and has access to the following variables:

@@ -230,7 +230,7 @@ Variable names cannot exceed 63 characters.
Changes you make on the *Virtual Machines* tab override any changes on the *Plan details* page.
====

-* *PVC name template*: Specifies a template for the name of the persistent volume claim (PVC) for the VMs in your plan.
+.. *PVC name template*: Specifies a template for the name of the persistent volume claim (PVC) for the VMs in your plan.
+
The template follows the Go template syntax and has access to the following variables:

@@ -267,7 +267,7 @@ Variable names cannot exceed 63 characters.
Changes you make on the *Virtual Machines* tab override any changes on the *Plan details* page.
====

-* *Network name template*: Specifies a template for the network interface name for the VMs in your plan.
+.. *Network name template*: Specifies a template for the network interface name for the VMs in your plan.
+
The template follows the Go template syntax and has access to the following variables:

@@ -304,12 +304,54 @@ Variable names cannot exceed 63 characters.
Changes you make on the *Virtual Machines* tab override any changes on the *Plan details* page.
====

-* *Raw copy mode*: By default, during migration, virtual machines (VMs) are converted using a tool named `virt-v2v` that makes them compatible with {virt}. For more information about the virt-v2v conversion process, see 'How {project-short} uses the virt-v2v tool' in _Migrating your virtual machines to Red Hat {virt}_. _Raw copy mode_ copies VMs without converting them. This allows for faster conversions, migrating VMs running a wider range of operating systems, and supporting migrating disks encrypted using Linux Unified Key Setup (LUKS) without needing keys. However, VMs migrated using raw copy mode might not function properly on {virt}.
+.. *Raw copy mode*: By default, during migration, virtual machines (VMs) are converted using a tool named `virt-v2v` that makes them compatible with {virt}. For more information about the virt-v2v conversion process, see 'How {project-short} uses the virt-v2v tool' in _Migrating your virtual machines to Red Hat {virt}_. _Raw copy mode_ copies VMs without converting them. This allows for faster conversions, migrating VMs running a wider range of operating systems, and supporting migrating disks encrypted using Linux Unified Key Setup (LUKS) without needing keys. However, VMs migrated using raw copy mode might not function properly on {virt}.

** To use raw copy mode for your migration plan, do the following:
*** Click the *Edit* icon.
*** Toggle the *Raw copy mode* switch.
*** Click *Save*.

.. *VM target node selector*, *VM target labels*, and *VM target affinity rules* are options that support VM target scheduling, a feature that lets you direct {project-short} to migrate virtual machines (VMs) to specific nodes or workloads (pods) of {virt} as well as to schedule when the VMs are powered on.
+
For more information on the feature in general, see TBD Target VM scheduling options. For more details on using the feature with the UI, see TBD Scheduling target VMs from the user interface.

* *VM target node selector* allows you to create mandatory exact-match key-value label pairs that the target node must possess. If no node on the cluster has all of the specified labels, the VM is not scheduled and remains in a `Pending` state until a node that matches the key-value label pairs has space available.

** To use the VM target node selector for your migration plan, do the following:
*** Click the *Edit* icon.
*** Enter a key-value label pair. For example, to require that all VMs in the plan be migrated to your `east` data center, enter `dataCenter` as your *key* and `east` as your *value*, as shown in the sketch after these steps.
*** To add another key-value label pair, click *+* and enter another key-value pair.
*** Click *Save*.
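+
For example, the `dataCenter`/`east` pair above would be expressed in the `Plan` CR as the following sketch (see the CLI procedure for the exact field placement):
+
[source,yaml]
----
targetNodeSelector:
  dataCenter: east
----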

* *VM target labels* allows you to apply organizational or operational labels to migrated VMs for identification and management. For example, you can use these labels to specify a different scheduler for your migrated VMs by creating a specific target VM label for that scheduler, as shown in the sketch after these steps.

** To use VM target labels for your migration plan, do the following:
*** Click the *Edit* icon.
*** Enter one or more VM target labels.
*** Click *Save*.
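+
As a sketch, a target label that routes migrated VMs to a hypothetical custom scheduler might look like this in the `Plan` CR (the label key and value are illustrative):
+
[source,yaml]
----
targetLabels:
  scheduler: my-custom-scheduler   # illustrative label consumed by your own tooling
----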

* *VM target affinity rules*: Target affinity rules let you use conditions to either require or prefer scheduling on specific nodes or workloads (pods).
+
Target anti-affinity rules let you prevent VMs from being scheduled to run on selected workloads (pods), or state a preference that they not be scheduled there. These kinds of rules offer more flexible placement control than rigid node selector rules because they support conditionals such as `In` or `NotIn`. For example, you could require that a VM be powered on "only if it is migrated to node A _or_ if it is migrated to an SSD disk, but it _cannot_ be migrated to a node for which `license-tier=silver` is true."
+
Additionally, both types of rules allow you to include both _hard_ and _soft_ conditions in the same rule. A hard condition is a requirement, and a soft condition is a preference. The previous example used only hard conditions. A rule that states that "a VM can be powered on if it is migrated to node A _or_ if it is migrated to an SSD disk, but it is preferred not to migrate it to a node for which `license-tier=silver` is true" is an example of a rule that uses soft conditions.
+
{project-short} supports target affinity rules at both the node level and the workload (pod) level. It supports anti-affinity rules at the workload (pod) level only.
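+
As a sketch, the soft `license-tier` preference from the earlier example might translate into the following rule (the label key and value are illustrative):
+
[source,yaml]
----
targetAffinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1                  # weak preference to avoid silver-tier nodes
      preference:
        matchExpressions:
        - key: license-tier
          operator: NotIn
          values:
          - silver
----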

** To use VM target affinity rules for your migration plan, do the following:
*** Click the *Edit* icon.
*** Click *Add affinity rule*.
*** Select the *Type* of affinity rule from the list. Valid options: Node Affinity, Workload (pod) Affinity, Workload (pod) Anti-Affinity.
*** Select the *Condition* from the list. Valid options: Preferred during scheduling (soft condition), Required during scheduling (hard condition).
*** Soft condition only: Enter a numerical *Weight*. The higher the weight, the stronger the preference. Valid options: whole numbers from 1 to 100.
*** Enter a *Topology key*, the key for the node label that the system uses to denote the domain.
*** Optional: Select the *Workload labels* that you want to set by doing the following:
**** Enter a *Key*.
**** Select an *Operator* from the list. Valid options: `Exists`, `DoesNotExist`, `In`, and `NotIn`.
**** Enter a *Value*.
*** To add another label, click *Add expression* and add another key-value pair with an operator.
*** Click *Save affinity rule*.
*** To add another affinity rule, click *Add affinity rule*. Rules with a preferred condition stack with an `AND` relation between them. Rules with a required condition stack with an `OR` relation between them.
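+
As a sketch, two required rules stacked with an `OR` relation render as two `nodeSelectorTerms` entries; a VM can be scheduled if either term matches (the label keys and values are illustrative):
+
[source,yaml]
----
targetAffinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:        # term 1: nodes in the east data center
        - key: dataCenter
          operator: In
          values:
          - east
      - matchExpressions:        # term 2: nodes with SSD storage
        - key: disktype
          operator: In
          values:
          - ssd
----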
+
{project-short} validates any changes you make on this page.
