{project-first} 2.7 has the following resolved issues:
In earlier releases of {project-short}, the main controller container of `forklift-controller` crashed during VMware migrations because users created snapshots of the virtual machines (VMs) during the migrations. As a result, some VMs migrated, but others failed to migrate. This issue has been resolved in {project-short} 2.7.11, but users are cautioned not to create snapshots of VMs during migrations. (MTV-2164)
In earlier releases of {project-short}, the OVN secondary network did not work with the Multus default network override. The importer pod became stuck because it was created with the wrong default network override annotation when multiple networks were configured. This issue has been resolved in {project-short} 2.7.10. (MTV-1645)
In earlier releases of {project-short}, the user interface relating to the Preserve IPs VM setting was difficult to understand in the chosen setting mode. This issue has been resolved in {project-short} 2.7.10. (MTV-1743)
In earlier releases of {project-short}, when migrating a Windows Server 2022 VMware virtual machine (VM) to {virt}, the virtual machine instance (VMI) had a Trusted Platform Module (TPM) device configured, even though TPM was not configured in the original VM. This issue has been resolved in {project-short} 2.7.10. (MTV-1939)
In earlier releases of {project-short}, the `forklift-controller` inventory container would panic, and the VM would not be migrated. This issue has been resolved in {project-short} 2.7.9. (MTV-1610)
In earlier releases of {project-short}, when migrating virtual machines (VMs) from VMware vSphere 7 to {virt}, vCenter sessions kept increasing. This issue has been resolved in {project-short} 2.7.9. (MTV-1929)
In earlier releases of {project-short}, the character `:` in the NFS URL of OVAs was not handled, which created malformed URLs. This issue has been resolved in {project-short} 2.7.8. (MTV-1856)
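A common source of this kind of breakage is that NFS endpoints are written in `host:/export` notation, so any code that derives a URL from them must split on the first colon only. A minimal Go sketch of that handling (illustrative only, not the actual {project-short} code; the function name is hypothetical):

[source,go]
----
package main

import (
	"fmt"
	"strings"
)

// splitNFS separates an NFS endpoint in "host:/export" notation into
// its host and export-path parts, splitting on the first ":" only so
// that any additional colons in the path are preserved.
func splitNFS(endpoint string) (host, export string, err error) {
	parts := strings.SplitN(endpoint, ":", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("not an NFS endpoint: %q", endpoint)
	}
	return parts[0], parts[1], nil
}

func main() {
	host, export, err := splitNFS("nfs.example.com:/exports/ovas")
	if err != nil {
		panic(err)
	}
	fmt.Println(host, export)
}
----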
In earlier releases of {project-short}, when changing the network from VMkernel to the Management network, the migration network changed as expected, but an error message was returned stating that the request could not be completed due to an incorrect user name or password. This issue has been resolved in {project-short} 2.7.8. (MTV-1862)
In earlier releases of {project-short}, migrating Red Hat Enterprise Linux (RHEL) virtual machines (VMs) with disks encrypted with Linux Unified Key Setup (LUKS) from VMware failed at the `DiskTransferV2v` stage. This issue has been resolved in {project-short} 2.7.8. (MTV-1864)
In earlier releases of {project-short}, VMs imported from vSphere always had the Q35 machine type set. However, that machine type is floating, meaning the VM application binary interface (ABI) can change between reboots. This issue has been resolved in {project-short} 2.7.8. (MTV-1865)
In earlier releases of {project-short}, {project-short} imported VMs with both the `requests.memory` field and the `memory.guest` field set. This prevented memory overcommit and memory hot plug, and caused unnecessary memory pressure on VMs that could lead to out-of-memory (OOM) errors. This issue has been resolved in {project-short} 2.7.8. (MTV-1866)
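For illustration, the distinction looks roughly like this with the KubeVirt Go API (`kubevirt.io/api/core/v1`); the 4Gi value is an arbitrary example. Setting only `memory.guest` and leaving `resources.requests.memory` unset is what leaves room for overcommit and hot plug:

[source,go]
----
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
	kubevirtv1 "kubevirt.io/api/core/v1"
)

func main() {
	guest := resource.MustParse("4Gi")

	// Only memory.guest is set. resources.requests.memory is left
	// unset so the platform can apply a lower request on its own,
	// which permits overcommit and memory hot plug.
	domain := kubevirtv1.DomainSpec{
		Memory: &kubevirtv1.Memory{Guest: &guest},
	}

	fmt.Printf("guest memory: %s, explicit memory request: %s\n",
		domain.Memory.Guest, domain.Resources.Requests.Memory())
}
----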
In earlier releases of {project-short}, `virt-v2v` failed when migrating a VM with two disks that had two different operating systems installed in a dual-boot configuration. This issue has been resolved in {project-short} 2.7.8. (MTV-1877)
In earlier releases of {project-short}, when performing a warm migration of 200 VMs, the {project-short} Controller could pause the migration during snapshot creation. This issue has been resolved in {project-short} 2.7.7. (MTV-1775)
In earlier releases of {project-short}, in some situations there was an extended delay before all VMs started to migrate. This issue has been resolved in {project-short} 2.7.7. (MTV-1774)
In earlier releases of {project-short}, a migration plan with multiple VMs from an ESXi host provider failed during the cutover phase. This issue has been resolved in {project-short} 2.7.7. (MTV-1753)
In earlier releases of {project-short}, when migrating a virtual machine (VM) with Secure Boot enabled, the VM had Secure Boot disabled after being migrated. This issue has been resolved in {project-short} 2.7.7. (MTV-1632)
In earlier releases of {project-short}, after creating a migration plan from a VMware provider, the VDDK validation failed because a `LimitRange` was not provided to add requests and limits to containers that do not define them. This issue has been resolved in {project-short} 2.7.7, with {project-short} setting limits by default so that the migration plan works out of the box. (MTV-1493)
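For reference, a `LimitRange` that injects default requests and limits into containers that omit them can be expressed with the Kubernetes Go types as in the following sketch (the name and values are illustrative, not the defaults that {project-short} applies):

[source,go]
----
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A LimitRange of type Container injects these defaults into any
	// container in the namespace that omits its own requests/limits.
	lr := corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "container-defaults"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				Default: corev1.ResourceList{ // default limits
					corev1.ResourceCPU:    resource.MustParse("500m"),
					corev1.ResourceMemory: resource.MustParse("512Mi"),
				},
				DefaultRequest: corev1.ResourceList{ // default requests
					corev1.ResourceCPU:    resource.MustParse("100m"),
					corev1.ResourceMemory: resource.MustParse("128Mi"),
				},
			}},
		},
	}
	fmt.Printf("%+v\n", lr.Spec.Limits[0])
}
----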
In earlier releases of {project-short}, there was no warning message for preserving static IPs while using Pod networking. This issue has been resolved in {project-short} 2.7.6. (MTV-1503)
In earlier releases of {project-short}, the option to schedule a cutover for an archived plan was available in the UI. This issue has been resolved in {project-short} 2.7.6, with the cutover action disabled for archived plans. (MTV-1729)
In earlier releases of {project-short}, when creating a new Migration Plan with the Create new plan wizard, after filling in the form, the Next button was misplaced to the left of the Back button. This issue has been resolved in {project-short} 2.7.6. (MTV-1732)
In earlier releases of {project-short}, Debian-based operating systems can keep their network configuration in the /etc/network/interfaces file, but information was not fetched from this configuration file when creating the udev rule to set the interface name. This issue has been resolved in {project-short} 2.7.6. (MTV-1711)
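Conceptually, the fix amounts to reading the MAC address out of the Debian-style configuration and emitting a udev rule that pins the interface name to it. A rough Go illustration of that idea (not the actual {project-short} code):

[source,go]
----
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Scan a Debian-style /etc/network/interfaces file and emit one udev
// rule per interface that declares a hardware address, pinning the
// interface name to that MAC after migration.
func main() {
	f, err := os.Open("/etc/network/interfaces")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var iface string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		switch {
		case len(fields) >= 2 && fields[0] == "iface":
			iface = fields[1]
		case len(fields) >= 3 && fields[0] == "hwaddress" && fields[1] == "ether" && iface != "":
			fmt.Printf("SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{address}==\"%s\", NAME=\"%s\"\n",
				strings.ToLower(fields[2]), iface)
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}
----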
In earlier releases of {project-short}, VMs could be removed from archived or archiving plans. This issue has been resolved in {project-short} 2.7.6: if a plan’s status is either archiving or archived, the option to remove VMs from that plan is blocked. (MTV-1713)
In earlier releases of {project-short}, after the first Disk Transfer step was completed, the cutover was set. However, during the Image Conversion step, not all data volumes completed; some of them remained stuck in the import-in-progress phase at 100% progress. This issue has been resolved in {project-short} 2.7.6. (MTV-1717)
In earlier releases of {project-short}, virtual machines (VMs) presented XFS filesystem corruption and other data corruption after a warm migration from VMware to {virt} using {project-short}. This issue has been resolved in {project-short} 2.7.5. (MTV-1679)
In earlier releases of {project-short}, after creating a migration plan for VMs with an NSX-T network attached from vSphere, the VM network mapping was missing, and adding a network mapping could not list NSX-T networks as source networks. This issue has been resolved in {project-short} 2.7.5. (MTV-1695, MTV-1140)
In earlier releases of {project-short}, cold migrating a Windows Server 2019 VM from {rhv-full} ({rhv-short}) to a remote cluster returned a `firmware.bootloader` setting error, `admission webhook "virtualmachine-validator.kubevirt.io" denied the request`, during the VirtualMachineCreation phase. This issue has been resolved in {project-short} 2.7.5. (MTV-1613)
In earlier releases of {project-short}, `PreferredUseEfi` was applied even when the BIOS was already enabled within the `VirtualMachineInstanceSpec`. In {project-short} 2.7.5, `PreferredEfi` is only applied when a user has not provided their own EFI configuration and the BIOS is not enabled. (CNV-49381)
In earlier releases of {project-short}, in some cases, the destination virtual machine (VM) was observed to have XFS filesystem corruption after being migrated from VMware to {virt} using {project-short}. This issue has been resolved in {project-short} 2.7.4. (MTV-1656)
In earlier releases of {project-short}, the `forklift-controller` incorrectly logged a `Did not find CDI importer pod for DataVolume` error during the `CopyDisks` phase. This issue has been resolved in {project-short} 2.7.4. (MTV-1627)
In earlier releases of {project-short}, when running the `virt-v2v` guest conversion, the migration plan did not fail when the conversion pod failed, as it should have. This issue has been resolved in {project-short} 2.7.3. (MTV-1569)
In earlier releases of {project-short}, having a large number of virtual machines (VMs) in the inventory could cause the inventory controller to panic and return a `concurrent write to websocket connection` warning. This issue was caused by concurrent writes to the WebSocket connection and has been addressed by the addition of a lock, so that each goroutine waits before sending its response from the server. This issue has been resolved in {project-short} 2.7.3. (MTV-1220)
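For context, WebSocket connections such as those of `gorilla/websocket` allow only one concurrent writer, so the usual pattern is exactly the one described: guard writes with a mutex. A minimal sketch of that pattern (the actual controller code may differ):

[source,go]
----
package wsutil

import (
	"sync"

	"github.com/gorilla/websocket"
)

// safeConn wraps a websocket connection so that concurrent goroutines
// cannot interleave writes; *websocket.Conn itself supports only one
// writer at a time.
type safeConn struct {
	mu   sync.Mutex
	conn *websocket.Conn
}

// WriteMessage takes the lock so each goroutine waits its turn before
// sending its response from the server.
func (s *safeConn) WriteMessage(messageType int, data []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.conn.WriteMessage(messageType, data)
}
----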
In earlier releases of {project-short}, the VM selection checkbox disappeared after selecting multiple VMs in the Migration Plan. This issue has been resolved in {project-short} 2.7.3. (MTV-1546)
In earlier releases of {project-short}, the `forklift-controller` would crash during an OVA plan migration, returning a `runtime error: invalid memory address or nil pointer dereference` panic. This issue has been resolved in {project-short} 2.7.3. (MTV-1577)
In earlier releases of {project-short}, after creating a plan with an {virt} source provider, the Migration Plan failed with the error `The plan is not ready - VMNetworksNotMapped`. This issue has been resolved in {project-short} 2.7.2. (MTV-1201)
In earlier releases of {project-short}, when creating a Migration Plan for an {virt} to {virt} migration using the Plan Creation Form, the generated network map was missing the source namespace, which caused a `VMNetworkNotMapped` error on the plan. This issue has been resolved in {project-short} 2.7.2. (MTV-1297)
In earlier releases of {project-short}, the DataVolume (DV), PersistentVolumeClaim (PVC), and PersistentVolume (PV) continued to exist after the migration plan was archived and deleted. This issue has been resolved in {project-short} 2.7.2. (MTV-1477)
In earlier releases of {project-short}, when warm migrating a virtual machine (VM) that had several disks, you had to wait for the complete VM to be migrated, and the scheduler was halted until all of the disks finished before another migration could start. This issue has been resolved in {project-short} 2.7.2. (MTV-1537)
In earlier releases of {project-short}, warm migration did not function as expected. When running a warm migration with more VMs than the `MaxInFlight` disk limit allowed, the VMs over this number did not start migrating until the cutover. This issue has been resolved in {project-short} 2.7.2. (MTV-1543)
In earlier releases of {project-short}, when attempting to migrate a VMware VM with a non-compliant Kubernetes name, the OpenShift console returned a warning that the VM would be renamed. However, after the Migration Plan started, it hung because the migration pod was in an `Error` state. This issue has been resolved in {project-short} 2.7.2. (MTV-1555)
In earlier releases of {project-short}, when migrating a VM by using warm migration, if the VM had more disks than the `MAX_VM_INFLIGHT` limit, the VM was not scheduled and the migration was not started. This issue has been resolved in {project-short} 2.7.2. (MTV-1573)
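A hypothetical sketch of the corrected scheduling rule, to make the behavior concrete (this is an illustration of the description above, not the actual controller code):

[source,go]
----
package scheduler

// canStartNextVM reports whether another VM migration may begin under
// a cap on concurrent disk transfers. The corrected rule admits a VM
// whenever capacity is free, even if its own disk count exceeds the
// cap; its remaining disks simply wait for slots to open up.
// (Hypothetical reconstruction, not the actual controller code.)
func canStartNextVM(inFlightDisks, maxVMInFlight int) bool {
	return inFlightDisks < maxVMInFlight
}
----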
In earlier releases of {project-short}, when the Changed Block Tracking (CBT) flag was enabled on a running VMware VM by adding both the `ctkEnabled=TRUE` and the `scsi0:0.ctkEnabled=TRUE` parameters, an error message, `Danger alert: The plan is not ready - VMMissingChangedBlockTracking`, was returned, and the migration plan was prevented from working. This issue has been resolved in {project-short} 2.7.2. (MTV-1576)
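For illustration, enabling CBT programmatically corresponds to the advanced settings named above; with the `govmomi` library the reconfiguration could be sketched as follows (illustrative only; on a running VM the flag takes effect only after a stun/unstun cycle, such as creating and deleting a snapshot):

[source,go]
----
package vmware

import (
	"context"

	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25/types"
)

// enableCBT turns on changed block tracking for a VM, which is what
// the ctkEnabled=TRUE / scsi0:0.ctkEnabled=TRUE advanced settings
// accomplish.
func enableCBT(ctx context.Context, vm *object.VirtualMachine) error {
	enabled := true
	spec := types.VirtualMachineConfigSpec{
		ChangeTrackingEnabled: &enabled,
		ExtraConfig: []types.BaseOptionValue{
			&types.OptionValue{Key: "ctkEnabled", Value: "TRUE"},
			&types.OptionValue{Key: "scsi0:0.ctkEnabled", Value: "TRUE"},
		},
	}
	task, err := vm.Reconfigure(ctx, spec)
	if err != nil {
		return err
	}
	// On a running VM the setting takes effect after the next
	// stun/unstun cycle (for example, create and delete a snapshot).
	return task.Wait(ctx)
}
----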
In earlier releases of {project-short}, if the name of a virtual machine (VM) contained the character `.`, it was changed to `-` when the VM was migrated. This issue has been resolved in {project-short} 2.7.0. (MTV-1292)
In earlier releases of {project-short}, when a mapping resource of a plan failed, a status condition indicating the failure was not added to the plan. This issue has been resolved in {project-short} 2.7.0, with a status condition indicating the failed mapping being added. (MTV-1461)
In earlier releases of {project-short}, interface configuration (ifcfg) files with a hardware address (HWaddr) of the Ethernet interface caused the name of the network interface controller (NIC) to change. This issue has been resolved in {project-short} 2.7.0. (MTV-1463)
In earlier releases of {project-short}, imports failed when there were special characters in the parameters of the VMX file. This issue has been resolved in {project-short} 2.7.0. (MTV-1472)
In earlier releases of {project-short}, an `invalid memory address or nil pointer dereference` panic was observed. This was caused by a refactor and could be triggered when there was a problem with the inventory pod. This issue has been resolved in {project-short} 2.7.0. (MTV-1482)
In earlier releases of {project-short}, the static Internet Protocol version 4 (IPv4) address was changed after a warm migration of Windows Server 2022 and Windows Server 2019 VMs. This issue has been resolved in {project-short} 2.7.0. (MTV-1491)
In earlier releases of {project-short}, `virt-v2v-in-place` for the warm migration was missing arguments that were available in `virt-v2v` for the cold migration. This issue has been resolved in {project-short} 2.7.0. (MTV-1495)
In earlier releases of {project-short}, the default gateway settings were changed after migrating Windows Server 2022 VMs with the `preserve static IPs` setting. This issue has been resolved in {project-short} 2.7.0. (MTV-1497)