@noahlehmann commented May 26, 2025

This PR will resolve #1236.

The option `import_from` was added in https://github.com/Telmate/terraform-provider-proxmox/pull/606 but never made it into the `3.x.x` versions of the provider.

Simply adding the option to the SDK (which gained that feature with Telmate/proxmox-api-go#412) is enough to create the disks.

However, the disk will not be resized, even if a size different from the imported image is requested.
A second plan will show that the sizes still need to be changed (and applied) again.

There are still open points:

  • ignore `import_from` for existing disks, as Proxmox doesn't save this option and won't return it either. It is only used for disk creation.
  • let the TF provider resize the disk after creation so the state matches the planned resources.

@Tinyblargon Tinyblargon added type/feature Completely new functionality resource/qemu Issue or PR related to Qemu resource labels May 26, 2025
@noahlehmann (Author)
TL;DR

This took me far longer than expected; the nesting of `disk`, and especially `disks`, plus the way Terraform core and the provider SDK handle state and the passed ResourceData, almost drove me insane.
Tested manually for now; resizing the disks is still a TODO.
I won't get to continue until the end of next week.

@Tinyblargon some of this is rather messy, let me know what you think and if you have any other approaches.

Current State

For now the `import_from` value returned by the API is ignored and the attribute is set to `WriteOnly = true`. This prevents it from being written to state and therefore from being persisted, since Terraform core/the SDK compare the saved state with the values fetched from the Proxmox SDK, which always returns null for `import_from`, as the API won't return a value for it either (it is a one-time value).

The implementation ALWAYS passes the `import_from` value to the SDK and therefore to the Proxmox API, but the API ignores it for already-created disks. My conclusion is that we can safely continue to do so.
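The write-only behavior described above can be pictured with a toy model (these are NOT the real SDK types; `Attr`, `buildState`, and the attribute names are made up for illustration): write-only attributes are consumed when building the API request but never copied into the saved state, so a later refresh can never produce a diff for them.

```go
package main

import "fmt"

// Attr is a stand-in for a schema attribute; WriteOnly mirrors the
// SDK v2 flag of the same name.
type Attr struct {
	Name      string
	WriteOnly bool
}

// buildState copies config values into state, skipping write-only
// attributes -- they are used once at create time and never persisted.
func buildState(attrs []Attr, config map[string]string) map[string]string {
	state := make(map[string]string)
	for _, a := range attrs {
		if a.WriteOnly {
			continue // never lands in state, so refresh sees no drift
		}
		if v, ok := config[a.Name]; ok {
			state[a.Name] = v
		}
	}
	return state
}

func main() {
	attrs := []Attr{
		{Name: "storage"},
		{Name: "import_from", WriteOnly: true},
	}
	config := map[string]string{
		"storage":     "local-lvm",
		"import_from": "/mnt/images/debian.qcow2",
	}
	// import_from is absent from the resulting state map.
	fmt.Println(buildState(attrs, config))
}
```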

Regarding the WriteOnly attribute

See: https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/write-only-arguments

Unfortunately we need to use some rather experimental features of the v2 SDK (reading the raw config) to actually read `import_from` when it is set to `WriteOnly = true`.
The nested structure and the possibility of redundant `disk` or `disks` fields (continuing down the nesting) make it hard to find an elegant solution (though honestly it might be my lack of Golang knowledge).

See the proxmox/Internal/resource/qemu/disk/sdk changes here: 9ba620e

As far as I have tested, this works reliably, as long as the assumption holds that the nested lists always contain exactly one element (e.g. `disks.0.ide.0.ide0.0.disk.0.import_from`).
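The single-element-list assumption can be illustrated with a self-contained sketch (a stand-in for walking the raw config; `lookup` and the data shapes are made up here, the real code would traverse the cty values returned by the SDK):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// lookup walks a raw config tree (maps and lists, as decoded from HCL)
// along a dotted path like "disks.0.ide.0.ide0.0.disk.0.import_from".
func lookup(raw any, path string) (any, bool) {
	cur := raw
	for _, part := range strings.Split(path, ".") {
		switch node := cur.(type) {
		case map[string]any:
			v, ok := node[part]
			if !ok {
				return nil, false
			}
			cur = v
		case []any:
			// In practice these lists always hold exactly one element.
			i, err := strconv.Atoi(part)
			if err != nil || i < 0 || i >= len(node) {
				return nil, false
			}
			cur = node[i]
		default:
			return nil, false
		}
	}
	return cur, true
}

func main() {
	// Mirrors the disks.0.ide.0.ide0.0.disk.0.import_from nesting.
	raw := map[string]any{
		"disks": []any{map[string]any{
			"ide": []any{map[string]any{
				"ide0": []any{map[string]any{
					"disk": []any{map[string]any{
						"import_from": "local:import/debian.qcow2",
					}},
				}},
			}},
		}},
	}
	v, ok := lookup(raw, "disks.0.ide.0.ide0.0.disk.0.import_from")
	fmt.Println(ok, v)
}
```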

Todo

The disks will still only have the size of the imported image; a second plan and apply will fix that, but this should be addressed.

@Tinyblargon (Collaborator)
Thank you for your work on this feature.

A few points of feedback.

WriteOnly

We get around using WriteOnly for the password in CloudInit with this trick:

d.Set(RootPassword, d.Get(RootPassword).(string))
Your WriteOnly implementation is clearer and would be preferable.

Setting ImportFrom

Why isn't the logic for setting the import_from handled in the following places instead of mapWriteOnlyImportFromDisksToType?
As I understand it, you loop through the disks and when import_from is set all the settings of the disk will be cleared and import_from is added back?

} else { // normal disk
	ide.Disk = &pveAPI.QemuIdeDisk{
		AsyncIO:       pveAPI.QemuDiskAsyncIO(schema[schemaAsyncIO].(string)),
		Backup:        schema[schemaBackup].(bool),
		Bandwidth:     sdk_Disk_QemuDiskBandwidth(schema),
		Cache:         pveAPI.QemuDiskCache(schema[schemaCache].(string)),
		Discard:       schema[schemaDiscard].(bool),
		EmulateSSD:    schema[schemaEmulateSSD].(bool),
		Format:        default_format(schema[schemaFormat].(string)),
		Replicate:     schema[schemaReplicate].(bool),
		Serial:        pveAPI.QemuDiskSerial(schema[schemaSerial].(string)),
		WorldWideName: pveAPI.QemuWorldWideName(schema[schemaWorldWideName].(string))}
	var tmpDiags diag.Diagnostics
	ide.Disk.SizeInKibibytes, tmpDiags = sdk_Disk_Size(slot, schema)
	diags = append(diags, tmpDiags...)
	ide.Disk.Storage, tmpDiags = sdk_Disk_Storage(slot, schema)
	diags = append(diags, tmpDiags...)
	if schema[schemaDiskFile].(string) != "" {
		diags = append(diags, warningDisk(slot, schemaDiskFile, schemaType, enumDisk, ""))
	}
}

if ok && len(tmpDisk) == 1 && tmpDisk[0] != nil {
	diskMap := tmpDisk[0].(map[string]any)
	disk := pveAPI.QemuIdeDisk{
		Backup:          diskMap[schemaBackup].(bool),
		Bandwidth:       sdk_Disks_QemuDiskBandwidth(diskMap),
		Discard:         diskMap[schemaDiscard].(bool),
		EmulateSSD:      diskMap[schemaEmulateSSD].(bool),
		Format:          pveAPI.QemuDiskFormat(diskMap[schemaFormat].(string)),
		Replicate:       diskMap[schemaReplicate].(bool),
		SizeInKibibytes: pveAPI.QemuDiskSize(convert_SizeStringToKibibytes_Unsafe(diskMap[schemaSize].(string))),
		Storage:         diskMap[schemaStorage].(string)}
	if asyncIO, ok := diskMap[schemaAsyncIO].(string); ok {
		disk.AsyncIO = pveAPI.QemuDiskAsyncIO(asyncIO)
	}
	if cache, ok := diskMap[schemaCache].(string); ok {
		disk.Cache = pveAPI.QemuDiskCache(cache)
	}
	if serial, ok := diskMap[schemaSerial].(string); ok {
		disk.Serial = pveAPI.QemuDiskSerial(serial)
	}
	return &pveAPI.QemuIdeStorage{Disk: &disk}
}

Suggestion

Maybe by getting the previous values of `disk` and `disks` we could check whether the user changed `import_from`?
This could allow importing a disk over an existing one when the source changes.

oldVal, newVal := d.GetChange("my_setting")
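That idea could be sketched like this (`needsReimport` is a hypothetical helper for illustration only; the real code would feed it the values from `d.GetChange`, and the WriteOnly caveat discussed below still applies, since a write-only value has no stored "old" side):

```go
package main

import "fmt"

// needsReimport mimics the d.GetChange pattern: given the previously
// stored and newly configured import_from values, decide whether the
// disk should be imported again over the existing one.
func needsReimport(oldVal, newVal string) bool {
	// An empty new value means the user removed import_from; nothing to do.
	if newVal == "" {
		return false
	}
	return oldVal != newVal
}

func main() {
	fmt.Println(needsReimport("", "local:import/a.qcow2"))                     // first import
	fmt.Println(needsReimport("local:import/a.qcow2", "local:import/a.qcow2")) // source unchanged
	fmt.Println(needsReimport("local:import/a.qcow2", "local:import/b.qcow2")) // source changed
}
```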

Upstream

The disks will still only have the size of the imported image; a second plan and apply will fix that, but this should be addressed.

This feels like an issue that should be resolved in the SDK/library itself, not in Terraform.
Ideally, in the future, providing ImportFrom to the upstream SDK should trigger a full re-import of the disk.

@noahlehmann (Author)

@Tinyblargon thanks for taking the time to assist! Greatly appreciated.
Before I continue, here are some thoughts on your replies.


Why isn't the logic for setting the import_from handled in the following places instead of mapWriteOnlyImportFromDisksToType?

Mostly because I didn't want to change the function parameters in sdk_disks.go and sdk_disk.go, as I would need to pass either the ResourceData as a whole or ResourceData.GetRawConfigAt('RootDisk[s]').

E.g. for IDE disks (top level)

func sdk_Disk_QemuIdeDisks(ide *pveAPI.QemuIdeDisks, id string, schema map[string]interface{}) diag.Diagnostics {

func sdk_Disks_QemuIdeDisks(schema map[string]any) *pveAPI.QemuIdeDisks {

Plus the lower-level functions, and all of these for the three other disk types as well.

If you think passing the raw config could be helpful in the future, I could refactor.


As I understand it, you loop through the disks and when import_from is set all the settings of the disk will be cleared and import_from is added back?

The docs on WriteOnly unfortunately state that such a value needs to be read from the raw practitioner's config, which is why I loop over the settings. I only set the `import_from` value though; all other settings are untouched.


Maybe with getting the previous value of the disk and disks we can check if the user changed import_from?

If we keep `import_from` as WriteOnly we can't store the value in state, so we would need to handle this differently. What are your thoughts on that?

This could allow for importing a disk over an existing one when the source changes.

I do like the idea of that feature; it would allow users to treat a cloud image as immutable and do upgrades via import. However, I would make this a separate feature once this PR is done, though we do need to keep the WriteOnly point from above in mind when preparing for it. Maybe this should also require a flag (like overwrite_import_from), as it would behave differently from the PVE API.


This feels like an issue that should be resolved in the SDK/library itself, not in Terraform.
Ideally, in the future, providing ImportFrom to the upstream SDK should trigger a full re-import of the disk.

It depends: if the provider SDK should behave like the actual PVE API (which I, as a user, would expect), I would add this to the Terraform logic. But that's personal preference.
We should decide on this though, because it either needs to be done in this PR or upstream should be fixed first.

@noahlehmann (Author)

@Tinyblargon any updates on this? thoughts?

Successfully merging this pull request may close these issues:
regression: No way to set 'import-from' parameter since 3.0.1-rc2