While trying to upgrade the hyperkube version on the Rancher cluster, Terraform throws the errors below and the hyperkube upgrade does not happen on terraform apply. I would like to understand if there is a way to overcome this problem.
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.rancher.module.management_cluster.kubernetes_job.nginx_wait to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/kubernetes" produced an invalid new value for .spec[0].selector[0].match_labels: was null, but now cty.MapValEmpty(cty.String).
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.rancher.module.management_cluster.kubernetes_job.create_cattle_namespace to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/kubernetes" produced an invalid new value for .spec[0].selector[0].match_labels: was null, but now cty.MapValEmpty(cty.String).
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.rancher.module.management_cluster.kubernetes_job.cert_manager_webhook_wait to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/kubernetes" produced an invalid new value for .spec[0].selector[0].match_labels: was null, but now cty.MapValEmpty(cty.String).
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.rancher.module.management_cluster.kubernetes_job.cert_manager_wait to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/kubernetes" produced an invalid new value for .spec[0].selector[0].match_labels: was null, but now cty.MapValEmpty(cty.String).
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.rancher.module.management_cluster.kubernetes_job.cert_manager_cainjector_wait to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/kubernetes" produced an invalid new value for .spec[0].selector[0].match_labels: was null, but now cty.MapValEmpty(cty.String).
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
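For context, the failing resources are plain kubernetes_job blocks that wait on deployments inside the module. Below is a minimal sketch of one of them; the resource name matches the errors above, but the namespace, image, labels, and command are illustrative placeholders, not the module's exact config. Note that the config sets no selector at all, so the attribute the error complains about is entirely provider-computed.

# Minimal illustrative sketch; namespace, image, labels, and command
# are placeholders, not the module's actual values.
resource "kubernetes_job" "nginx_wait" {
  metadata {
    name      = "nginx-wait"
    namespace = "ingress-nginx" # placeholder
  }

  spec {
    # No selector block here: spec[0].selector[0].match_labels is left
    # for the provider to compute, and that computed value is what the
    # error says flipped from null to an empty map between plan and apply.
    template {
      metadata {
        labels = {
          app = "nginx-wait" # placeholder label
        }
      }
      spec {
        container {
          name  = "wait"
          image = "bitnami/kubectl:1.25" # placeholder image
          command = [
            "kubectl", "rollout", "status",
            "deployment/ingress-nginx-controller", # placeholder target
            "--namespace", "ingress-nginx",
          ]
        }
        restart_policy = "Never"
      }
    }
    backoff_limit = 4
  }

  wait_for_completion = true
}

Since the message says the bug is in the provider itself, one possible mitigation (assuming the bug is version-specific; I have not confirmed which releases are affected) would be to pin hashicorp/kubernetes to a different release:

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.16.1" # hypothetical pin; pick a release that does not exhibit the bug
    }
  }
}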
@Sankarsh-vittal can you describe the steps you took to get this error? What version of rke are you using? What did you change in your tf module before the apply?