
[Bug]: aws_s3_bucket_lifecycle_configuration times out when another Lifecycle config is present. #41199

Open
Tesseract99 opened this issue Feb 3, 2025 · 4 comments
Labels
  • bug: Addresses a defect in current functionality.
  • needs-triage: Waiting for first response or review from a maintainer.
  • service/s3: Issues and PRs that pertain to the s3 service.
  • waiting-response: Maintainers are waiting on response from community or contributor.

Comments

@Tesseract99

Tesseract99 commented Feb 3, 2025

Terraform Core Version

1.5.7

AWS Provider Version

5.84.0

Affected Resource(s)

aws v5.84.0 - aws_s3_bucket_lifecycle_configuration

Expected Behavior

Creation of aws_s3_bucket_lifecycle_configuration.my_bucket_cfg should have completed without any error.

In fact, the same code works fine when:

  1. Using AWS provider v4.67.0, or
  2. I manually delete the additional lifecycle rule that is auto-created by the AWS organization for all new S3 buckets while the creation is still in progress (before the timeout).

Actual Behavior

Terraform apply fails with the error below. However, the S3 bucket and even the lifecycle configuration are actually created successfully, as can be verified in the AWS console.

 aws_s3_bucket_lifecycle_configuration.my_bucket_cfg: Still creating... [3m0s elapsed]
 ╷
 │ Error: waiting for S3 Bucket Lifecycle Configuration (my-bucket-name) create: timeout while waiting for state to become 'true' (last state: 'false', timeout: 3m0s)
 │ 
 │   with aws_s3_bucket_lifecycle_configuration.my_bucket_cfg,
 │   on main.tf line 37, in resource "aws_s3_bucket_lifecycle_configuration" "my_bucket_cfg":
 │   37: resource "aws_s3_bucket_lifecycle_configuration" "my_bucket_cfg" {

Relevant Error/Panic Output Snippet

Terraform Configuration Files

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5"
    }
  }
}

##bucket
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-bucket-name"
  force_destroy = "true"

}

## versioning
resource "aws_s3_bucket_versioning" "my_bucket_vsn" {
  bucket = aws_s3_bucket.my_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}


##lifecycle
resource "aws_s3_bucket_lifecycle_configuration" "my_bucket_cfg" {
  bucket = aws_s3_bucket.my_bucket.id
  depends_on = [aws_s3_bucket_versioning.my_bucket_vsn]

  rule {
    id      = "delete"
    status  = "Enabled"
  filter {
    prefix  = "/"
    }
    expiration {
      days = "7"
    }
  }

}

Steps to Reproduce

  1. Terraform v1.5.7
  2. Use the configuration above, changing the bucket name.
  3. Have the AWS organization auto-create a default lifecycle rule whenever a new S3 bucket is created.
  4. Run terraform apply.

Debug Output

HTTP Response Received: tf_provider_addr=registry.terraform.io/hashicorp/aws tf_resource_type=aws_s3_bucket_lifecycle_configuration rpc.system=aws-api tf_aws.sdk=aws-sdk-go-v2 http.response.header.x_amz_transition_default_minimum_object_size=all_storage_classes_128K http.status_code=200
http.response.body=
<?xml version="1.0" encoding="UTF-8"?>
<LifecycleConfiguration
	xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
	<Rule>
		<ID>delete</ID>
		<Filter>
			<Prefix>/</Prefix>
		</Filter>
		<Status>Enabled</Status>
		<Expiration>
			<Days>7</Days>
		</Expiration>
	</Rule>
	<Rule>
		<!-- organization default rule (auto added) -->
	</Rule>
</LifecycleConfiguration>

Panic Output

No response

Important Factoids

  1. The same Terraform code works (without the timeout error) with AWS provider v4.67.0.
  2. If, while the lifecycle configuration is being created, I manually delete the additional S3 lifecycle rule that the AWS organization auto-adds to every new bucket, the creation completes without any timeout error.
  3. So I suspect that the organization lifecycle rule added to every new S3 bucket is "confusing" the provider. But the same code works fine with provider v4.67, so has something changed in the provider's behavior in v5.x?

References

#25939

Would you like to implement a fix?

None

Tesseract99 added the bug label on Feb 3, 2025

github-actions bot commented Feb 3, 2025

Community Note

Voting for Prioritization

  • Please vote on this issue by adding a 👍 reaction to the original post to help the community and maintainers prioritize this request.
  • Please see our prioritization guide for information on how we prioritize.
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.

Volunteering to Work on This Issue

  • If you are interested in working on this issue, please leave a comment.
  • If this would be your first contribution, please review the contribution guide.

github-actions bot added the service/s3 and needs-triage labels on Feb 3, 2025
@rishabhToshniwal

Hi,

I am facing a similar issue. Additionally, when I add a sleep of 60 seconds between the two aws_s3_bucket_lifecycle_configuration resources, I see that on each apply one gets created and the other gets destroyed, alternating between the two.

resource "aws_s3_bucket_lifecycle_configuration" "logging_bucket" {
  count = var.enable_lifecycle_configuration ? 1 : 0
  bucket = aws_s3_bucket.logging_bucket.id
  rule {
    id     = "noncurrent-version-expiration-object-delete-marker"
    status = "Enabled"
    filter { }
    noncurrent_version_expiration {
      # days = "${var.s3_lifecycle_noncurrent_version_expiration}}"
      noncurrent_days = "${var.s3_lifecycle_noncurrent_version_expiration}"
    }
    expiration {
       days = "${var.s3_lifecycle_noncurrent_version_expiration}"
       expired_object_delete_marker = true
    }
    abort_incomplete_multipart_upload {
       days_after_initiation = "${var.s3_lifecycle_noncurrent_version_expiration}"
    }
  }
}

resource "time_sleep" "wait_before_first_s3_lifecyle_creation" {
  count = var.enable_lifecycle_configuration ? 1 : 0
  create_duration = "60s"
  depends_on = [aws_s3_bucket_lifecycle_configuration.logging_bucket]
}


resource "aws_s3_bucket_lifecycle_configuration" "logging_bucket_delete_expired_object" {
  count = var.enable_lifecycle_configuration ? 1 : 0
  bucket = aws_s3_bucket.logging_bucket.id
  rule {
    id     = "delete-expired-object-delete-markers-incomplete-multipart-uploads"
    status = "Enabled"
    filter { }
    # noncurrent_version_expiration {
    #   # days = "${var.s3_lifecycle_noncurrent_version_expiration}}"
    #   noncurrent_days = "${var.s3_lifecycle_noncurrent_version_expiration}"
    # }
    # expiration {
    #    days = "${var.s3_lifecycle_noncurrent_version_expiration}"
    #    expired_object_delete_marker = true
    # }
    abort_incomplete_multipart_upload {
       days_after_initiation = "${var.s3_lifecycle_noncurrent_version_expiration}"
    }
  }
  depends_on = [time_sleep.wait_before_first_s3_lifecyle_creation]
}

The terraform plan shows that it's replacing the rule:

  # aws_s3_bucket_lifecycle_configuration.logging_bucket[0] will be updated in-place
  ~ resource "aws_s3_bucket_lifecycle_configuration" "logging_bucket" {
        id = "mys3-dev-bkplog20250202203612984400000003"
        # (2 unchanged attributes hidden)

      ~ rule {
          ~ id = "delete-expired-object-delete-markers-incomplete-multipart-uploads" -> "noncurrent-version-expiration-object-delete-marker"
            # (1 unchanged attribute hidden)

          ~ expiration {
              ~ expired_object_delete_marker = false -> true
                # (1 unchanged attribute hidden)
            }
          + noncurrent_version_expiration {
              + noncurrent_days = 1
            }
            # (2 unchanged blocks hidden)
        }
    }

  # aws_s3_bucket_lifecycle_configuration.logging_bucket_delete_expired_object[0] will be updated in-place
  ~ resource "aws_s3_bucket_lifecycle_configuration" "logging_bucket_delete_expired_object" {
        id = "mys3-dev-bkplog20250202203612984400000003"
        # (2 unchanged attributes hidden)

      ~ rule {
            id = "delete-expired-object-delete-markers-incomplete-multipart-uploads"
            # (1 unchanged attribute hidden)

          - expiration {
              - days                         = 1 -> null
              - expired_object_delete_marker = false -> null
            }
            # (2 unchanged blocks hidden)
        }
    }

@justinretzolk
Member

Hey @Tesseract99 👋 Thank you for taking the time to raise this! I suspect what's happening here is a result of the following note in the aws_s3_bucket_lifecycle_configuration resource:

S3 Buckets only support a single lifecycle configuration. Declaring multiple aws_s3_bucket_lifecycle_configuration resources to the same S3 Bucket will cause a perpetual difference in configuration.

While you're not defining two separate lifecycle configuration resources, as you mentioned, you've got an additional organizationally-defined rule being added. The aws_s3_bucket_lifecycle_configuration resource does check the result of the "create" command to ensure that the lifecycle matches what was sent, since there can be a delay. Looking back at version 4.67.0, that check was not performed, which makes sense given the behavior you're experiencing.

I tried to look around a bit at setting a default rule like you mentioned, but didn't find a way to do so. Can you give me a better idea of how that's being done? That may give a better indication as to how to resolve this. My initial thought is that doing so would involve adding whatever the "default" rule is to the aws_s3_bucket_lifecycle_configuration in this configuration, but more details may make it easier to confirm that.
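
For illustration only, a rough sketch of that idea, assuming a hypothetical organization rule that aborts incomplete multipart uploads (the actual rule ID, filter, and actions would have to mirror exactly what the organization injects):

resource "aws_s3_bucket_lifecycle_configuration" "my_bucket_cfg" {
  bucket     = aws_s3_bucket.my_bucket.id
  depends_on = [aws_s3_bucket_versioning.my_bucket_vsn]

  # Rule owned by this configuration
  rule {
    id     = "delete"
    status = "Enabled"

    filter {
      prefix = "/"
    }

    expiration {
      days = 7
    }
  }

  # Hypothetical copy of the organization-added default rule; the ID and
  # contents must match the rule AWS actually injects so that the
  # create-time check (and subsequent plans) converge.
  rule {
    id     = "organization-default-rule"
    status = "Enabled"
    filter {}

    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }
}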


@rishabhToshniwal 👋 Your issue relates to the quote that I called out above from the aws_s3_bucket_lifecycle_configuration documentation. Adding two of those resources for a single S3 bucket is not a valid configuration and will result in perpetual drift as the resources fight with each other.

justinretzolk added the waiting-response label on Feb 3, 2025
@rishabhToshniwal

@justinretzolk You are right. I resolved my issue by defining the 2 rules in the same aws_s3_bucket_lifecycle_configuration resource.
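
For reference, a minimal sketch of that approach, reusing the names from the configuration above (the rule bodies here are illustrative and should match what each of the two original resources declared):

resource "aws_s3_bucket_lifecycle_configuration" "logging_bucket" {
  count  = var.enable_lifecycle_configuration ? 1 : 0
  bucket = aws_s3_bucket.logging_bucket.id

  # Rule from the original logging_bucket resource
  rule {
    id     = "noncurrent-version-expiration-object-delete-marker"
    status = "Enabled"
    filter {}

    noncurrent_version_expiration {
      noncurrent_days = var.s3_lifecycle_noncurrent_version_expiration
    }
  }

  # Rule from the original logging_bucket_delete_expired_object resource
  rule {
    id     = "delete-expired-object-delete-markers-incomplete-multipart-uploads"
    status = "Enabled"
    filter {}

    expiration {
      expired_object_delete_marker = true
    }

    abort_incomplete_multipart_upload {
      days_after_initiation = var.s3_lifecycle_noncurrent_version_expiration
    }
  }
}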

github-actions bot removed the waiting-response label on Feb 5, 2025
justinretzolk added the waiting-response label on Feb 5, 2025