Option to filter tags by service or resource type in aws_tagging_resource table #2466
base: main
Conversation
Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.
Force-pushed from c7120ff to 8e75ea2
Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.
Hi @thomasklemm, I was reviewing the code changes and had a few suggestions for improvement:
Thanks!
Force-pushed from 8e75ea2 to 3c2a4eb
Hi @thomasklemm, just checking in to see if you had a chance to review the suggestions and comments above.
Hi @ParthaI, thanks for the detailed suggestions! I've made them locally; I need to test them in our cluster and will update the PR after.
Hi @thomasklemm, just checking in to see if you’ve had a chance to test it out and push the changes based on your findings?
Force-pushed from 3c2a4eb to 9779b47
Hi @ParthaI, made the changes and adjusted the initial query examples to match. Tried these queries and all looks correct:

select arn from aws_tagging_resource where resource_types = '["ec2", "rds", "s3:bucket"]' order by arn;
select count(arn) from aws_tagging_resource where resource_types = '["ec2", "rds", "s3:bucket"]';
select arn from aws_tagging_resource where resource_types = '["ec2:instance", "rds:db", "s3:bucket"]' order by arn;
select count(arn) from aws_tagging_resource where resource_types = '["ec2:instance", "rds:db", "s3:bucket"]';
select arn from aws_tagging_resource where resource_types = '[]' order by arn;
select arn from aws_tagging_resource order by arn;
select count(arn) from aws_tagging_resource where resource_types = '[]';
select count(arn) from aws_tagging_resource;
Pull Request Overview
This PR introduces a new filter option for the aws_tagging_resource table that allows filtering resources by specific AWS service or resource types using a JSON array of strings.
- Added documentation in aws_tagging_resource.md to explain the new filtering option with examples.
- Modified aws/table_aws_tagging_resource.go to support a new key column "resource_types" and to parse the JSON array of resource types from query qualifiers.
- Enhanced error handling for invalid JSON input in resource_types.
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
File | Description |
---|---|
docs/tables/aws_tagging_resource.md | Updated documentation to describe resource type filters. |
aws/table_aws_tagging_resource.go | Added key column support, JSON parsing, and error handling for resource_types filter. |
Comments suppressed due to low confidence (1)
aws/table_aws_tagging_resource.go:127
- [nitpick] Consider renaming 'resource_types' to 'rawResourceTypes' to better distinguish it from the parsed slice 'resourceTypes' and to align with Go naming conventions.
resource_types := d.EqualsQuals["resource_types"].GetJsonbValue()
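For context, a minimal sketch (not the plugin's merged code) of how the JSONB qualifier shown above could be decoded into a string slice, following the rename suggested in the nitpick. The helper name and error text are illustrative, and the v5 Steampipe plugin SDK types are assumed:

```go
package aws

import (
	"encoding/json"
	"fmt"

	"github.com/turbot/steampipe-plugin-sdk/v5/plugin"
)

// parseResourceTypesFilter is a hypothetical helper showing how the
// "resource_types" qualifier could be decoded into a []string.
func parseResourceTypesFilter(d *plugin.QueryData) ([]string, error) {
	qual, ok := d.EqualsQuals["resource_types"]
	if !ok || qual == nil {
		// Qualifier not supplied; no resource type filtering.
		return nil, nil
	}

	rawResourceTypes := qual.GetJsonbValue()
	if rawResourceTypes == "" {
		return nil, nil
	}

	var resourceTypes []string
	if err := json.Unmarshal([]byte(rawResourceTypes), &resourceTypes); err != nil {
		return nil, fmt.Errorf("failed to parse 'resource_types': expected a JSON array of strings, e.g. [\"ec2\", \"ec2:instance\", \"s3:bucket\"]: %w", err)
	}
	return resourceTypes, nil
}
```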
@ParthaI There are two length constraints mentioned in the API docs: 100 array items, and 256 characters in total for the string that gets sent to the API. Wondering if we should handle that here and raise an error to the user? If I see it correctly, AWS just starts to ignore resource types after the character limit (but I need to verify this better).
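For illustration only, a sketch of the kind of guard being discussed, using the 100-item and 256-character limits quoted from the API docs above; the function name is hypothetical and the eventual implementation may behave differently:

```go
import (
	"fmt"
	"strings"
)

// validateResourceTypeFilters is a hypothetical guard that would fail fast
// instead of letting the API silently drop filters beyond its limits.
func validateResourceTypeFilters(resourceTypes []string) error {
	const maxItems = 100        // documented cap on ResourceTypeFilters entries
	const maxJoinedLength = 256 // documented cap on the comma-joined string

	if len(resourceTypes) > maxItems {
		return fmt.Errorf("resource_types has %d entries; the API accepts at most %d", len(resourceTypes), maxItems)
	}
	if joined := strings.Join(resourceTypes, ","); len(joined) > maxJoinedLength {
		return fmt.Errorf("resource_types is %d characters when comma-joined; the API accepts at most %d", len(joined), maxJoinedLength)
	}
	return nil
}
```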
Hello @thomasklemm, thank you for sharing the detailed information.
Approach 1: We can pass the …
Approach 2: Alternatively, if the …
In my opinion, I’d prefer Approach 1, as it aligns with the default behavior of the AWS CLI. Please let me know your thoughts. Thanks!
@ParthaI I think the API in this case is not returning an error in either case (array > 100 entries, complete string > 256 characters) based on what I observed earlier; it will just silently drop the additional items. The string limit is actually quite easy to hit if you're querying for more than 20-25 services at the same time; with complete resource types, much less is actually usable. I think it might make sense to raise an error in the code to allow the user to adjust their query, or even better do the chunking you describe in approach 2. Is there another place in the AWS plugin where this strategy is being used? Another thing I just noticed: I think the caching isn't working in the case where …
We have implemented a similar pattern (though not exactly the same) in the …
Since we are using …
Hello @thomasklemm, did you get a chance to take a look at the above comment?
Force-pushed from 9779b47 to 9fdcdbf
…ource

The AWS Resource Groups Tagging API has a 256-character limit for the ResourceTypeFilters parameter when resource types are comma-separated. This change implements automatic batching to handle large lists of resource types that would exceed this limit.

How the batching works:
- The splitResourceTypes function calculates the total length of resource types when joined with commas
- When adding a new resource type would exceed 256 characters, it starts a new batch
- Each batch is processed as a separate API call with its own pagination
- Results are deduplicated by ARN to prevent the same resource appearing multiple times when it matches multiple resource type filters

This follows a similar pattern to aws_codecommit_repository.go, which batches repository names for the BatchGetRepositories API that has a 25-item limit. The implementation maintains full backward compatibility and transparency: users can specify large lists of resource types without manual batching.
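For illustration only, here is a rough sketch of the character-length batching and ARN deduplication this commit message describes. The function names mirror the message, but the signatures and details are assumptions, not necessarily the code as merged:

```go
// splitResourceTypes groups resource types into batches whose comma-joined
// length stays within maxLen (256 in the commit message above). A single
// entry longer than maxLen still becomes its own batch.
func splitResourceTypes(resourceTypes []string, maxLen int) [][]string {
	var batches [][]string
	var current []string
	currentLen := 0

	for _, rt := range resourceTypes {
		extra := len(rt)
		if len(current) > 0 {
			extra++ // account for the joining comma
		}
		if len(current) > 0 && currentLen+extra > maxLen {
			// Adding this entry would exceed the budget; start a new batch.
			batches = append(batches, current)
			current = nil
			currentLen = 0
			extra = len(rt)
		}
		current = append(current, rt)
		currentLen += extra
	}
	if len(current) > 0 {
		batches = append(batches, current)
	}
	return batches
}

// alreadySeen tracks ARNs already streamed so a resource that matches more
// than one filter batch is emitted only once.
func alreadySeen(seen map[string]struct{}, arn string) bool {
	if _, ok := seen[arn]; ok {
		return true
	}
	seen[arn] = struct{}{}
	return false
}
```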
…acter limit

After testing, the actual limitation for ResourceTypeFilters in the AWS Resource Groups Tagging API is 100 items per request, not 256 characters when comma-separated as suggested in some documentation.

Changes:
- Renamed splitResourceTypes to batchResourceTypes for clarity
- Simplified batching logic to count items instead of character length
- Changed batch size from 256 characters to 100 items
- Removed complex string length calculations in favor of simple item counting

This makes the code much simpler and aligns with the actual API behavior observed during testing.
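A minimal sketch of the simplified, item-count-based batching this commit describes; the signature is an assumption, not the merged code:

```go
// batchResourceTypes splits the filter list into chunks of at most maxItems
// entries; per the testing described above, the API accepts up to 100
// ResourceTypeFilters per request.
func batchResourceTypes(resourceTypes []string, maxItems int) [][]string {
	var batches [][]string
	for len(resourceTypes) > maxItems {
		batches = append(batches, resourceTypes[:maxItems])
		resourceTypes = resourceTypes[maxItems:]
	}
	if len(resourceTypes) > 0 {
		batches = append(batches, resourceTypes)
	}
	return batches
}
```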
Pull Request Overview
This PR introduces filtering functionality by resource types for the aws_tagging_resource table. Key changes include:
- Updating documentation to describe the new JSON array filter for resource types.
- Adding a "resource_types" column and KeyColumns configuration to support filtering.
- Implementing JSON parsing, batching of resource type qualifiers, and deduplication of results based on ARN.
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
File | Description |
---|---|
docs/tables/aws_tagging_resource.md | Added documentation for filtering resources by resource types. |
aws/table_aws_tagging_resource.go | Implemented parsing, batching, and deduplication for the new filter. |
Comments suppressed due to low confidence (2)
aws/table_aws_tagging_resource.go:122
- [nitpick] Consider renaming 'resource_types' to 'resourceTypesJSON' to follow Go's camelCase naming conventions and to clearly differentiate the qualifier value from other variables.
resource_types := d.EqualsQuals["resource_types"].GetJsonbValue()
aws/table_aws_tagging_resource.go:222
- [nitpick] Consider renaming 'currentItems' to 'batchCount' to improve clarity and align with idiomatic Go naming practices.
currentItems := 0
Pull Request Overview
Adds support for filtering the aws_tagging_resource table by AWS service or specific resource types.
- Introduce a new resource_types qualifier, parsing JSON arrays of strings.
- Implement batching, pagination, and deduplication in listTaggingResources.
- Extend documentation with examples and a new column.
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
File | Description |
---|---|
docs/tables/aws_tagging_resource.md | Added a "Filter Resources by Resource Types" section with usage examples and a table of common types. |
aws/table_aws_tagging_resource.go | Implemented parseResourceTypesFilter, batching (createResourceTypeBatches), and processing logic. |
Comments suppressed due to low confidence (2)
aws/table_aws_tagging_resource.go:155
- [nitpick] Expand the example formats in the error message to include both service-only and service:resourceType patterns (e.g., ["ec2", "ec2:instance"]) for clearer guidance.
errors.New("failed to parse 'resource_types' qualifier: value must be a JSON array of strings, e.g. [\"ec2:instance\", \"s3:bucket\", \"rds\"]")
aws/table_aws_tagging_resource.go:121
- Add unit tests for parseResourceTypesFilter, createResourceTypeBatches, and processResourceBatch to verify behavior with empty, multiple, and invalid resource_types inputs.
resourceTypes, err := parseResourceTypesFilter(d)
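As an illustration of the kind of test Copilot is asking for, here is a sketch of a unit test against the hypothetical batchResourceTypes helper sketched earlier in this thread; the names, limits, and package are assumptions, not the plugin's actual test suite:

```go
package aws

import (
	"fmt"
	"testing"
)

// TestBatchResourceTypes exercises the hypothetical batching helper with
// empty input and with a list larger than the 100-item API limit.
func TestBatchResourceTypes(t *testing.T) {
	// 250 entries should split into batches of 100, 100, and 50.
	input := make([]string, 0, 250)
	for i := 0; i < 250; i++ {
		input = append(input, fmt.Sprintf("service%d:resource", i))
	}

	batches := batchResourceTypes(input, 100)
	if len(batches) != 3 {
		t.Fatalf("expected 3 batches, got %d", len(batches))
	}

	total := 0
	for _, b := range batches {
		if len(b) > 100 {
			t.Errorf("batch has %d items, expected at most 100", len(b))
		}
		total += len(b)
	}
	if total != len(input) {
		t.Errorf("batches contain %d items in total, expected %d", total, len(input))
	}

	// Empty input should yield no batches at all.
	if got := batchResourceTypes(nil, 100); len(got) != 0 {
		t.Errorf("expected no batches for empty input, got %d", len(got))
	}
}
```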
@ParthaI I have now confirmed that the 256-character limit the API docs mention doesn't exist in reality; not sure why it made it into the docs. However, the 100-item limit exists, so I adjusted the implementation to do automatic batching if more than 100 services/resource types get provided to the resource types filter. Locally it's working very well, but I'd like to confirm it in our production environment too, with access to larger AWS organizations, to see if there are any issues. Without the batching, this error would get returned:
Thanks, @thomasklemm, for diving deeper into this. The implementation looks good. Please let me know once you've pushed the changes to this PR, and I’ll do a final review. Thanks again for all your efforts!
Adds the option to filter the aws_tagging_resource table by resource types, e.g. ec2:instance, s3:bucket, auditmanager, for limiting the response to only Amazon EC2 instances, Amazon S3 buckets, or any AWS Audit Manager resource.

Integration test logs
Logs
Example queries