This repository was archived by the owner on Nov 30, 2023. It is now read-only.

AWS CLI feature #1326

Merged 8 commits into microsoft:main on Mar 8, 2022

Conversation

@Rabadash8820 (Contributor) commented Feb 26, 2022

Adds a container feature for the missing AWS CLI (as mentioned in #1122). I think I followed the contribution guidelines correctly, but please let me know if there are issues with this PR. In particular:

  • Am I targeting the correct branch (main)?
  • I wasn't sure if this was a use case for just a container feature or a whole new container. I've started with the former.
  • There are some other non-essential changes in the first commit. If these are out of scope then I can drop/revert that commit.
  • I have a hard-coded GPG public key and fingerprint in the install script; wasn't sure if this should be added to the shared/settings.env file in some way.
  • I put VS Code and GitHub Codespaces teams in the Maintainers sections, in the hopes that that team would take over this feature, since it seems like a pretty essential one. But if not, I guess I'm willing to put my username instead.
  • I'm not sure if I included all the appropriate containers in the include section of devcontainer-features.json. Seems to me like the association should really be the other way around, with containers specifying which features they include, but still.
  • I hope to add features and/or containers for the SAM CLI and CDK CLIs also, but wanted to make sure that I'm doing things correctly here before opening more PRs.

VS Code automatically applied some whitespace cleanup when I opened these files. Obviously these changes aren't strictly necessary, but they seemed worthwhile.

Also replaced "EG" with the correct "E.g." in the container features README, and replaced https URLs with local ones. This makes it easier to jump to those files for editing while reading the README locally.
@ghost commented Feb 26, 2022

CLA assistant check
All CLA requirements met.

if [ "${AWSCLI_VERSION}" != "latest" ]; then
local versionStr=-${AWSCLI_VERSION}
fi
local scriptUrl=https://awscli.amazonaws.com/awscli-exe-linux-x86_64${versionStr}.zip
@sktan (Contributor) commented Feb 28, 2022

@Rabadash8820 it might be useful to determine the architecture running the devcontainer so that it will work for both x64 and ARM architectures when installing awscli:

# Install x86 or ARM version of awscliv2 based on current machine architecture
if [ "$(uname -m)" == "x86_64" ]; then
  local scriptUrl=https://awscli.amazonaws.com/awscli-exe-linux-x86_64${versionStr}.zip
else
  local scriptUrl=https://awscli.amazonaws.com/awscli-exe-linux-aarch64${versionStr}.zip
fi

@Rabadash8820 (Contributor, author) replied

Great suggestion! Just updated. Also included an error message for machines that are not x86-64 or ARM64.
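
For context, the scriptUrl assembled above feeds the standard AWS CLI v2 install flow. A minimal sketch of the remaining steps, following AWS's documented install procedure (the temp-file paths are illustrative, not the feature script's exact code):

# Download the installer archive from the URL built above, extract it, and run the bundled installer
curl -sSL "${scriptUrl}" -o /tmp/awscliv2.zip
unzip -q /tmp/awscliv2.zip -d /tmp
/tmp/aws/install
rm -rf /tmp/awscliv2.zip /tmp/aws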

@Chuxel (Member) commented Mar 1, 2022

This is great. Yeah walking through your questions:

Am I targeting the correct branch (main)?

Yep!

I wasn't sure if this was a use case for just a container feature or a whole new container. I've started with the former.

Yeah given how this would be consumed, I think a feature would make sense.

I have a hard-coded GPG public key and fingerprint in the install script; wasn't sure if this should be added to the shared/settings.env file in some way.

There is a utility function that will go and check that file (assuming it can access it) to see if there is a more up-to-date file and get the specified variable. This helps insulate situations where a GPG key, signature, etc are invalidated and need to be updated. See here.

I'm not sure if I included all the appropriate containers in the include section of devcontainer-features.json. Seems to me like the association should really be the other way around, with containers specifying which features they include, but still.

Yeah there is both an "include" and "exclude" option. The challenge with "exclude" is that the testing combinations get out of hand quickly.

I put VS Code and GitHub Codespaces teams in the Maintainers sections, in the hopes that that team would take over this feature, since it seems like a pretty essential one. But if not, I guess I'm willing to put my username instead.

Right now, we're looking at ways to better enable self-publishing, and we have found it is best to have the contributor involved in ongoing work even if we're still keeping an eye on things. This allows individuals and organizations to publish updates on a schedule that makes sense for the specific scenario - folks like yourself work day to day in a particular ecosystem and will best know when things shift. (This is what @bamurtaugh summarized in #1291.)

@Rabadash8820 (Contributor, author) commented Mar 2, 2022

@Chuxel Thanks for the responses, and for the link to that other repo; very interesting to see how this space is evolving!

So to clarify:

  1. I did see that receive_gpg_keys util function, but I was unclear how it works. I'm not super experienced with GPG... it looks like that function is downloading keys from a public key server, but I don't know which server(s), if any, hold the AWS CLI key. (A rough sketch of the keyserver pattern follows this list.)
  2. Is the include metadata good as is? When, if ever, would it need to change?
  3. Should I put myself as the Maintainer or leave it as is?
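
For reference on item 1: receive_gpg_keys-style helpers typically loop over a list of public keyservers until the requested key is retrieved. A rough sketch of that pattern (the server list and retry count are assumptions for illustration, not the repo's exact implementation):

# Sketch of a keyserver-fallback loop; servers and retry count are illustrative
receive_gpg_keys() {
    local keys="$1"
    local servers="keyserver.ubuntu.com keys.openpgp.org"
    local retry server
    for retry in 1 2 3; do
        for server in ${servers}; do
            # --recv-keys pulls the key(s) from the given keyserver into the local keyring
            if gpg --keyserver "${server}" --recv-keys ${keys}; then
                return 0
            fi
        done
        sleep 5
    done
    echo "Failed to fetch GPG key(s): ${keys}"
    return 1
}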

@Chuxel (Member) commented Mar 2, 2022

@Rabadash8820 We chatted about this internally and agree that having us help maintain this particular one does make sense given its criticality. That said, if it's okay with you, we'd add you on as an additional contributor/maintainer given the primary folks maintaining this repository do not live in AWS day to day, so we may not be proactively aware of shifts that happen that would affect the feature. We really appreciate the hard work!

The include metadata looks just fine!

I did see that receive_gpg_keys util function, but I was unclear how it works. I'm not super experienced with GPG...it looks like that function is downloading keys from a public key server, but idk which server(s), if any, hold the AWS CLI key.

Yeah, everyone manages GPG keys a bit differently from what we've seen. Sometimes it's a key server, sometimes it's a file, other times it's the data as you have here. For AWS, on a cursory glance, it looks like they hard-code the key.

What you can do to future-proof a bit is add that key into this file. The GPG_KEY_SERVERS variable is an example of one with newlines in it. In the script, you then use the utility function from here to fetch it. If it needs to change down the road, we just update the .env file and anyone using the script gets the updated key. Here's how using that function could look:

AWS_CLI_KEY="-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBF2Cr7UBEADJZHcgusOJl7ENSyumXh85z0TRV0xJorM2B/JL0kHOyigQluUG
ZMLhENaG0bYatdrKP+3H91lvK050pXwnO/R7fB/FSTouki4ciIx5OuLlnJZIxSzx
PqGl0mkxImLNbGWoi6Lto0LYxqHN2iQtzlwTVmq9733zd3XfcXrZ3+LblHAgEt5G
TfNxEKJ8soPLyWmwDH6HWCnjZ/aIQRBTIQ05uVeEoYxSh6wOai7ss/KveoSNBbYz
gbdzoqI2Y8cgH2nbfgp3DSasaLZEdCSsIsK1u05CinE7k2qZ7KgKAUIcT/cR/grk
C6VwsnDU0OUCideXcQ8WeHutqvgZH1JgKDbznoIzeQHJD238GEu+eKhRHcz8/jeG
94zkcgJOz3KbZGYMiTh277Fvj9zzvZsbMBCedV1BTg3TqgvdX4bdkhf5cH+7NtWO
lrFj6UwAsGukBTAOxC0l/dnSmZhJ7Z1KmEWilro/gOrjtOxqRQutlIqG22TaqoPG
fYVN+en3Zwbt97kcgZDwqbuykNt64oZWc4XKCa3mprEGC3IbJTBFqglXmZ7l9ywG
EEUJYOlb2XrSuPWml39beWdKM8kzr1OjnlOm6+lpTRCBfo0wa9F8YZRhHPAkwKkX
XDeOGpWRj4ohOx0d2GWkyV5xyN14p2tQOCdOODmz80yUTgRpPVQUtOEhXQARAQAB
tCFBV1MgQ0xJIFRlYW0gPGF3cy1jbGlAYW1hem9uLmNvbT6JAlQEEwEIAD4WIQT7
Xbd/1cEYuAURraimMQrMRnJHXAUCXYKvtQIbAwUJB4TOAAULCQgHAgYVCgkICwIE
FgIDAQIeAQIXgAAKCRCmMQrMRnJHXJIXEAChLUIkg80uPUkGjE3jejvQSA1aWuAM
yzy6fdpdlRUz6M6nmsUhOExjVIvibEJpzK5mhuSZ4lb0vJ2ZUPgCv4zs2nBd7BGJ
MxKiWgBReGvTdqZ0SzyYH4PYCJSE732x/Fw9hfnh1dMTXNcrQXzwOmmFNNegG0Ox
au+VnpcR5Kz3smiTrIwZbRudo1ijhCYPQ7t5CMp9kjC6bObvy1hSIg2xNbMAN/Do
ikebAl36uA6Y/Uczjj3GxZW4ZWeFirMidKbtqvUz2y0UFszobjiBSqZZHCreC34B
hw9bFNpuWC/0SrXgohdsc6vK50pDGdV5kM2qo9tMQ/izsAwTh/d/GzZv8H4lV9eO
tEis+EpR497PaxKKh9tJf0N6Q1YLRHof5xePZtOIlS3gfvsH5hXA3HJ9yIxb8T0H
QYmVr3aIUes20i6meI3fuV36VFupwfrTKaL7VXnsrK2fq5cRvyJLNzXucg0WAjPF
RrAGLzY7nP1xeg1a0aeP+pdsqjqlPJom8OCWc1+6DWbg0jsC74WoesAqgBItODMB
rsal1y/q+bPzpsnWjzHV8+1/EtZmSc8ZUGSJOPkfC7hObnfkl18h+1QtKTjZme4d
H17gsBJr+opwJw/Zio2LMjQBOqlm3K1A4zFTh7wBC7He6KPQea1p2XAMgtvATtNe
YLZATHZKTJyiqA==
=vYOk
-----END PGP PUBLIC KEY BLOCK-----
"

get_common_setting AWS_CLI_KEY
echo "${AWS_CLI_KEY}" > "${awsCliPublicKeyFile}"
gpg --quiet --import "${awsCliPublicKeyFile}"
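
With the key imported, the script can then check the downloaded archive against the detached .sig file that AWS publishes alongside each installer zip. Per AWS's documented verification steps, roughly (file names illustrative):

# Fetch the detached signature published next to the installer zip, then verify the archive
curl -sSL "${scriptUrl}.sig" -o /tmp/awscliv2.sig
gpg --verify /tmp/awscliv2.sig /tmp/awscliv2.zip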

@Chuxel requested a review from joshspicer on March 2, 2022.
@Rabadash8820 (Contributor, author) replied

@Chuxel That all sounds good! I also asked about when, if ever, the include/exclude metadata will need to be updated. Do you have recommendations on that?

Regarding the GPG key, I'm not really sure I see the value in adding this key to the shared settings file. TBH, most of the settings in that file aren't really "shared", and look like they are only relevant to one or two specific feature install scripts. So in theory, if no other feature is going to need the AWS CLI GPG key, why should it be a shared setting?

@Rabadash8820 (Contributor, author) commented Mar 2, 2022

This is a bit out of scope for this PR, but I'm wondering if there should be separate utility functions for the different GPG scenarios that you described:

  1. receive_gpg_keys: looks up one or more keys from various keyservers, as the current function does (maybe with additional ability to provide a feature-specific keyserver)
  2. import_gpg_key: has options to import a single key from a file path, or from hard-coded text (presumably from a variable)
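
To make the second proposal concrete, a hypothetical import_gpg_key along those lines might look like this (the name and both options are part of the proposal, not an existing utility):

# Hypothetical helper: import a key from a file path ($1) or from literal key material ($2)
import_gpg_key() {
    local key_file="$1"
    local key_material="$2"
    if [ -n "${key_file}" ]; then
        gpg --quiet --import "${key_file}"
    elif [ -n "${key_material}" ]; then
        # gpg reads the key from stdin when no file is given
        echo "${key_material}" | gpg --quiet --import
    else
        echo "import_gpg_key: provide a key file path or key material"
        return 1
    fi
}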

@joshspicer (Member) replied

Regarding the GPG key, I'm not really sure I see the value in adding this key to the shared settings file. TBH, most of the settings in that file aren't really "shared", and look like they are only relevant to one or two specific feature install scripts.

The shared file is mostly there to keep us sane maintaining all these definitions/features (it's nice to have it in one spot). Also, if these settings are centralized, we may be able to (down the line) do something more advanced with the tooling to push out fixes for "commonly breakable" things. (At least that's how I see it :) )

If nothing else, it makes the script file itself look a bit cleaner.

@joshspicer (Member) replied

  • receive_gpg_keys: looks up one or more keys from various keyservers, as the current function does (maybe with additional ability to provide a feature-specific keyserver)
  • import_gpg_key: has options to import a single key from a file path, or from hard-coded text (presumably from a variable)

I think the only issue with this idea is that every vendor seems to use a slightly different means of validating their binary/package/etc.

@Rabadash8820 (Contributor, author) commented Mar 3, 2022

@joshspicer Works for me. 🤷‍♂️ Just updated to use get_common_setting for the GPG key/fingerprint. This was the first instance of an entire GPG key in the shared settings file, so I went with the naming convention *_GPG_KEY_MATERIAL. Also I wasn't quite sure how to test this; before, I could just copy the feature install script to the .devcontainer folder of a new workspace, but now the dependence on the shared settings/utils files is throwing me off a bit. The CI tests should catch things though, right?

I think the only issue with this idea is that every vendor does seems to use a slightly different means of validating their binary/package/etc..

I mean, I think "receiving" and "importing" cover the two primary GPG use cases, but I agree that if these shared functions are only used by the handful of features that use GPG for validation...might not be worth it.

@Chuxel (Member) left a review

@Rabadash8820 Did an in-depth review and have some suggestions. You can use "commit suggestions" if they make sense. If it's easier we can also make these tweaks post merge.

One thing I noticed is that the regression test appears to be giving a false positive which is why some of this did not appear.

Comment on lines 30 to 32
(
) > "${awsCliPublicKeyFile}"
gpg --quiet --import "${awsCliPublicKeyFile}"
@Chuxel (Member) commented

I think this is left over content - I'm getting an error from here when running the script.

@Rabadash8820 (Contributor, author) replied

Oops. 😅 Fixed in eb14d69.

echo -e 'Script must be run as root. Use sudo, su, or add "USER root" to your Dockerfile before running this script.'
exit 1
fi

@Chuxel (Member) commented

To get this model to work, you'd still include the key as the default, along with the get_common_setting function. Add the following:

Suggested change
AWSCLI_GPG_KEY=FB5DB77FD5C118B80511ADA8A6310ACC4672475C
AWSCLI_GPG_KEY_MATERIAL="-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBF2Cr7UBEADJZHcgusOJl7ENSyumXh85z0TRV0xJorM2B/JL0kHOyigQluUG
ZMLhENaG0bYatdrKP+3H91lvK050pXwnO/R7fB/FSTouki4ciIx5OuLlnJZIxSzx
PqGl0mkxImLNbGWoi6Lto0LYxqHN2iQtzlwTVmq9733zd3XfcXrZ3+LblHAgEt5G
TfNxEKJ8soPLyWmwDH6HWCnjZ/aIQRBTIQ05uVeEoYxSh6wOai7ss/KveoSNBbYz
gbdzoqI2Y8cgH2nbfgp3DSasaLZEdCSsIsK1u05CinE7k2qZ7KgKAUIcT/cR/grk
C6VwsnDU0OUCideXcQ8WeHutqvgZH1JgKDbznoIzeQHJD238GEu+eKhRHcz8/jeG
94zkcgJOz3KbZGYMiTh277Fvj9zzvZsbMBCedV1BTg3TqgvdX4bdkhf5cH+7NtWO
lrFj6UwAsGukBTAOxC0l/dnSmZhJ7Z1KmEWilro/gOrjtOxqRQutlIqG22TaqoPG
fYVN+en3Zwbt97kcgZDwqbuykNt64oZWc4XKCa3mprEGC3IbJTBFqglXmZ7l9ywG
EEUJYOlb2XrSuPWml39beWdKM8kzr1OjnlOm6+lpTRCBfo0wa9F8YZRhHPAkwKkX
XDeOGpWRj4ohOx0d2GWkyV5xyN14p2tQOCdOODmz80yUTgRpPVQUtOEhXQARAQAB
tCFBV1MgQ0xJIFRlYW0gPGF3cy1jbGlAYW1hem9uLmNvbT6JAlQEEwEIAD4WIQT7
Xbd/1cEYuAURraimMQrMRnJHXAUCXYKvtQIbAwUJB4TOAAULCQgHAgYVCgkICwIE
FgIDAQIeAQIXgAAKCRCmMQrMRnJHXJIXEAChLUIkg80uPUkGjE3jejvQSA1aWuAM
yzy6fdpdlRUz6M6nmsUhOExjVIvibEJpzK5mhuSZ4lb0vJ2ZUPgCv4zs2nBd7BGJ
MxKiWgBReGvTdqZ0SzyYH4PYCJSE732x/Fw9hfnh1dMTXNcrQXzwOmmFNNegG0Ox
au+VnpcR5Kz3smiTrIwZbRudo1ijhCYPQ7t5CMp9kjC6bObvy1hSIg2xNbMAN/Do
ikebAl36uA6Y/Uczjj3GxZW4ZWeFirMidKbtqvUz2y0UFszobjiBSqZZHCreC34B
hw9bFNpuWC/0SrXgohdsc6vK50pDGdV5kM2qo9tMQ/izsAwTh/d/GzZv8H4lV9eO
tEis+EpR497PaxKKh9tJf0N6Q1YLRHof5xePZtOIlS3gfvsH5hXA3HJ9yIxb8T0H
QYmVr3aIUes20i6meI3fuV36VFupwfrTKaL7VXnsrK2fq5cRvyJLNzXucg0WAjPF
RrAGLzY7nP1xeg1a0aeP+pdsqjqlPJom8OCWc1+6DWbg0jsC74WoesAqgBItODMB
rsal1y/q+bPzpsnWjzHV8+1/EtZmSc8ZUGSJOPkfC7hObnfkl18h+1QtKTjZme4d
H17gsBJr+opwJw/Zio2LMjQBOqlm3K1A4zFTh7wBC7He6KPQea1p2XAMgtvATtNe
YLZATHZKTJyiqA==
=vYOk
-----END PGP PUBLIC KEY BLOCK-----"
get_common_setting() {
    if [ "${common_settings_file_loaded}" != "true" ]; then
        curl -sfL "https://aka.ms/vscode-dev-containers/script-library/settings.env" 2>/dev/null -o /tmp/vsdc-settings.env || echo "Could not download settings file. Skipping."
        common_settings_file_loaded=true
    fi
    if [ -f "/tmp/vsdc-settings.env" ]; then
        local multi_line=""
        if [ "$2" = "true" ]; then multi_line="-z"; fi
        local result="$(grep ${multi_line} -oP "$1=\"?\K[^\"]+" /tmp/vsdc-settings.env | tr -d '\0')"
        if [ ! -z "${result}" ]; then declare -g $1="${result}"; fi
    fi
    echo "$1=${!1}"
}

This will update the setting if one is found and otherwise use what is embedded in the script. This is particularly critical for testing.
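
Usage then stays one call per setting; passing true as the second argument switches on the -z (NUL-separated) grep path for multi-line values like the key material. For example, mirroring the import snippet earlier in the thread:

# Overrides the embedded defaults only if settings.env is reachable; otherwise the defaults stand
get_common_setting AWSCLI_GPG_KEY
get_common_setting AWSCLI_GPG_KEY_MATERIAL true
echo "${AWSCLI_GPG_KEY_MATERIAL}" > "${awsCliPublicKeyFile}"
gpg --quiet --import "${awsCliPublicKeyFile}"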

@Rabadash8820 (Contributor, author) replied

Ok, I think I follow... I don't love having this giant base64 blob in two places, but I get that the script needs a default. Why would I add the get_common_setting function, though, if it's already in shared/utils.sh? I see that this function exists in the related azcli-debian.sh script, but I don't get why it's necessary there either. The body is identical to the shared function.

@Chuxel (Member) replied

We do not download and source that file separately right now. Users also tend to go to these files directly and curl the script in without the utility, so we try to keep them self-contained. So, there's a legacy here we're dealing with, TBH.

As the features functionality stabilizes, we'll shuffle things around, but there's a bit of a dual use here today.

So, this is point-in-time.

@Chuxel (Member) commented Mar 8, 2022

Oh, and on the blob: yes - the idea with the central file is for out-of-band updates. There are pros and cons to having a default value here. Technically we could split the settings.env part of the PR out, merge that first, then merge the script itself. That alternative would work too.

What having a value here does is create a fallback if the script is unable to pull the common settings.env file for some reason (network blips, etc.). The get_common_setting function intentionally prints just a warning (Could not download settings file. Skipping.) and continues on using whatever the value was before that point.

@Rabadash8820 (Contributor, author) replied

Fair enough, I can accept "because legacy" lol. Added get_common_setting in 72b8a9a.

Comment on lines 51 to 55
arch=$(uname -m)
if [ "$arch" != "x86_64" -a "$arch" != "aarch64" ]; then
    echo "AWS CLI does not support machine architecture '$arch'. Please use an x86-64 or ARM64 machine."
    exit 1
fi
@Chuxel (Member) commented

uname -m can come back with a number of different values for arm64, unfortunately... arm64 and armv8l are two other examples in addition to aarch64.

Using dpkg --print-architecture gives you two reliable values on Debian: either amd64 or arm64. This should do what you are trying to do, I think:

Suggested change
arch=$(uname -m)
if [ "$arch" != "x86_64" -a "$arch" != "aarch64" ]; then
    echo "AWS CLI does not support machine architecture '$arch'. Please use an x86-64 or ARM64 machine."
    exit 1
fi
arch=$(dpkg --print-architecture)
case "${arch}" in
    amd64) arch=x86_64 ;;
    arm64) arch=aarch64 ;;
    *)
        echo "AWS CLI does not support machine architecture '$arch'. Please use an x86-64 or ARM64 machine."
        exit 1
        ;;
esac

@Rabadash8820 (Contributor, author) commented Mar 8, 2022

Good idea, I didn't know about the dpkg option. Fixed in eb14d69.


rm -rf ./aws
}

@Chuxel (Member) commented

I noticed that this script fails with the standard debian and ubuntu images due to the fact that curl and gpg are not in them. You can add the following to make sure these are installed if they're missing:

Suggested change
# Function to run apt-get update if needed
apt_get_update_if_needed()
{
    if [ ! -d "/var/lib/apt/lists" ] || [ "$(ls /var/lib/apt/lists/ | wc -l)" = "0" ]; then
        echo "Running apt-get update..."
        apt-get update
    else
        echo "Skipping apt-get update."
    fi
}

# Checks if packages are installed and installs them if not
check_packages() {
    if ! dpkg -s "$@" > /dev/null 2>&1; then
        apt_get_update_if_needed
        apt-get -y install --no-install-recommends "$@"
    fi
}

export DEBIAN_FRONTEND=noninteractive
check_packages curl ca-certificates gnupg2 dirmngr

@Rabadash8820 (Contributor, author) replied

Fixed in eb14d69.

@Rabadash8820 (Contributor, author) replied

So I actually just noticed that these apt_get_update_if_needed and check_packages functions are already defined in the shared/utils.sh script. Why do they need to be defined again here (and in the Azure CLI install script)?

@Chuxel (Member) replied

We do not download and source that file separately right now. Users also tend to go to these files directly and curl the script in without the utility, so we try to keep them self-contained. So, there's a legacy here we're dealing with, TBH.

As the features functionality stabilizes, we'll shuffle things around, but there's a bit of a dual use here today.

@Chuxel (Member) commented Mar 7, 2022

Pulled in the regression test fix so you can see errors as the CI jobs run now as well.

Commits pushed:
  • Remove some duplicate GPG verification code
  • Use `dpkg --print-architecture` to get more reliable machine architecture strings
  • Ensure `curl` and `gpg` packages are installed (using new functions similar to the Azure CLI install script)
  • Copied from Azure CLI install script
@Chuxel (Member) commented Mar 8, 2022

There are a couple of issues I can quickly fix post-merge, but otherwise, LGTM! Thanks for the contribution!

@Chuxel merged commit c858742 into microsoft:main on Mar 8, 2022.
@Rabadash8820 (Contributor, author) replied

Awesome, thanks @Chuxel, this was fun. :P Any idea when I can expect to see this feature live in the VS Code list?

@Chuxel (Member) commented Mar 8, 2022

@Rabadash8820 It will appear in the UX at the latest with the next VS Code release (start of next month) - if there are "recovery builds", it could go out earlier.

However, if you switch to the preview version of the extension (which is the default for VS Code Insiders), you'll see it sooner. Whenever there's a release of this repository, we generally get it into Insiders the next day - largely the next time @chrmarti publishes an update with new features for preview/Insiders, which happens multiple times during a development iteration.

Over time as we decouple release processes a bit, the updates in stable will happen sooner as well.

@joshspicer (Member) replied

Additionally, the next time we create a release from this repo, you'll be able to reference the feature directly from a devcontainer.json with our experimental "remote features" syntax:

devcontainer.json:
"features": {
   "microsoft/vscode-dev-containers/aws-cli": "latest"
}

This syntax will pull directly from the latest GitHub release, whereas (at the moment) the shorthand aws-cli is pulling from a cache that we release alongside the Remote-Containers/Codespaces extensions.

@Rabadash8820 (Contributor, author) replied

Awesome, thanks for the info @Chuxel @joshspicer. Looking forward to the future of devcontainers!

@Rabadash8820 deleted the awscli-feature branch on March 9, 2022.
@Chuxel (Member) commented Mar 18, 2022

Deployed!
