
fix: AWS self-hosting setup issues#2110

Open
ya-luotao wants to merge 5 commits into e2b-dev:main from ya-luotao:fix/aws-self-host-docs

Conversation

@ya-luotao (Contributor)

Summary

  • Fix PREFIX in env templates: Remove quotes and inline comments from PREFIX in .env.aws.template and .env.gcp.template. Make includes env files literally, so PREFIX="e2b-" # comment yields the value "e2b-" with literal quotes plus the trailing space before the comment. That breaks BUCKET_PREFIX concatenation: when the value is expanded inside a recipe line, the shell parses "e2b-" as a temporary variable assignment and tries to run the account ID (528893196824-) as a command instead of reading it as part of the variable value.
  • Fix S3 bucket creation for us-east-1: AWS rejects --create-bucket-configuration LocationConstraint=us-east-1 since us-east-1 is the default region. The Makefile now conditionally omits this flag for us-east-1.
  • Fix Packer AMI build path in self-host.md: The directory is iac/provider-aws/nomad-cluster-disk-image, not the non-existent iac/provider-aws/packer. Also clarified that it builds a single shared AMI (not per-node-type AMIs) and added the actual make init + make build commands.
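The quoting pitfall from the first bullet is reproducible in plain shell: Make passes the env-file value through verbatim, so the literal quotes and the whitespace before the inline comment survive into the expanded recipe line. A minimal sketch (the account ID here is illustrative, not taken from any real setup):

```sh
# Value as Make reads `PREFIX="e2b-" # comment` from the env file:
# the quotes are literal and the space before "#" is kept.
PREFIX='"e2b-" '
BUCKET_PREFIX="${PREFIX}123456789012-"
echo "broken: [${BUCKET_PREFIX}]"
# In a recipe this expands to `BUCKET_PREFIX="e2b-" 123456789012-...`,
# which the shell parses as an env assignment followed by a bogus command.

# With the fixed template (no quotes, no inline comment):
PREFIX='e2b-'
BUCKET_PREFIX="${PREFIX}123456789012-"
echo "fixed:  [${BUCKET_PREFIX}]"
```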

Test plan

  • Verify make init works in us-east-1 with fresh S3 bucket creation
  • Verify make init still works in non-us-east-1 regions
  • Verify Packer AMI build works from the documented path

🤖 Generated with Claude Code

- Remove quotes and inline comments from PREFIX in env templates. Make
  includes env files literally, so quotes and trailing spaces before #
  comments become part of the value, breaking variable concatenation
  (e.g. BUCKET_PREFIX becomes "e2b-" 528893196824- instead of
  e2b-528893196824-).
- Handle S3 bucket creation for us-east-1. AWS rejects
  LocationConstraint=us-east-1 since it's the default region; the
  Makefile now omits that flag for us-east-1.
- Fix Packer AMI build step in self-host.md: the directory is
  iac/provider-aws/nomad-cluster-disk-image, not iac/provider-aws/packer.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
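The us-east-1 conditional described above can be sketched as a small helper (hypothetical, not the repo's actual Makefile logic; it echoes the command instead of calling AWS, and the bucket name is illustrative):

```sh
# Build the create-bucket command, omitting the LocationConstraint for
# us-east-1, where AWS rejects it because it is the default region.
create_bucket_cmd() {
  bucket="$1"
  region="$2"
  cmd="aws s3api create-bucket --bucket ${bucket} --region ${region}"
  if [ "${region}" != "us-east-1" ]; then
    cmd="${cmd} --create-bucket-configuration LocationConstraint=${region}"
  fi
  echo "${cmd}"
}

create_bucket_cmd demo-bucket us-east-1
create_bucket_cmd demo-bucket eu-west-1
```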
ya-luotao and others added 2 commits March 12, 2026 16:26
The source GCS bucket is publicly readable, so files can be downloaded
via HTTPS + curl instead of requiring gsutil. This is useful for AWS
users who don't want to install the Google Cloud SDK for a one-time
setup step.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The prep-cluster step fails with "relation teams does not exist" if
migrations haven't been run first. Also documents the Supabase
workaround: the first migration creates auth schema objects that
Supabase already provides, so it must be skipped.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
self-host.md Outdated
Comment on lines +185 to +189
11. Run database migrations: `cd packages/db && make migrate`
> If using Supabase, the first migration (`20000101000000_auth.sql`) will fail because Supabase already provides the `auth` schema, `authenticated` role, and `auth.users` table. To skip it, manually mark it as applied before running `make migrate`:
> ```sh
> psql "$POSTGRES_CONNECTION_STRING" -c "INSERT INTO _migrations (version_id, is_applied) VALUES (20000101000000, true);"
> ```
Member:
This is not needed; the API deployed as part of `make plan && make apply` will execute migrations.

self-host.md Outdated
Comment on lines +149 to +182
8. Run `make copy-public-builds` to copy Firecracker kernels and rootfs to your S3 buckets
> This requires `gsutil` to download from the public GCS bucket and `aws` CLI to upload to your S3 buckets
8. Copy Firecracker kernels and rootfs to your S3 buckets. You have two options:

**Option A** — Using `make` (requires [`gsutil`](https://cloud.google.com/storage/docs/gsutil_install)):
```sh
make copy-public-builds
```

**Option B** — Without `gsutil` (download from public GCS bucket via HTTPS, then upload to S3):
```sh
# Set your bucket prefix (PREFIX + AWS_ACCOUNT_ID + "-")
BUCKET_PREFIX="e2b-YOUR_ACCOUNT_ID-"

# Download kernels and firecrackers
mkdir -p .kernels .firecrackers
for name in $(curl -s "https://storage.googleapis.com/storage/v1/b/e2b-prod-public-builds/o?prefix=kernels/" | python3 -c "import sys,json; [print(i['name']) for i in json.load(sys.stdin).get('items',[]) if int(i.get('size',0))>0]"); do
mkdir -p ".kernels/$(dirname "${name#kernels/}")"
curl -sL "https://storage.googleapis.com/e2b-prod-public-builds/$name" -o ".kernels/${name#kernels/}"
done
for name in $(curl -s "https://storage.googleapis.com/storage/v1/b/e2b-prod-public-builds/o?prefix=firecrackers/" | python3 -c "import sys,json; [print(i['name']) for i in json.load(sys.stdin).get('items',[]) if int(i.get('size',0))>0]"); do
mkdir -p ".firecrackers/$(dirname "${name#firecrackers/}")"
curl -sL "https://storage.googleapis.com/e2b-prod-public-builds/$name" -o ".firecrackers/${name#firecrackers/}"
done

# Upload to S3
aws s3 cp .kernels/ "s3://${BUCKET_PREFIX}fc-kernels/" --recursive --profile default
aws s3 cp .firecrackers/ "s3://${BUCKET_PREFIX}fc-versions/" --recursive --profile default
rm -rf .kernels .firecrackers
```
Member:
We could maybe replace the `copy-public-builds` AWS implementation with something like this instead:

```sh
mkdir -p ./.kernels
mkdir -p ./.firecrackers
aws s3 cp s3://e2b-prod-public-builds/kernels/ ./.kernels/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
aws s3 cp s3://e2b-prod-public-builds/firecrackers/ ./.firecrackers/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
aws s3 cp ./.kernels/ s3://${AWS_BUCKET_PREFIX}fc-kernels/ --recursive --profile ${AWS_PROFILE}
aws s3 cp ./.firecrackers/ s3://${AWS_BUCKET_PREFIX}fc-versions/ --recursive --profile ${AWS_PROFILE}
rm -rf ./.kernels
rm -rf ./.firecrackers
```

- Remove manual migration step: API's db-migrator prestart task runs
  migrations automatically during `make plan && make apply`
- Simplify Option B for copy-public-builds: use aws CLI with GCS
  S3-compatible endpoint (--no-sign-request --endpoint-url) instead
  of curl/python approach

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
self-host.md Outdated
Comment on lines +162 to +170
**Option B** — Without `gsutil` (download from public GCS bucket via HTTPS, then upload to S3):
**Option B** — Without `gsutil` (uses `aws` CLI with GCS S3-compatible endpoint):
```sh
# Set your bucket prefix (PREFIX + AWS_ACCOUNT_ID + "-")
BUCKET_PREFIX="e2b-YOUR_ACCOUNT_ID-"

# Download kernels and firecrackers
mkdir -p .kernels .firecrackers
for name in $(curl -s "https://storage.googleapis.com/storage/v1/b/e2b-prod-public-builds/o?prefix=kernels/" | python3 -c "import sys,json; [print(i['name']) for i in json.load(sys.stdin).get('items',[]) if int(i.get('size',0))>0]"); do
mkdir -p ".kernels/$(dirname "${name#kernels/}")"
curl -sL "https://storage.googleapis.com/e2b-prod-public-builds/$name" -o ".kernels/${name#kernels/}"
done
for name in $(curl -s "https://storage.googleapis.com/storage/v1/b/e2b-prod-public-builds/o?prefix=firecrackers/" | python3 -c "import sys,json; [print(i['name']) for i in json.load(sys.stdin).get('items',[]) if int(i.get('size',0))>0]"); do
mkdir -p ".firecrackers/$(dirname "${name#firecrackers/}")"
curl -sL "https://storage.googleapis.com/e2b-prod-public-builds/$name" -o ".firecrackers/${name#firecrackers/}"
done

# Upload to S3
aws s3 cp .kernels/ "s3://${BUCKET_PREFIX}fc-kernels/" --recursive --profile default
aws s3 cp .firecrackers/ "s3://${BUCKET_PREFIX}fc-versions/" --recursive --profile default
rm -rf .kernels .firecrackers
# Download from public GCS bucket via S3-compatible API
mkdir -p ./.kernels ./.firecrackers
aws s3 cp s3://e2b-prod-public-builds/kernels/ ./.kernels/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
aws s3 cp s3://e2b-prod-public-builds/firecrackers/ ./.firecrackers/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com

# Upload to your S3 buckets
aws s3 cp ./.kernels/ s3://${BUCKET_PREFIX}fc-kernels/ --recursive --profile ${AWS_PROFILE}
aws s3 cp ./.firecrackers/ s3://${BUCKET_PREFIX}fc-versions/ --recursive --profile ${AWS_PROFILE}
rm -rf ./.kernels ./.firecrackers
Member:
Please remove all this and just change the `make copy-public-builds` command for AWS to use only the aws CLI.

Per review feedback: instead of documenting an alternative, replace the
actual Makefile implementation. Uses aws s3 cp with --no-sign-request
and --endpoint-url https://storage.googleapis.com to download from the
public GCS bucket via the S3-compatible API. This removes the gsutil
dependency for AWS deployments entirely.

Also simplify the docs back to just `make copy-public-builds`.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
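For reference, the source-to-destination mapping that the reworked `copy-public-builds` relies on (kernels/ to the fc-kernels bucket, firecrackers/ to the fc-versions bucket) can be sketched like this; the loop echoes the commands rather than executing them, and the bucket prefix and profile values are illustrative:

```sh
# kernels/ -> <prefix>fc-kernels, firecrackers/ -> <prefix>fc-versions.
BUCKET_PREFIX="e2b-123456789012-"
AWS_PROFILE="default"
GCS_ENDPOINT="https://storage.googleapis.com"

for pair in kernels:fc-kernels firecrackers:fc-versions; do
  src="${pair%%:*}"   # prefix in the public GCS bucket
  dst="${pair##*:}"   # suffix of your S3 bucket name
  echo "aws s3 cp s3://e2b-prod-public-builds/${src}/ ./.${src}/ --recursive --no-sign-request --endpoint-url ${GCS_ENDPOINT}"
  echo "aws s3 cp ./.${src}/ s3://${BUCKET_PREFIX}${dst}/ --recursive --profile ${AWS_PROFILE}"
done
```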
@ValentaTomas ValentaTomas removed their request for review March 12, 2026 21:59