- Remove quotes and inline comments from `PREFIX` in env templates. Make includes env files literally, so quotes and trailing spaces before `#` comments become part of the value, breaking variable concatenation (e.g. `BUCKET_PREFIX` becomes `"e2b-" 528893196824-` instead of `e2b-528893196824-`).
- Handle S3 bucket creation for us-east-1. AWS rejects `LocationConstraint=us-east-1` since it's the default region; the Makefile now omits that flag for us-east-1.
- Fix the Packer AMI build step in self-host.md: the directory is `iac/provider-aws/nomad-cluster-disk-image`, not `iac/provider-aws/packer`.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
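The us-east-1 special case can be sketched roughly as follows (the function and bucket names here are hypothetical, not taken from the actual Makefile):

```shell
# Hypothetical sketch: aws s3api create-bucket rejects
# --create-bucket-configuration LocationConstraint=us-east-1 because
# us-east-1 is the default region, so the flag is added only for
# every other region.
create_bucket_cmd() {
  bucket="$1"
  region="$2"
  cmd="aws s3api create-bucket --bucket $bucket --region $region"
  if [ "$region" != "us-east-1" ]; then
    cmd="$cmd --create-bucket-configuration LocationConstraint=$region"
  fi
  echo "$cmd"
}

create_bucket_cmd demo-bucket us-east-1
create_bucket_cmd demo-bucket eu-west-1
```

The sketch only builds the command string; the real Makefile would execute it as part of `make init`.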
The source GCS bucket is publicly readable, so files can be downloaded via HTTPS + curl instead of requiring gsutil. This is useful for AWS users who don't want to install the Google Cloud SDK for a one-time setup step. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The prep-cluster step fails with "relation teams does not exist" if migrations haven't been run first. Also documents the Supabase workaround: the first migration creates auth schema objects that Supabase already provides, so it must be skipped. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
**sitole** reviewed on Mar 12, 2026 · self-host.md (outdated) · comment on lines +185 to +189:
````diff
+11. Run database migrations: `cd packages/db && make migrate`
+    > If using Supabase, the first migration (`20000101000000_auth.sql`) will fail because Supabase already provides the `auth` schema, `authenticated` role, and `auth.users` table. To skip it, manually mark it as applied before running `make migrate`:
+    > ```sh
+    > psql "$POSTGRES_CONNECTION_STRING" -c "INSERT INTO _migrations (version_id, is_applied) VALUES (20000101000000, true);"
+    > ```
````
> **sitole** (Member): This is not needed; the api deployed as part of `make plan && make apply` will execute migrations.
**sitole** requested changes on Mar 12, 2026 · self-host.md (outdated) · comment on lines +149 to +182:
````diff
-8. Run `make copy-public-builds` to copy Firecracker kernels and rootfs to your S3 buckets
-   > This requires `gsutil` to download from the public GCS bucket and `aws` CLI to upload to your S3 buckets
+8. Copy Firecracker kernels and rootfs to your S3 buckets. You have two options:
+
+   **Option A** — Using `make` (requires [`gsutil`](https://cloud.google.com/storage/docs/gsutil_install)):
+   ```sh
+   make copy-public-builds
+   ```
+
+   **Option B** — Without `gsutil` (download from public GCS bucket via HTTPS, then upload to S3):
+   ```sh
+   # Set your bucket prefix (PREFIX + AWS_ACCOUNT_ID + "-")
+   BUCKET_PREFIX="e2b-YOUR_ACCOUNT_ID-"
+
+   # Download kernels and firecrackers
+   mkdir -p .kernels .firecrackers
+   for name in $(curl -s "https://storage.googleapis.com/storage/v1/b/e2b-prod-public-builds/o?prefix=kernels/" | python3 -c "import sys,json; [print(i['name']) for i in json.load(sys.stdin).get('items',[]) if int(i.get('size',0))>0]"); do
+     mkdir -p ".kernels/$(dirname "${name#kernels/}")"
+     curl -sL "https://storage.googleapis.com/e2b-prod-public-builds/$name" -o ".kernels/${name#kernels/}"
+   done
+   for name in $(curl -s "https://storage.googleapis.com/storage/v1/b/e2b-prod-public-builds/o?prefix=firecrackers/" | python3 -c "import sys,json; [print(i['name']) for i in json.load(sys.stdin).get('items',[]) if int(i.get('size',0))>0]"); do
+     mkdir -p ".firecrackers/$(dirname "${name#firecrackers/}")"
+     curl -sL "https://storage.googleapis.com/e2b-prod-public-builds/$name" -o ".firecrackers/${name#firecrackers/}"
+   done
+
+   # Upload to S3
+   aws s3 cp .kernels/ "s3://${BUCKET_PREFIX}fc-kernels/" --recursive --profile default
+   aws s3 cp .firecrackers/ "s3://${BUCKET_PREFIX}fc-versions/" --recursive --profile default
+   rm -rf .kernels .firecrackers
+   ```
````
> **sitole** (Member): We can maybe replace the `copy-public-builds` aws implementation with something like this instead:
```sh
mkdir -p ./.kernels
mkdir -p ./.firecrackers
aws s3 cp s3://e2b-prod-public-builds/kernels/ ./.kernels/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
aws s3 cp s3://e2b-prod-public-builds/firecrackers/ ./.firecrackers/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
aws s3 cp ./.kernels/ s3://${AWS_BUCKET_PREFIX}fc-kernels/ --recursive --profile ${AWS_PROFILE}
aws s3 cp ./.firecrackers/ s3://${AWS_BUCKET_PREFIX}fc-versions/ --recursive --profile ${AWS_PROFILE}
rm -rf ./.kernels
rm -rf ./.firecrackers
```
- Remove manual migration step: the API's db-migrator prestart task runs migrations automatically during `make plan && make apply`
- Simplify Option B for copy-public-builds: use the aws CLI with the GCS S3-compatible endpoint (`--no-sign-request --endpoint-url`) instead of the curl/python approach

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
**sitole** reviewed on Mar 12, 2026 · self-host.md (outdated) · comment on lines +162 to +170:
````diff
-**Option B** — Without `gsutil` (download from public GCS bucket via HTTPS, then upload to S3):
+**Option B** — Without `gsutil` (uses `aws` CLI with GCS S3-compatible endpoint):
 ```sh
 # Set your bucket prefix (PREFIX + AWS_ACCOUNT_ID + "-")
 BUCKET_PREFIX="e2b-YOUR_ACCOUNT_ID-"
 
-# Download kernels and firecrackers
-mkdir -p .kernels .firecrackers
-for name in $(curl -s "https://storage.googleapis.com/storage/v1/b/e2b-prod-public-builds/o?prefix=kernels/" | python3 -c "import sys,json; [print(i['name']) for i in json.load(sys.stdin).get('items',[]) if int(i.get('size',0))>0]"); do
-  mkdir -p ".kernels/$(dirname "${name#kernels/}")"
-  curl -sL "https://storage.googleapis.com/e2b-prod-public-builds/$name" -o ".kernels/${name#kernels/}"
-done
-for name in $(curl -s "https://storage.googleapis.com/storage/v1/b/e2b-prod-public-builds/o?prefix=firecrackers/" | python3 -c "import sys,json; [print(i['name']) for i in json.load(sys.stdin).get('items',[]) if int(i.get('size',0))>0]"); do
-  mkdir -p ".firecrackers/$(dirname "${name#firecrackers/}")"
-  curl -sL "https://storage.googleapis.com/e2b-prod-public-builds/$name" -o ".firecrackers/${name#firecrackers/}"
-done
-
-# Upload to S3
-aws s3 cp .kernels/ "s3://${BUCKET_PREFIX}fc-kernels/" --recursive --profile default
-aws s3 cp .firecrackers/ "s3://${BUCKET_PREFIX}fc-versions/" --recursive --profile default
-rm -rf .kernels .firecrackers
+# Download from public GCS bucket via S3-compatible API
+mkdir -p ./.kernels ./.firecrackers
+aws s3 cp s3://e2b-prod-public-builds/kernels/ ./.kernels/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
+aws s3 cp s3://e2b-prod-public-builds/firecrackers/ ./.firecrackers/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
+
+# Upload to your S3 buckets
+aws s3 cp ./.kernels/ s3://${BUCKET_PREFIX}fc-kernels/ --recursive --profile ${AWS_PROFILE}
+aws s3 cp ./.firecrackers/ s3://${BUCKET_PREFIX}fc-versions/ --recursive --profile ${AWS_PROFILE}
+rm -rf ./.kernels ./.firecrackers
 ```
````
> **sitole** (Member): Please remove all this and just replace the `make copy-public-builds` command for aws to only use the aws cli.
Per review feedback: instead of documenting an alternative, replace the actual Makefile implementation. Uses `aws s3 cp` with `--no-sign-request` and `--endpoint-url https://storage.googleapis.com` to download from the public GCS bucket via the S3-compatible API. This removes the `gsutil` dependency for AWS deployments entirely. Also simplify the docs back to just `make copy-public-builds`.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
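The resulting Makefile target might look roughly like this (the target name follows the docs above; the exact variable names are assumptions, not the repository's actual ones):

```make
# Sketch only: download from the public GCS bucket via its S3-compatible
# endpoint (no gsutil needed), then upload to the deployment's own buckets.
copy-public-builds:
	mkdir -p ./.kernels ./.firecrackers
	aws s3 cp s3://e2b-prod-public-builds/kernels/ ./.kernels/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
	aws s3 cp s3://e2b-prod-public-builds/firecrackers/ ./.firecrackers/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
	aws s3 cp ./.kernels/ s3://$(BUCKET_PREFIX)fc-kernels/ --recursive --profile $(AWS_PROFILE)
	aws s3 cp ./.firecrackers/ s3://$(BUCKET_PREFIX)fc-versions/ --recursive --profile $(AWS_PROFILE)
	rm -rf ./.kernels ./.firecrackers
```

Note the commands are identical to the reviewer's suggestion; only the variable references use Make's `$(VAR)` form since they now live in a Makefile recipe.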
Summary

- Remove quotes and inline comments from `PREFIX` in `.env.aws.template` and `.env.gcp.template`. Make includes env files literally, so `PREFIX="e2b-" # comment` results in the value `"e2b-"` (with literal quotes and a trailing space), which breaks `BUCKET_PREFIX` concatenation: the shell sees `528893196824-` as a command instead of part of the variable value.
- Skip `--create-bucket-configuration LocationConstraint=us-east-1`, since us-east-1 is the default region. The Makefile now conditionally omits this flag for us-east-1.
- Fix the Packer AMI build step: the directory is `iac/provider-aws/nomad-cluster-disk-image`, not the non-existent `iac/provider-aws/packer`. Also clarified that it builds a single shared AMI (not per-node-type AMIs) and added the actual `make init` + `make build` commands.

Test plan

- `make init` works in us-east-1 with fresh S3 bucket creation
- `make init` still works in non-us-east-1 regions

🤖 Generated with Claude Code
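The env-file quoting problem described above can be reproduced with a minimal makefile (the file paths and makefile contents here are made up for illustration; the account id is the one from the commit message):

```shell
# Reproduce the gotcha: GNU Make reads included env files as makefile
# syntax, so quotes are literal and the whitespace before an inline '#'
# comment stays in the value.
cat > /tmp/env.bad <<'EOF'
PREFIX="e2b-" # bucket prefix
EOF
cat > /tmp/demo.mk <<'EOF'
include /tmp/env.bad
ACCOUNT_ID := 528893196824
BUCKET_PREFIX := $(PREFIX)$(ACCOUNT_ID)-
$(info [$(BUCKET_PREFIX)])
all: ;
EOF
make -s -f /tmp/demo.mk   # prints: ["e2b-" 528893196824-]
```

Removing the quotes and the inline comment from the env file yields the intended `e2b-528893196824-`, which is why the fix strips both from the templates.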