Merged — changes from 3 commits
3 changes: 2 additions & 1 deletion .env.aws.template
@@ -15,7 +15,8 @@ DOMAIN_NAME=
# These variables need to be specified but have default values

PROVIDER=aws
PREFIX="e2b-" # prefix identifier for all resources
# prefix identifier for all resources
PREFIX=e2b-

# prod, staging, dev
TERRAFORM_ENVIRONMENT=dev
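The quoting change above is not cosmetic. A sketch of the failure mode, assuming (as many `.env` parsers do) that values are taken literally rather than shell-unquoted:

```shell
# Illustration (assumed parser behavior): a naive KEY=VALUE reader keeps
# quote characters as part of the value, so PREFIX="e2b-" would prefix
# resource names with literal quotes; PREFIX=e2b- avoids that.
line='PREFIX="e2b-"'
value="${line#*=}"
echo "quoted: ${value}instance"     # prints: quoted: "e2b-"instance
line='PREFIX=e2b-'
value="${line#*=}"
echo "unquoted: ${value}instance"   # prints: unquoted: e2b-instance
```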
3 changes: 2 additions & 1 deletion .env.gcp.template
@@ -20,7 +20,8 @@ POSTGRES_CONNECTION_STRING=
# These variables need to be specified but have default values

PROVIDER=gcp
PREFIX="e2b-" # prefix identifier for all resources
# prefix identifier for all resources
PREFIX=e2b-

# prod, staging, dev
TERRAFORM_ENVIRONMENT=dev
7 changes: 6 additions & 1 deletion iac/provider-aws/Makefile
@@ -51,7 +51,12 @@ switch:
.PHONY: init
init:
# Create S3 bucket for Terraform state if it doesn't exist
aws s3api create-bucket --bucket $(TERRAFORM_STATE_BUCKET) --region $(TEMPLATE_BUCKET_LOCATION) --profile $(AWS_PROFILE) --create-bucket-configuration LocationConstraint=$(TEMPLATE_BUCKET_LOCATION) >/dev/null 2>&1 || true
# us-east-1 requires omitting LocationConstraint; all other regions require it
@if [ "$(TEMPLATE_BUCKET_LOCATION)" = "us-east-1" ]; then \
aws s3api create-bucket --bucket $(TERRAFORM_STATE_BUCKET) --region $(TEMPLATE_BUCKET_LOCATION) --profile $(AWS_PROFILE) 2>/dev/null || true; \
else \
aws s3api create-bucket --bucket $(TERRAFORM_STATE_BUCKET) --region $(TEMPLATE_BUCKET_LOCATION) --profile $(AWS_PROFILE) --create-bucket-configuration LocationConstraint=$(TEMPLATE_BUCKET_LOCATION) 2>/dev/null || true; \
fi
$(tf_vars) $(TF) init -upgrade -reconfigure -backend-config="bucket=${TERRAFORM_STATE_BUCKET}"
$(tf_vars) $(TF) apply -target=module.init -input=false -compact-warnings
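The region rule encoded in the recipe above can be factored into a standalone helper for testing outside `make`; a sketch with placeholder bucket names (`build_create_bucket_cmd` is ours, not part of the Makefile):

```shell
# us-east-1 rejects an explicit LocationConstraint (InvalidLocationConstraint);
# every other region requires one. Build the create-bucket command accordingly.
build_create_bucket_cmd() {
  bucket="$1"
  region="$2"
  cmd="aws s3api create-bucket --bucket $bucket --region $region"
  if [ "$region" != "us-east-1" ]; then
    cmd="$cmd --create-bucket-configuration LocationConstraint=$region"
  fi
  echo "$cmd"
}

build_create_bucket_cmd e2b-terraform-state us-east-1
build_create_bucket_cmd e2b-terraform-state eu-west-1
```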

51 changes: 44 additions & 7 deletions self-host.md
@@ -82,7 +82,12 @@ Check if you can use config for terraform state management
- e2b-posthog-api-key (optional, for monitoring)
9. Run `make plan-without-jobs` and then `make apply`
10. Run `make plan` and then `make apply`. Note: this will work only after the TLS certificates have been issued, which can take some time; you can check the status in the Google Cloud Console
11. Run database migrations: `cd packages/db && make migrate`
> If using Supabase, the first migration (`20000101000000_auth.sql`) will fail because Supabase already provides the `auth` schema, `authenticated` role, and `auth.users` table. To skip it, manually mark it as applied before running `make migrate`:
> ```sh
> psql "$POSTGRES_CONNECTION_STRING" -c "INSERT INTO _migrations (version_id, is_applied) VALUES (20000101000000, true);"
> ```
12. Set up data in the cluster by running `make prep-cluster` in `packages/shared` to create an initial user and team and build a base template.
- You can also run `make seed-db` in `packages/db` to create more users and teams.
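The `version_id` used in the step 11 note is just the numeric timestamp prefix of the migration filename (goose-style `NNN_name.sql` naming); a one-line illustration:

```shell
# Derive the version_id to mark as applied from the migration filename.
fname="20000101000000_auth.sql"
echo "${fname%%_*}"   # prints: 20000101000000
```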

### GCP Troubleshooting
@@ -140,17 +145,49 @@ Now, you should see the right quota options in `All Quotas` and be able to request an increase
- `{prefix}supabase-jwt-secrets` - Supabase JWT secret (optional / required for the [E2B dashboard](https://github.com/e2b-dev/dashboard))
- `{prefix}grafana` - JSON with `API_KEY`, `OTLP_URL`, `OTEL_COLLECTOR_TOKEN`, `USERNAME` keys (optional, for monitoring)
- `{prefix}launch-darkly-api-key` - LaunchDarkly SDK key (optional, for feature flags)
6. Build the Packer AMI for cluster nodes (a single shared AMI used by all node types):
```sh
cd iac/provider-aws/nomad-cluster-disk-image
make init # install Packer plugins
make build # build the AMI (~5 min, launches a t3.large)
```
7. Run `make build-and-upload` to build and push container images and binaries
8. Copy Firecracker kernels and rootfs to your S3 buckets. You have two options:

**Option A** — Using `make` (requires [`gsutil`](https://cloud.google.com/storage/docs/gsutil_install)):
```sh
make copy-public-builds
```

**Option B** — Without `gsutil` (download from public GCS bucket via HTTPS, then upload to S3):
```sh
# Set your bucket prefix (PREFIX + AWS_ACCOUNT_ID + "-")
BUCKET_PREFIX="e2b-YOUR_ACCOUNT_ID-"

# Download kernels and firecrackers
mkdir -p .kernels .firecrackers
for name in $(curl -s "https://storage.googleapis.com/storage/v1/b/e2b-prod-public-builds/o?prefix=kernels/" | python3 -c "import sys,json; [print(i['name']) for i in json.load(sys.stdin).get('items',[]) if int(i.get('size',0))>0]"); do
mkdir -p ".kernels/$(dirname "${name#kernels/}")"
curl -sL "https://storage.googleapis.com/e2b-prod-public-builds/$name" -o ".kernels/${name#kernels/}"
done
for name in $(curl -s "https://storage.googleapis.com/storage/v1/b/e2b-prod-public-builds/o?prefix=firecrackers/" | python3 -c "import sys,json; [print(i['name']) for i in json.load(sys.stdin).get('items',[]) if int(i.get('size',0))>0]"); do
mkdir -p ".firecrackers/$(dirname "${name#firecrackers/}")"
curl -sL "https://storage.googleapis.com/e2b-prod-public-builds/$name" -o ".firecrackers/${name#firecrackers/}"
done

# Upload to S3
aws s3 cp .kernels/ "s3://${BUCKET_PREFIX}fc-kernels/" --recursive --profile default
aws s3 cp .firecrackers/ "s3://${BUCKET_PREFIX}fc-versions/" --recursive --profile default
rm -rf .kernels .firecrackers
```
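Two caveats with the Option B loops: the listing endpoint returns at most one page of results (`nextPageToken` is not followed), and the `size > 0` filter exists to skip zero-byte "directory" placeholder entries that the bucket listing also returns. The filter itself, shown on a small inline sample rather than the live bucket:

```shell
# The python3 filter used above keeps only objects with size > 0,
# dropping zero-byte placeholder entries like "kernels/".
sample='{"items":[{"name":"kernels/vmlinux-5.10.186/vmlinux.bin","size":"123"},{"name":"kernels/","size":"0"}]}'
echo "$sample" | python3 -c "import sys,json; [print(i['name']) for i in json.load(sys.stdin).get('items',[]) if int(i.get('size',0))>0]"
# prints: kernels/vmlinux-5.10.186/vmlinux.bin
```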
Reviewer comment:
We can maybe replace copy-public-builds aws implementation to something like this instead:

```sh
mkdir -p ./.kernels
mkdir -p ./.firecrackers
aws s3 cp s3://e2b-prod-public-builds/kernels/ ./.kernels/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
aws s3 cp s3://e2b-prod-public-builds/firecrackers/ ./.firecrackers/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
aws s3 cp ./.kernels/ s3://${AWS_BUCKET_PREFIX}fc-kernels/ --recursive --profile ${AWS_PROFILE}
aws s3 cp ./.firecrackers/ s3://${AWS_BUCKET_PREFIX}fc-versions/ --recursive --profile ${AWS_PROFILE}
rm -rf ./.kernels
rm -rf ./.firecrackers
```

9. Run `make plan-without-jobs` and then `make apply` to provision the cluster infrastructure
10. Run `make plan` and then `make apply` to deploy all Nomad jobs
11. Run database migrations: `cd packages/db && make migrate`
> If using Supabase, the first migration (`20000101000000_auth.sql`) will fail because Supabase already provides the `auth` schema, `authenticated` role, and `auth.users` table. To skip it, manually mark it as applied before running `make migrate`:
> ```sh
> psql "$POSTGRES_CONNECTION_STRING" -c "INSERT INTO _migrations (version_id, is_applied) VALUES (20000101000000, true);"
> ```
Reviewer comment:
This is not needed; the API deployed as part of `make plan && make apply` will execute migrations.

12. Set up data in the cluster by running `make prep-cluster` in `packages/shared` to create an initial user and team and build a base template

### AWS Architecture
