Merged

Changes from 4 commits
3 changes: 2 additions & 1 deletion .env.aws.template
@@ -15,7 +15,8 @@ DOMAIN_NAME=
# These variables need to be specified but have default values

PROVIDER=aws
PREFIX="e2b-" # prefix identifier for all resources
# prefix identifier for all resources
PREFIX=e2b-

# prod, staging, dev
TERRAFORM_ENVIRONMENT=dev
3 changes: 2 additions & 1 deletion .env.gcp.template
@@ -20,7 +20,8 @@ POSTGRES_CONNECTION_STRING=
# These variables need to be specified but have default values

PROVIDER=gcp
PREFIX="e2b-" # prefix identifier for all resources
# prefix identifier for all resources
PREFIX=e2b-

# prod, staging, dev
TERRAFORM_ENVIRONMENT=dev
7 changes: 6 additions & 1 deletion iac/provider-aws/Makefile
@@ -51,7 +51,12 @@ switch:
.PHONY: init
init:
# Create S3 bucket for Terraform state if it doesn't exist
aws s3api create-bucket --bucket $(TERRAFORM_STATE_BUCKET) --region $(TEMPLATE_BUCKET_LOCATION) --profile $(AWS_PROFILE) --create-bucket-configuration LocationConstraint=$(TEMPLATE_BUCKET_LOCATION) >/dev/null 2>&1 || true
# us-east-1 requires omitting LocationConstraint; all other regions require it
@if [ "$(TEMPLATE_BUCKET_LOCATION)" = "us-east-1" ]; then \
aws s3api create-bucket --bucket $(TERRAFORM_STATE_BUCKET) --region $(TEMPLATE_BUCKET_LOCATION) --profile $(AWS_PROFILE) 2>/dev/null || true; \
else \
aws s3api create-bucket --bucket $(TERRAFORM_STATE_BUCKET) --region $(TEMPLATE_BUCKET_LOCATION) --profile $(AWS_PROFILE) --create-bucket-configuration LocationConstraint=$(TEMPLATE_BUCKET_LOCATION) 2>/dev/null || true; \
fi
$(tf_vars) $(TF) init -upgrade -reconfigure -backend-config="bucket=${TERRAFORM_STATE_BUCKET}"
$(tf_vars) $(TF) apply -target=module.init -input=false -compact-warnings
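The region-dependent branch in the new `init` target can be exercised outside `make`. Here is a minimal sketch (the `create_bucket_cmd` helper is hypothetical) that only prints the command it would run, so it can be checked without AWS credentials:

```sh
# Hypothetical helper mirroring the Makefile branch: us-east-1 is S3's
# default location and rejects an explicit LocationConstraint, while
# every other region requires one.
create_bucket_cmd() {
  bucket="$1"
  region="$2"
  if [ "$region" = "us-east-1" ]; then
    echo "aws s3api create-bucket --bucket $bucket --region $region"
  else
    echo "aws s3api create-bucket --bucket $bucket --region $region --create-bucket-configuration LocationConstraint=$region"
  fi
}

# Print the command for both cases:
create_bucket_cmd my-tf-state us-east-1
create_bucket_cmd my-tf-state eu-west-1
```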

35 changes: 28 additions & 7 deletions self-host.md
@@ -81,7 +81,7 @@ Check if you can use config for terraform state management
> Get Supabase JWT Secret: go to the [Supabase dashboard](https://supabase.com/dashboard) -> Select your Project -> Project Settings -> Data API -> JWT Settings
- e2b-posthog-api-key (optional, for monitoring)
9. Run `make plan-without-jobs` and then `make apply`
10. Run `make plan` and then `make apply`. Note: This will only succeed after the TLS certificates have been issued. That can take some time; you can check the status in the Google Cloud Console
10. Run `make plan` and then `make apply`. Note: This will only succeed after the TLS certificates have been issued. That can take some time; you can check the status in the Google Cloud Console. Database migrations run automatically via the API's db-migrator task.
11. Set up data in the cluster by running `make prep-cluster` in `packages/shared` to create an initial user and team and build a base template.
- You can also run `make seed-db` in `packages/db` to create more users and teams.

@@ -140,16 +140,37 @@ Now, you should see the right quota options in `All Quotas` and be able to request
- `{prefix}supabase-jwt-secrets` - Supabase JWT secret (optional / required for the [E2B dashboard](https://github.com/e2b-dev/dashboard))
- `{prefix}grafana` - JSON with `API_KEY`, `OTLP_URL`, `OTEL_COLLECTOR_TOKEN`, `USERNAME` keys (optional, for monitoring)
- `{prefix}launch-darkly-api-key` - LaunchDarkly SDK key (optional, for feature flags)
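For the `{prefix}grafana` secret, the payload is a single JSON object with the four keys listed above. A sketch with placeholder values (assumes `PREFIX=e2b-`; the commented command is how one might store it with the AWS CLI):

```sh
# Sketch of the {prefix}grafana secret payload; every value below is a
# placeholder, not a real credential.
GRAFANA_JSON='{"API_KEY":"glc_placeholder","OTLP_URL":"https://otlp.example.grafana.net/otlp","OTEL_COLLECTOR_TOKEN":"token_placeholder","USERNAME":"123456"}'

# Store it (requires valid AWS credentials; uncomment to run):
# aws secretsmanager put-secret-value \
#   --secret-id e2b-grafana \
#   --secret-string "$GRAFANA_JSON"

echo "$GRAFANA_JSON"
```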
6. Build Packer AMIs for the cluster nodes:
6. Build the Packer AMI for cluster nodes (a single shared AMI used by all node types):
```sh
cd iac/provider-aws/packer
# Build AMIs for control server, API, client, clickhouse, and build nodes
cd iac/provider-aws/nomad-cluster-disk-image
make init # install Packer plugins
make build # build the AMI (~5 min, launches a t3.large)
```
7. Run `make build-and-upload` to build and push container images and binaries
8. Run `make copy-public-builds` to copy Firecracker kernels and rootfs to your S3 buckets
> This requires `gsutil` to download from the public GCS bucket and `aws` CLI to upload to your S3 buckets
8. Copy Firecracker kernels and rootfs to your S3 buckets. You have two options:

**Option A** — Using `make` (requires [`gsutil`](https://cloud.google.com/storage/docs/gsutil_install)):
```sh
make copy-public-builds
```

**Option B** — Without `gsutil` (uses `aws` CLI with GCS S3-compatible endpoint):
```sh
# Set your bucket prefix (PREFIX + AWS_ACCOUNT_ID + "-")
BUCKET_PREFIX="e2b-YOUR_ACCOUNT_ID-"

# Download from public GCS bucket via S3-compatible API
mkdir -p ./.kernels ./.firecrackers
aws s3 cp s3://e2b-prod-public-builds/kernels/ ./.kernels/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com
aws s3 cp s3://e2b-prod-public-builds/firecrackers/ ./.firecrackers/ --recursive --no-sign-request --endpoint-url https://storage.googleapis.com

# Upload to your S3 buckets
aws s3 cp ./.kernels/ s3://${BUCKET_PREFIX}fc-kernels/ --recursive --profile ${AWS_PROFILE}
aws s3 cp ./.firecrackers/ s3://${BUCKET_PREFIX}fc-versions/ --recursive --profile ${AWS_PROFILE}
rm -rf ./.kernels ./.firecrackers
```
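Instead of hand-editing `BUCKET_PREFIX` in Option B, it can be derived from the prefix and account ID. A sketch (the `bucket_prefix` helper is hypothetical; the account ID itself can be fetched with `aws sts get-caller-identity --query Account --output text`):

```sh
# Hypothetical helper: compose the bucket prefix following the
# {PREFIX}{ACCOUNT_ID}- naming convention used by the S3 buckets.
bucket_prefix() {
  prefix="$1"
  account_id="$2"
  echo "${prefix}${account_id}-"
}

# Example with a placeholder account ID:
bucket_prefix e2b- 123456789012
```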
9. Run `make plan-without-jobs` and then `make apply` to provision the cluster infrastructure
10. Run `make plan` and then `make apply` to deploy all Nomad jobs
10. Run `make plan` and then `make apply` to deploy all Nomad jobs (this also runs database migrations automatically via the API's db-migrator task)
11. Set up data in the cluster by running `make prep-cluster` in `packages/shared` to create an initial user and team and build a base template

### AWS Architecture