Tools
- Used for building the disk image of the orchestrator client and server
- Terraform (v1.5.x)
  - We ask for v1.5.x because, starting from v1.6, Terraform switched its license from the Mozilla Public License to the Business Source License.
  - The last version of Terraform released under the Mozilla Public License is v1.5.7.
- Google Cloud CLI (`gcloud`)
  - Used for managing the infrastructure on Google Cloud
  - Be sure to authenticate:

    ```sh
    gcloud auth login
    gcloud auth application-default login
    ```
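Since only the Terraform 1.5.x line is acceptable here, a setup script can gate on the installed version before doing anything else. A minimal sketch (the function name and the gating idea are ours, not part of the repo):

```python
def is_supported_terraform(version: str) -> bool:
    """Return True only for the MPL-licensed 1.5.x release line."""
    parts = version.lstrip("v").split(".")
    if len(parts) < 2:
        return False
    # v1.5.7 is the last MPL-licensed release; anything 1.6+ is BSL-licensed.
    return (parts[0], parts[1]) == ("1", "5")


if __name__ == "__main__":
    print(is_supported_terraform("v1.5.7"))  # True
    print(is_supported_terraform("1.6.0"))   # False
```

In practice you would feed this the output of `terraform version -json` or similar; the string handling above is the only part sketched here.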
Accounts
- Cloudflare account
- Domain on Cloudflare
- GCP account + project
- PostgreSQL database (only Supabase's DB is supported for now)
Optional
Recommended for monitoring and logging
- Grafana Account & Stack
- PostHog account
Check if you can use a config for Terraform state management.
- Go to https://console.cloud.google.com and create a new GCP project.
- Make sure your quota allows at least 2500 GB of `Persistent Disk SSD (GB)` and at least 24 `CPUs`.
- Create `.env.prod`, `.env.staging`, or `.env.dev` from `.env.template`. You can pick any of them. Make sure to fill in the values; all are required unless specified otherwise.
  - Get the Postgres database connection string from your database, e.g. from Supabase: create a new project in Supabase, then go to your project in Supabase -> Settings -> Database -> Connection Strings -> Postgres -> Direct or Shared. The variant needs to be IPv4 compatible: either use Shared or use the IPv4 add-on in the Connect screen.
- Run `make set-env ENV={prod,staging,dev}` to start using your env.
- Run `make provider-login` to log in to `gcloud`.
- Run `make init`. If this errors, run it a second time--it's due to a race condition: Terraform enables API access for the various GCP services, and this can take several seconds. A full list of services that will be enabled for API access:
- Run `make build-and-upload`.
- Run `make copy-public-builds`. This will copy the kernel and rootfs builds for Firecracker to your bucket. (You can also build your own kernel and Firecracker rootfs.)
- For the following secrets, Terraform creates only empty secret containers in GCP Secret Manager; you need to add a secret version with the actual value. Go to GCP Secret Manager, click on the secret, then click "New Version" to add the value for each of the following:
  - `e2b-cloudflare-api-token` (required)
    - Get a Cloudflare API token: go to the Cloudflare dashboard -> Manage Account -> Account API Tokens -> Create Token -> Edit Zone DNS -> in "Zone Resources" select your domain and generate the token.
  - `e2b-postgres-connection-string` (required)
  - `e2b-supabase-jwt-secrets` (optional; required to self-host the E2B dashboard)
    - Get the Supabase JWT secret: go to the Supabase dashboard -> select your project -> Project Settings -> Data API -> JWT Settings.
  - `e2b-posthog-api-key` (optional, for monitoring)
- Run `make plan-without-jobs` and then `make apply`.
- Run `make plan` and then `make apply`. Note: this will only work after the TLS certificates have been issued. It can take some time; you can check the status in the Google Cloud Console.
- Set up data in the cluster by running `make prep-cluster` in `packages/shared` to create an initial user, create a team, and build a base template.
- You can also run `make seed-db` in `packages/db` to create more users and teams.
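The Postgres connection string from the steps above has to point at an IPv4-compatible endpoint, and a malformed string is easiest to catch before it lands in the env file and the secret. A small standard-library sketch for splitting a DSN into its parts (the DSN shown is a made-up example, not a real credential):

```python
from urllib.parse import urlsplit


def describe_dsn(dsn: str) -> dict:
    """Split a Postgres connection string into parts for a quick sanity check."""
    u = urlsplit(dsn)
    return {
        "scheme": u.scheme,                 # expect postgres/postgresql
        "host": u.hostname,
        "port": u.port or 5432,             # Postgres default port
        "database": u.path.lstrip("/"),
        "has_password": u.password is not None,
    }


if __name__ == "__main__":
    # Hypothetical Supabase-style DSN; substitute your own before checking.
    print(describe_dsn("postgresql://postgres:secret@db.example.supabase.co:5432/postgres"))
```

Whether the host actually resolves over IPv4 still has to be checked against your network (e.g. with a DNS lookup); this sketch only validates the string's shape.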
When using the SDK, pass the domain when creating a new Sandbox in the JS/TS SDK:

```js
import { Sandbox } from "e2b";

const sandbox = await Sandbox.create({
  domain: "<your-domain>",
});
```

or in the Python SDK:

```python
from e2b import Sandbox

sandbox = Sandbox.create(domain="<your-domain>")
```

When using the CLI, you can pass the domain as well:

```sh
E2B_DOMAIN=<your-domain> e2b <command>
```

To access the Nomad web UI, go to https://nomad.<your-domain.com>. Go to sign in, and when prompted for an API token, you can find it in GCP Secret Manager. From there, you can see Nomad jobs and tasks for both the client and server, including logging.
To update jobs running in the cluster, look inside `iac/provider-gcp/nomad/jobs/` for the config files. This can be useful for setting up your logging and monitoring agents.
If any problems arise, open a GitHub issue on the repo and we'll look into it.
E2B uses Firecracker for sandboxes.
You can build your own kernel and Firecracker version from source by running `make build-and-upload-fc-components`.
- Note: this needs to be done on a Linux machine due to case-sensitivity requirements for the file system--otherwise you'll get an error during the automated git step complaining about unsaved changes. Kernels and Firecracker versions could alternatively be sourced elsewhere.
- `make init` - sets up the Terraform environment
- `make plan` - plans the Terraform changes
- `make apply` - applies the Terraform changes; you have to run `make plan` before this one
- `make plan-without-jobs` - plans the Terraform changes without provisioning Nomad jobs
- `make plan-only-jobs` - plans the Terraform changes only for provisioning Nomad jobs
- `make destroy` - destroys the cluster
- `make version` - increments the repo version
- `make build-and-upload` - builds and uploads the Docker images, binaries, and cluster disk image
- `make copy-public-builds` - copies the old envd binary, kernels, and Firecracker versions from the public bucket to your bucket
- `make migrate` - runs the migrations for your database
- `make provider-login` - logs in to the cloud provider
- `make switch-env ENV={prod,staging,dev}` - switches the environment
- `make import TARGET={resource} ID={resource_id}` - imports already created resources into the Terraform state
- `make setup-ssh` - sets up the SSH key for the environment (useful for remote debugging)
- `make connect-orchestrator` - establishes the SSH connection to the remote orchestrator (for testing the API locally)
Quotas not available
If you can't find the quota under All Quotas in the GCP Console, create and delete a dummy VM before proceeding to step 2 of the self-deploy guide. This will create additional quotas and policies in GCP:

```sh
gcloud compute instances create dummy-init --project=YOUR-PROJECT-ID --zone=YOUR-ZONE --machine-type=e2-medium --boot-disk-type=pd-ssd --no-address
```

Wait a minute and destroy the VM:

```sh
gcloud compute instances delete dummy-init --zone=YOUR-ZONE --quiet
```

Now you should see the right quota options under All Quotas and be able to request the correct size.