This project provides a minimal microservice architecture for remote code execution supporting Go and C++.
- api-gateway: Accepts submissions, exposes status/result endpoints.
- executor-go: Consumes SQS messages for Go jobs, runs `go run` inside the container, uploads output to S3, updates DynamoDB.
- executor-cpp: Consumes SQS messages for C++ jobs, compiles with `g++ -std=c++17 -O2`, runs the binary, uploads output to S3, updates DynamoDB.
- shared: Common models, config, store (DynamoDB helpers), and queue helpers.
- Client POSTs `/submit` with `{ userId, language: go|cpp, code, input }`. `api-gateway` stores the job (status=queued) in DynamoDB and pushes a minimal message `{executionId, language}` to SQS.
- The appropriate executor polls SQS, fetches the job from DynamoDB, and marks it running.
- Code is executed with a timeout (`EXEC_TIMEOUT_SEC`, default 10s). Output + stderr are combined into the S3 object `outputs/<executionId>.txt`.
- DynamoDB is updated with `status` (completed|failed|timeout), `outputPath`, `stdoutPreview`, timestamps, and the error if any.
- Client polls `/status?id=<executionId>` or requests `/result?id=<executionId>` for a presigned URL + preview.
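For reference, the minimal queue payload can be modeled with a small Go struct. This is a sketch; the actual field names live in `shared` and may differ:

```go
// Sketch of the minimal message body api-gateway pushes to SQS.
// Field names are illustrative; see shared's models for the real ones.
type QueueMessage struct {
	ExecutionID string `json:"executionId"`
	Language    string `json:"language"` // "go" or "cpp"
}
```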
| Field | Description |
| ----- | ----------- |
| executionId (PK) | Unique job id |
| userId | Arbitrary user identifier |
| language | go or cpp |
| code | Raw source code |
| input | Optional stdin content |
| status | queued \| running \| completed \| failed \| timeout |
| error | Error/truncated stderr message |
| outputPath | S3 URI of combined output file |
| stdoutPreview | First ~500 chars of stdout |
| createdAt / updatedAt / startedAt / completedAt | Timestamps (RFC3339) |
| execDurationMs | Milliseconds runtime (excludes S3 upload) |
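The table maps naturally onto a Go model. A minimal sketch, assuming the aws-sdk-go-v2 attributevalue marshaller; names are taken from the table above, not from the canonical model in `shared`:

```go
// Sketch of the job record stored in DynamoDB; attribute names follow
// the schema table above and may differ from shared's actual model.
type Job struct {
	ExecutionID    string `dynamodbav:"executionId"` // partition key
	UserID         string `dynamodbav:"userId"`
	Language       string `dynamodbav:"language"` // "go" or "cpp"
	Code           string `dynamodbav:"code"`
	Input          string `dynamodbav:"input,omitempty"`
	Status         string `dynamodbav:"status"` // queued|running|completed|failed|timeout
	Error          string `dynamodbav:"error,omitempty"`
	OutputPath     string `dynamodbav:"outputPath,omitempty"`
	StdoutPreview  string `dynamodbav:"stdoutPreview,omitempty"`
	CreatedAt      string `dynamodbav:"createdAt"` // RFC3339
	UpdatedAt      string `dynamodbav:"updatedAt"`
	StartedAt      string `dynamodbav:"startedAt,omitempty"`
	CompletedAt    string `dynamodbav:"completedAt,omitempty"`
	ExecDurationMs int64  `dynamodbav:"execDurationMs,omitempty"`
}
```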
Set these for every service (compose passes through):
- `AWS_REGION`
- `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY` (dummy values acceptable for LocalStack)
- `DYNAMODB_TABLE`
- `CODE_EXEC_BUCKET`
- `SQS_QUEUE_URL`
- `EXEC_TIMEOUT_SEC` (executors only, optional)
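A typical way a service might read these is a small env helper. This is a sketch, not the actual code in `shared`'s config package:

```go
// Sketch of env-driven config loading with a default for the optional
// timeout; shared's real config package may differ.
package config

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// ExecTimeout returns EXEC_TIMEOUT_SEC as a duration, falling back to
// the documented 10s default when unset or invalid.
func ExecTimeout() time.Duration {
	if v := os.Getenv("EXEC_TIMEOUT_SEC"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return time.Duration(n) * time.Second
		}
	}
	return 10 * time.Second
}

// MustEnv fetches a required variable such as DYNAMODB_TABLE or
// SQS_QUEUE_URL and fails fast if it is missing.
func MustEnv(key string) string {
	v := os.Getenv(key)
	if v == "" {
		panic(fmt.Sprintf("missing required env var %s", key))
	}
	return v
}
```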
You can use real AWS resources or LocalStack (not yet wired here—future enhancement).
```bash
# Ensure env vars exported or placed in a .env file for docker-compose
docker compose build
docker compose up -d
```

Submit a Go job:

```bash
curl -X POST http://localhost:8080/submit \
  -H 'Content-Type: application/json' \
  -d '{"userId":"u1","language":"go","code":"package main\nimport (\n\t\"fmt\"\n)\nfunc main(){fmt.Println(\"hi\")}","input":""}'
```

Response:

```json
{
"executionId": "...",
"status": "queued",
"language": "go",
...
}
```

Check status:

```bash
curl "http://localhost:8080/status?id=<executionId>"
```

The response will include: status, stdoutPreview (after completion), error, outputPath.
curl "http://localhost:8080/result?id=<executionId>"Returns JSON with url and stdoutPreview.
Submit a C++ job:

```bash
curl -X POST http://localhost:8080/submit \
  -H 'Content-Type: application/json' \
  -d '{"userId":"u1","language":"cpp","code":"#include <bits/stdc++.h>\nusing namespace std; int main(){string s; if(!(cin>>s)) return 0; cout<<s<<\"\\n\";}","input":"hello"}'
```

If execution exceeds `EXEC_TIMEOUT_SEC`, status becomes `timeout` and the `error` field contains a message.
- Current containers run arbitrary code with the full process privileges of the container.
- Recommended enhancements: cgroup CPU/mem limits, seccomp profiles, gVisor/Firecracker isolation, restricted network egress, and a cap on submitted code size (a gateway-side check is sketched below).
- Consider separate per-job ephemeral containers instead of in-process `go run` / compiled-binary reuse.
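Of these, capping code size is the cheapest win at the gateway. A hypothetical sketch; the limit and function name are illustrative, not project constants:

```go
package gateway

import (
	"errors"
	"fmt"
)

// maxCodeBytes is an arbitrary illustrative cap, not an existing constant.
const maxCodeBytes = 64 * 1024

// validateSubmission rejects empty or oversized code before enqueueing.
func validateSubmission(code string) error {
	if len(code) == 0 {
		return errors.New("code must not be empty")
	}
	if len(code) > maxCodeBytes {
		return fmt.Errorf("code exceeds %d byte limit", maxCodeBytes)
	}
	return nil
}
```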
- LocalStack docker-compose integration.
- Per-language SQS queues / SNS fan-out.
- Rate limiting & auth (API keys / JWT).
- Output size streaming & pagination.
- Persistent logs & metrics (CloudWatch / OpenTelemetry).
- Websocket / SSE for real-time status updates.
Language support is configured explicitly where the API server is built (see api-gateway/server.go). There is no implicit default resolver; you must list every supported language when constructing the resolver.
Steps:
- Edit `api-gateway/server.go` and locate the `languages.NewResolver([...])` call. Add a new entry to the slice:

```go
langResolver: languages.NewResolver([]languages.Language{
{Name: "go", Aliases: []string{"golang"}, DisplayName: "Go"},
{Name: "cpp", Aliases: []string{"c++"}, DisplayName: "C++"},
{Name: "python", Aliases: []string{"py"}, DisplayName: "Python"}, // <--- added
})
```

- (Optional but recommended) If you need shared normalization data, create a helper in `shared/languages` (e.g. a function returning the slice) and reference it from `server.go` to avoid duplication across tests.
- Create a new executor service directory (e.g. `executor-python`) modeled after `executor-go` / `executor-cpp` (a skeleton is sketched after this list):
  - Poll SQS messages.
  - Skip messages whose `language` does not match `python`.
  - Write user code to a temp file (e.g. `main.py`).
  - Execute with a timeout (`python3 main.py`).
  - Combine stdout + stderr, upload to S3, update DynamoDB (status + preview) exactly like the other executors.
- Add a Dockerfile for the new executor (Python base image) and extend `docker-compose.yml` with a new service referencing it (env vars identical to the other executors).
- Rebuild and start:

```bash
docker compose build executor-python api-gateway
docker compose up -d executor-python
```

- Submit jobs with `"language": "python"` (aliases like `py` normalize to `python`).
Because construction is explicit, you can feature-flag or dynamically construct the slice (e.g. from a config file) before passing it to `languages.NewResolver`. If you later remove a language from the slice, the API will immediately start rejecting it with a 400 (unsupported language).
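For example, a hypothetical env-gated construction (`ENABLE_PYTHON` is an illustrative flag, not an existing variable):

```go
// Sketch: build the language slice dynamically before constructing the
// resolver. ENABLE_PYTHON is a made-up flag for illustration only.
langs := []languages.Language{
	{Name: "go", Aliases: []string{"golang"}, DisplayName: "Go"},
	{Name: "cpp", Aliases: []string{"c++"}, DisplayName: "C++"},
}
if os.Getenv("ENABLE_PYTHON") == "true" {
	langs = append(langs, languages.Language{Name: "python", Aliases: []string{"py"}, DisplayName: "Python"})
}
langResolver := languages.NewResolver(langs)
```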
Ensure you delete S3 objects and DynamoDB items for test jobs to control costs in real AWS.
```
Client -> API Gateway -> DynamoDB (create item)
                      |-> SQS (enqueue {id, lang})
SQS -> executor-go  (filter lang=go)  -> fetch item -> run -> S3 upload -> DynamoDB update
SQS -> executor-cpp (filter lang=cpp) -> fetch item -> run -> S3 upload -> DynamoDB update
Client -> /status -> DynamoDB
Client -> /result -> DynamoDB -> (presign) S3
```
- Stuck in queued: check executor logs & SQS queue size.
- Missing outputPath: verify the S3 bucket exists & permissions are correct.
- Times out too quickly: adjust `EXEC_TIMEOUT_SEC`.
- Docker build issues: ensure the relative `shared` module path matches the build context.
This README covers how to run, extend, and harden the system.