A secure, Node.js-based Docker proxy that forwards OpenAI-compatible requests to Tinfoil AI. This project leverages the Tinfoil SecureClient to ensure all interactions are encrypted and attested, guaranteeing execution privacy.
- OpenAI Compatibility: Drop-in replacement for OpenAI API clients (proxies `/v1/models`, `/v1/chat/completions`, etc.).
- Privacy & Security: Uses the Tinfoil `SecureClient` to perform remote attestation and end-to-end encryption.
- Automatic Recovery: Automatically handles HPKE key mismatches by resetting the secure client and retrying.
- Streaming Support: Fully supports streaming responses for chat completions (Node.js & Web Streams).
- Dockerized: `Dockerfile` included for easy deployment.
- Header Management: Automatically handles Tinfoil API authentication and cleans up hop-by-hop headers.
- Docker installed.
- A Tinfoil API key, which you can obtain from the Tinfoil Dashboard. (Optional if clients provide their own key with each request.)
- (Optional) Node.js v18+ for local development.
```bash
git clone https://github.com/d1vanloon/tinfoil-docker-proxy.git
cd tinfoil-docker-proxy
```

Create a `.env` file in the root directory (or use environment variables in Docker):
```
# Optional: Your Tinfoil API Key
# If not provided here, it must be provided in the Authorization header of each request.
TINFOIL_API_KEY=your_tinfoil_api_key_here

# Optional: Server Port (default: 3000)
PORT=3000

# Optional: Reset Client Interval in Seconds (default: 3600 = 1 hour)
# Defines how often the secure client should be reset to ensure freshness.
TINFOIL_RESET_INTERVAL=3600
```

Pre-built Docker images are automatically published to GitHub Container Registry. You can pull and run them directly without building locally.
- Pull the latest image:

  ```bash
  docker pull ghcr.io/d1vanloon/tinfoil-docker-proxy:latest
  ```

- Run the container:

  ```bash
  docker run -d \
    -e TINFOIL_API_KEY=your_api_key \
    -p 3000:3000 \
    --name tinfoil-proxy \
    ghcr.io/d1vanloon/tinfoil-docker-proxy:latest
  ```
Using a specific version:

```bash
docker pull ghcr.io/d1vanloon/tinfoil-docker-proxy:v1.0.0
docker run -d \
  -e TINFOIL_API_KEY=your_api_key \
  -p 3000:3000 \
  --name tinfoil-proxy \
  ghcr.io/d1vanloon/tinfoil-docker-proxy:v1.0.0
```
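If you prefer Docker Compose, the same setup can be described declaratively. A minimal `docker-compose.yml` sketch (the service name is illustrative; adjust the image tag and key as above):

```yaml
services:
  tinfoil-proxy:
    image: ghcr.io/d1vanloon/tinfoil-docker-proxy:latest
    ports:
      - "3000:3000"
    environment:
      TINFOIL_API_KEY: your_api_key
    restart: unless-stopped
```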
If you prefer to build the image yourself:

- Build the image:

  ```bash
  docker build -t tinfoil-proxy .
  ```

- Run the container:

  ```bash
  docker run -d \
    -e TINFOIL_API_KEY=your_api_key \
    -p 3000:3000 \
    --name tinfoil-proxy \
    tinfoil-proxy
  ```
- Install dependencies:

  ```bash
  npm install
  ```

- Start the server:

  ```bash
  npm start
  ```
The server will verify the Tinfoil environment and start listening on port 3000.
Run the test suite to verify functionality:

```bash
npm test
```

Once the proxy is running, point your OpenAI client or standard HTTP requests to http://localhost:3000.
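Example: Node.js client (using the official `openai` package)

Since the proxy is OpenAI-compatible, the standard SDK should work once `baseURL` is overridden. A minimal streaming sketch, assuming `openai` v4+ and Node.js v18+ with ES modules (the model name and key are placeholders):

```js
import OpenAI from "openai";

// Point the SDK at the proxy instead of api.openai.com.
// The key is forwarded as a Bearer token; if the proxy already has
// TINFOIL_API_KEY configured, the SDK still requires some value here.
const client = new OpenAI({
  baseURL: "http://localhost:3000/v1",
  apiKey: process.env.TINFOIL_API_KEY ?? "placeholder",
});

// Stream a chat completion; chunks arrive in standard OpenAI format.
const stream = await client.chat.completions.create({
  model: "gpt-oss-120b",
  messages: [{ role: "user", content: "Hello, Tinfoil!" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```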
Example: PowerShell Request (using configured API key)

```powershell
Invoke-RestMethod -Uri "http://localhost:3000/v1/chat/completions" `
  -Method Post `
  -ContentType "application/json" `
  -Body '{
    "model": "gpt-oss-120b",
    "messages": [
      { "role": "user", "content": "Hello, Tinfoil!" }
    ]
  }' | ConvertTo-Json -Depth 10
```

Example: PowerShell Request with custom API key
```powershell
Invoke-RestMethod -Uri "http://localhost:3000/v1/chat/completions" `
  -Method Post `
  -ContentType "application/json" `
  -Headers @{ Authorization = "Bearer your_custom_api_key" } `
  -Body '{
    "model": "gpt-oss-120b",
    "messages": [
      { "role": "user", "content": "Hello, Tinfoil!" }
    ]
  }' | ConvertTo-Json -Depth 10
```

- Initialization: On startup, the `SecureClient` authenticates with the Tinfoil platform and verifies the execution environment (TEE) using remote attestation. If verification fails, the server will not start.
- Request Handling: Incoming requests (e.g., from an LLM client) are intercepted.
- API Key Management: The proxy uses the API key provided in the request's `Authorization` header if present. If the client provides no API key, it falls back to the configured `TINFOIL_API_KEY` environment variable. This allows clients to use their own API keys while maintaining backward compatibility (see the sketch after this list).
- Security: Requests are forwarded over an encrypted channel established by the `SecureClient`.
- Streaming: Responses from Tinfoil are streamed back to the client in real time, preserving standard OpenAI chunk formats.
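For illustration, the API-key fallback and hop-by-hop header cleanup described above amount to roughly the following. This is a simplified sketch, not the project's actual source; the function and constant names are hypothetical:

```js
// Hypothetical sketch of the header handling described above,
// not the project's actual source code.
const HOP_BY_HOP = [
  "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
  "te", "trailer", "transfer-encoding", "upgrade",
];

function buildUpstreamHeaders(incoming) {
  const headers = { ...incoming };

  // Remove hop-by-hop headers, which are only valid for a single connection.
  for (const name of HOP_BY_HOP) delete headers[name];

  // Prefer the client's own Authorization header; otherwise fall back
  // to the key configured via the TINFOIL_API_KEY environment variable.
  if (!headers["authorization"] && process.env.TINFOIL_API_KEY) {
    headers["authorization"] = `Bearer ${process.env.TINFOIL_API_KEY}`;
  }

  return headers;
}
```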
Tinfoil AI provides an API for running LLMs inside secure enclaves (TEEs). This ensures that:
- Data Privacy: Your prompts and data are encrypted in transit and in use. Tinfoil AI cannot see your data.
- Model Integrity: You are guaranteed that the code running is exactly what was attested.
For more information, visit tinfoil.sh.
- Unofficial: This project is community-maintained and is not officially associated with Tinfoil AI.
- Security: This proxy is intended for local use or within a secure private network. Do not expose this service directly to the public internet without additional authentication and security layers.