NanoProxy is a bridge layer for NanoGPT when native tool calling is unreliable.
It sits between the client and NanoGPT, replaces fragile native tool-calling with a stricter bridge protocol, and converts the result back into normal OpenAI-style tool_calls for the client. That lets tools like OpenCode keep working normally even when NanoGPT would otherwise stop early, leak raw tool text, or return malformed tool output.
The OpenCode plugin is the easiest setup for OpenCode.
- No local proxy server to keep running
- No custom provider setup
- Keep using the normal built-in NanoGPT provider
Use the standalone proxy server instead if:
- you are not using OpenCode
- your client supports an OpenAI-compatible base URL
- you prefer a separate local proxy process
The plugin intercepts NanoGPT API requests inside OpenCode and applies the NanoProxy bridge automatically.
Example config:

```json
{
  "plugin": [
    "file:///path/to/NanoProxy/src/plugin.mjs"
  ]
}
```

Notes:
- Use a real absolute file path.
- On Windows, a valid example looks like:
  `file:///C:/Users/you/path/to/NanoProxy/src/plugin.mjs`
- After editing the config, restart OpenCode.
Enable debug logging for one OpenCode run:

```
NANOPROXY_DEBUG=1 opencode
```

Optional:
- set `NANOPROXY_LOG=/path/to/file` to change the location of the single event log file
- set `NANOPROXY_LOG_DIR=/path/to/folder` to change where detailed per-request debug files are written
On Windows, you can also use the same persistent toggle used by the standalone server:
```
./toggle-debug.ps1
```

That writes a `.debug-logging` flag file in the repo. When that flag is present, both the OpenCode plugin and the standalone server enable NanoProxy debug logging until you toggle it off again.
When debug mode is enabled, NanoProxy also writes:
- a single event log file
- per-request `*-request.json` files
- raw streamed `*-stream.sse` files
- parsed `*-response.json` files

By default these go into:
- event log: `nanoproxy-plugin.log` in your system temp folder
- detailed logs: a `nanoproxy-plugin-logs` folder in your system temp folder
Run the server:
```
node server.js
```

Then point your coding tool to `http://127.0.0.1:8787`.
Keep using your normal NanoGPT API key in that tool.
If your tool supports a custom OpenAI-compatible provider or baseURL, use http://127.0.0.1:8787 there.
Optional overrides:
```
UPSTREAM_BASE_URL=https://nano-gpt.com/api/v1
PROXY_HOST=127.0.0.1
PROXY_PORT=8787
node server.js
```

Debug logging is off by default.
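For illustration, here is a sketch of how such overrides are conventionally read in a Node server. The environment variable names match the commands above, but the function itself is hypothetical and not taken from `server.js`:

```javascript
// Hypothetical helper: resolve proxy settings from the environment,
// falling back to the defaults documented in this README.
function readProxyConfig(env = process.env) {
  return {
    upstreamBaseUrl: env.UPSTREAM_BASE_URL || "https://nano-gpt.com/api/v1",
    host: env.PROXY_HOST || "127.0.0.1",
    port: Number(env.PROXY_PORT || 8787),
  };
}
```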
Enable for one run:
```
NANO_PROXY_DEBUG=1 node server.js
```

Or toggle persistently on Windows:

```
./toggle-debug.ps1
```

That same toggle also enables plugin debug logging. Server logs are written to `Logs/`.
Check that the proxy is up:

```
curl http://127.0.0.1:8787/health
```

If you want to run the standalone server in Docker instead of running Node directly:

```
docker build -t nano-proxy .
docker run --rm -p 8787:8787 nano-proxy
```

Or with Compose:

```
docker compose up --build
```

Either way, the proxy is exposed at `http://127.0.0.1:8787`.
For tool-enabled requests:
- It removes the normal native tool-calling structure before sending the request upstream.
- It tells the model to use a stricter text-based tool format instead.
- It watches the model output.
- It converts that output back into normal OpenAI-style `tool_calls`.
So your client still sees normal tool calls, but NanoGPT does not have to rely on its native tool-calling behavior.
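The request-side rewrite in the first two steps can be sketched roughly as follows. The function name and the instruction wording are hypothetical illustrations, not NanoProxy's actual implementation:

```javascript
// Hypothetical sketch: strip the native tool-calling fields and prepend a
// system message instructing the model to use the text-based bridge format.
function toBridgeRequest(body) {
  if (!body.tools || body.tools.length === 0) return body; // no tools: forward unchanged
  const { tools, tool_choice, ...rest } = body; // tools/tool_choice intentionally dropped
  const instructions =
    'To call a tool, reply with [[OPENCODE_TOOL]] [[CALL]] {"name": ..., "arguments": ...} [[/CALL]] [[/OPENCODE_TOOL]]. ' +
    "Available tools: " + tools.map((t) => t.function.name).join(", ");
  return {
    ...rest,
    messages: [{ role: "system", content: instructions }, ...rest.messages],
  };
}
```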
Tool reply:

```
[[OPENCODE_TOOL]]
[[CALL]]
{"name": "read", "arguments": {"filePath": "src/app.js"}}
[[/CALL]]
[[/OPENCODE_TOOL]]
```
Multiple independent tool calls in one turn:

```
[[OPENCODE_TOOL]]
[[CALL]]
{"name": "read", "arguments": {"filePath": "src/app.js"}}
[[/CALL]]
[[CALL]]
{"name": "read", "arguments": {"filePath": "src/styles.css"}}
[[/CALL]]
[[/OPENCODE_TOOL]]
```
Final answer:

```
[[OPENCODE_FINAL]]
Your answer here.
[[/OPENCODE_FINAL]]
```
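To make the conversion concrete, here is a hypothetical parser that extracts `[[CALL]]` blocks from bridge-format output and rebuilds OpenAI-style `tool_calls`. The function name, the `call_N` ids, and the shape details are illustrative, not NanoProxy's actual code:

```javascript
// Sketch of the response-side conversion: find each [[CALL]]...[[/CALL]]
// block, parse its JSON, and emit an OpenAI-style tool_calls entry.
function parseBridgeOutput(text) {
  const calls = [];
  const callRe = /\[\[CALL\]\]([\s\S]*?)\[\[\/CALL\]\]/g;
  let m;
  while ((m = callRe.exec(text)) !== null) {
    const { name, arguments: args } = JSON.parse(m[1]);
    calls.push({
      id: `call_${calls.length}`, // illustrative id scheme
      type: "function",
      function: { name, arguments: JSON.stringify(args) },
    });
  }
  const fin = text.match(/\[\[OPENCODE_FINAL\]\]([\s\S]*?)\[\[\/OPENCODE_FINAL\]\]/);
  return { tool_calls: calls, content: fin ? fin[1].trim() : null };
}
```

Note that each `arguments` object is re-serialized to a string, since OpenAI-style `tool_calls` carry function arguments as JSON text.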
- Requests without tools are forwarded unchanged.
- Reasoning streams live.
- Tool and final content are buffered until NanoProxy can classify them safely.
- By default, NanoProxy allows up to 5 tool calls in a single assistant turn for models that behave well with batching.
- Some models may still behave better with one tool call per turn.
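The buffering rule can be illustrated with a hypothetical classifier: buffered text is held as long as it could still grow into a bridge marker, and is only released once it can be classified as tool output, final output, or plain text. The names and logic below are a sketch, not NanoProxy's code:

```javascript
// Hypothetical sketch of buffered classification: return "pending" while the
// buffer could still become a bridge marker, otherwise classify it.
const MARKERS = ["[[OPENCODE_TOOL]]", "[[OPENCODE_FINAL]]"];

function classifyBuffered(buffer) {
  const text = buffer.trimStart();
  if (text.startsWith(MARKERS[0])) return "tool";
  if (text.startsWith(MARKERS[1])) return "final";
  // A partial prefix like "[[OPENC" may still grow into a marker: keep buffering.
  if (MARKERS.some((m) => m.startsWith(text))) return "pending";
  return "text"; // safe to stream as ordinary content
}
```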
- Make sure the config file is valid JSON or JSONC.
- On Windows, if OpenCode rejects the file, re-save it as plain UTF-8 without a byte order mark (BOM).
- Check that the plugin path is correct.
- Make sure it is a `file:///...` URL, not a normal path string.
- Restart OpenCode after editing the config.
- If needed, enable `NANOPROXY_DEBUG=1` and check the plugin log output.
- This usually means the model drifted into a malformed tool format.
- Try again once first.
- If it keeps happening, enable debug logs and inspect what the model actually emitted.
- Make sure `node server.js` is still running.
- Check `http://127.0.0.1:8787/health`.
```
NanoProxy/
|-- server.js
|-- src/
|   |-- core.js
|   `-- plugin.mjs
|-- selftest.js
|-- README.md
|-- package.json
|-- Dockerfile
`-- docker-compose.yml
```
Quick checks:

```
node --check server.js
node selftest.js
```

MIT