A secure OpenAI-compatible API proxy that uses the official iFlow CLI SDK instead of direct HTTP calls.
This is a drop-in OpenAI API replacement. Deploy it in one of three ways:
## Quick Start

```bash
# Option A: npx (fastest)
npx iflow-sdk-bridge

# Option B: npm install
npm install -g iflow-sdk-bridge
iflow-sdk-bridge

# Option C: from source
git clone https://github.com/a88883284/iflow-sdk-bridge.git
cd iflow-sdk-bridge
npm install && npm run build && npm start
```

Verify the server is up:

```bash
curl http://localhost:28002/v1/models
# Should return: {"object":"list","data":[...]}
```

**OpenClaw / Claude Code** - add the provider to `~/.openclaw/openclaw.json`:
```json
{
  "providers": {
    "iflow-bridge": {
      "baseUrl": "http://localhost:28002/v1",
      "apiKey": "sk-dummy"
    }
  }
}
```

**Any OpenAI SDK:**
```ts
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:28002/v1',
  apiKey: 'not-needed'
});
```

To keep the bridge running in the background, use PM2:

```bash
npm install -g pm2
pm2 start npx --name iflow-sdk-bridge -- iflow-sdk-bridge
pm2 save && pm2 startup
```

## API Endpoints

| Endpoint | Method | Description |
|---|---|---|
| `http://localhost:28002/v1/chat/completions` | POST | OpenAI-compatible chat |
| `http://localhost:28002/v1/messages` | POST | Anthropic-compatible chat |
| `http://localhost:28002/v1/models` | GET | List models |
| `http://localhost:28002/stats` | GET | Server stats |
| `http://localhost:28002/health` | GET | Health check |
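The `/v1/messages` endpoint accepts Anthropic-style request bodies. A minimal sketch of building one (the field names follow the public Anthropic Messages format, which this endpoint mirrors; the `buildMessagesBody` helper itself is illustrative, not part of the bridge):

```typescript
// Sketch: build a minimal Anthropic-style request body for POST /v1/messages.
// max_tokens is a required field in the Anthropic Messages format.
function buildMessagesBody(model: string, prompt: string, maxTokens = 1024) {
  return {
    model,
    max_tokens: maxTokens,
    messages: [{ role: 'user', content: prompt }],
  };
}
```

Send the result as the JSON body of a POST to `http://localhost:28002/v1/messages`.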
Available models: `glm-5`, `glm-4.7`, `deepseek-v3.2-chat`, `qwen3-coder-plus`, `kimi-k2`, `kimi-k2-thinking`, `kimi-k2.5`, `minimax-m2.5`, `qwen-vl-max`
## Known Limitations

**Problem:** The iFlow CLI SDK has its own system-prompt handling that may conflict with your AI tool's prompts.
| Issue | Description |
|---|---|
| iFlow System Prompt Injection | SDK automatically appends iFlow's default system prompt |
| Client Prompt Filtering | SDK may filter or override prompts from OpenClaw/Claude Code |
| Tool Call Issues | Custom tool definitions may be ignored or cause errors |
| Conversation Drift | AI behavior may not match expected persona |
**Root Cause:** `@iflow-ai/iflow-cli-sdk` is designed for the iFlow CLI ecosystem, not as a generic API proxy. It enforces its own session settings.

**Current Workaround:**

- This bridge is best suited to simple chat scenarios without complex system prompts
- For AI agents requiring custom tools/prompts, consider using iflow2api instead

**Status:** Under investigation. PRs welcome to improve prompt handling.
## Why the Official SDK?

Unlike other iFlow API proxies that make direct HTTP requests to `apis.iflow.cn`, this project uses the official `@iflow-ai/iflow-cli-sdk`, which provides:
| Feature | SDK Bridge | Direct HTTP |
|---|---|---|
| TLS Fingerprint | ✅ Native Node.js (auto) | ❌ |
| Telemetry Reporting | ✅ CLI handles it | ❌ |
| traceparent Header | ✅ CLI handles it | ❌ |
| Request Headers | ✅ CLI handles it | ❌ |
| Detection Risk | Low | Higher |
**How it works:**

```
Your App → SDK Bridge → Local iFlow CLI Process → Remote API
                                ↓
               Automatic TLS/Telemetry/traceparent
```
The iFlow CLI process handles all network-level security features automatically, making this approach inherently safer than direct HTTP calls.
## Features

- 🔒 **Secure by Design** - Uses the official SDK with automatic security features
- 🔀 **OpenAI Compatible** - Drop-in replacement for the OpenAI API
- 🎭 **Sensitive Info Filter** - Smart sanitization with natural replacements
- ⚡ **Streaming Support** - Full SSE streaming support
- 🌐 **CORS Enabled** - Ready for browser-based clients
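The streaming support emits OpenAI-style SSE chunks. A minimal client-side parser sketch, assuming the standard `data:` line format and `[DONE]` terminator used by OpenAI-compatible endpoints (this helper is illustrative, not part of the bridge):

```typescript
// Extract the content deltas from an OpenAI-style SSE response body, as
// returned by /v1/chat/completions when "stream": true.
function extractDeltas(sse: string): string[] {
  const deltas: string[] = [];
  for (const line of sse.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue; // skip blank/comment lines
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') break; // stream terminator
    const content = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (content) deltas.push(content);
  }
  return deltas;
}
```

Joining the returned deltas reconstructs the full assistant message.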
## Requirements

- Node.js 18+
- iFlow CLI installed and authenticated (`iflow login`)
## Installation

```bash
# Clone the repository
git clone https://github.com/a88883284/iflow-sdk-bridge.git
cd iflow-sdk-bridge

# Install dependencies
npm install

# Build
npm run build
```

## Usage

```bash
# Development
npm run dev

# Production
npm run build
npm start
```

The server starts on `http://localhost:28002` by default.
| Endpoint | Method | Description |
|---|---|---|
| `/v1/chat/completions` | POST | Chat completions (OpenAI compatible) |
| `/v1/models` | GET | List available models |
| `/stats` | GET | Get server statistics |
```bash
curl http://localhost:28002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "glm-5",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```

```ts
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:28002/v1',
  apiKey: 'not-needed',
});

const response = await client.chat.completions.create({
  model: 'glm-5',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

Configure in your config file:
```json
{
  "iflow-provider": {
    "baseUrl": "http://localhost:28002/v1",
    "apiKey": "sk-xxxx"
  }
}
```

## Supported Models

| Model ID | Description |
|---|---|
| `glm-5` | GLM-5 (Recommended) |
| `glm-4.7` | GLM-4.7 |
| `glm-4.6` | GLM-4.6 |
| `deepseek-v3.2-chat` | DeepSeek V3.2 |
| `qwen3-coder-plus` | Qwen3 Coder |
| `kimi-k2` | Kimi K2 |
| `kimi-k2-thinking` | Kimi K2 Thinking |
| `kimi-k2.5` | Kimi K2.5 |
| `minimax-m2.5` | MiniMax M2.5 |
| `qwen-vl-max` | Qwen VL Max (Vision) |
## Configuration

| Variable | Default | Description |
|---|---|---|
| `PORT` | `28002` | Server port |
| `IFLOW_SDK_SILENT` | `false` | Suppress SDK logs |
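As an illustration, the two variables above might be resolved like this (the variable names come from the table; the `loadConfig` helper is hypothetical, not the bridge's actual code):

```typescript
// Sketch: resolve bridge configuration from the environment,
// falling back to the documented defaults.
function loadConfig(env: Record<string, string | undefined>) {
  return {
    port: Number(env.PORT ?? 28002),          // default port 28002
    silent: env.IFLOW_SDK_SILENT === 'true',  // default false
  };
}
```

For example, starting the server with `PORT=3000` would make it listen on port 3000 instead.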
Claude model names are automatically mapped:

```
claude-opus-4-6 → glm-5
claude-sonnet-4 → glm-5
claude-haiku-4  → glm-5
```
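That mapping can be sketched as a lookup table. The three entries mirror the list above; the `resolveModel` helper and the blanket `claude-` fallback are assumptions for illustration, not documented bridge internals:

```typescript
// Hypothetical sketch of the Claude → iFlow model-name mapping.
const CLAUDE_MODEL_MAP: Record<string, string> = {
  'claude-opus-4-6': 'glm-5',
  'claude-sonnet-4': 'glm-5',
  'claude-haiku-4': 'glm-5',
};

function resolveModel(requested: string): string {
  if (requested in CLAUDE_MODEL_MAP) return CLAUDE_MODEL_MAP[requested];
  if (requested.startsWith('claude-')) return 'glm-5'; // assumed catch-all
  return requested; // non-Claude names pass through unchanged
}
```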
The bridge automatically sanitizes sensitive information with natural replacements:
| Original | Replaced With |
|---|---|
| `/Users/xxx/projects` | `/home/user/workspace` |
| `api_key: "sk-xxx"` | `api_key: "xxx"` |
| `localhost:28002` | `localhost:8080` |
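The replacements above can be expressed as an ordered list of regex rules. This is an illustrative sketch, not the bridge's actual filter; the patterns generalize `xxx` to arbitrary usernames and key values:

```typescript
// Sketch: regex-based sanitizer pairing each sensitive pattern with a
// natural-looking replacement, applied in order.
const RULES: Array<[RegExp, string]> = [
  [/\/Users\/[^/\s]+\/projects/g, '/home/user/workspace'], // macOS home paths
  [/api_key:\s*"sk-[^"]*"/g, 'api_key: "xxx"'],            // API keys
  [/localhost:28002/g, 'localhost:8080'],                  // local port
];

function sanitize(text: string): string {
  return RULES.reduce(
    (out, [pattern, replacement]) => out.replace(pattern, replacement),
    text,
  );
}
```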
## Comparison

| Aspect | iFlow SDK Bridge | Direct API Calls |
|---|---|---|
| Approach | Official SDK | HTTP requests |
| Security | Native | Manual simulation |
| Complexity | Simple | Complex |
| Risk Level | Low | Higher |
| TLS Fingerprint | ✅ Auto (Node.js) | ❌ |
| Telemetry | ✅ CLI handles | ❌ |
| Headers | ✅ CLI handles | ❌ |
## License

MIT License - see LICENSE