tanu360/chatjimmy-reverse-api


🤖 ChatJimmy API

Free OpenAI- and Anthropic-compatible API proxy, powered by chatjimmy.ai

Features • Endpoints • Quick Start • Examples • Tool Calling • Architecture • Deploy • License


Auth: Bearer token starting with tarun- (e.g. tarun-mysecretkey)

Model: llama3.1-8B (default)


🌟 Overview

ChatJimmy API is a Cloudflare Worker that translates standard OpenAI and Anthropic API formats into chatjimmy.ai's backend format. Use it as a drop-in replacement with any OpenAI/Anthropic SDK or tool (Continue, Cursor, etc.).

This project is unofficial and not affiliated with chatjimmy.ai.


✨ Key Features

  • Dual API compatibility: OpenAI /v1/chat/completions + Anthropic /v1/messages
  • Streaming & non-streaming: full SSE streaming support for both formats
  • Tool calling: translates OpenAI/Anthropic tool calls via <tool_calls> XML injection
  • Think-block stripping: removes <|think|> blocks from responses
  • Stats passthrough: token counts, speed metrics, and TTFT in usage fields
  • IP spoofing: rotates through realistic residential IP ranges per request
  • Zero dependencies: single file, pure Cloudflare Workers API
  • Global edge: deploys to 300+ Cloudflare edge locations
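The think-block stripping can be sketched roughly as follows. This is a minimal sketch, not the worker's actual code, and the closing delimiter <|/think|> is an assumption (the README names only <|think|>):

```javascript
// Minimal sketch of think-block stripping. The closing delimiter
// <|/think|> is an assumption; the proxy's real markers may differ.
function stripThinkBlocks(text) {
  // Non-greedy match so multiple blocks are each removed independently.
  return text.replace(/<\|think\|>[\s\S]*?<\|\/think\|>/g, "").trim();
}

console.log(stripThinkBlocks("<|think|>internal reasoning<|/think|>2 + 2 = 4")); // → "2 + 2 = 4"
```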

🛠️ Endpoints

Method Path Description
GET /api Health check + endpoint list
GET /health Upstream health status
GET /v1/models Available models
POST /v1/chat/completions OpenAI-compatible chat
POST /v1/messages Anthropic-compatible messages

All endpoints support CORS and return JSON.


🚀 Quick Start

With cURL

curl https://jimmy.aikit.club/v1/chat/completions \
  -H "Authorization: Bearer tarun-mykey" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1-8B",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'

With OpenAI SDK

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "tarun-mykey",
  baseURL: "https://jimmy.aikit.club/v1",
});

const response = await client.chat.completions.create({
  model: "llama3.1-8B",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);

With Anthropic SDK

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: "tarun-mykey",
  baseURL: "https://jimmy.aikit.club",
});

const message = await client.messages.create({
  model: "llama3.1-8B",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(message.content[0].text);

With Vanilla JS (fetch)

const res = await fetch("https://jimmy.aikit.club/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: "Bearer tarun-mykey",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "llama3.1-8B",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);

💡 Usage Examples

Streaming Chat

const response = await fetch("https://jimmy.aikit.club/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: "Bearer tarun-mykey",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "llama3.1-8B",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Write a haiku about coding" },
    ],
    stream: true,
  }),
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  // SSE events arrive as "data: {...}\n\n", but a chunk can end mid-line,
  // so parse only complete lines and carry the remainder forward.
  const lines = buffer.split("\n");
  buffer = lines.pop();
  for (const line of lines) {
    if (line.startsWith("data: ") && line !== "data: [DONE]") {
      const data = JSON.parse(line.slice(6));
      process.stdout.write(data.choices[0]?.delta?.content || "");
    }
  }
}

Non-Streaming

curl https://jimmy.aikit.club/v1/chat/completions \
  -H "Authorization: Bearer tarun-mykey" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1-8B",
    "messages": [{"role": "user", "content": "What is 2+2?"}],
    "stream": false
  }'

Non-Streaming (Vanilla JS)

const res = await fetch("https://jimmy.aikit.club/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: "Bearer tarun-mykey",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "llama3.1-8B",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "What is 2+2?" },
    ],
    stream: false,
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content); // "2 + 2 = 4"

Response:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1710000000,
  "model": "llama3.1-8B",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "2 + 2 = 4" },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 8,
    "total_tokens": 20
  }
}

🔧 Tool Calling

The proxy supports OpenAI and Anthropic tool calling formats. Tools are injected into the system prompt using <tool_calls> XML tags, and the model's responses are parsed back into proper tool call objects.
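As a rough illustration of the response side, the sketch below extracts a <tool_calls> block and converts it to OpenAI-style tool_call objects. The JSON-array payload shape ({ name, arguments }) is an assumption, not the proxy's documented wire format:

```javascript
// Hypothetical sketch: pull a <tool_calls> block out of model output and
// map it to OpenAI-style tool_call objects. The JSON payload shape inside
// the tags is an assumption; the worker's real format may differ.
function parseToolCalls(text) {
  const match = text.match(/<tool_calls>([\s\S]*?)<\/tool_calls>/);
  if (!match) return { content: text.trim(), tool_calls: null };
  const calls = JSON.parse(match[1]); // assumed: [{ name, arguments }]
  return {
    content: text.replace(match[0], "").trim(),
    tool_calls: calls.map((c, i) => ({
      id: `call_${i}`,
      type: "function",
      function: { name: c.name, arguments: JSON.stringify(c.arguments) },
    })),
  };
}
```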

OpenAI Format

curl https://jimmy.aikit.club/v1/chat/completions \
  -H "Authorization: Bearer tarun-mykey" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1-8B",
    "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
          "type": "object",
          "properties": {
            "city": { "type": "string" }
          },
          "required": ["city"]
        }
      }
    }]
  }'

Anthropic Format

curl https://jimmy.aikit.club/v1/messages \
  -H "Authorization: Bearer tarun-mykey" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "llama3.1-8B",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
    "tools": [{
      "name": "get_weather",
      "description": "Get current weather",
      "input_schema": {
        "type": "object",
        "properties": {
          "city": { "type": "string" }
        },
        "required": ["city"]
      }
    }]
  }'

Tool Calling β€” Vanilla JS (OpenAI Format)

const res = await fetch("https://jimmy.aikit.club/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: "Bearer tarun-mykey",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "llama3.1-8B",
    messages: [{ role: "user", content: "What is the weather in Tokyo?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "get_weather",
          description: "Get current weather",
          parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      },
    ],
  }),
});

const data = await res.json();
const msg = data.choices[0].message;

if (msg.tool_calls) {
  for (const tc of msg.tool_calls) {
    console.log(tc.function.name); // "get_weather"
    console.log(JSON.parse(tc.function.arguments)); // { city: "Tokyo" }
  }
} else {
  console.log(msg.content);
}
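After executing a tool locally, the second turn follows the standard OpenAI chat format: replay the assistant's tool_calls message, then append one role "tool" message per call. buildFollowUp and the weather payload below are illustrative, not part of the proxy:

```javascript
// Build the messages array for the follow-up request. Standard OpenAI
// chat shape; `buildFollowUp` and the sample payload are illustrative.
function buildFollowUp(messages, assistantMsg, toolResults) {
  return [
    ...messages,
    { role: "assistant", content: assistantMsg.content, tool_calls: assistantMsg.tool_calls },
    ...assistantMsg.tool_calls.map((tc) => ({
      role: "tool",
      tool_call_id: tc.id,
      content: JSON.stringify(toolResults[tc.function.name]),
    })),
  ];
}

const next = buildFollowUp(
  [{ role: "user", content: "What is the weather in Tokyo?" }],
  {
    content: null,
    tool_calls: [{
      id: "call_0",
      type: "function",
      function: { name: "get_weather", arguments: '{"city":"Tokyo"}' },
    }],
  },
  { get_weather: { temp_c: 18, sky: "clear" } }
);
// POST `next` back to /v1/chat/completions to get the final answer.
```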

Tool Calling β€” Vanilla JS (Anthropic Format)

const res = await fetch("https://jimmy.aikit.club/v1/messages", {
  method: "POST",
  headers: {
    Authorization: "Bearer tarun-mykey",
    "Content-Type": "application/json",
    "anthropic-version": "2023-06-01",
  },
  body: JSON.stringify({
    model: "llama3.1-8B",
    max_tokens: 1024,
    messages: [{ role: "user", content: "What is the weather in Tokyo?" }],
    tools: [
      {
        name: "get_weather",
        description: "Get current weather",
        input_schema: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    ],
  }),
});

const data = await res.json();

for (const block of data.content) {
  if (block.type === "tool_use") {
    console.log(block.name); // "get_weather"
    console.log(block.input); // { city: "Tokyo" }
  } else if (block.type === "text") {
    console.log(block.text);
  }
}
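The Anthropic-format equivalent sends results back as tool_result content blocks inside a user message, referencing each tool_use id (the standard Anthropic shape). The helper below is an illustrative sketch:

```javascript
// Build the follow-up turn in Anthropic format: replay the assistant's
// content blocks, then answer each tool_use with a tool_result block.
function buildToolResultTurn(messages, assistantContent, results) {
  const toolUses = assistantContent.filter((b) => b.type === "tool_use");
  return [
    ...messages,
    { role: "assistant", content: assistantContent },
    {
      role: "user",
      content: toolUses.map((b) => ({
        type: "tool_result",
        tool_use_id: b.id,
        content: JSON.stringify(results[b.name]),
      })),
    },
  ];
}
```

POST the returned array back to /v1/messages (with the same tools list) to let the model produce its final text answer.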

Note: Tool calling reliability depends on the underlying model (llama3.1-8B). Complex tool schemas with many parameters may not always produce valid JSON.


πŸ—οΈ Architecture

Client Request (OpenAI or Anthropic format)
  │
  ▼
┌───────────────────────────────┐
│  Auth Check                   │
│  Bearer token must start      │
│  with "tarun-"                │
└───────────────┬───────────────┘
                │
                ▼
┌───────────────────────────────┐
│  Format Translation           │
│  • Parse messages             │
│  • Convert tool definitions   │
│  • Build system prompt        │
│  • Handle tool_calls ↔ XML    │
└───────────────┬───────────────┘
                │
                ▼
┌───────────────────────────────┐
│  IP Rotation                  │
│  Random residential IP from   │
│  300+ global ranges           │
└───────────────┬───────────────┘
                │
                ▼
┌───────────────────────────────┐
│  chatjimmy.ai Upstream        │
│  POST /api/chat               │
│  {messages, chatOptions,      │
│   attachment}                 │
└───────────────┬───────────────┘
                │
                ▼
┌───────────────────────────────┐
│  Response Translation         │
│  • Strip <|think|> blocks     │
│  • Parse <|stats|> for usage  │
│  • Parse <tool_calls> XML     │
│  • Convert to OpenAI/Anthropic│
│    streaming or non-streaming │
└───────────────────────────────┘
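The IP-rotation stage might work roughly as below. The sample ranges and the X-Forwarded-For header are assumptions standing in for the worker's real list of residential ranges and whatever spoofing headers it actually sets:

```javascript
// Illustrative sketch of per-request IP rotation. The ranges below are
// made-up placeholders, not the worker's actual residential ranges.
const SAMPLE_RANGES = [
  { base: "73.162", hosts: 65536 }, // hypothetical /16
  { base: "90.214", hosts: 65536 }, // hypothetical /16
];

function randomIp() {
  const range = SAMPLE_RANGES[Math.floor(Math.random() * SAMPLE_RANGES.length)];
  const host = Math.floor(Math.random() * range.hosts);
  return `${range.base}.${host >> 8}.${host & 255}`;
}

// The spoofed address could then be attached to the upstream request,
// e.g. as an X-Forwarded-For header (header choice is an assumption).
const headers = { "X-Forwarded-For": randomIp() };
```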

🚀 Deploy

Prerequisites

  • Node.js and npm
  • A Cloudflare account (used by Wrangler for dev and deploy)

Setup

git clone https://github.com/tanu360/chatjimmy-reverse-api.git
cd chatjimmy-reverse-api
npm install

Development

npm run dev

Production

npm run deploy

⚙️ Configuration

All configuration is via constants at the top of chatjimmy.js:

Constant Default Description
DEFAULT_MODEL llama3.1-8B Default model when none specified
DEFAULT_TOP_K 8 Default top_k sampling parameter
DEFAULT_TIMEOUT_MS 30000 Upstream request timeout (30s)
DEFAULT_MAX_BODY_BYTES 64000 Max request body size
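Rendered as code, the table above might correspond to constants like these at the top of chatjimmy.js (a hypothetical sketch, not the file's actual source):

```javascript
// Hypothetical rendering of the configuration constants.
const DEFAULT_MODEL = "llama3.1-8B";  // used when the request omits "model"
const DEFAULT_TOP_K = 8;              // top_k sampling parameter
const DEFAULT_TIMEOUT_MS = 30000;     // upstream request timeout (30s)
const DEFAULT_MAX_BODY_BYTES = 64000; // reject larger request bodies
```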

📜 License

This project is licensed under the MIT License; see the LICENSE file for details.
