7 changes: 7 additions & 0 deletions .dockerignore
@@ -0,0 +1,7 @@
node_modules
npm-debug.log
yarn-error.log
.git
.gitignore
.env
.env.*
11 changes: 11 additions & 0 deletions .env.example
@@ -0,0 +1,11 @@
PORT=3000

# OpenAI credentials. You can use either APP_* or OPENAI_* naming.
APP_KEY=your_openai_api_key
APP_ORG=your_openai_org_id

OPENAI_API_KEY=your_openai_api_key
OPENAI_ORG=your_openai_org_id

# Optional. Default: gpt-3.5-turbo-instruct
OPENAI_TEXT_MODEL=gpt-3.5-turbo-instruct
16 changes: 16 additions & 0 deletions Dockerfile
@@ -0,0 +1,16 @@
FROM node:20-alpine

WORKDIR /app

COPY package.json yarn.lock ./

RUN corepack enable && yarn install --frozen-lockfile --production

COPY index.js ./

ENV NODE_ENV=production
ENV PORT=3000

EXPOSE 3000

CMD ["yarn", "start"]
70 changes: 53 additions & 17 deletions README.md
@@ -1,24 +1,60 @@
---
title: Node HTTP Module
description: An HTTP module server
tags:
- http
- nodejs
- javascript
---
# chatGPT-nodejs

# HTTP Module Example
A Node.js service built on Koa that exposes two APIs:

This example starts an [HTTP Module](https://nodejs.org/api/http.html) server.
- `GET /chat?prompt=...`
- `GET /image?prompt=...`

[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/new/template/ZweBXA)
It also exposes a health check endpoint:

## 💁‍♀️ How to use
- `GET /health`

- Install dependencies `yarn`
- Connect to your Railway project `railway link`
- Start the development server `railway run yarn start`
## Environment Variables

## 📝 Notes
OpenAI credentials can be configured under either of the following naming schemes:

The server started simply returns a `Hello World` payload. The server code is located in `server.mjs`.
- `APP_KEY` or `OPENAI_API_KEY` (required when calling the AI endpoints)
- `APP_ORG` or `OPENAI_ORG` (optional)
- `OPENAI_TEXT_MODEL` (optional, defaults to `gpt-3.5-turbo-instruct`)
- `PORT` (optional, defaults to `3000`)

Copy `.env.example` as a starting point for your configuration.
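
The fallback order (`APP_*` first, then `OPENAI_*`, then defaults) mirrors what `index.js` does; a minimal sketch (the `resolveConfig` helper name is illustrative, not part of the codebase):

```javascript
// Illustrative helper mirroring the env-var fallback order used in index.js:
// APP_* values take precedence over OPENAI_* values, with defaults last.
function resolveConfig(env) {
  return {
    apiKey: env.APP_KEY || env.OPENAI_API_KEY || "",
    organization: env.APP_ORG || env.OPENAI_ORG,
    textModel: env.OPENAI_TEXT_MODEL || "gpt-3.5-turbo-instruct",
    port: Number(env.PORT || 3000),
  };
}
```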

## Running Locally

```bash
yarn
PORT=3000 APP_KEY=your_key yarn start
```

Once started, visit:

- `http://localhost:3000/health`
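
Prompts containing spaces or non-ASCII characters must be URL-encoded before being passed to `/chat` or `/image`; a small sketch of building such a URL (the local base URL is an assumption about your dev setup):

```javascript
// Build a /chat request URL with a URL-encoded prompt.
// The base URL defaults to the assumed local development server.
function chatUrl(prompt, base = "http://localhost:3000") {
  const url = new URL("/chat", base);
  url.searchParams.set("prompt", prompt);
  return url.toString();
}
```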

## Docker Deployment

Build the image:

```bash
docker build -t chatgpt-nodejs .
```

Run the container:

```bash
docker run -p 3000:3000 \
-e PORT=3000 \
-e APP_KEY=your_key \
-e APP_ORG=your_org \
chatgpt-nodejs
```

## Railway Deployment

The project ships with `railway.toml`, so it can be deployed directly:

1. Create a project on Railway and connect the repository
2. Set the environment variables under Railway Variables (at minimum `APP_KEY`)
3. After deploying, use `/health` for health checks

[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/new)
130 changes: 91 additions & 39 deletions index.js
@@ -1,57 +1,109 @@
import { Configuration, OpenAIApi } from "openai";
import Koa from "koa"
import Koa from "koa";
import Router from "koa-router";

// https://platform.openai.com/docs/api-reference/images
const port = Number(process.env.PORT || 3000);
const apiKey = process.env.APP_KEY || process.env.OPENAI_API_KEY || "";
const organization = process.env.APP_ORG || process.env.OPENAI_ORG;
const textModel = process.env.OPENAI_TEXT_MODEL || "gpt-3.5-turbo-instruct";

const configuration = new Configuration({
organization: process.env.APP_ORG,
apiKey: process.env.APP_KEY,
});
const openai = new OpenAIApi(configuration);
const response = await openai.listEngines();
const openai = apiKey
? new OpenAIApi(
new Configuration({
organization,
apiKey,
}),
)
: null;

const app = new Koa()
const app = new Koa();
const router = new Router();

app.use(async (ctx, next) => {
try {
await next();
} catch (error) {
ctx.status = 500;
ctx.body = {
error: "Internal server error",
message: error?.message || "Unknown error",
};
}
});

router.get("/chat", async (ctx, next) => {
// Read the parameters from the request
const { prompt } = ctx.request.query;
router.get("/", (ctx) => {
ctx.body = {
service: "chatGPT-nodejs",
status: "ok",
endpoints: ["/health", "/chat?prompt=...", "/image?prompt=..."],
};
});

const res = await openai.createCompletion({
// Dialogue model
model: "text-davinci-003", // dialogue-babi-001 dialogue model
prompt: prompt,
max_tokens: 2048,
temperature: 0.2
})
// Return the generated content to the client
ctx.body = res.data.choices
router.get("/health", (ctx) => {
ctx.body = {
status: "ok",
openaiConfigured: Boolean(openai),
};
});

router.get("/image", async (ctx, next) => {
// Read the parameters from the request
const { prompt } = ctx.request.query;
const res = await openai.createImage({
// Image model
model: "image-alpha-001",
prompt: prompt,
size: "256x256",
n: 1
})
// Return the generated content to the client
var url = res.data.data[0].url

ctx.body = "<img src=\"" + url + "\"></>"
router.get("/chat", async (ctx) => {
const prompt = String(ctx.request.query.prompt || "").trim();

if (!openai) {
ctx.status = 503;
ctx.body = {
error: "OpenAI API key is not configured",
expectedEnv: ["APP_KEY", "OPENAI_API_KEY"],
};
return;
}

if (!prompt) {
ctx.status = 400;
ctx.body = { error: "Query param `prompt` is required" };
return;
}

const res = await openai.createCompletion({
model: textModel,
prompt,
max_tokens: 1024,
temperature: 0.2,
});

ctx.body = { choices: res.data.choices };
});

router.get("/image", async (ctx) => {
const prompt = String(ctx.request.query.prompt || "").trim();

if (!openai) {
ctx.status = 503;
ctx.body = {
error: "OpenAI API key is not configured",
expectedEnv: ["APP_KEY", "OPENAI_API_KEY"],
};
return;
}

if (!prompt) {
ctx.status = 400;
ctx.body = { error: "Query param `prompt` is required" };
return;
}

const res = await openai.createImage({
prompt,
size: "256x256",
n: 1,
});

ctx.body = { url: res.data.data[0]?.url || null };
});

// Mount the routes
app.use(router.routes()).use(router.allowedMethods());

// Start the server
app.listen(process.env.PORT, () => {
console.log("Server is listening on port " + process.env.PORT);
app.listen(port, () => {
console.log(`Server is listening on port ${port}`);
});
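
The `/chat` and `/image` handlers repeat the same 503/400 guards; they could be factored into a small helper with the same response shapes. A sketch (the `guardRequest` name is hypothetical, not in the codebase):

```javascript
// Hypothetical helper factoring out the repeated guards in /chat and /image.
// Returns true when the request may proceed; otherwise sets the error response.
function guardRequest(ctx, openai, prompt) {
  if (!openai) {
    ctx.status = 503;
    ctx.body = {
      error: "OpenAI API key is not configured",
      expectedEnv: ["APP_KEY", "OPENAI_API_KEY"],
    };
    return false;
  }
  if (!prompt) {
    ctx.status = 400;
    ctx.body = { error: "Query param `prompt` is required" };
    return false;
  }
  return true;
}
```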

8 changes: 8 additions & 0 deletions railway.toml
@@ -0,0 +1,8 @@
[build]
builder = "NIXPACKS"

[deploy]
startCommand = "yarn start"
healthcheckPath = "/health"
restartPolicyType = "ON_FAILURE"
restartPolicyMaxRetries = 10