A comprehensive AI translation solution based on RAG
- Simple Translator
- Edit `vite-project/.env` under `vite-project`, e.g. `VITE_FLASK_URL=http://localhost:5000`
- Edit `backend/.env` under `backend`, e.g.:

  ```
  USE_STREAM=false
  OPENAI_API_KEY=xxxxxx
  DEEPSEEK_API_KEY=xxxxxxxxx
  ```
- Edit `backend/config.yml` under `backend` to configure the models, e.g.:

  ```yaml
  models:
    - name: gpt-3.5-turbo
      providers: [OpenAI, DeepSeek, Custom]
      max_tokens: 1000
      temperature: 0.7
  ```
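To make the relationship between these files concrete, here is a minimal sketch of how the backend might read the `.env` values and select a model from the parsed `config.yml` structure. The `load_settings` and `pick_model` helpers are hypothetical, not taken from the repository; only the variable names and the example model entry come from the configuration shown above.

```python
import os


def load_settings():
    """Read backend settings from environment variables (as set in backend/.env).

    USE_STREAM arrives as the string "false"/"true"; convert it to a boolean.
    """
    return {
        "use_stream": os.getenv("USE_STREAM", "false").lower() == "true",
        "openai_api_key": os.getenv("OPENAI_API_KEY", ""),
        "deepseek_api_key": os.getenv("DEEPSEEK_API_KEY", ""),
    }


def pick_model(config, name):
    """Select a model entry by name from the parsed config.yml structure."""
    for model in config["models"]:
        if model["name"] == name:
            return model
    raise KeyError(f"model {name!r} not found in config")


# config.yml parsed into a Python structure (e.g. via PyYAML's yaml.safe_load)
config = {
    "models": [
        {
            "name": "gpt-3.5-turbo",
            "providers": ["OpenAI", "DeepSeek", "Custom"],
            "max_tokens": 1000,
            "temperature": 0.7,
        }
    ]
}

os.environ.setdefault("USE_STREAM", "false")
settings = load_settings()
model = pick_model(config, "gpt-3.5-turbo")
```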
You can set up the environment with one command:

```shell
docker-compose up -d
```
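The single-command setup implies a `docker-compose.yml` at the repository root. One plausible sketch, assuming two services built from the directories above (service names, ports, and build contexts are assumptions, not taken from the repository):

```yaml
version: "3.8"
services:
  frontend:
    build: ./vite-project
    ports:
      - "3000:3000"
    environment:
      - VITE_FLASK_URL=http://localhost:5000
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    env_file:
      - ./backend/.env
```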
- Node.js (v14+)
- npm or yarn
```shell
cd vite-project
npm install

# vite project
npm run dev
```

- Access the web interface at `http://localhost:3000`
- Use the API endpoints as documented in the `/docs` folder
- Python 3.11 or higher
- pip (Python package manager)
- Install Python dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- Set environment variables:
  - Copy `.env` to your working directory and update credentials as needed, e.g. `USE_STREAM=false`
- Run the backend server:

  ```shell
  cd backend
  python app.py
  ```

  The backend API will be available at `http://localhost:5000`.
- The backend uses SQLite for translator data storage (`backend/config.db`).
- For development, the server runs with `debug=True` for hot reloading.
- Keep your `.env` file secure and do not commit sensitive credentials.
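Since translator data lives in SQLite, here is a hedged sketch of how the backend might persist records. The `translations` table and its columns are hypothetical (the repository only names the database file); the sketch runs against an in-memory database, whereas the real backend would open `backend/config.db`.

```python
import sqlite3


def init_db(path=":memory:"):
    """Open the SQLite database and ensure the (hypothetical) table exists."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS translations (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               source_text TEXT NOT NULL,
               translated_text TEXT NOT NULL,
               model TEXT NOT NULL
           )"""
    )
    return conn


def save_translation(conn, source, translated, model):
    """Insert one translation record and return its row id."""
    cur = conn.execute(
        "INSERT INTO translations (source_text, translated_text, model)"
        " VALUES (?, ?, ?)",
        (source, translated, model),
    )
    conn.commit()
    return cur.lastrowid


conn = init_db()  # in production this would point at backend/config.db
row_id = save_translation(conn, "Hello", "Bonjour", "gpt-3.5-turbo")
```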
Contributions are welcome! Please open issues or submit pull requests.
This project is licensed under the MIT License.