chatOllama is a ChatGPT-style chat application that uses generative models to produce responses to user input based on the conversation's context. The project is built with React, TypeScript, Express.js, Prisma, Ollama, Tailwind CSS, and Shadcn/UI.

Prerequisites:
- Node.js (v21.6.2)
- npm (v10.5.0)
- Docker (optional, used to run Ollama)
Clone the repository:

```bash
git clone <repo>
```

Navigate to the backend folder and install dependencies:
```bash
cd backend
npm install
```

Navigate to the frontend folder and install dependencies:
```bash
cd ../frontend
npm install
```

To start the backend server, navigate to `backend/prisma` and create a `.env` file with the following content:

```
DATABASE_URL="postgresql://<username>:<password>@localhost:5432/<databaseName>?schema=public"
```

Then, run the following commands:
```bash
npx prisma migrate dev
npx prisma generate
cd ..
npm run dev
```

To start the frontend server, run the following commands:
```bash
cd frontend
npm run dev
```

The AI model in this project is served by Ollama, a runtime for local large language models; the model generates responses to user input based on the conversation's context. For this project, we use llama3.2.
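To illustrate what "based on the conversation's context" means in practice, here is a minimal, hypothetical sketch of how a backend might assemble a request for Ollama's `/api/chat` endpoint. The names (`ChatMessage`, `buildChatPayload`, `HISTORY_LIMIT`) are illustrative assumptions, not taken from this repository.

```typescript
// Hypothetical sketch: assembling conversation context for Ollama's
// /api/chat endpoint. Names here are assumptions, not the repo's code.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const HISTORY_LIMIT = 20; // assumed cap to keep the prompt small

function buildChatPayload(history: ChatMessage[], userInput: string) {
  const messages = [
    ...history.slice(-HISTORY_LIMIT), // most recent context only
    { role: "user" as const, content: userInput },
  ];
  return { model: "llama3.2", messages, stream: false };
}

// The payload would then be POSTed to the local Ollama server, e.g.:
// fetch("http://localhost:11434/api/chat", {
//   method: "POST",
//   body: JSON.stringify(buildChatPayload(history, "Hello!")),
// });
```

Keeping only the most recent messages is one simple way to bound prompt size; the actual backend may use a different strategy.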
To run the model, you need to install Ollama. This can be done using Docker.
In this repository, there is an ollama.sh script that will help you run the model:

```bash
./ollama.sh
```

This script will download the model and run it in a Docker container, utilizing your GPU if available.
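The script's contents are not reproduced here, but based on Ollama's official Docker instructions it likely amounts to something like the following sketch (the actual ollama.sh in the repository is authoritative):

```shell
# Hedged sketch of what ollama.sh might do, following Ollama's official
# Docker instructions; not the repository's actual script.

# Start the Ollama server in a container, persisting models in a volume.
# Drop --gpus=all if no NVIDIA GPU / container toolkit is available.
docker run -d --gpus=all -v ollama:/root/.ollama \
  -p 11434:11434 --name ollama ollama/ollama

# Pull and run the llama3.2 model inside the container.
docker exec -it ollama ollama run llama3.2
```

Once the container is up, the backend can reach the model at `http://localhost:11434`.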