Using LLMs to treat healthcare-induced anxiety
To build and run the project with Docker Compose, run the following command:
docker-compose up --build
To set things up manually instead, make sure you have Python 3 installed (specifically 3.12.11).
Install Ollama. This is needed to run an LLM locally.
Once Ollama is installed and running, run the following command in a terminal to download the gemma3:1b model:
ollama pull gemma3:1b
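
To confirm the model responds before wiring up the backend, you can query Ollama's local REST API directly. This is just a quick sanity check (a minimal sketch using only the standard library), assuming Ollama is listening on its default port, 11434:

```python
# Quick check that the local Ollama server can run gemma3:1b.
# Uses only the standard library so no extra dependencies are needed.
import json
import urllib.request

payload = json.dumps({
    "model": "gemma3:1b",
    "prompt": "Say hello in one short sentence.",
    "stream": False,  # return a single JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

If this prints a short greeting, the model is pulled and the Ollama server is reachable.
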
Create a virtual environment:
python3.12 -m venv .venv
Activate the virtual environment:
source .venv/bin/activate
- Install dependencies:
pip install -r requirements.txt
- Start the backend server:
cd src
uvicorn server:app --host 0.0.0.0 --port 8000
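
`uvicorn server:app` expects `src/server.py` to expose an ASGI application object named `app`. The repository's actual server.py is not reproduced here; the sketch below is only an illustration of that shape, assuming a FastAPI app with a hypothetical `/api/chat` endpoint that forwards the prompt to the local gemma3:1b model through Ollama:

```python
# src/server.py -- illustrative sketch only; the real application may differ.
import json
import urllib.request

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()  # `uvicorn server:app` looks for this object named "app"

OLLAMA_URL = "http://localhost:11434/api/generate"


class ChatRequest(BaseModel):
    prompt: str


@app.post("/api/chat")  # hypothetical endpoint name, not confirmed by the repo
def chat(request: ChatRequest):
    # Forward the prompt to the local gemma3:1b model and return its reply.
    payload = json.dumps({
        "model": "gemma3:1b",
        "prompt": request.prompt,
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return {"reply": json.loads(resp.read())["response"]}
```

Keeping routes under /api lines up with the Vite proxy configured in the frontend section below.
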
In a separate terminal:
- Install dependencies:
cd ui
npm install
- Configure the proxy in vite.config.js (already done), so requests to paths under /api are forwarded to the backend at http://localhost:8000:
server: {
  proxy: {
    '/api': 'http://localhost:8000'
  }
}
- Start the development server:
npm run dev
Vite will print the local development URL to open in your browser (typically http://localhost:5173).