Depends on #3

- [ ] Set up a development environment with an inline database
  - [ ] [docker-compose](https://docs.docker.com/compose/)
  - [ ] [PgVector](https://github.com/pgvector/pgvector)
- [ ] Save and embed user messages
  - [ ] Persist user messages in PostgreSQL
  - [ ] Create and persist [full-text search indexes](https://www.postgresql.org/docs/current/textsearch.html) for all messages
  - [ ] Use the [Ollama Embeddings API](https://ollama.com/blog/embedding-models) to create vector embeddings for the user messages
- [ ] Implement the memory tool
  - [ ] Expose a new `memory` tool to the LLM
  - [ ] Use [PgVector Hybrid Search](https://docs.pgvecto.rs/use-case/hybrid-search.html) to find messages related to the LLM query
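The inline database could be wired up with a compose file along these lines — a minimal sketch only; the image tag, credentials, and volume name are placeholder assumptions, not decisions made in this issue:

```yaml
# docker-compose.yml — local development database with pgvector preinstalled.
# Image tag and credentials are placeholders for local use only.
services:
  db:
    image: pgvector/pgvector:pg16
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: memory
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

Using the prebuilt `pgvector/pgvector` image avoids compiling the extension inside a custom Dockerfile.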
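The persistence and full-text indexing steps might look like the following sketch. Table and index names, and the embedding dimension (768, matching a model such as `nomic-embed-text`), are assumptions — the dimension must agree with whichever Ollama embedding model is chosen:

```sql
-- Sketch only: table, full-text index, and vector index for user messages.
-- The 768 dimension is an assumption tied to the embedding model.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE messages (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(768),
    tsv       tsvector GENERATED ALWAYS AS (to_tsvector('english', content)) STORED
);

-- Full-text search index over the generated tsvector column.
CREATE INDEX messages_tsv_idx ON messages USING GIN (tsv);

-- Approximate nearest-neighbour index for the embeddings (pgvector >= 0.5).
CREATE INDEX messages_embedding_idx ON messages USING hnsw (embedding vector_cosine_ops);
```

The generated `tsv` column keeps the full-text index in sync automatically as messages are inserted.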
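For the `memory` tool's retrieval step, a hybrid query could merge the keyword and semantic result lists with reciprocal rank fusion, in the style of the pgvector hybrid-search examples. A sketch, assuming the `messages` schema above, with `$1` as the query embedding and `$2` as the query text (limits and the RRF constant 60 are conventional, not fixed requirements):

```sql
-- Hybrid search sketch: fuse semantic and keyword ranks (RRF, k = 60).
WITH semantic AS (
    SELECT id, RANK() OVER (ORDER BY embedding <=> $1) AS rnk
    FROM messages
    ORDER BY embedding <=> $1
    LIMIT 20
),
keyword AS (
    SELECT id,
           RANK() OVER (ORDER BY ts_rank_cd(tsv, plainto_tsquery('english', $2)) DESC) AS rnk
    FROM messages
    WHERE tsv @@ plainto_tsquery('english', $2)
    LIMIT 20
)
SELECT id,
       COALESCE(1.0 / (60 + semantic.rnk), 0)
     + COALESCE(1.0 / (60 + keyword.rnk), 0) AS score
FROM semantic
FULL OUTER JOIN keyword USING (id)
ORDER BY score DESC
LIMIT 5;
```

The fused rows would then be returned as the `memory` tool's result to the LLM.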