This is a NestJS (Node.js) API implementing MongoDB Atlas Vector Search on a sample dataset, using OpenAI's text-embedding-ada-002 model. It can filter data by a provided vector, or use the OpenAI API to transform a given string into a 1536-dimension vector first. Features such as throttling, caching, embedding generation, and CQRS are implemented for better modularity and separation of concerns.
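As an illustration only (not the project's actual service code), a query of this kind might look like the following sketch using the Node MongoDB driver; the index name `vector_index`, the embedding field `plot_embedding`, and the projected fields are assumptions:

```typescript
import { MongoClient } from 'mongodb';

// Hypothetical sketch of an Atlas Vector Search aggregation.
async function searchByVector(queryVector: number[]) {
  const client = new MongoClient(process.env.MONGO_URI!);
  try {
    const collection = client
      .db(process.env.MONGO_DB_NAME)
      .collection(process.env.MONGO_COLLECTION_NAME!);

    // $vectorSearch performs an approximate nearest-neighbour search
    // against the embeddings stored in the collection.
    return await collection
      .aggregate([
        {
          $vectorSearch: {
            index: 'vector_index',   // assumed index name
            path: 'plot_embedding',  // assumed embedding field
            queryVector,             // 1536-dimension query vector
            numCandidates: 100,
            limit: 10,
          },
        },
        { $project: { _id: 0, title: 1, score: { $meta: 'vectorSearchScore' } } },
      ])
      .toArray();
  } finally {
    await client.close();
  }
}
```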
- Clone the repository:
git clone <repo-url>
cd <project-folder>
- Install dependencies:
yarn
- Set up environment variables (see Configuration).
This app requires some environment variables to be set. Create a .env file in the root directory and add:
MONGO_URI=your_mongo_uri
MONGO_DB_NAME=your_database_name
MONGO_COLLECTION_NAME=your_collection_name
OPENAI_API_KEY=your_openai_token
PORT=3000

Throttling is implemented using @nestjs/throttler to limit request rates and prevent abuse.
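A typical global setup looks like the sketch below; the limits shown (10 requests per 60 seconds) and the option shape are assumptions and depend on the installed @nestjs/throttler version, not values taken from this repo:

```typescript
import { Module } from '@nestjs/common';
import { APP_GUARD } from '@nestjs/core';
import { ThrottlerGuard, ThrottlerModule } from '@nestjs/throttler';

@Module({
  imports: [
    // Allow at most 10 requests per 60 seconds per client (illustrative values).
    ThrottlerModule.forRoot([{ ttl: 60_000, limit: 10 }]),
  ],
  providers: [
    // Apply the throttler guard to every route.
    { provide: APP_GUARD, useClass: ThrottlerGuard },
  ],
})
export class AppModule {}
```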
Caching is implemented using @nestjs/cache-manager to improve performance and reduce API calls.
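A minimal sketch of how such caching can be wired up globally is shown below; the TTL, entry limit, and in-memory store are assumptions, not the repo's actual configuration:

```typescript
import { Module } from '@nestjs/common';
import { APP_INTERCEPTOR } from '@nestjs/core';
import { CacheInterceptor, CacheModule } from '@nestjs/cache-manager';

@Module({
  imports: [
    // In-memory cache; entries expire after 30 seconds (illustrative values).
    CacheModule.register({ ttl: 30_000, max: 100 }),
  ],
  providers: [
    // Automatically cache GET responses.
    { provide: APP_INTERCEPTOR, useClass: CacheInterceptor },
  ],
})
export class AppModule {}
```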
This module generates text embeddings using OpenAI’s text-embedding-ada-002 model. It requires an OpenAI API key.
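At its core, generating an embedding with the official openai package looks roughly like this (the project likely wraps this call in an injectable NestJS service; the function name here is hypothetical):

```typescript
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Turn an arbitrary string into a 1536-dimension embedding vector.
export async function embed(text: string): Promise<number[]> {
  const response = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: text,
  });
  return response.data[0].embedding;
}
```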
This app follows the CQRS (Command Query Responsibility Segregation) pattern to decouple commands from queries, ensuring a clean architecture and serving as an additional layer between controllers and services.
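The sketch below shows the general shape of that layer with @nestjs/cqrs; the query, handler, and controller names are assumptions for illustration, not the classes used in this repo:

```typescript
import { Controller, Get, Query } from '@nestjs/common';
import { IQueryHandler, QueryBus, QueryHandler } from '@nestjs/cqrs';

// Hypothetical query object carrying the search text.
export class SearchMoviesQuery {
  constructor(public readonly text: string) {}
}

@QueryHandler(SearchMoviesQuery)
export class SearchMoviesHandler implements IQueryHandler<SearchMoviesQuery> {
  async execute(query: SearchMoviesQuery) {
    // 1. Generate an embedding for query.text (OpenAI).
    // 2. Run the vector search aggregation with that vector.
    // 3. Return the matching documents.
    return [];
  }
}

@Controller('search')
export class SearchController {
  constructor(private readonly queryBus: QueryBus) {}

  @Get()
  search(@Query('q') q: string) {
    // The controller only dispatches the query; the handler owns the logic.
    return this.queryBus.execute(new SearchMoviesQuery(q));
  }
}
```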
Start the application using:
yarn start

For development mode with hot reload:

yarn start:dev

This project is licensed under the MIT License.