
Vector Search Setup

Alessio Rocchi edited this page Jan 27, 2026 · 1 revision

Enable semantic search with vector embeddings.


Prerequisites

  • OpenAI API key (for OpenAI embeddings)
  • OR Ollama installed and running locally (for local embeddings)

Configuration

OpenAI Embeddings

{
  "memory": {
    "vectorSearch": {
      "enabled": true,
      "provider": "openai",
      "model": "text-embedding-3-small"
    }
  },
  "providers": {
    "openai": {
      "apiKey": "${OPENAI_API_KEY}"
    }
  }
}
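The ${OPENAI_API_KEY} value is a placeholder resolved from the environment rather than a literal key. A minimal sketch of how such a substitution can work (hypothetical helper; the project may perform this resolution internally):

```javascript
// Resolve "${VAR_NAME}"-style placeholders against process.env.
// Unset variables resolve to an empty string in this sketch.
function resolveEnvPlaceholders(value) {
  return value.replace(/\$\{(\w+)\}/g, (_, name) => process.env[name] ?? '');
}
```

Set OPENAI_API_KEY in your shell before starting the application so the placeholder resolves to your actual key.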

Ollama Embeddings (Local)

{
  "memory": {
    "vectorSearch": {
      "enabled": true,
      "provider": "ollama",
      "model": "nomic-embed-text"
    }
  }
}

Usage

Store with Embedding

await memory.store('concept:jwt', 'JWT provides stateless authentication', {
  generateEmbedding: true
});

Semantic Search

const results = await memory.search('how to authenticate users', {
  useVector: true,
  threshold: 0.7
});

// Returns semantically similar entries
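The threshold option filters results by similarity between the query embedding and each stored embedding. Cosine similarity is the usual metric; a minimal sketch of that scoring (an assumption about the internals, not the project's actual implementation):

```javascript
// Cosine similarity: ~1.0 means near-identical direction, 0.0 means unrelated.
// With threshold: 0.7, entries scoring below 0.7 would be dropped from results.
function cosineSimilarity(a, b) {
  if (a.length !== b.length) {
    throw new Error(`embedding dimension mismatch: ${a.length} vs ${b.length}`);
  }
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

The dimension check matters in practice: embeddings produced by different models (e.g. 1536 vs 768 dimensions) cannot be compared.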

Embedding Models

OpenAI:

  • text-embedding-3-small - 1536 dimensions (default)
  • text-embedding-3-large - 3072 dimensions (better quality)

Ollama:

  • nomic-embed-text - 768 dimensions (default)
  • mxbai-embed-large - 1024 dimensions

Performance

  • Vector search requires API calls (OpenAI) or local computation (Ollama)
  • FTS5 search is faster for exact keyword matches
  • Use a hybrid approach: vector search for conceptual queries, FTS for exact keywords
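The hybrid approach above amounts to running both searches and merging their hits. A sketch of one way to combine the two result sets (the { key, score } result shape and the helper name are assumptions, not the project's API):

```javascript
// Merge vector (semantic) and FTS (keyword) hits, deduplicating by key.
// When both searches return the same entry, keep the higher score.
function mergeHybridResults(vectorHits, ftsHits) {
  const byKey = new Map();
  for (const hit of [...vectorHits, ...ftsHits]) {
    const existing = byKey.get(hit.key);
    if (!existing || hit.score > existing.score) byKey.set(hit.key, hit);
  }
  return [...byKey.values()].sort((x, y) => y.score - x.score);
}
```

Keeping the max score per key is one simple choice; weighted blends of the two scores are another common option.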
