An OpenAI-powered CLI to build a semantic search index from your MDX files. It lets you run semantic searches across your content and integrate them with your platform.

This project uses OpenAI to generate vector embeddings and Pinecone to host them, so you need accounts with both OpenAI and Pinecone to use it.
## Setting up a Pinecone project
After creating an account in Pinecone, go to the dashboard and click on the **Create Index** button.

Fill in the form with your new index name (e.g. your blog name) and set the number of dimensions to 1536.
The CLI requires four env keys:

```
OPENAI_API_KEY=
PINECONE_API_KEY=
PINECONE_BASE_URL=
PINECONE_NAMESPACE=
```

Make sure to add them before using it!
## Usage

`index <dir>` – processes files with your content and uploads them to Pinecone.

Example:

```sh
$ @beerose/semantic-search index ./posts
```

`search <query>` – performs a semantic search for a given query.

Example:

```sh
$ @beerose/semantic-search search "hello world"
```

For more info, run any command with the `--help` flag:

```sh
$ @beerose/semantic-search index --help
$ @beerose/semantic-search search --help
$ @beerose/semantic-search --help
```

You can use the `semanticQuery` function exported from this library and integrate it with your website or application.
Install deps:

```sh
$ pnpm add pinecone-client openai @beerose/semantic-search
# or `yarn add` or `npm i`
```

An example usage:
```ts
import { PineconeMetadata, semanticQuery } from "@beerose/semantic-search";
import { Configuration, OpenAIApi } from "openai";
import { PineconeClient } from "pinecone-client";

const openai = new OpenAIApi(
  new Configuration({
    apiKey: process.env.OPENAI_API_KEY,
  })
);

const pinecone = new PineconeClient<PineconeMetadata>({
  apiKey: process.env.PINECONE_API_KEY,
  baseUrl: process.env.PINECONE_BASE_URL,
  namespace: process.env.PINECONE_NAMESPACE,
});

const result = await semanticQuery("hello world", openai, pinecone);
```

Here's an example API route from aleksandra.codes: https://github.com/beerose/aleksandra.codes/blob/main/api/search.ts
## How it works

Semantic search can understand the meaning of words in documents and return results that are more relevant to the user's intent.
This tool uses OpenAI to generate vector embeddings with the `text-embedding-ada-002` model.

Embeddings are numerical representations of concepts converted to number sequences, which make it easy for computers to understand the relationships between those concepts: https://openai.com/blog/new-and-improved-embedding-model/
It also uses Pinecone – a hosted database for vector search. It lets us perform k-NN searches across the generated embeddings.
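Here, k-NN means finding the k stored vectors nearest to the query vector, with nearness typically measured by cosine similarity. A minimal sketch of the metric (illustrative code, not Pinecone's API – Pinecone computes this server-side):

```ts
// Cosine similarity between two equal-length embedding vectors.
// 1 means "same direction" (very similar), 0 means orthogonal (unrelated).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [2, 0])); // 1 (same direction)
console.log(cosineSimilarity([1, 0], [0, 3])); // 0 (orthogonal)
```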
The `@beerose/semantic-search index` CLI command performs the following steps for each file in a given directory:
- Converts the MDX files to raw text.
- Extracts the title.
- Splits the file into chunks of a maximum of 100 tokens.
- Generates OpenAI embeddings for each chunk.
- Upserts the embeddings to Pinecone.
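The chunking step can be sketched roughly as follows. Note this is a simplification: the real `splitIntoChunks.ts` caps chunks at 100 model tokens, while this self-contained version caps by whitespace-separated words to show the idea.

```ts
// Simplified sketch: split text into fixed-size chunks. The function name
// and word-based cap are illustrative; the actual implementation counts tokens.
function splitIntoWordChunks(text: string, maxWords: number): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += maxWords) {
    chunks.push(words.slice(i, i + maxWords).join(" "));
  }
  return chunks;
}

console.log(splitIntoWordChunks("one two three four five", 2));
// [ 'one two', 'three four', 'five' ]
```

Keeping chunks small matters because each chunk gets its own embedding, so search results can point at the specific passage that matched rather than a whole post.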
Depending on your content, the whole process requires many calls to OpenAI and Pinecone, which can take some time. For example, it takes around thirty minutes for a directory of ~25 blog posts with an average reading time of 6 minutes.
To test the semantic search, you can use the `@beerose/semantic-search search` CLI command, which:
- Creates an embedding for a provided query.
- Sends a request to Pinecone with the embedding.
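Put together, the query flow amounts to embedding the query and ranking the stored vectors by similarity. A self-contained sketch with a toy in-memory index – the real command calls OpenAI and Pinecone, and all names here are illustrative:

```ts
// Toy stand-in for a Pinecone record: an id plus its embedding vector.
type IndexedChunk = { id: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Return the k chunks most similar to the query embedding.
function topK(queryEmbedding: number[], index: IndexedChunk[], k: number): IndexedChunk[] {
  return [...index]
    .sort((a, b) => cosine(queryEmbedding, b.embedding) - cosine(queryEmbedding, a.embedding))
    .slice(0, k);
}

const index: IndexedChunk[] = [
  { id: "post-a#chunk-0", embedding: [0.9, 0.1] },
  { id: "post-b#chunk-0", embedding: [0.1, 0.9] },
];
console.log(topK([1, 0], index, 1)[0].id); // "post-a#chunk-0"
```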
## Project structure

```
.
├── bin
│   └── cli.js
├── src
│   ├── bin
│   │   └── cli.ts
│   ├── commands
│   │   ├── indexFiles.ts
│   │   └── search.ts
│   ├── getEmbeddings.ts
│   ├── isRateLimitExceeded.ts
│   ├── mdxToPlainText.test.ts
│   ├── mdxToPlainText.ts
│   ├── semanticQuery.ts
│   ├── splitIntoChunks.test.ts
│   ├── splitIntoChunks.ts
│   ├── titleCase.ts
│   └── types.ts
├── tsconfig.build.json
├── tsconfig.json
├── package.json
└── pnpm-lock.yaml
```

- `bin/cli.js` – The CLI entrypoint.
- `src`:
  - `bin/cli.ts` – Defines the CLI commands and settings. This project uses CAC for building CLIs.
  - `commands/indexFiles.ts` – A CLI command that handles processing md/mdx content, generating embeddings, and uploading vectors to Pinecone.
  - `commands/search.ts` – A semantic search command. It generates an embedding for a given search query and then calls Pinecone for the results.
  - `getEmbeddings.ts` – Embedding generation logic. It handles the call to OpenAI.
  - `isRateLimitExceeded.ts` – Error handling helper.
  - `mdxToPlainText.ts` – Converts MDX files to raw text. Uses remark and a custom `remarkMdxToPlainText` plugin (also defined in that file).
  - `semanticQuery.ts` – Core logic for performing semantic searches. It's used in the `search` command and also exported from this library so that you can integrate it with your projects.
  - `splitIntoChunks.ts` – Splits the text into chunks with a maximum of 100 tokens.
  - `titleCase.ts` – Extracts a title from a file path.
  - `types.ts` – Types and utilities used in this project.
- `tsconfig.json` – TypeScript compiler configuration.
- `tsconfig.build.json` – TypeScript compiler configuration used for `pnpm build`.
Tests:

- `src/mdxToPlainText.test.ts`
- `src/splitIntoChunks.test.ts`
## Development

Install deps and build the project:

```sh
pnpm i
pnpm build
```

Run the CLI locally:

```sh
node bin/cli.js
```

Run the tests:

```sh
pnpm test
```

## Contributing

Contributions, issues and feature requests are welcome. Feel free to check the issues page if you want to contribute.
## License

Copyright © 2023 Aleksandra Sikora.

This project is MIT licensed.





