User Guide
This project serves as a Proof of Concept (PoC) for using embeddings to achieve long-term memory and Q&A functionality. This wiki page guides you through the initial setup process, including configuring initial prompts, settings, and more. Additionally, it offers a concise explanation of the underlying mechanics of this program.
If you want to know how to load large document files, please finish the current page first, then check this page.
Follow these steps to start using this program:
- Download the compiled binary for your operating system (OS) from the releases.
- Extract the files to a folder of your choice. For the purposes of this guide, let's call the folder `CCB`.
- (OS-specific) If you are using Linux or macOS, open a terminal and run the following commands. This sets the permission to execute the file.

  ```sh
  cd /path/to/the/folder/CCB
  chmod 755 GPT3Bot
  ```

- To run the program, follow these steps based on your OS:
  - Windows: Double-click `GPT3Bot.exe` to run the program.
  - Linux or macOS: Open a terminal and run the following commands:

    ```sh
    cd /path/to/the/folder/CCB
    ./GPT3Bot
    ```
- The program will prompt you to enter your OpenAI API key. If you don't have an API key, you can generate one here. The program will automatically validate the API key. If it's invalid, enter a valid API key and try again.
- Next, the program will prompt you to choose whether to load the initial prompt or saved chat history.
  - To load the initial prompt, press `Enter`.
  - To load the saved chat history, press `S` on your keyboard, then press `Enter`.
- You will then be prompted to enter the filename for the initial prompt or saved chat history.
  - To use the default initial prompt, press `Enter`.
  - To use a custom initial prompt or saved chat history, type the filename (e.g. `bot` for a file named `bot.txt` for an initial prompt or `bot.json` for saved chat history) and press `Enter`.
- You are now ready to chat! Type in your questions and press `Enter`. The language model will respond.
- You have the option to enhance your experience by adding personalized .txt files to the `initial` folder. By doing this, you can create custom initial prompts that you can choose to load each time you start the program, giving it your unique touch. In addition to the `Default.txt` file that's already present, personalized initial prompts act like header messages for each new interaction. These prompts are sent to the language model every time you enter an input, regardless of its context. This feature allows for greater customization and control over the overall tone and style of your interactions with the language model. (A sketch of how such a header prompt can be assembled is shown after this list.)
- If you want to modify the config, check this wiki page: Config.
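To make the "header message" behavior concrete, here is a minimal Python sketch of how an initial prompt from the `initial` folder could be prepended to every request. This is not the project's actual code; the function name, history format, and request layout are assumptions for illustration only.

```python
from pathlib import Path

def build_request(user_input: str, history: list[str], initial_name: str = "Default") -> str:
    """Prepend the chosen initial prompt to every request sent to the model.

    `initial_name` maps to a .txt file in the `initial` folder,
    e.g. "Default" -> initial/Default.txt. The exact layout GPT3Bot uses may
    differ; this only shows the idea of a header that is included on every
    turn, regardless of the conversation so far.
    """
    initial_prompt = Path("initial", f"{initial_name}.txt").read_text(encoding="utf-8")
    recent_turns = "\n".join(history)  # earlier interactions kept for context
    return f"{initial_prompt}\n{recent_turns}\nUser: {user_input}\nBot:"

# Example (hypothetical): the header from initial/Default.txt appears in every prompt built.
# prompt = build_request("Hello!", history=[])
```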
While in the chat, you can use the following commands:
- `/stop`: This command will prompt you to save your chat history before exiting the program. To skip saving, press `Enter`. To save, enter the desired filename (e.g. `bot` for `bot.json`) and press `Enter`. Your chat history will be saved in the `saved` folder. Note: Any unsaved changes will be discarded when the program is closed without saving!
- `/undo`: This command will remove your last input.
- `/reset`: This command will reset your entire chat history. Use with caution!
- `/tc`: This command allows you to check the token information for a piece of text. Type or paste the text after executing this command, then press `Enter` to see the token count, structure, and IDs. (A rough sketch of this kind of token inspection appears after this list.)
- `/dump`: This command allows you to dump the current chat history to a .txt file inside the `dump` folder.
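To give a rough idea of the kind of information `/tc` reports, here is a small Python sketch using the `tiktoken` package. Whether the project itself uses `tiktoken` is an assumption; this only illustrates counting tokens and listing their IDs and text pieces.

```python
import tiktoken

def token_info(text: str) -> None:
    # cl100k_base is the encoding used by text-embedding-ada-002 and recent chat models.
    enc = tiktoken.get_encoding("cl100k_base")
    token_ids = enc.encode(text)
    print(f"Token count: {len(token_ids)}")
    for token_id in token_ids:
        # Show each token's ID alongside the text it decodes to.
        piece = enc.decode_single_token_bytes(token_id).decode("utf-8", errors="replace")
        print(f"{token_id:>7}  {piece!r}")

token_info("Hello, long-term memory!")
```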
This program leverages embeddings to achieve long-term memory. When you input a piece of text, the program obtains its embedding from the OpenAI API using the text-embedding-ada-002 model. Embeddings are high-dimensional vector representations of text, and cosine similarity can be used to determine whether two texts are similar even if they have different forms and structures. The program searches the chat history for both input embeddings and response embeddings that are similar to the current input's embedding. If it finds similar ones, there is context worth sharing with the current input, so the program injects those past interactions into the initial prompt for the language model to reference. A minimal sketch of this retrieval step is shown below.
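The following Python sketch illustrates the retrieval step described above: embed the new input, compare it against stored input and response embeddings with cosine similarity, and collect the most similar past interactions so they can be injected into the initial prompt. The similarity threshold, history layout, and use of the official `openai` v1 client are assumptions for illustration; the project's actual implementation may differ.

```python
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> list[float]:
    # text-embedding-ada-002 returns a high-dimensional vector for the input text.
    response = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return response.data[0].embedding

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def recall(history: list[dict], new_input: str, threshold: float = 0.8) -> list[str]:
    """Return past interactions whose input or response embedding is similar to the new input.

    Each history entry is assumed (hypothetically) to look like:
    {"input": str, "response": str, "input_embedding": [...], "response_embedding": [...]}.
    """
    query = embed(new_input)
    recalled = []
    for turn in history:
        score = max(
            cosine_similarity(query, turn["input_embedding"]),
            cosine_similarity(query, turn["response_embedding"]),
        )
        if score >= threshold:
            recalled.append(f"User: {turn['input']}\nBot: {turn['response']}")
    return recalled  # these snippets get injected into the initial prompt
```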