This project uses the OpenAI API to implement chat with an assistant, integrates a custom code interpreter via E2B, and uses SerpAPI for search.
1. Register at https://platform.openai.com/playground/chat?models=gpt-4o to get your `ASSISTANT_ID`.
2. Register at https://e2b.dev/ to get your `E2B_API_KEY`.
3. Register at https://serpapi.com/ to get your `SERP_API_KEY`.
4. Create a `.env` file, copy the contents of `.env.example`, and replace the default values with your keys (a short sketch of how the code reads these keys follows after this list).
5. Install and run Docker (https://docs.docker.com/engine/install/); it is required for working with E2B.
6. Install the E2B CLI to work with the code interpreter; see https://e2b.dev/docs/guide/custom-sandbox for instructions. After installing the E2B CLI, come back here, we have more interesting things to show you.
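For orientation, here is a minimal sketch of how server-side code might read these keys once the `.env` file is filled in (the variable names come from the steps above; the guard is purely illustrative and not taken from the project code):

```ts
// Sketch: reading the keys configured in .env on the server side.
// Next.js loads .env files into process.env automatically.
const assistantId = process.env.ASSISTANT_ID;
const e2bApiKey = process.env.E2B_API_KEY;
const serpApiKey = process.env.SERP_API_KEY;

if (!assistantId || !e2bApiKey || !serpApiKey) {
  throw new Error("Missing ASSISTANT_ID, E2B_API_KEY or SERP_API_KEY in .env");
}
```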
Run `yarn` to install dependencies, then build the sandbox template with `e2b template build`. Docker must be running for this command to work. Be warned that the build may take more than 10 minutes to complete; the time depends on the number of packages in requirements.txt, so if you don't need all of them for the chat, you can remove some. When the build is done, start the app with `yarn start`.
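Once the template is built, code can connect to the sandbox from the API route. Below is a minimal sketch assuming a recent version of the e2b JavaScript SDK (`Sandbox.create`, `files.write`, `commands.run`; method names differ between SDK versions) and a hypothetical template id, not the project's actual implementation:

```ts
import { Sandbox } from "e2b";

// Sketch: start the custom sandbox built with `e2b template build`,
// write a file into it and run a command.
// "your-template-id" is a placeholder; the SDK reads E2B_API_KEY from the environment.
async function runInSandbox(): Promise<string> {
  const sandbox = await Sandbox.create("your-template-id");
  try {
    await sandbox.files.write("/home/user/script.py", "print('hello from e2b')");
    const result = await sandbox.commands.run("python3 /home/user/script.py");
    return result.stdout;
  } finally {
    await sandbox.kill();
  }
}
```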
In `ai-pgahq-com/src/pages/api/openai.ts` you can see the full implementation, built around the assistant tools. We use the tools to work with files and to pull in extra information for GPT answers. Inside the tools we use the E2B environment to run code or to store files in it for later use (or download). We also use `openai.audio.transcriptions` to read and process audio files, for example to reduce loud audio files. To generate images (the `generationImages` function), we use the dall-e-3 model. All of this is stored in IndexedDB. Why IndexedDB? Because it can hold far more data than localStorage or cookies. You can read more about what IndexedDB is and how to use it here.
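To make the OpenAI calls above concrete, here is a hedged sketch using the official `openai` Node SDK: `images.generate` with the dall-e-3 model and `audio.transcriptions.create` for audio files. The whisper-1 model name, the file handling, and the function names are assumptions for illustration, not the project's actual code:

```ts
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Sketch of dall-e-3 image generation, roughly what a generationImages helper could do.
async function generateImage(prompt: string): Promise<string | undefined> {
  const response = await openai.images.generate({
    model: "dall-e-3",
    prompt,
    n: 1,
    size: "1024x1024",
  });
  return response.data?.[0]?.url;
}

// Sketch of audio transcription; the whisper-1 model and local file path are assumptions.
async function transcribeAudio(path: string): Promise<string> {
  const transcription = await openai.audio.transcriptions.create({
    file: fs.createReadStream(path),
    model: "whisper-1",
  });
  return transcription.text;
}
```

On the client side, the results can then be persisted with the browser's standard IndexedDB API, which has much higher storage limits than localStorage or cookies.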