Hi,
I want to run the proposed pipeline on my local server so that it behaves like https://wikichat.genie.stanford.edu/. When I input a query like 'Tell me about Red Velvet', I want results comparable to what that service returns. Currently, when I execute `python command_line_chatbot.py --pipeline retrieve_and_generate --engine gpt-4o` on my local server, it only opens an interactive session rather than processing queries in batch.
In other words, I have N sample queries that I want to process automatically through my Python code rather than entering them one by one in interactive mode. How can I modify the code to handle batch processing of these queries?
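One option that avoids modifying the repository is to drive the interactive script from a small wrapper that feeds queries over stdin. This is only a sketch: it assumes the chatbot reads one query per line from standard input, which may not hold for every version of the script, and the command-line flags are simply copied from the command above.

```python
import subprocess
from typing import Sequence


def batch_chat(command: Sequence[str], queries: Sequence[str],
               timeout: float = 600.0) -> str:
    """Feed one query per line to an interactive chatbot's stdin
    and return its combined stdout.

    `command` is the full command line to launch the chatbot, e.g.
    ["python", "command_line_chatbot.py",
     "--pipeline", "retrieve_and_generate", "--engine", "gpt-4o"]
    (flags taken from this issue; adjust for your setup).
    """
    # One query per line; a trailing newline so the last query is read.
    stdin_payload = "\n".join(queries) + "\n"
    result = subprocess.run(
        command,
        input=stdin_payload,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout


if __name__ == "__main__":
    # Hypothetical invocation; replace with your actual command and queries.
    output = batch_chat(
        ["python", "command_line_chatbot.py",
         "--pipeline", "retrieve_and_generate", "--engine", "gpt-4o"],
        ["Tell me about Red Velvet", "Who founded Stanford?"],
    )
    print(output)
```

Splitting the captured output back into per-query answers depends on the chatbot's prompt format, and some scripts expect an explicit exit command (e.g. a final "quit" line), so you may need to append one to the query list.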
Best,