
Conversation

@adkelley

Fixed and refactored text_to_prompts() to handle multiple instances of the user, system, and assistant roles in a prompt. Before this fix, extract_messages() would extract only the first instance of a role (e.g., user) in the sigil, ignoring any further instances. For example, given the prompt:

~LLM"""
model: gpt-3.5-turbo
system: You are a helpful assistant.
user: Who won the world series in 2020?
assistant: The Los Angeles Dodgers won the World Series in 2020.
user: Where was it played?
"""

'user: Where was it played?' would not be added to the messages list. This fix ensures that all instances of the user, system, and assistant roles in the sigil are added to the messages list. The fix still assumes that model is the first line in the sigil.

Fixed and refactored 'text_to_prompts()' to handle multiple instances of the 'user' prompt
