@ProKil @Jasonqi146 @lwaekfjlk Sorry to tag you all - but if anyone could answer I'd be very grateful, thank you.
Hi all, SOTOPIA n00b here :)
I need to run a simulation to compare a prompting technique against regular prompting of the same model. I only have small local models and no GPT-4 access; I intend to use Llama 3.1 (8B and 70B) as the agents and the judge.
Currently, I have vLLM running locally with the 8B model, and SOTOPIA does connect to it in the minimal example (agent1, agent2, and env all pointing to the same endpoint for now).
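For reference, this is the kind of sanity check I ran against the local vLLM server before wiring it into SOTOPIA. The port, model name, and placeholder API key are just my local setup, not anything SOTOPIA requires:

```python
# Sanity check against a locally running vLLM server, started with e.g.
# `vllm serve meta-llama/Llama-3.1-8B-Instruct`.
# base_url, model name, and the dummy API key are assumptions from my setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's default OpenAI-compatible endpoint
    api_key="EMPTY",                      # vLLM ignores the key by default
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
)
print(resp.choices[0].message.content)
```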
What I want to do from here is change the system prompt of one of the LLMs to implement the custom prompting for agent 1.
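To make the comparison concrete, this is the kind of difference I mean: the same model queried with and without the custom system prompt. This is plain OpenAI-client code against the vLLM endpoint, not SOTOPIA's agent API, and the prompt text is just a placeholder:

```python
# Illustration only: same model, two prompting conditions.
# CUSTOM_SYSTEM_PROMPT is a placeholder for the technique I want to test;
# this bypasses SOTOPIA entirely and talks to vLLM directly.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "meta-llama/Llama-3.1-8B-Instruct"

CUSTOM_SYSTEM_PROMPT = "You are agent 1. <custom prompting technique goes here>"

def act(turn: str, custom: bool) -> str:
    messages = []
    if custom:
        messages.append({"role": "system", "content": CUSTOM_SYSTEM_PROMPT})
    messages.append({"role": "user", "content": turn})
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

print(act("Greet your negotiation partner.", custom=False))  # regular prompting
print(act("Greet your negotiation partner.", custom=True))   # custom prompting
```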
From there, I don't have a clear idea how to "run the actual thing". Do I sample agents and scenarios (from the db dump) and run episodes in a loop (see the sketch below)? Or is one of the example evaluation scripts more suitable?
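For context, the minimal example I have working is essentially the quickstart pattern below. My question is whether I should just wrap something like this in a loop over sampled scenarios, or whether the sampler already takes care of that. The `custom/...@http://...` model strings are my guess at how to point SOTOPIA at the local vLLM server, so please correct me if the syntax is different:

```python
# Roughly the minimal example I have working, adapted from the quickstart.
# The "custom/...@http://localhost:8000/v1" model strings are my guess at how
# to point SOTOPIA at the local vLLM endpoint and may not be the right syntax.
import asyncio

from sotopia.samplers import UniformSampler
from sotopia.server import run_async_server

LOCAL_MODEL = "custom/llama3.1-8b@http://localhost:8000/v1"

asyncio.run(
    run_async_server(
        model_dict={
            "env": LOCAL_MODEL,     # judge / environment model
            "agent1": LOCAL_MODEL,  # would get the custom prompting
            "agent2": LOCAL_MODEL,  # regular prompting baseline
        },
        sampler=UniformSampler(),
    )
)
```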