Hello,
I have noticed that the interface returns the same generations regardless of the number of responses requested (`n > 1`). Easy reproduction:

```python
from easyllm.clients import huggingface

# helper to build llama2 prompt
huggingface.prompt_builder = "llama2"

response = huggingface.ChatCompletion.create(
    model="meta-llama/Llama-2-70b-chat-hf",
    messages=[
        {"role": "system", "content": "\nYou are a helpful assistant speaking like a pirate. argh!"},
        {"role": "user", "content": "What is the sun?"},
    ],
    temperature=0.9,
    top_p=0.6,
    max_tokens=256,
    n=10,
)
print(response)
```

You will notice that the content of every entry in `choices` is exactly the same.
Looking at the codebase, it seems the issue comes from the fact that you are performing `n` independent HTTP requests with the same generation parameters (and a fixed seed):

```python
# Normally this would not have been an issue since most of the time we are
# sampling from the model; however, gen_kwargs has the same seed, so the
# output will be the same for each request.
for _i in range(request.n):
    res = client.text_generation(
        prompt,
        details=True,
        **gen_kwargs,
    )
```

I believe a solution would be either to change `gen_kwargs` so that it returns `n` outputs directly (e.g. by setting `num_return_sequences` to `n`), or to artificially generate a different seed for each request.
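For illustration, here is a minimal sketch of the second option, assuming `gen_kwargs` is a plain dict and that the underlying `text_generation` call accepts a `seed` parameter (the `per_request_kwargs` and `choices` names are just for illustration):

```python
import random

choices = []
for _i in range(request.n):
    # Draw a fresh seed for each request so sampling actually differs across
    # the n calls; all other generation parameters stay the same.
    per_request_kwargs = {**gen_kwargs, "seed": random.randint(0, 2**32 - 1)}
    res = client.text_generation(
        prompt,
        details=True,
        **per_request_kwargs,
    )
    choices.append(res)
```

One refinement could be to only override the seed when the caller has not explicitly pinned one, so reproducibility is preserved for users who pass their own seed.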