Eurus-RM-7b does not predict the score correctly.
I run the following script:
```python
from transformers import AutoTokenizer, AutoModel
import torch


def test(model_path):
    dataset = [  # cases in webgpt; we use the same template as Mistral-Instruct-v0.2
        {
            "chosen": "[INST] Sural relates to which part of the body? [/INST] The sural region is the muscular swelling of the back of the leg below the knee, formed chiefly by the bellies of the gastrocnemius and soleus muscles [1,2].",
            "rejected": "[INST] Sural relates to which part of the body? [/INST] The Sural nerve runs down the side of the leg near the small saphenous vein, then passes forward below the lateral malleolus and continues on the outside of the foot as the lateral dorsal cutaneous nerve, which then communicates with the intermediate dorsal cutaneous nerve, which branches off to the side of the foot. [1]",
        }
    ]
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
    with torch.no_grad():
        for example in dataset:
            inputs = tokenizer(example["chosen"], return_tensors="pt")
            chosen_reward = model(**inputs).item()
            inputs = tokenizer(example["rejected"], return_tensors="pt")
            rejected_reward = model(**inputs).item()
            print(f"chosen_reward: {chosen_reward} | rejected_reward: {rejected_reward} | diff: {chosen_reward - rejected_reward}")


test("/workspace/xxx/models/Eurus-RM-7b")
```
Its output is:

```
chosen_reward: -626.8788452148438 | rejected_reward: -405.09423828125 | diff: -221.78460693359375
```
The chosen_reward is smaller than the rejected_reward. However, the model card (https://huggingface.co/openbmb/Eurus-RM-7b) shows Output: 47.4404296875 for this case.
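For reference, here is a quick sanity check I could run (same local path as above; I have not confirmed what the custom class is called). It only verifies that `trust_remote_code` actually loads the repo's reward-model head and shows which dtype the weights end up in, since the reference output may have been produced under different settings:

```python
# Sanity check (my own guess at where to look, not from the model card): confirm that
# trust_remote_code loaded the repo's custom reward-model class rather than a plain
# backbone, and print the dtype the weights were loaded in.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "/workspace/xxx/models/Eurus-RM-7b", trust_remote_code=True
)
print(type(model).__name__)            # should be the repo's custom reward-model class
print(next(model.parameters()).dtype)  # float32 by default unless torch_dtype is passed
```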
Can you give me some suggestions?
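In case it helps: the `[INST] ... [/INST]` string above is hand-written. One way I could rule out a template mismatch, assuming the tokenizer ships a `chat_template` (which I have not verified), would be:

```python
# Render the prompt from the tokenizer's own chat template (if one is defined)
# instead of hand-writing "[INST] ... [/INST]", then compare the two strings.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "/workspace/xxx/models/Eurus-RM-7b", trust_remote_code=True
)

messages = [
    {"role": "user", "content": "Sural relates to which part of the body?"},
    {"role": "assistant", "content": "The sural region is the muscular swelling of the back of the leg below the knee, formed chiefly by the bellies of the gastrocnemius and soleus muscles [1,2]."},
]

# Raises if the tokenizer has no chat_template defined.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)  # compare against the hand-written "[INST] ... [/INST] ..." string above
```

If the rendered string differs from the hand-written prompt, that might explain part of the gap.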