Description
Hi, I've been trying to use the BERTScore found here. I tried both the PyTorch and TF models, and I even tried tokenizing the reference and predicted texts with AutoTokenizer, but it keeps raising an error. Could you please help me solve this issue? I've pasted my code below.
from transformers import AutoTokenizer, TFAutoModel
from bert_score import score
modeltf = TFAutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")
custom_tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased", revision="v1")
tokenized_reference = custom_tokenizer(truth, return_tensors='pt', padding=True, truncation=True)
tokenized_generated = custom_tokenizer(pred, return_tensors='pt', padding=True, truncation=True)
P, R, F1 = score(tokenized_generated,tokenized_reference,model_type=modeltf)
And the error that gets generated is:
KeyError Traceback (most recent call last)
<ipython-input-69-5d5bcd498207> in <cell line: 1>()
----> 1 P, R, F2 = score(tokenized_generated,tokenized_reference,model_type=modeltf)
/usr/local/lib/python3.10/dist-packages/bert_score/score.py in score(cands, refs, model_type, num_layers, verbose, idf, device, batch_size, nthreads, all_layers, lang, return_hash, rescale_with_baseline, baseline_path, use_fast_tokenizer)
93 model_type = lang2model[lang]
94 if num_layers is None:
---> 95 num_layers = model2layers[model_type]
96
97 tokenizer = get_tokenizer(model_type, use_fast_tokenizer)
KeyError: <transformers.models.bert.modeling_tf_bert.TFBertModel object at 0x7c76c0efddb0>
The versions I'm using for this are:
bert-score==0.3.13
tensorflow==2.13.0
transformers==4.34.0