Description
Version
PyABSA version: 2.4.3
PyTorch version: 2.8.0+cpu
Transformers version: 4.29.2
CUDA: False
OS: Windows 64-bit
Describe the bug
My task is to use the ATEPC tool to analyze English comments from YouTube. I am new to Hugging Face and Colab, so I want to try it in PyCharm, using only the pre-trained model.
I installed PyABSA with pip, created a PyCharm project, and tested this code:

```python
from pyabsa import available_checkpoints

checkpoint_map = available_checkpoints('atepc', show_ckpts=True)
```
This raises the following error:

```
FileNotFoundError: [Errno 2] No such file or directory: './checkpoints.json'
```
Then I created a checkpoints directory and downloaded several files, resulting in this layout:

```
absa/
├── checkpoints/
│   ├── english/     # files unzipped from fast_lcf_atepc_English_cdw_apcacc_82.36_apcf1_81.89_atef1_75.43
│   └── englishv2/   # files cloned from gitee
└── test.py
```
The englishv2 directory contains the files I downloaded with this command:

```shell
git clone https://gitee.com/hf-models/deberta-v3-base-absa-v1.1.git englishv2
```
Then I run this code:

```python
from pyabsa import ABSADatasetList, available_checkpoints
from pyabsa import ATEPCCheckpointManager

aspect_extractor = ATEPCCheckpointManager.get_aspect_extractor(checkpoint='english')
```
The console outputs:

```
[2025-10-10 17:08:55] (2.4.3) Fail to download checkpoints info from huggingface space, try to download from local
[2025-10-10 17:08:55] (2.4.3) No checkpoint found in Model Hub for task: english
[2025-10-10 17:08:55] (2.4.3) Load aspect extractor from checkpoints\englishv2\.git
[2025-10-10 17:08:55] (2.4.3) config: None
[2025-10-10 17:08:55] (2.4.3) state_dict: None
[2025-10-10 17:08:55] (2.4.3) model: None
[2025-10-10 17:08:55] (2.4.3) tokenizer: None
```
To check whether I had created the checkpoints directory correctly, I ran a small test script (I no longer have the exact code), and the console output looked like this:
```
Testing english: ./checkpoints/english
[2025-10-10 15:15:30] (2.4.3) Load aspect extractor from ./checkpoints/english
[2025-10-10 15:15:30] (2.4.3) config: checkpoints\english\fast_lcf_atepc.config
[2025-10-10 15:15:30] (2.4.3) state_dict: checkpoints\english\fast_lcf_atepc.state_dict
[2025-10-10 15:15:30] (2.4.3) model: None
[2025-10-10 15:15:30] (2.4.3) tokenizer: checkpoints\english\fast_lcf_atepc.tokenizer
english test failed: Exception: No module named 'pyabsa.functional' Fail to load the model from ./checkpoints/english!
Testing englishv2: ./checkpoints/englishv2
[2025-10-10 15:15:30] (2.4.3) Load aspect extractor from ./checkpoints/englishv2
[2025-10-10 15:15:31] (2.4.3) config: None
[2025-10-10 15:15:31] (2.4.3) state_dict: None
[2025-10-10 15:15:31] (2.4.3) model: checkpoints\englishv2\spm.model
[2025-10-10 15:15:31] (2.4.3) tokenizer: None
```
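Judging from these logs, the english folder contains the .config, .state_dict, and .tokenizer files but no .model file, while englishv2 contains only spm.model. A minimal stdlib sketch of this kind of check (the `probe_checkpoint` helper is my own, not part of the PyABSA API) that reports which checkpoint artifacts each local directory actually contains:

```python
import os

# Expected PyABSA checkpoint artifact suffixes (based on the file names
# visible in the logs above; this is an assumption, not the official list).
EXPECTED_SUFFIXES = (".config", ".state_dict", ".model", ".tokenizer")

def probe_checkpoint(ckpt_dir):
    """Map each expected suffix to the first matching file name, or None."""
    names = os.listdir(ckpt_dir) if os.path.isdir(ckpt_dir) else []
    return {suffix: next((n for n in names if n.endswith(suffix)), None)
            for suffix in EXPECTED_SUFFIXES}

if __name__ == "__main__":
    for ckpt in ("./checkpoints/english", "./checkpoints/englishv2"):
        print(ckpt, "->", probe_checkpoint(ckpt))
```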
Expected behavior
I would like to know how to obtain the checkpoints.json file mentioned in the `No such file or directory: './checkpoints.json'` error, and how to use the local checkpoint files that I downloaded from Google Drive and BaiduNetDisk, since they do not seem to contain the .model file:
fast_lcf_atepc_English_cdw_apcacc_82.36_apcf1_81.89_atef1_75.43
fast_lcf_atepc_English_cdw_apcacc_85.03_apcf1_82.76_atef1_84.8
fast_lsa_t_English_acc_84.23_f1_83.65
Do you have any other suggestions for my task? Or do I need to use a Hugging Face pipeline to accomplish it? I am looking forward to your reply!