MentraSuite

Mengxi Xiao¹  Kailai Yang²  Pengde Zhao³  Enze Zhang¹  Ziyan Kuang⁴  Zhiwei Liu²  Weiguang Han¹  Min Peng¹  Qianqian Xie¹*  Sophia Ananiadou²

¹School of Artificial Intelligence, Wuhan University  ²The University of Manchester  ³School of Computer Science, Wuhan University  ⁴Jiangxi Normal University

Psychological Reasoning LLMs.

[ English | 简体中文 ]

Latest News

✨[2025.12.6] We released Mindora-chord, the hybrid model in our family. To download the model checkpoints, click here: elsashaw/mindora-chord

✨[2025.9.4] We released Mindora-r2, the first model in our family. To download the model checkpoints, click here: elsashaw/mindora-rl

Introduction

Mindora is a family of psychological reasoning LLMs designed for psychology-related tasks that demand strong reasoning abilities, including question answering, therapy plan generation, cognitive error analysis, and misinformation detection. We further evaluated the generalization ability of Mindora on unseen tasks such as psychiatric diagnosis and observed remarkable results.

Mindora is built on the Qwen3-8B base model and trained with supervised fine-tuning (SFT) followed by GRPO (Group Relative Policy Optimization).
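The actual training used the LLaMA-Factory and VeRL frameworks (see Acknowledgments), and those configs are not reproduced in this README. As a rough, framework-swapped illustration of what a GRPO stage involves, here is a minimal sketch using TRL's GRPOTrainer with a toy reward that checks the <think>/<answer> output format described in Quick Start; every dataset, prompt, and hyperparameter below is illustrative, not the authors' setup.

import re

from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: 1.0 if the completion follows the <think>...</think><answer>...</answer>
# format that Mindora's system prompt requests (see Quick Start), else 0.0.
# A real setup would also score answer correctness against gold labels.
def format_reward(completions, **kwargs):
    pattern = re.compile(r"<think>.*?</think>\s*<answer>.*?</answer>", re.DOTALL)
    return [1.0 if pattern.search(c) else 0.0 for c in completions]

# Tiny placeholder dataset; GRPOTrainer expects a "prompt" column.
train_dataset = Dataset.from_dict({"prompt": [
    "Name one common cognitive distortion.",
    "What is cognitive reframing?",
]})

trainer = GRPOTrainer(
    model="Qwen/Qwen3-8B",  # the base model named in this README
    reward_funcs=format_reward,
    args=GRPOConfig(
        output_dir="mindora-grpo-sketch",
        per_device_train_batch_size=2,
        num_generations=2,  # completions sampled per prompt for the group baseline
    ),
    train_dataset=train_dataset,
)
trainer.train()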

Benchmark

The MentraBench script lives in src/MentraBench.py. To benchmark your own model, add it to the choices of the args.llm argument, implement it in call_llm.py, and dispatch to it in the get_llm() function of MentraBench.py, as in the sketch below.
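The internals of call_llm.py are not shown in this README, so the following is only a guess at the shape of that registration; call_my_model and the get_llm() body below are hypothetical and may not match the repository's actual code.

# Hypothetical sketch: the real call_llm.py and get_llm() may differ.

# --- in call_llm.py ---
def call_my_model(prompt: str) -> str:
    """Send the prompt to your model and return its raw text response."""
    raise NotImplementedError("wire this up to your model's API or local weights")

# --- in MentraBench.py ---
from call_llm import call_my_model

def get_llm(llm_name):
    if llm_name == "my_model":  # the value you pass via --llm
        return call_my_model
    raise ValueError(f"Unknown model: {llm_name}")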

You can then evaluate your model by running the following command:

python MentraBench.py --llm [your_llm] --dataset_name [the_dataset_to_test]

Quick Start

You can use the model in the same way as Qwen3-8B.

  • Initialization
from modelscope import AutoModelForCausalLM, AutoTokenizer


class Mindora:
    def __init__(self, model_name="elsashaw/mindora-rl2"):
        # Load the model and tokenizer from the same checkpoint.
        self.model = AutoModelForCausalLM.from_pretrained(
            model_name,
            torch_dtype="auto",
            device_map="cuda"
        )
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)

    def generate(self, prompt):
        # The system prompt enforces Mindora's <think>/<answer> output format.
        messages = [
            {"role": "system", "content": "Based on the following information of a case to make judgements. When answering, follow these steps concisely:\n\n 1. Reasoning Phase:\n   - Enclose all analysis within <think> tags\n   - Use structured subtitles (e.g., '###Comparing with Given Choices:') on separate lines\n   - Final section must be '###Final Conclusion:'\n\n2. Answer Phase:\n - Enclose your answer within <answer> tags\n - The answer phase should end with 'Answer: [option]'.\n - The answer should be aligned with reasoning phase. \nDeviation from this format is prohibited."},
            {"role": "user", "content": prompt}
        ]
        text = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )
        model_inputs = self.tokenizer([text], return_tensors="pt").to(self.model.device)

        generated_ids = self.model.generate(
            **model_inputs,
            max_new_tokens=2048
        )
        # Strip the prompt tokens so only the newly generated text is decoded.
        generated_ids = [
            output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
        ]
        response = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
        return response
  • Usage example
mindora = Mindora()
response = mindora.generate(prompt="your prompt")
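Because the system prompt requires all reasoning inside <think> tags and the final answer inside <answer> tags ending with 'Answer: [option]', it is often useful to split a response into those parts. The helper below is not part of the repository; it is a small sketch based only on the format described above.

import re

def parse_response(response: str) -> dict:
    """Split a Mindora response into reasoning, answer, and the final option.

    A sketch based on the output format the system prompt requests;
    any part the model failed to produce comes back as None.
    """
    think = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    # The answer phase is required to end with 'Answer: [option]'.
    option = re.search(r"Answer:\s*(.+?)\s*$", answer.group(1).strip()) if answer else None
    return {
        "reasoning": think.group(1).strip() if think else None,
        "answer": answer.group(1).strip() if answer else None,
        "option": option.group(1) if option else None,
    }

parsed = parse_response(response)
print(parsed["option"])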

Acknowledgments

Model training is based on the LLaMA-Factory and VeRL frameworks.
