MiniHuman: Simulating, Augmenting, and Regulating Virtual Human Behaviors and Responses to Environmental Stimuli
MiniHuman aims to leverage state-of-the-art AI models (e.g., LLM agents and RL agents), sensing, and intervention techniques to simulate both human mental behaviors (the mind) and physical behaviors, supporting AI and HCI applications in education, health, recommendation, user experience, and human augmentation.
More models, behaviors, environments, and applications will be released soon!
- LLM-based Student Simulator (e.g., Classroom Simulacra [CHI 2025], EduAgent, EduTwin)
- Student Behavior Regulation (e.g., PeerEdu [CHI 2025])
- Attention Regulation (e.g., Eyerofeedback)
- Deep Reinforcement Learning-Based Cognition Model (e.g., ReactiveAgent)
- Cognitive Training (e.g., TimeCare [CHI 2023])
- Human Action in Response to Climate Change
- Social Interaction
- Social Network
We support a wide range of human behaviors to empower diverse application scenarios; a minimal representation sketch follows the list.
- Physiological behaviors
- Gaze
- Motor behaviors
- Cognitive states
- Attention
- Workload
- ...
- Knowledge states
- Emotion
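As a concrete illustration, below is a minimal Python sketch of how such behavior streams could be represented. The class and field names (`GazeSample`, `CognitiveState`, `BehaviorTrace`) are illustrative assumptions, not the toolkit's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical record types for the behavior streams listed above.
# These names are illustrative; consult the toolkit source for the real API.

@dataclass
class GazeSample:
    timestamp: float      # seconds since session start
    x: float              # normalized screen coordinate in [0, 1]
    y: float              # normalized screen coordinate in [0, 1]

@dataclass
class CognitiveState:
    timestamp: float
    attention: float      # 0 (distracted) .. 1 (fully attentive)
    workload: float       # 0 (idle) .. 1 (overloaded)

@dataclass
class BehaviorTrace:
    """Container bundling per-modality streams for one virtual human."""
    gaze: list[GazeSample] = field(default_factory=list)
    cognition: list[CognitiveState] = field(default_factory=list)

trace = BehaviorTrace()
trace.gaze.append(GazeSample(timestamp=0.5, x=0.42, y=0.61))
trace.cognition.append(CognitiveState(timestamp=0.5, attention=0.8, workload=0.3))
```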
We support a wide range of environmental stimuli to empower diverse application scenarios; a minimal representation sketch follows the list.
- Visual stimuli
- Time Pressure
- ...
- Auditory stimuli
- Instructor Voice
- ...
- Textual stimuli
- Course Materials
- ...
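Likewise, a stimulus schedule could be modeled as timestamped events. The `Stimulus` class and `deliver` helper below are a hypothetical sketch, not part of the released API.

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical stimulus event; names and fields are illustrative assumptions.

@dataclass
class Stimulus:
    modality: Literal["visual", "auditory", "textual"]
    content: str           # e.g., slide text, audio transcript, course material
    onset: float           # seconds since session start
    duration: float        # seconds

def deliver(stimuli: list[Stimulus]) -> None:
    """Replay a stimulus schedule in onset order (stub for a real environment)."""
    for s in sorted(stimuli, key=lambda s: s.onset):
        print(f"[{s.onset:6.1f}s] {s.modality}: {s.content[:40]}")

deliver([
    Stimulus("textual", "Lecture slide: gradient descent", onset=0.0, duration=30.0),
    Stimulus("auditory", "Instructor voice: 'Note the learning rate.'", onset=5.0, duration=4.0),
])
```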
We support two families of agent models to drive virtual humans; hedged Python sketches for both follow the list.
- LLM-based Agents
- Persona Initialization
- Task Definition
- Behavior Definition
- Environment Description
- Foundation Model
- RL-based Agents
- Action Space
- Observation Space
- Optimization Policy
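For the LLM-based agent, here is a minimal sketch of how the five components above could be assembled into a single prompt. The `LLMAgentConfig` fields, the prompt template, and the model identifier are illustrative assumptions; the actual foundation-model call is left to the user.

```python
from dataclasses import dataclass

# Hypothetical assembly of the LLM-agent components into one prompt.
# Field names and the prompt template are illustrative assumptions.

@dataclass
class LLMAgentConfig:
    persona: str        # Persona Initialization
    task: str           # Task Definition
    behaviors: str      # Behavior Definition (what the agent may do)
    environment: str    # Environment Description
    model: str          # Foundation Model identifier

def build_prompt(cfg: LLMAgentConfig, observation: str) -> str:
    return (
        f"You are {cfg.persona}.\n"
        f"Environment: {cfg.environment}\n"
        f"Task: {cfg.task}\n"
        f"Allowed behaviors: {cfg.behaviors}\n"
        f"Current observation: {observation}\n"
        f"Respond with your next behavior."
    )

cfg = LLMAgentConfig(
    persona="a curious undergraduate with low prior knowledge",
    task="answer quiz questions after each lecture segment",
    behaviors="read, ask a question, answer, look away",
    environment="an online classroom streaming lecture slides",
    model="gpt-4o",  # placeholder; pass the prompt to any chat model
)
print(build_prompt(cfg, "The instructor just introduced backpropagation."))
```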
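For the RL-based agent, the sketch below wires the three components (action space, observation space, optimization policy) into a tiny tabular Q-learning loop. The "focus/break" environment and all constants are invented purely for illustration and are not the toolkit's models.

```python
import numpy as np

# Minimal tabular Q-learning sketch mirroring the three RL components above.
# The toy attention environment is a made-up example, not a MiniHuman model.

N_STATES = 4      # Observation Space: discretized attention level (0..3)
N_ACTIONS = 2     # Action Space: 0 = keep focusing, 1 = take a break
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))   # Optimization Policy: epsilon-greedy over Q

def step(state: int, action: int) -> tuple[int, float]:
    """Toy dynamics: focusing raises attention but risks overload; breaks reset it."""
    if action == 0:
        next_state = min(state + 1, N_STATES - 1)
        reward = 1.0 if next_state < N_STATES - 1 else -1.0  # overload penalty
    else:
        next_state, reward = 0, 0.0
    return next_state, reward

state = 0
for _ in range(5000):
    action = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(Q[state].argmax())
    next_state, reward = step(state, action)
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))  # learned state-action values
```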
If you use this repository, please cite it as follows:
Songlin Xu, Xinyu Zhang. (2024). MiniHuman-Toolkit (Version 0.1.2). GitHub. https://github.com/songlinxu/MiniHuman-Toolkit
Alternatively, use the following BibTeX entry:
@misc{songlin2024minihuman,
  author = {Xu, Songlin and Zhang, Xinyu},
  title = {MiniHuman-Toolkit},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/songlinxu/MiniHuman-Toolkit}},
  version = {0.1.2}
}