Ph.D. Student in Artificial Intelligence at Korea University
Researcher in Vision-Language Models, Explainable AI, and Human Attention Alignment
I work on interpretable multimodal reasoning, designing methods to visualize, evaluate, and align the internal attention of large vision-language models with human visual attention patterns.
Currently, I am developing evaluation protocols that measure how well the visual attention maps of multimodal models align with human gaze behavior.
Beyond research, I am also passionate about building algorithmic trading systems for both stock and crypto markets.
- Vision-Language Models & Multimodal Reasoning
- Explainable AI (Perturbation- and Attention-based Saliency)
- Human Attention Alignment & Evaluation Metrics
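As a minimal illustration of the perturbation-based saliency idea above: slide an occluding patch over the input and measure how much the model's score drops at each location. This sketch uses a toy scoring function in place of a real vision-language model; `occlusion_saliency`, `toy_score`, and all parameters are illustrative assumptions, not code from any of my projects.

```python
import numpy as np

def occlusion_saliency(score_fn, image, patch=4, baseline=0.0):
    """Perturbation-based saliency: occlude each patch of the image
    and record how much the model's score drops at that location."""
    H, W = image.shape
    base_score = score_fn(image)
    saliency = np.zeros_like(image, dtype=float)
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            # A large score drop means the region mattered for the prediction.
            saliency[y:y + patch, x:x + patch] = base_score - score_fn(occluded)
    return saliency

# Toy "model": scores how much mass sits in the top-left quadrant.
def toy_score(img):
    return float(img[:8, :8].sum())

img = np.ones((16, 16))
sal = occlusion_saliency(toy_score, img)
```

With this toy score, only occlusions inside the top-left quadrant reduce the score, so the saliency map is positive there and zero elsewhere; the same loop applied to a real model's logit gives an attention-free importance map to compare against human gaze data.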
- Email: minsuksung@korea.ac.kr / mssung94@gmail.com
- Blog: https://minsuk-sung.github.io/
- GitHub: https://github.com/minsuk-sung
- Google Scholar: https://scholar.google.com/citations?user=_tRU3aIAAAAJ
- ResearchGate: https://www.researchgate.net/profile/Minsuk-Sung
- ORCID: https://orcid.org/0009-0007-2282-6873


