Description
Pose your questions as Issue Comments (below) for James Evans regarding his 10/3 talk on Simulating Subjects: The Promise and Peril of AI Stand-ins for Social Agents and Interactions. Large Language Models (LLMs), through their exposure to massive collections of online text, learn to reproduce the perspectives and linguistic styles of diverse social and cultural groups. This capability suggests a powerful social scientific application: the simulation of empirically realistic, culturally situated human subjects. Synthesizing recent research in artificial intelligence and computational social science, we outline a methodological foundation for simulating human subjects and their social interactions. We then identify nine characteristics of current models that are likely to impair the realistic simulation of human subjects, including atemporality, social acceptability bias, uniformity, and poverty of sensory experience. For each of these areas, we discuss promising approaches for overcoming the associated shortcomings. Given the rate of change of these models, we advocate for an ongoing methodological program on the simulation of human subjects that keeps pace with rapid technical progress.
Contributing papers
Required: Kozlowski, Austin, James Evans. 2024. "Simulating Subjects: The Promise and Peril of AI Stand-ins for Social Agents and Interactions."
Plus ONE of the following:
- Potter, Yujin, Shiyang Lai, Junsol Kim, James Evans, Dawn Song. 2024. "Hidden Persuaders: How LLM Political Bias Could Sway Our Elections." Empirical Methods in Natural Language Processing (EMNLP).
- Lai, Shiyang, Yujin Potter, Junsol Kim, Richard Zhuang, Dawn Song, James Evans. 2024. "Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation." International Conference on Machine Learning (ICML).
- Kozlowski, Austin, Hyunku Kwon, James Evans. 2024. "In Silico Sociology: Forecasting COVID-19 Polarization with Large Language Models."