uchicago-computation-workshop/Fall2025
Fall2025

Repository for the Fall 2025 Computational Social Science Workshop

Time: 11:00 AM to 12:20 PM, Thursdays
Location: Room 107, Kent Chemical Laboratory

11/6

Jay McClelland received his Ph.D. in Cognitive Psychology from the University of Pennsylvania in 1975. He served on the faculty of the University of California, San Diego, before moving to Carnegie Mellon in 1984, where he became a University Professor and held the Walter Van Dyke Bingham Chair in Psychology and Cognitive Neuroscience. He was a founding Co-Director of the Center for the Neural Basis of Cognition, a joint project of Carnegie Mellon and the University of Pittsburgh. In 2006 McClelland moved to the Department of Psychology at Stanford University, where he founded the Center for Mind, Brain, and Computation in 2007 and served as department chair from fall 2009 through summer 2012. He is currently the Lucie Stern Professor in the Social Sciences and Co-Director of the Center for Mind, Brain, Computation and Technology. Over his career, McClelland has contributed to both the experimental and theoretical literatures in a number of areas, most notably in the application of connectionist/parallel distributed processing models to problems in perception, cognitive development, language learning, and the neurobiology of memory. He was a co-founder with David E. Rumelhart of the Parallel Distributed Processing (PDP) research group, and together with Rumelhart he led the effort leading to the publication in 1986 of the two-volume book, Parallel Distributed Processing, in which the parallel distributed processing framework was laid out and applied to a wide range of topics in cognitive psychology and cognitive neuroscience. McClelland and Rumelhart jointly received the 1993 Howard Crosby Warren Medal from the Society of Experimental Psychologists, the 1996 Distinguished Scientific Contribution Award (see citation) from the American Psychological Association, the 2001 Grawemeyer Prize in Psychology, and the 2002 IEEE Neural Networks Pioneer Award for this work.

McClelland has served as Senior Editor of Cognitive Science, as President of the Cognitive Science Society, as a member of the National Advisory Mental Health Council, and as President of the Federation of Associations in the Behavioral and Brain Sciences (FABBS). He is a member of the National Academy of Sciences and a corresponding Fellow of the British Academy. He has received the APS William James Fellow Award for lifetime contributions to the basic science of psychology, the David E. Rumelhart prize for contributions to the theoretical foundations of Cognitive Science, the NAS Atkinson Prize in Psychological and Cognitive Sciences, and the Heineken Prize in Cognitive Science.

McClelland currently teaches on the PDP approach to cognition and its neural basis in the Psychology Department and in the Symbolic Systems Program at Stanford and conducts research on learning, memory, conceptual development, language processing, and mathematical cognition at Stanford and as a consulting research scientist at DeepMind.

Are people still smarter than machines? Today, AI systems are being put to use for many purposes, and some systems have exceeded human capabilities. But are these systems really intelligent? There are good reasons to think that humans still have many advantages. In this talk I will discuss what I see as the advantages humans still have and suggest ways in which the AI systems of the future might capture them.

Reading List

10/30

James Evans is the Max Palevsky Professor of Sociology & Data Science at the University of Chicago, External Faculty at the Santa Fe Institute, and Visiting Faculty at Google. Evans’ research uses large-scale data, machine learning and generative AI to understand how collectives of humans and machines think and what they know. This involves inquiry into the emergence of ideas, shared patterns of reasoning, and processes of attention, communication, agreement, and certainty. Thinking and knowing collectives like science, modern large language models, the Web, or modern commercial enterprises involve complex networks of diverse human and machine intelligences, collaborating and competing to achieve overlapping aims. Evans’ work connects the interaction of these agents with the knowledge they produce and its value for themselves and the system. His work is supported by numerous federal agencies (NSF, NIH, DOD), foundations and philanthropies, has been published in Nature, Science, PNAS, and top social and computer science outlets, and has been covered by global news outlets from the New Yorker, Wall Street Journal, and New York Times to the Economist, Le Monde, and Die Zeit.

AI Diversity NOT Alignment for Sustained Innovation in Human-AI Evolution: In this talk, I explore success and failure modes in the co-evolution of Human-AI collaboration. First, I consider unintended consequences of the alignment framing. Large Models (LMs) represent powerful social and cultural technologies that encapsulate vast human information. Value alignment was imagined to produce AIs aligned with long-term human interests. Nevertheless, rewarding helpfulness has steered LM values to reflect specific political positions, with unintended consequences for civil society. At other times, AI alignment has been pursued to create trustworthy AI companions, but we show these are paradoxically least helpful for improving human discernment. Moreover, while LMs can be tuned to reduce false beliefs and mitigate polarization, they often exhibit human-like biases in trust and human-like tendencies to polarize, which may deepen and accelerate existing human divides. I consider methods to cultivate diversity through AI cross-training and co-evolution. Second, I consider the lack of an independent, evolved, and expanding sensorium in modern LMs relative to their human partners. LMs are trained on vast, fixed tranches of text, images, and other data, but compared to the distribution of human experiences, their perspectives are far more limited. This lack of evolving senses, dynamic data, and associated curiosity limits Human-AI capacity. For example, I show how scientists have succeeded in their use of AI, but their pathways skew toward the most available data, and so AI-infused science has focused attention on a narrow swath of known problems rather than opening new concerns. I consider methods to grow and evolve AI sensation relative to its action space in order to avoid cultural crystallization and collapse. I conclude with a discussion of the importance of AI diversity for existential concerns and human safety, achieved not through alignment but by evolving a diverse ecology of AIs that check and balance each other.

Reading List

10/23

Isaac Mehlhaff is a Neubauer Family Assistant Professor in the Department of Political Science and is also affiliated with the Data Science Institute, Committee on Data Science, and Program in Political Economy at the University of Chicago.

His research is driven by substantive questions in public opinion and political psychology: How and why do citizens change their attitudes on political issues? Under what conditions can political discussion exacerbate or ameliorate mass polarization? How is polarization causally related to other features of government and society? He approaches his work primarily as a computational social scientist, using and developing methods in natural language processing, machine learning, and Bayesian modeling.

He received his PhD in 2023 from The University of North Carolina at Chapel Hill. He also holds an MA from UNC-Chapel Hill and a BA from the University of Wisconsin-Madison.

Political Argumentation and Attitude Change in Online Interactions: Prevailing theories of public opinion and political psychology hold that human reasoning is biased and lazy, which suggests it is ill-suited to help ordinary citizens engage meaningfully with politics. In contrast, I contend that the biased and lazy nature of reasoning is precisely what gives citizens the tools to think through political issues and update their attitudes in response to argumentative exchanges. To test these hypotheses, I train a series of deep neural networks to classify textual inputs on several characteristics of discussion and argumentation. I use these classifiers to annotate over one million comments from the Reddit social media platform and show that attitude change is substantially more likely to result from argumentative exchanges than from more contemplative ones. Results suggest that under the right conditions, humans can be quite skilled political reasoners.
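The annotation pipeline described above — train a classifier on labeled comments, then apply it to a large corpus — can be sketched with a toy naive Bayes text classifier. This is a hypothetical, deliberately minimal stand-in for the deep neural networks the talk describes; the labels and training examples are invented for illustration.

```python
import math
from collections import Counter, defaultdict

# Hypothetical hand-labeled examples, standing in for the real training data.
TRAIN = [
    ("you are wrong and here is the evidence", "argumentative"),
    ("that claim is false, the data show otherwise", "argumentative"),
    ("i wonder how this policy would affect rural areas", "contemplative"),
    ("it is interesting to consider both perspectives here", "contemplative"),
]

def train_nb(examples):
    """Fit per-label word counts for a multinomial naive Bayes model."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log-posterior score."""
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            # Laplace smoothing so unseen words do not zero out the score.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train_nb(TRAIN)
label = classify("your evidence is wrong", word_counts, label_counts)
```

At scale, the same loop — fit once, then annotate every comment — is what turns a million raw Reddit comments into labeled data for downstream analysis.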

Reading List

10/16

Marc Berman is a Professor in the Department of Psychology, Co-Director for the Masters in Computational Social Science Program, and Director of the Environmental Neuroscience Lab. He is involved in the Cognition, Social and Integrative Neuroscience programs. Understanding the relationship between individual psychological and neural processing and environmental factors lies at the heart of his research. His lab uses brain imaging, behavioral experimentation, computational neuroscience, and statistical models to quantify the person, the environment, and their interactions.

Natural scenes are more compressible and less memorable than human-made scenes: Humans often cannot process all the information available within an environment, but instead filter out much of it. This study examines whether the extent of information filtering may differ between environments, specifically natural and human-made environments. Across three behavioral experiments and computational analysis of 108,754 scene images, we analyzed the spectral and edge content of scenes to quantify the proportion of noticeable information. Our findings reveal that natural scenes have a lower proportion of noticeable information compared to human-made scenes, resulting in higher compressibility. Furthermore, natural scenes were consistently less memorable than human-made scenes, suggesting that greater information filtering occurs during encoding into memory. The lower memorability of natural scenes was partially explained by their higher compressibility. Our results indicate that compressibility, or the density of noticeable information, could be a key feature distinguishing natural environments from human-made environments, potentially explaining the benefits of interacting with natural environments.
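The idea of compressibility as the density of noticeable information can be illustrated with a crude proxy: the lossless compression ratio of an image's raw pixel data. This is an illustrative simplification under assumed synthetic "scenes," not the spectral and edge analysis the study actually performs.

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Fraction of size removed by lossless compression (higher = more compressible)."""
    return 1 - len(zlib.compress(data, 9)) / len(data)

# Hypothetical stand-ins for scene images: a smooth gradient mimics the
# redundant structure of a natural scene, while a pseudo-random byte
# pattern mimics the denser detail of a human-made scene.
random.seed(0)
smooth_scene = bytes(i % 256 for i in range(4096))               # highly redundant
busy_scene = bytes(random.randrange(256) for _ in range(4096))   # little redundancy
```

Under this proxy, the redundant "natural" scene compresses far more than the detail-dense "human-made" one, mirroring the paper's finding that natural scenes carry a lower proportion of noticeable information.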

Reading List

10/09

Leonardo Bursztyn is the Saieh Family Professor of Economics at the University of Chicago. He is also an Editor of the Journal of Political Economy, the co-director of the Becker Friedman Institute Political Economics Initiative and of the Program in Behavioral Economics Research, and the founder and director of the Normal Lab.

His research seeks to better understand how individuals' main economic decisions are shaped by their social environments. His work has examined educational, labor market, financial, consumption, and political decisions, both in developing and developed countries.

Leonardo is a Research Associate at the National Bureau of Economic Research (NBER), a fellow at the Bureau for Research and Economic Analysis of Development (BREAD), and an affiliate at the Abdul Latif Jameel Poverty Action Lab (J-PAL) and at the Pearson Institute. He is also the recipient of a 2016 Sloan Research Fellowship. He received his PhD in economics at Harvard University in 2010.

Product Market Traps in Big Tech: We examine how social pressures and firm strategies can generate “product market traps” -- situations in which consumers sustain demand for products they would often prefer not to exist. Using incentivized experiments, we show that such traps arise organically on social media platforms such as TikTok and Instagram, largely due to FOMO (fear of missing out), and yield negative welfare once we account for the costs imposed on non-users. In the smartphone market, we study Apple’s decision to mark Android messages with “green bubbles” on iPhones. Our survey and experimental evidence shows that the feature carries strong stigma, creates sizable welfare losses, and that removing it significantly increases demand for Android devices. Finally, in the market for AI learning tools, incentivized experiments with parents reveal that demand rises sharply with peer adoption, while information about harms does not reduce individual demand but instead increases support for collective restrictions. Together, these findings demonstrate how product market traps can sustain demand and reinforce market power while lowering welfare, complicating standard methods of welfare measurement.

Reading List

10/02

Jean Clipperton is an Associate Director of MACSS and Associate Senior Instructional Professor at the University of Chicago. She is a political scientist and computational social scientist and studies how individuals create and interpret meaning through language, emotion, and culture, particularly in political contexts. Her research bridges political communication, political behavior, sociology, psychology, and institutional analysis. At the center of her research is a core question: how do individuals and institutions use language to create shared understanding and construct identity?

It's a new soundtrack: Candidates, Campaigns, and Rally playlists: As political candidates increasingly use music at rallies and release or reveal playlists, placing their selected songs into this framework provides clear and concise opportunities to study how they construct their public image, build political brands, and signal values to voters. The dataset covers three presidential elections from 2016 to 2024 and over 2,000 songs. I find both cross-party and within-candidate patterns: all candidates increased the use of pro-social language in their playlists, while Democratic candidates increasingly had more negative emotions and more moral language in their songs. Additionally, trust was the most common emotion present in all front-runner candidates' playlists. Playlists can send signals about the candidates and how they view their responsibility: as a delegate acting on behalf of the people or as a brand in which voters invest. Trump, in each of his three campaigns, evidenced more 'brandidate'-type language in his playlists, with 'I' pronouns used very heavily. In contrast, Democratic candidates tended toward delegate-type language and more inclusive 'we' language.
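The 'brandidate' versus delegate contrast above rests on pronoun use, which can be sketched as a first-person singular pronoun share. This is a hypothetical simplification of the study's richer text analysis, and the lyric fragments are invented, not drawn from any real playlist.

```python
import re

def pronoun_ratio(lyrics: str) -> float:
    """Share of first-person singular pronouns ('I', 'me', 'my', 'mine')
    among all first-person pronouns — a crude brandidate-vs-delegate signal."""
    singular = len(re.findall(r"\b(?:i|me|my|mine)\b", lyrics, re.IGNORECASE))
    plural = len(re.findall(r"\b(?:we|us|our|ours)\b", lyrics, re.IGNORECASE))
    total = singular + plural
    return singular / total if total else 0.0

brandidate = pronoun_ratio("I did it my way, believe me")      # invented fragment
delegate = pronoun_ratio("we rise together, our voices as one")  # invented fragment
```

A higher ratio flags 'I'-heavy, brand-centered lyrics; a lower ratio flags inclusive 'we' language of the delegate type.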


Ali Sanaei is an Associate Instructional Professor in the Masters in Computational Social Science program. He is a political scientist with a substantive interest in foreign policy and public opinion. His methodological interests include formal models, causal inference, Bayesian statistics, and applications of machine learning.

Reconstructing Pahlavi Governance: Leveraging Oral Histories with Retrieval-Augmented Generation: Oral histories provide valuable insights that are often impossible to obtain by any other method. If we can analyze oral histories at the corpus level, instead of focusing on one or a few interviews, we benefit from an automatic triangulation of narratives and perspectives, and we can query details that no single interview could supply. We can leverage large language models with retrieval-augmented generation (RAG) techniques to accomplish this. We first divide the corpus into small snippets of text; then, for any given query, we retrieve the most relevant snippets by semantic similarity using word embeddings. We show that while the generative phase fails if done in one pass, dividing it into multiple tasks yields high-quality results: we extract lessons and excerpts from each snippet, then synthesize the lessons and piece them together with the excerpts to create a verifiable narrative that answers the query by reference to the source. We apply this technique to the Harvard Iranian Oral History Project corpus with queries about economic governance in the late Pahlavi era.
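The retrieval step described above — ranking corpus snippets by semantic similarity to a query — can be sketched with bag-of-words vectors standing in for the word embeddings the talk uses. The snippets below are invented placeholders, not text from the actual oral-history corpus.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words vector — a toy stand-in for real word embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, snippets: list[str], k: int = 1) -> list[str]:
    """Return the k snippets most similar to the query."""
    q = embed(query)
    return sorted(snippets, key=lambda s: cosine(q, embed(s)), reverse=True)[:k]

# Invented snippets, for illustration only:
snippets = [
    "the minister discussed oil revenue and economic planning",
    "the interview recalls family life in the provinces",
]
top = retrieve("economic governance and planning", snippets)
```

In the full pipeline, each retrieved snippet then feeds the multi-task generative phase — lesson extraction, then synthesis with excerpts — rather than a single one-pass generation.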
