Description
Emergent Properties in Large Language Models

Large Language Models (LLMs) trained to predict the next word in a sentence have surprised their creators by displaying emergent properties, ranging from a proclivity for biased behavior to an ability to write computer code and solve mathematical problems. This talk discusses the results of several studies evaluating LLMs' performance on tasks typically used to study human mental processes. Findings indicate that as LLMs increase in size and linguistic dexterity, they can navigate false-belief scenarios, sidestep semantic illusions, and tackle cognitive reflection tasks. The talk explores several possible interpretations of these findings, including the intriguing possibility that LLMs model not only language but also the psychological processes underlying how humans use language.
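To make the kind of task concrete, below is a minimal sketch of how an unexpected-transfer false-belief vignette might be posed to a chat model, assuming the OpenAI Python client. The model name, vignette wording, and keyword check are illustrative assumptions, not the actual materials or scoring pipeline used in the studies listed below.

```python
# Illustrative sketch only: poses a classic unexpected-transfer
# (Sally-Anne style) false-belief vignette to a chat model.
# Assumes the openai package (>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The correct answer tracks Sally's (false) belief about the ball's
# location, not the ball's true location.
vignette = (
    "Sally puts her ball in the basket and leaves the room. "
    "While she is away, Anne moves the ball from the basket to the box. "
    "Sally comes back. Where will Sally look for her ball first?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; any chat model fits this sketch
    messages=[{"role": "user", "content": vignette}],
    temperature=0,  # deterministic output keeps scoring simple
)

answer = response.choices[0].message.content
# Crude keyword scoring: a belief-tracking answer mentions the basket.
print(answer)
print("passes false-belief check:", "basket" in answer.lower())
```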
Reading List:
- Kosinski, M. (2024). Evaluating Large Language Models in Theory of Mind Tasks. Proceedings of the National Academy of Sciences (PNAS).
- Hagendorff, T., Fabi, S., & Kosinski, M. (2023). Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT. Nature Computational Science.