Question for Michal Kosinski on Emergent Properties in Large Language Models #4

@muhua-h

Description
Large Language Models (LLMs) trained to predict the next word in a sentence have surprised their creators by displaying emergent properties, ranging from a proclivity for biased behavior to an ability to write computer code and solve mathematical tasks. This talk discusses the results of several studies evaluating LLMs' performance on tasks typically used to study human mental processes. The findings indicate that as LLMs grow in size and linguistic dexterity, they can navigate false-belief scenarios, sidestep semantic illusions, and tackle cognitive reflection tasks. The talk will explore several possible interpretations of these findings, including the intriguing possibility that LLMs model not only language but also the psychological processes underlying how humans use language.

Reading List:
