VoynichLabs/nuggetbench

 
 


How well can LLMs recognize which geographical areas chicken nuggets resemble?

| Model | Accuracy |
| --- | --- |
| google/gemini-3-pro-preview | 9/18 |
| qwen/qwen3-vl-235b-a22b-instruct | 5/18 |
| x-ai/grok-4-fast | 4/18 |
| openai/gpt-5.2 | 2/18 |
| anthropic/claude-opus-4.5 | 1/18 |

Today, we're in the benchmaxxing, Goodhart's Law era of AI progress. If it can be verified, it will be trained on. This makes models better at the things commonly used as measures of their intelligence, but it's unclear to what extent the capability gains from training on narrow tasks transfer outside that domain (as they would for humans). For example, models are fantastic at reading text, but horrible at basic visual tasks.

This benchmark tests for something that is pointless and stupid to train for, while also requiring visual acuity and world knowledge. The hope is that this gives a better check of model ability than more sensible or common measures.

See /tables for per-model results.

See /tables/answers.md for the dataset and to try it for yourself.

To run the benchmark yourself, clone this repo. You must have uv installed and an OPENROUTER_API_KEY set as an environment variable. Then run `uv run main.py`.
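For instance, a full invocation might look like the following; this is a minimal sketch, assuming the clone URL follows from the VoynichLabs/nuggetbench name above and that you have an OpenRouter key of your own:

```sh
# Clone URL inferred from the repo name above.
git clone https://github.com/VoynichLabs/nuggetbench.git
cd nuggetbench

# The benchmark reads this environment variable to call OpenRouter.
export OPENROUTER_API_KEY="sk-or-..."  # substitute your own key

# uv resolves the project's dependencies and runs the script in one step.
uv run main.py
```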
