Conversation

@mike-ferguson
Member

This PR adds metadata fields to both models and benchmarks that have a known reference paper URL. The fields task_specialization and training_dataset were added to 28 models (those with papers), while benchmarks gain task, species, region, and datatype.

More specifically for models:

Models with training_dataset: 28
Models with task_specialization: 28

======================================================================
TRAINING DATASETS
======================================================================
  training_dataset: ImageNet-1k: 21 models
  training_dataset: Tiny_ImageNet: 6 models
  training_dataset: CIFAR-100: 2 models
  training_dataset: ImageNet-22k: 2 models
  training_dataset: VTAB: 1 model
  training_dataset: custom_dataset: 1 model

======================================================================
TASK SPECIALIZATIONS
======================================================================
  task_specialization: image_classification: 9 models
  task_specialization: brain_alignment: 8 models
  task_specialization: scaling_models: 7 models
  task_specialization: object_detection: 5 models
  task_specialization: class_activation_mapping: 1 model
  task_specialization: graph_recognition: 1 model
  task_specialization: mobile_vision: 1 model
  task_specialization: salient_object_detection: 1 model
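
For illustration only, here is a minimal sketch of what the two new model fields could look like on a single entry. The field names and example values are taken from the summary above; the dict layout and the example names are assumptions for readability, not the actual plugin schema.

```python
# Hypothetical sketch of the new model metadata fields added in this PR.
# Field names match the PR description; the dict structure and the model
# identifier are illustrative assumptions, not the real schema.
example_model_metadata = {
    "model": "example_resnet",                      # placeholder identifier
    "training_dataset": "ImageNet-1k",              # most common value (21 of 28 models)
    "task_specialization": "image_classification",  # most common value (9 of 28 models)
}
```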

And benchmarks:

Total benchmarks found: 184
Benchmarks with benchmark_type: 184
Benchmarks with task: 76
Benchmarks with region: 56
Benchmarks with species: 137
Benchmarks with datatype: 133

======================================================================
BENCHMARK TYPES
======================================================================
  benchmark_type: behavioral: 84 benchmarks
  benchmark_type: neural: 56 benchmarks
  benchmark_type: engineering: 44 benchmarks

======================================================================
TASKS
======================================================================
  task: alternative_forced_choice: 39 benchmarks
  task: passive_viewing: 37 benchmarks

======================================================================
REGIONS
======================================================================
  region: V1: 29 benchmarks
  region: IT: 16 benchmarks
  region: V4: 12 benchmarks
  region: V2: 5 benchmarks

======================================================================
SPECIES
======================================================================
  species: homo_sapiens: 113 benchmarks
  species: macaca_mulatta: 32 benchmarks

======================================================================
DATATYPES
======================================================================
  datatype: behavioral: 84 benchmarks
  datatype: engineering: 44 benchmarks
  datatype: fMRI: 4 benchmarks
  datatype: neural: 1 benchmark
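
Likewise, a minimal sketch of the new benchmark fields on a single entry. Field names and values are drawn from the summary above; the dict layout and the benchmark identifier are illustrative assumptions, not the actual schema.

```python
# Hypothetical sketch of the new benchmark metadata fields added in this PR.
# Field names match the summary above; the structure and the benchmark
# identifier are illustrative assumptions, not the real schema.
example_benchmark_metadata = {
    "benchmark": "example_v1_benchmark",  # placeholder identifier
    "benchmark_type": "neural",           # one of: behavioral, neural, engineering
    "task": "passive_viewing",            # one of: alternative_forced_choice, passive_viewing
    "region": "V1",                       # one of: V1, V2, V4, IT
    "species": "macaca_mulatta",          # one of: homo_sapiens, macaca_mulatta
    "datatype": "neural",                 # e.g. behavioral, engineering, fMRI, neural
}
```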

Note: this is a first step in adding metadata. The benchmark metadata requires human review before merge; the model metadata has largely been confirmed by me.

Note: I did not conduct this work; I have only validated the model metadata. All credit to Nishka Shah.

@mike-ferguson mike-ferguson changed the title Urop model metadata Add additional benchmark and Model metadata Dec 17, 2025
@mike-ferguson mike-ferguson changed the title Add additional benchmark and Model metadata Add additional benchmark and model metadata Dec 17, 2025