Merged

32 commits
- e15f4c7 rename package brainscore -> brainscore_vision (mschrimpf, Nov 9, 2022)
- df29f39 move MajajHong2015 benchmarks into plugin (mschrimpf, Nov 15, 2022)
- 3cab21b fix pool -> benchmark_registry (mschrimpf, Nov 15, 2022)
- 9314eba setup plugin registries (#349) (mschrimpf, Dec 7, 2022)
- 3fa43bf Sw/Restructuring vision (#353) (samwinebrake, Feb 17, 2023)
- 92b0225 use pyproject.toml instead of setup.py (#383) (mschrimpf, Aug 1, 2023)
- e8de112 integrate model helpers (formerly model_tools) (#381) (mschrimpf, Aug 4, 2023)
- 0d74c72 First model added (hopefully many more to come!) (mike-ferguson, Aug 30, 2023)
- d169417 Revert "First model added (hopefully many more to come!)" (mike-ferguson, Aug 30, 2023)
- ae22ea5 Metrics plugin format (#391) (mschrimpf, Nov 2, 2023)
- 23af10a split up `test_setup.sh` into benchmark-specific s3 download (#393) (mschrimpf, Nov 12, 2023)
- c9cb898 Prevent triggering Travis plugin tests for empty `git diff` results (… (kvfairchild, Nov 13, 2023)
- d685ab2 validate and fix tests (#403) (mschrimpf, Dec 8, 2023)
- 2f51df0 Add models for testing model conversion helpers (based off #395) (#398) (kvfairchild, Dec 11, 2023)
- d0db0f7 Revert "Add models for testing model conversion helpers (based off #3… (mschrimpf, Dec 11, 2023)
- cc76430 delete `lookup.csv` and entrypoint (#397) (mschrimpf, Dec 12, 2023)
- 5101bd9 Add models for testing model conversion helpers (based off #395) (#408) (shehadak, Dec 13, 2023)
- 4681cd4 integrate submission handling (#394) (mschrimpf, Dec 16, 2023)
- b6cff2a integrate remaining submission tests (#414) (mschrimpf, Dec 20, 2023)
- dad68f0 merge main into 2.0 integrate_core (#424) (mschrimpf, Jan 1, 2024)
- 89cba23 First pass of new submission docs (#396) (mike-ferguson, Jan 2, 2024)
- c68961c simplify alexnet and pixel models (#412) (mschrimpf, Jan 2, 2024)
- bcaf67c update examples for 2.0 (#425) (mschrimpf, Jan 2, 2024)
- 0192381 fix README links (#426) (mschrimpf, Jan 3, 2024)
- cf5a3c9 register pytest markers in pyproject (#413) (mschrimpf, Jan 3, 2024)
- aa3f8b8 import script from core (#427) (kvfairchild, Jan 3, 2024)
- ec966e5 remove lab identifiers from plugin identifiers (#402) (mschrimpf, Jan 3, 2024)
- e72292b Setup GitHub Actions and Travis for automated submissions (#428) (kvfairchild, Jan 4, 2024)
- b3be250 merge master and fix errors (#429) (mschrimpf, Jan 4, 2024)
- 3d77f63 Merge branch 'upstream' into integrate_core (mschrimpf, Jan 4, 2024)
- 82613c1 Merge remote-tracking branch 'upstream/integrate_core' into integrate… (mschrimpf, Jan 4, 2024)
- b659c98 Merge branch 'upstream' into integrate_core (mschrimpf, Jan 4, 2024)
114 changes: 114 additions & 0 deletions .github/workflows/automerge_plugin-only_prs.yml
@@ -0,0 +1,114 @@
name: Automatically merge plugin-only PRs


# Triggered on all PRs either by
# - completion of CI checks, OR
# - tagging with label
# 1) If PR is labeled "automerge" or "automerge-web"
# (all website submissions are tagged "automerge-web"),
# checks if Travis tests pass. If yes, THEN
# 2) Checks if PR modifies any code outside plugin dirs.
# If no changes are made beyond new or revised plugins
# (subdirs of /benchmarks, /data, /models, or /metrics)
# the PR is automatically approved and merged.


on:
pull_request:
types: [labeled]
status:

permissions: write-all

jobs:

isautomerge:
name: Set as 'automerge' if PR is labeled with 'automerge' or 'automerge-web'
runs-on: ubuntu-latest
if: |
contains( github.event.pull_request.labels.*.name, 'automerge') ||
contains( github.event.pull_request.labels.*.name, 'automerge-web')
outputs:
AUTOMERGE: ${{ steps.setautomerge.outputs.AUTOMERGE }}
steps:
- name: Set 'automerge' to 'True' # job only runs if True
id: setautomerge
run: |
echo "::set-output name=AUTOMERGE::True"


travis_success:
name: Check if Travis build is successful
runs-on: ubuntu-latest
needs: [isautomerge]
if: ${{ needs.isautomerge.outputs.AUTOMERGE == 'True' }}
outputs:
TRAVIS_OK: ${{ steps.istravisok.outputs.TRAVIS_OK }}
steps:
- name: Get Travis build status
id: gettravisstatus
run: |
echo ${{ github.event.pull_request.head.sha }}
echo "TRAVIS_CONCLUSION=$(python -c "import requests; r = requests.get(\"https://api.github.com/repos/brain-score/vision/commits/${{ github.event.pull_request.head.sha }}/check-runs\"); print(next(run['conclusion'] for run in r.json()['check_runs'] if run['name'] == 'Travis CI - Pull Request'))")" >> $GITHUB_ENV
- name: Check if Travis was successful
id: istravisok
run: |
if [ "$TRAVIS_CONCLUSION" == "success" ]
then
travisok=True
elif [ "$TRAVIS_CONCLUSION" == "None" ]
then
travisok=Wait
else
travisok=False
fi
echo "::set-output name=TRAVIS_OK::$travisok"


plugin_only:
name: Ensure PR ONLY changes plugin files
runs-on: ubuntu-latest
needs: travis_success
if: ${{ needs.travis_success.outputs.TRAVIS_OK == 'True' }}
outputs:
PLUGIN_ONLY: ${{ steps.ispluginonly.outputs.PLUGIN_ONLY }}
steps:
- name: Parse plugin_only confirmation from Travis status update
id: getpluginonlyvalue
run: echo "PLUGIN_ONLY=$(python -c "import requests; r = requests.get(\"https://api.github.com/repos/brain-score/vision/statuses/${{ github.event.pull_request.head.sha }}\"); print(next(status['description'].split('- ')[1] for status in r.json() if status['description'].startswith('Run automerge workflow')))")" >> $GITHUB_ENV
- name: Check if PR is plugin only
id: ispluginonly
run: |
if [ "$PLUGIN_ONLY" == "True" ]
then
pluginonly=True
else
pluginonly=False
fi
echo "::set-output name=PLUGIN_ONLY::$pluginonly"


automerge:
name: If plugin-only, approve and merge
runs-on: ubuntu-latest
needs: plugin_only
if: ${{ needs.plugin_only.outputs.PLUGIN_ONLY == 'True' }}
steps:
- name: Auto Approve
uses: hmarr/auto-approve-action@v3.1.0

- name: Auto Merge (GitHub submissions)
uses: plm9606/automerge_actions@1.2.2
with:
github-token: ${{ secrets.WORKFLOW_TOKEN }}
label-name: "automerge"
merge-method: "squash"
auto-delete: "true"

- name: Auto Merge (brain-score.org submissions)
uses: plm9606/automerge_actions@1.2.2
with:
github-token: ${{ secrets.WORKFLOW_TOKEN }}
label-name: "automerge-web"
merge-method: "squash"
auto-delete: "true"
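The `travis_success` job above extracts the Travis conclusion from the PR head commit's check-runs payload with an inline Python one-liner. The same lookup can be sketched as a standalone helper (illustrative only; stdlib `urllib` stands in for `requests`, and the payload shape is the one the workflow already assumes):

```python
import json
from urllib.request import urlopen


def travis_conclusion(check_runs):
    """Pick the conclusion of the 'Travis CI - Pull Request' run
    from a parsed check-runs payload; None if that run is absent."""
    return next((run["conclusion"] for run in check_runs
                 if run["name"] == "Travis CI - Pull Request"), None)


def fetch_check_runs(repo, sha):
    """Query the same GitHub endpoint the workflow step hits."""
    url = f"https://api.github.com/repos/{repo}/commits/{sha}/check-runs"
    with urlopen(url) as response:
        return json.load(response)["check_runs"]
```

A `None` conclusion corresponds to the workflow's `Wait` branch: the check run exists but has not finished yet.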
115 changes: 115 additions & 0 deletions .github/workflows/score_new_plugins.yml
@@ -0,0 +1,115 @@
name: Trigger scoring run


# Triggered on all PRs on merge to main
# If changes are made to a subdir of /benchmarks or /models,
# a Jenkins scoring run is triggered for the corresponding plugin


on:
pull_request:
branches:
- main
types:
- closed

env:
BSC_DATABASESECRET: ${{ secrets.BSC_DATABASESECRET }}

permissions: write-all

jobs:

changes_models_or_benchmarks:
name: Check if PR makes changes to /models or /benchmarks
runs-on: ubuntu-latest
outputs:
PLUGIN_INFO: ${{ steps.getpluginfo.outputs.PLUGIN_INFO }}
RUN_SCORE: ${{ steps.runscore.outputs.RUN_SCORE }}
steps:
- name: Check out repository code
uses: actions/checkout@v2
with:
fetch-depth: 0

- name: Set up Python 3.7
uses: actions/setup-python@v4
with:
python-version: 3.7

- name: Save changed files to env var
run: echo "CHANGED_FILES=$(git diff --name-only origin/main~1 origin/$GITHUB_HEAD_REF | tr '\n' ' ')" >> $GITHUB_ENV

- name: Installing package dependencies
run: |
python -m pip install --upgrade pip setuptools
python -m pip install ".[test]"

- name: Get plugin info
id: getpluginfo
run: |
echo "PLUGIN_INFO='$(python -c 'from brainscore_core.plugin_management.parse_plugin_changes import get_scoring_info; get_scoring_info("${{ env.CHANGED_FILES }}", "brainscore_vision")')'" >> $GITHUB_OUTPUT

- name: Run scoring
id: runscore
run: |
echo "RUN_SCORE=$(jq -r '.run_score' <<< ${{ steps.getpluginfo.outputs.PLUGIN_INFO }})" >> $GITHUB_OUTPUT

get_submitter_info:
name: Get PR author email and (if web submission) Brain-Score user ID
runs-on: ubuntu-latest
needs: [changes_models_or_benchmarks]
if: ${{ needs.changes_models_or_benchmarks.outputs.RUN_SCORE == 'True' }}
env:
PLUGIN_INFO: ${{ needs.changes_models_or_benchmarks.outputs.PLUGIN_INFO }}
outputs:
PLUGIN_INFO: ${{ steps.add_email_to_pluginfo.outputs.PLUGIN_INFO }}
steps:
- name: Parse user ID from PR title (WEB ONLY where we don't have access to the GitHub user)
id: getuid
if: ${{ contains(github.event.pull_request.labels.*.name, 'automerge-web') }}
run: |
echo "BS_UID=$(echo '${{ github.event.pull_request.title }}' | sed -E 's/.*\(user:([^)]+)\).*/\1/')" >> $GITHUB_ENV
- name: Add user ID to PLUGIN_INFO (WEB ONLY)
id: add_uid_to_pluginfo
if: ${{ contains(github.event.pull_request.labels.*.name, 'automerge-web') }}
run: |
echo "The Brain-Score user ID is $BS_UID"
echo "PLUGIN_INFO=$(<<<$PLUGIN_INFO jq -c --arg uid "$BS_UID" '. + {user_id: $uid}')" >> $GITHUB_ENV

- name: Get PR author email from GitHub username
id: getemail
uses: evvanErb/get-github-email-by-username-action@v1.25
with:
github-username: ${{github.event.pull_request.user.login}} # PR author's username
token: ${{ secrets.GITHUB_TOKEN }} # Including token enables most reliable way to get a user's email
- name: Add PR author email to PLUGIN_INFO
id: add_email_to_pluginfo
run: |
echo "The PR author email is ${{ steps.getemail.outputs.email }}"
echo "PLUGIN_INFO=$(<<<$PLUGIN_INFO tr -d "'" | jq -c '. + {author_email: "${{ steps.getemail.outputs.email }}"}')" >> $GITHUB_OUTPUT


runscore:
name: Score plugins
runs-on: ubuntu-latest
needs: [changes_models_or_benchmarks, get_submitter_info]
if: ${{ needs.changes_models_or_benchmarks.outputs.RUN_SCORE == 'True' }}
env:
PLUGIN_INFO: ${{ needs.get_submitter_info.outputs.PLUGIN_INFO }}
JENKINS_USER: ${{ secrets.JENKINS_USER }}
JENKINS_TOKEN: ${{ secrets.JENKINS_TOKEN }}
JENKINS_TRIGGER: ${{ secrets.JENKINS_TRIGGER }}
steps:
- name: Add domain, public, competition, and model_type to PLUGIN_INFO
run: |
echo "PLUGIN_INFO=$(<<<$PLUGIN_INFO tr -d "'" | jq -c '. + {domain: "vision", public: true, competition: "None", model_type: "Brain_Model"}')" >> $GITHUB_ENV

- name: Check out repository code
uses: actions/checkout@v2

- name: Build project and run scoring
run: |
python -m pip install --upgrade pip setuptools
python -m pip install ".[test]"
python -c 'from brainscore_core.submission.endpoints import call_jenkins; call_jenkins('\''${{ env.PLUGIN_INFO }}'\'')'
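The jq pipelines in these jobs grow a single `PLUGIN_INFO` JSON blob field by field: user id, then author email, then `domain`/`public`/`competition`/`model_type`. That accumulation pattern can be sketched in Python (illustrative only; the key names mirror the workflow above, not a published schema):

```python
import json


def extend_plugin_info(plugin_info, **extra):
    """Mirror `jq -c '. + {...}'`: merge extra keys into a PLUGIN_INFO JSON string."""
    merged = dict(json.loads(plugin_info))
    merged.update(extra)
    return json.dumps(merged)


info = '{"run_score": "True", "new_models": ["mymodel"]}'  # hypothetical starting blob
info = extend_plugin_info(info, author_email="author@example.com")
info = extend_plugin_info(info, domain="vision", public=True,
                          competition="None", model_type="Brain_Model")
```

Doing the merge in one JSON-aware step (rather than string concatenation) is what keeps the blob valid as each job appends to it.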
10 changes: 10 additions & 0 deletions .github/workflows/travis_trigger.sh
@@ -0,0 +1,10 @@
#!/bin/bash

GH_WORKFLOW_TRIGGER=$1
TRAVIS_PULL_REQUEST_SHA=$2

curl -L -X POST \
-H "Authorization: token $GH_WORKFLOW_TRIGGER" \
-d '{"state": "success", "description": "Run automerge workflow for plugin-only PR",
"context": "continuous-integration/travis"}' \
"https://api.github.com/repos/brain-score/brain-score/statuses/$TRAVIS_PULL_REQUEST_SHA"
62 changes: 46 additions & 16 deletions .travis.yml
@@ -1,28 +1,58 @@
version: ~> 1.0
language: python
matrix:
include:
- name: 3.7 public
python: '3.7'
- name: 3.7 private
python: '3.7'
env:
- PRIVATE_ACCESS=1
- secure: f1rWEwrslh7qa2g/QlKs001sGC3uaOxZNQSfNOPj+TMCqEo2c6OzImC4hyz+WqCyc6N/lFT4yYo2RhvaqStHMRmu/+9aZmuH05Bb0KQpfzNFA+yGa/U5WR3/4u6KRvDAeNEi9drT2LuacTyGbldmQsquujK0jrPpFWpe7zUUKv0zb0lJf0zcjeSrZlDXLlgD6DCqow7OqHRvW04dPZVy1OArRwtPV6DJ6Rqo1MqFQGHJ806VPlXhSoydb7a58dhGajqPjomdmZjhd3wS6Lv6uetTE/VVb4EP4e7n0qfZIx/TpnWG0SR44pcP7OCNARWYANsAivzxnQ0shyXnIzOo8ZcPYiPpt/5D53i5idTBxXyuDaHGQvgwuY5XLZzznEedBgZa4OvjxAXlLEQjdVDfSsZeYaV9gyFkeTlLnK1zvWi0US38eF2Qtm3Sx3D/5TtBKK2n38tyK5gg/XvJNycaXvIl7iVcnI2ifpqD1mUWI6C9j9Tk19/XEpWkwaFi91+0LZF1GhjBu8o3G5Np4RIOKXi3TIHkpbMM5mf11T6Bm9LvEMq1h8bgRQigEbeJF8CbUOSVFv+AaXsggGjQhuwdyvy2JZo+tO1nfhi+kW3XrDGPsz1R7Wfqduyn7UUh5OiFymeZwKseYKnwU47KyCqDwrq5Mnx1MlSidnVmPriadR4=
- secure: WE7FPwy07VzJTKAd2xwZdBhtmh8jk7ojwk4B2rIcBQu0vwUXc1MgO8tBLD7s08lBedBjqZiLZEW31uPMEyWNysouDt16a5gm2d149LR7flI3MOifBtxINfJuC3eOEG65bPgN/bYEsIpLKnu3469d5nxZkK7xsjbWTxHGoUpLvVPsmHY2ZM5/jftybs7fI0do4NMG2XffKfZbiFb447Ao3xeQeEfW6IkJllzgGnlG9FJATFidrbwDNdmzAnvPEnDoKAf7ZvhPV0x9yR5V6P4Ck5hxl8mlPdBa1cRMO8s/1ag1c7YJ3AF9ZlwcwqTiGsT8DHTVRxSz4nFHJTMlrm9j84u7WzLZJBhPgF0UeLN3AQgiAZ3c2TFDvjQWeHVuSPkV5GrKlfhSvR82s9yPEdHQxxwYymBbAr6rJR4NtXTyZX0vg8NRKHssZKLSafs/D/pt9xXspqu8HAHc+mS0lCips79XptSr5BEsioil3D2io3tbzrGugpTeJ7oEA787vKn2Cm4XmhyQ0UBhvwsPZ351l27wZYuNV07o9Ik83hN/w4o2v899QQ/zbX42Iy8ZUCWOPX7MV7+TA7SMxru3qx7HL5hDM8kTetxbLB6Ckr+JOdX8L2Fb5L3TVDpsvfv0ebXgwaQR/ez8/7bcXmBqcERApHDz73HaMXUap+iDR4FLdXE=
- AWS_DEFAULT_REGION=us-east-1
env:
global:
- PYTEST_SETTINGS="not requires_gpu and not memory_intense and not slow and not travis_slow"
- DOMAIN="vision"
- MODIFIES_PLUGIN="False"
- PLUGIN_ONLY="False"
- WEB_SUBMISSION="False"
before_install:
- pip install --upgrade pip
- pip install setuptools==60.5.0
- pip install pytest
# download large files
- pip install awscli
- bash test_setup.sh
install:
- pip install .
- pip install -e ".[test]"
# install conda for plugin runner
- wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
- bash miniconda.sh -b -p $HOME/miniconda
- source "$HOME/miniconda/etc/profile.d/conda.sh"
- hash -r
- conda config --set always_yes yes --set changeps1 no
- conda update -q conda
- conda info -a
- pip list # list installed package versions
script:
- if [ "$PRIVATE_ACCESS" = 1 ] && [ "$TRAVIS_PULL_REQUEST" = "false" ]; then pytest -m "private_access and not requires_gpu and not memory_intense and not slow and not travis_slow"; fi
- if [ "$PRIVATE_ACCESS" != 1 ]; then pytest -m "not private_access and not requires_gpu and not memory_intense and not slow and not travis_slow"; fi
import:
- brain-score/core:brainscore_core/travis/script.yml@main # run tests
- brain-score/core:brainscore_core/travis/submission_failure.yml@main # if tests fail on web submission, email submitter

jobs:
include:
- name: 3.7 public
python: '3.7'
- name: 3.7 private
python: '3.7'
env:
- PRIVATE_ACCESS=1
- secure: f1rWEwrslh7qa2g/QlKs001sGC3uaOxZNQSfNOPj+TMCqEo2c6OzImC4hyz+WqCyc6N/lFT4yYo2RhvaqStHMRmu/+9aZmuH05Bb0KQpfzNFA+yGa/U5WR3/4u6KRvDAeNEi9drT2LuacTyGbldmQsquujK0jrPpFWpe7zUUKv0zb0lJf0zcjeSrZlDXLlgD6DCqow7OqHRvW04dPZVy1OArRwtPV6DJ6Rqo1MqFQGHJ806VPlXhSoydb7a58dhGajqPjomdmZjhd3wS6Lv6uetTE/VVb4EP4e7n0qfZIx/TpnWG0SR44pcP7OCNARWYANsAivzxnQ0shyXnIzOo8ZcPYiPpt/5D53i5idTBxXyuDaHGQvgwuY5XLZzznEedBgZa4OvjxAXlLEQjdVDfSsZeYaV9gyFkeTlLnK1zvWi0US38eF2Qtm3Sx3D/5TtBKK2n38tyK5gg/XvJNycaXvIl7iVcnI2ifpqD1mUWI6C9j9Tk19/XEpWkwaFi91+0LZF1GhjBu8o3G5Np4RIOKXi3TIHkpbMM5mf11T6Bm9LvEMq1h8bgRQigEbeJF8CbUOSVFv+AaXsggGjQhuwdyvy2JZo+tO1nfhi+kW3XrDGPsz1R7Wfqduyn7UUh5OiFymeZwKseYKnwU47KyCqDwrq5Mnx1MlSidnVmPriadR4=
- secure: WE7FPwy07VzJTKAd2xwZdBhtmh8jk7ojwk4B2rIcBQu0vwUXc1MgO8tBLD7s08lBedBjqZiLZEW31uPMEyWNysouDt16a5gm2d149LR7flI3MOifBtxINfJuC3eOEG65bPgN/bYEsIpLKnu3469d5nxZkK7xsjbWTxHGoUpLvVPsmHY2ZM5/jftybs7fI0do4NMG2XffKfZbiFb447Ao3xeQeEfW6IkJllzgGnlG9FJATFidrbwDNdmzAnvPEnDoKAf7ZvhPV0x9yR5V6P4Ck5hxl8mlPdBa1cRMO8s/1ag1c7YJ3AF9ZlwcwqTiGsT8DHTVRxSz4nFHJTMlrm9j84u7WzLZJBhPgF0UeLN3AQgiAZ3c2TFDvjQWeHVuSPkV5GrKlfhSvR82s9yPEdHQxxwYymBbAr6rJR4NtXTyZX0vg8NRKHssZKLSafs/D/pt9xXspqu8HAHc+mS0lCips79XptSr5BEsioil3D2io3tbzrGugpTeJ7oEA787vKn2Cm4XmhyQ0UBhvwsPZ351l27wZYuNV07o9Ik83hN/w4o2v899QQ/zbX42Iy8ZUCWOPX7MV7+TA7SMxru3qx7HL5hDM8kTetxbLB6Ckr+JOdX8L2Fb5L3TVDpsvfv0ebXgwaQR/ez8/7bcXmBqcERApHDz73HaMXUap+iDR4FLdXE=
- AWS_DEFAULT_REGION=us-east-1
- stage: "Automerge check"
python: '3.7'
install: python -m pip install -e ".[test]"
if: type = pull_request
script:
- |
if [ ! -z "$TRAVIS_PULL_REQUEST_BRANCH" ]; then
CHANGED_FILES=$( git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*" && git fetch && echo $(git diff --name-only origin/$TRAVIS_PULL_REQUEST_BRANCH origin/$TRAVIS_BRANCH -C $TRAVIS_BUILD_DIR) | tr '\n' ' ' ) &&
PLUGIN_ONLY=$( python -c "from brainscore_core.plugin_management.parse_plugin_changes import is_plugin_only; is_plugin_only(\"${CHANGED_FILES}\", \"brainscore_${DOMAIN}\")" )
fi
- |
if [ "$PLUGIN_ONLY" = "True" ]; then
bash ${TRAVIS_BUILD_DIR}/.github/workflows/travis_trigger.sh $GH_WORKFLOW_TRIGGER $TRAVIS_PULL_REQUEST_SHA;
fi

notifications:
slack:
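The "Automerge check" stage delegates to `is_plugin_only` from `brainscore_core`; conceptually it asks whether every changed file lives under a plugin directory (a subdir of /benchmarks, /data, /models, or /metrics). A hypothetical re-implementation of that check, for illustration only (this is not the actual `brainscore_core` code):

```python
PLUGIN_DIRS = ("benchmarks", "data", "models", "metrics")


def plugin_only(changed_files, package="brainscore_vision"):
    """True iff every changed path looks like <package>/<plugin_dir>/<plugin>/...,
    i.e. the PR touches nothing outside plugin directories."""
    paths = changed_files.split()
    for path in paths:
        parts = path.split("/")
        if len(parts) < 3 or parts[0] != package or parts[1] not in PLUGIN_DIRS:
            return False
    return bool(paths)  # an empty diff is not a plugin-only change
```

Any file outside those directories (a README edit, a core-library change) makes the whole PR ineligible for automerge, which is exactly the gate the workflow enforces.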
4 changes: 3 additions & 1 deletion MANIFEST.in
@@ -1,6 +1,8 @@
include LICENSE
include README.md
include brainscore/lookup.csv
include brainscore_vision/lookup.csv
include brainscore_vision/model_helpers/brain_transformation/imagenet_classes.txt
include brainscore_vision/model_helpers/check_submission/images/*.png

recursive-exclude * __pycache__
recursive-exclude * *.py[co]
23 changes: 12 additions & 11 deletions README.md
@@ -7,7 +7,7 @@ on their match to brain measurements in primate vision.
The intent of Brain-Score is to adopt many (ideally all) the experimental benchmarks in the field
for the purpose of model testing, falsification, and comparison.
To that end, Brain-Score operationalizes experimental data into quantitative benchmarks
that any model candidate following the [`BrainModel`](brainscore/model_interface.py) interface can be scored on.
that any model candidate following the [`BrainModel`](brainscore_vision/model_interface.py) interface can be scored on.

See the [Documentation](https://brain-score.readthedocs.io) for more details
and the [Tutorial](https://brain-score.readthedocs.io/en/latest/modules/model_tutorial.html)
@@ -27,21 +27,22 @@ To score a model on all benchmarks, submit it via the [brain-score.org website](
`pip install git+https://github.com/brain-score/brain-score`

Score a model on a public benchmark:

```python
from brainscore.benchmarks import public_benchmark_pool
from brainscore_vision.benchmarks import public_benchmark_pool

benchmark = public_benchmark_pool['dicarlo.MajajHong2015public.IT-pls']
model = my_model()
score = benchmark(model)
#> <xarray.Score (aggregation: 2)>
#> array([0.32641998, 0.0207475])
#> Coordinates:
#> * aggregation (aggregation) <U6 'center' 'error'
#> Attributes:
#> raw: <xarray.Score (aggregation: 2)>\narray([0.4278365 ...
#> ceiling: <xarray.Score (aggregation: 2)>\narray([0.7488407 ...
#> model_identifier: my-model
#> benchmark_identifier: dicarlo.MajajHong2015public.IT-pls
# > <xarray.Score (aggregation: 2)>
# > array([0.32641998, 0.0207475])
# > Coordinates:
# > * aggregation (aggregation) <U6 'center' 'error'
# > Attributes:
# > raw: <xarray.Score (aggregation: 2)>\narray([0.4278365 ...
# > ceiling: <xarray.Score (aggregation: 2)>\narray([0.7488407 ...
# > model_identifier: my-model
# > benchmark_identifier: dicarlo.MajajHong2015public.IT-pls
```

Some steps may take minutes because data has to be downloaded during first-time use.