Problem
The People dimension (20/100 points) is scored entirely on ORCID coverage. Institutional authors (e.g., OECD, World Bank, WHO) cannot hold ORCIDs — only individuals can. Publishers with significant institutional authorship will always score poorly on this dimension regardless of their metadata quality.
This is structurally similar to #5 (funding penalizing unfunded research) — the scoring assumes a model of individual, funded academic authorship that doesn't hold across all scholarly publishing.
Why this is hard to fix with current data
- No authorship-type signal in the Crossref Member API. Coverage stats report "X% of DOIs have ORCIDs" but don't distinguish individual vs. institutional authors.
- Crossref's `author` field supports both. At the work level, institutional authors use the `name` field instead of `given`/`family`, but this isn't reflected in aggregate coverage stats.
- ROR could serve as the institutional equivalent of ORCID, but the Organizations dimension already counts ROR IDs separately — double-counting would introduce its own bias.
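The `name` vs. `given`/`family` distinction above is visible in work-level Crossref metadata even though it never surfaces in aggregate stats. A minimal sketch of how a classifier could read it (`classify_author` is a hypothetical helper, not part of any Crossref client library):

```python
def classify_author(author: dict) -> str:
    """Classify one Crossref author record as individual or institutional.

    Crossref work metadata lists contributors under "author": individuals
    carry "given"/"family" keys, while institutional authors carry a single
    "name" key instead.
    """
    if "given" in author or "family" in author:
        return "individual"
    if "name" in author:
        return "institutional"
    return "unknown"

# Records shaped like Crossref work-level author metadata (values illustrative):
authors = [
    {"given": "Ada", "family": "Lovelace", "ORCID": "https://orcid.org/..."},
    {"name": "OECD"},
]
labels = [classify_author(a) for a in authors]
```

Nothing like this classification is available at the Member API level, which is why the coverage stats can't separate the two populations.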
Possible approaches
- Work-level sampling — Sample works from publishers flagged as low-ORCID, check the ratio of `name` (institutional) vs. `given`/`family` (individual) authors, and adjust the People score accordingly.
- Content-type heuristics — Certain content types (reports, datasets, standards) are more likely to have institutional authorship. Reduce ORCID weight for these types.
- Composite People metric — Score People as "ORCID coverage among individual authors + ROR coverage among institutional authors" rather than ORCID alone. Requires work-level analysis.
- Methodology note — At minimum, document this limitation on the site.
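The composite People metric from the list above could be sketched as follows. This is an illustration only: the 20-point scale comes from the issue, but the `ROR` key is an assumption (it would have to be populated by a separate ROR-matching step, since Crossref doesn't attach ROR IDs to institutional authors today):

```python
def people_score(authors: list[dict], max_points: float = 20.0) -> float:
    """Composite People score: ORCID coverage among individual authors
    plus ROR coverage among institutional authors, on one shared scale.

    Assumption: institutional records gain a "ROR" key from an upstream
    ROR-matching step; Crossref metadata does not provide this directly.
    """
    individuals = [a for a in authors if "given" in a or "family" in a]
    institutions = [a for a in authors if "name" in a and "family" not in a]
    total = len(individuals) + len(institutions)
    if total == 0:
        return 0.0
    covered = sum(1 for a in individuals if a.get("ORCID"))
    covered += sum(1 for a in institutions if a.get("ROR"))
    return max_points * covered / total

score = people_score([
    {"given": "Ada", "family": "Lovelace", "ORCID": "https://orcid.org/..."},
    {"name": "OECD", "ROR": "https://ror.org/..."},
    {"name": "World Bank"},  # institutional, no ROR: counts against coverage
])
```

With this shape, an all-institutional publisher with full ROR coverage would score as well as an all-individual publisher with full ORCID coverage, which is the point of the approach. The remaining cost is the work-level analysis the issue already flags as required.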
Related
- #5 (funding penalizing unfunded research)
Source
Raised by Toby Green (OECD/Coherent Digital) on LinkedIn