Releases: aadivar/nexus-score
v0.1.2 — Typo-tolerant institution search
Institution search now falls back to OpenAlex's native Levenshtein fuzzy matching (up to roughly two edits per word) when an exact query returns zero results. Misspelled queries like "IIT HYderbaad" or "Oxferd University" now resolve correctly. Correctly spelled queries still hit the exact path first, preserving ranking.
Per OpenAlex search docs.
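A minimal sketch of the fallback, assuming illustrative helper names (not the actual nexus-score implementation): the `search` parameter is OpenAlex's natively typo-tolerant path, while a `display_name.search` filter stands in for the exact path here.

```typescript
const OPENALEX = "https://api.openalex.org/institutions";

// Exact path: filter on display_name.search (ranking preserved for clean queries).
function exactUrl(query: string): string {
  return `${OPENALEX}?filter=display_name.search:${encodeURIComponent(query)}`;
}

// Fallback path: OpenAlex's `search` param applies its native fuzzy matching.
function fuzzyUrl(query: string): string {
  return `${OPENALEX}?search=${encodeURIComponent(query)}`;
}

async function searchInstitutions(query: string): Promise<unknown[]> {
  const exact = await fetch(exactUrl(query)).then((r) => r.json());
  if (exact.results?.length > 0) return exact.results;
  // Zero exact hits: retry with typo-tolerant matching ("IIT HYderbaad" etc.).
  const fuzzy = await fetch(fuzzyUrl(query)).then((r) => r.json());
  return fuzzy.results ?? [];
}
```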
v0.1.1 — Institutional Research Visibility Analysis
New /analysis/institution page measures what publishers deposit to Crossref for a given institution's journal articles over the last 90 days — observed directly from Crossref work records, never projected from publisher-wide averages. Designed to survive scrutiny in real publisher-agreement negotiations.
How it works
The analysis combines three data sources:
- OpenAlex — identifies the institution's journal articles (last 90 days) grouped by publisher
- Crossref — batch-fetches each work record and inspects metadata directly (affiliation, any-ROR, this-institution's-ROR, funder, abstract, license, ORCID)
- ROR — the machine-readable bridge; the `Inst. ROR` column is the strict institutional-attribution test
The scope and 90-day window were chosen to respect the free-tier rate limits of Crossref and OpenAlex.
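The first step above can be sketched as a single OpenAlex query (field names follow the OpenAlex works API; the exact filters nexus-score issues may differ):

```typescript
// Build an OpenAlex works query: the institution's journal articles since
// `fromDate`, grouped by publishing organization. Field names per the
// OpenAlex works API; illustrative, not the actual nexus-score code.
function worksByPublisherUrl(ror: string, fromDate: string): string {
  const filter = [
    `institutions.ror:${ror}`,          // the institution, via its ROR ID
    "type:article",                      // journal articles only
    `from_publication_date:${fromDate}`, // the 90-day window
  ].join(",");
  return (
    "https://api.openalex.org/works" +
    `?filter=${filter}&group_by=primary_location.source.host_organization`
  );
}
```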
Safeguards against misleading numbers
- Fixed an ancestor double-counting bug in the grouping (uses the singular `host_organization`, not `host_organization_lineage`)
- `PUBLISHER_MAP` expanded to ~55 entries covering Springer/IEEE/T&F/WK/Elsevier sub-entities and previously-unmapped publishers (De Gruyter, Emerald, Hindawi, Karger, Thieme, Brill, eLife, NEJM, Wellcome, AAAS, JMIR, ACM, AIAA, SPIE, more)
- Removed a fabricated IEEE OpenAlex ID (`P4310320430` didn't exist)
- Publishers with <10 articles at an institution are shown but not measured
- Journal articles only — no proceedings, peer reviews, or book chapters
- Individual Crossref batch failures are tolerated, not fatal
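The last safeguard can be sketched with `Promise.allSettled`; `fetchBatch` here is a hypothetical stand-in for the real Crossref client, and the actual batching code may differ:

```typescript
// Fault-tolerant batch fetching: a failed Crossref batch is counted and
// skipped rather than aborting the whole analysis.
async function fetchAllBatches<T>(
  batches: string[][],
  fetchBatch: (dois: string[]) => Promise<T[]>,
): Promise<{ records: T[]; failedBatches: number }> {
  const settled = await Promise.allSettled(batches.map((b) => fetchBatch(b)));
  const records: T[] = [];
  let failedBatches = 0;
  for (const s of settled) {
    if (s.status === "fulfilled") records.push(...s.value);
    else failedBatches += 1; // tolerated, not fatal
  }
  return { records, failedBatches };
}
```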
Categorized "Why N articles weren't analyzed"
Unanalyzable articles are surfaced in three buckets, each framed as a publisher deposit gap: no Crossref DOI registered; the publisher deposits but isn't mapped yet; the deposit is too incomplete for OpenAlex to identify the publisher.
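The three buckets amount to a small decision function. This is a sketch with illustrative field names, not the real categorization code:

```typescript
type Gap =
  | "no-crossref-doi"        // publisher never registered a Crossref DOI
  | "deposit-too-incomplete" // OpenAlex couldn't identify the publisher
  | "publisher-not-mapped";  // publisher deposits, but isn't in PUBLISHER_MAP yet

interface WorkInfo {
  hasCrossrefDoi: boolean;
  openalexPublisherId: string | null; // null: publisher unidentifiable
  publisherMapped: boolean;           // present in PUBLISHER_MAP
}

function classifyGap(w: WorkInfo): Gap | null {
  if (!w.hasCrossrefDoi) return "no-crossref-doi";
  if (w.openalexPublisherId === null) return "deposit-too-incomplete";
  if (!w.publisherMapped) return "publisher-not-mapped";
  return null; // analyzable
}
```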
Cost calculator
Floating corner panel with a 15-currency dropdown. Plug in your own hourly rates; the observed 90-day gap is extrapolated to an annual cost using the institution's real 1-year article count. Rates stay client-side.
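The extrapolation arithmetic can be sketched as follows (variable names are illustrative; the point is that the gap rate is scaled by the institution's real annual output, not by 4x the 90-day count):

```typescript
function annualCost(opts: {
  gapArticles90d: number;  // articles with the gap in the 90-day window
  articles90d: number;     // all analyzed articles in the window
  articles1y: number;      // institution's real 1-year article count
  hoursPerArticle: number; // user-supplied remediation effort per article
  hourlyRate: number;      // user-supplied rate, in the chosen currency
}): number {
  const gapRate = opts.gapArticles90d / opts.articles90d;
  // Observed gap rate x real annual volume x effort x rate.
  return gapRate * opts.articles1y * opts.hoursPerArticle * opts.hourlyRate;
}
```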
Methodology cards
Three collapsible cards explain (1) how the analysis works, (2) caveats to keep in mind (amber-styled warning card — including the ROR funder-vs-institution distinction, connecting to open issue #2 FundRef→ROR transition), and (3) what each gap column means.
API / CLI
```
GET /api/analyze-institution?ror=<ror>&days=<days>
GET /api/analyze-institution?search=<name>

pnpm --filter web analyze:institution -- --ror 052gg0110
pnpm --filter web analyze:institution -- --search "IISc" --days 90
```
Vercel Analytics
Each analyzed institution gets a distinct URL (/analysis/institution?ror=<ror>) via history.pushState, so per-institution pageviews show up separately in analytics.
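A sketch of the URL step (the builder is illustrative; the `pushState` call runs in the browser after an analysis completes):

```typescript
// Distinct per-institution URL, so analytics counts each one separately.
function institutionUrl(ror: string): string {
  return `/analysis/institution?ror=${encodeURIComponent(ror)}`;
}

// In the browser, once results render:
// history.pushState({ ror }, "", institutionUrl(ror));
```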
Live
v0.1.0 — Research Nexus Score
The first public release of Research Nexus Score — an open-source tool for evaluating publisher metadata quality across Crossref members.
Pre-1.0 note: The scoring methodology is still evolving in public. Weights, dimensions, and signals may change as community feedback shapes the model (see open issues tagged `methodology`). 1.0.0 will be cut when the methodology and Publisher API are stable commitments.
Highlights
- Publisher Leaderboard: Rankings for 27,830+ publishers based on Crossref metadata coverage
- Composite Scoring: Single score (0–100) across 5 dimensions — Provenance, People, Organizations, Funding, Access
- Trend Analysis: Current vs backfile era comparisons
- Per-Content-Type Breakdowns: Filter by journal-article, book-chapter, proceedings, and 11 more
- Per-Dimension Leaderboards: Sort by any single dimension; radar chart overlays reveal publisher "shape"
- Actionable Recommendations: Every score comes with improvement paths linked to Crossref documentation
- Gap Fixer: Recover missing metadata from Crossref Participation Reports using OpenAlex, ORCID, ROR, and Reducto
- Journal Nexus: Article-level enrichment and inspection
- MCP Server: `npx @nexus-score/mcp-server` for AI assistant integration
- Core Library: `@nexus-score/core` for programmatic scoring
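To make the composite-score idea concrete, here is a hedged sketch of combining the five dimensions into a single 0-100 score. Equal weights are purely illustrative; the actual weights are part of the evolving methodology noted above:

```typescript
const DIMENSIONS = [
  "provenance",
  "people",
  "organizations",
  "funding",
  "access",
] as const;
type Dimension = (typeof DIMENSIONS)[number];

// Equal-weight mean of per-dimension scores (each 0-100), rounded.
function compositeScore(scores: Record<Dimension, number>): number {
  const sum = DIMENSIONS.reduce((acc, d) => acc + scores[d], 0);
  return Math.round(sum / DIMENSIONS.length);
}
```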
Why content-type and per-dimension views matter
Aggregate scores were misleading. eLife scored 31/F in aggregate but 97/A on journal articles — peer reviews were the diluter. With the content-type filter, eLife jumps from #2,581 to #2. APS scores 81/A on current journal articles but 7/F on proceedings. Same publisher, two completely different pipeline investments.
Two publishers can both score 50 and look nothing alike. Per-dimension sorting and radar profiles make those differences visible.
Metadata quality is driven by the deposit pipeline, not by discipline.
Community-driven
Features in this release were shaped by feedback from researchers, publishers, librarians, and infrastructure folks across LinkedIn and GitHub. Key contributors to the content-type filter (PR #7): Fiona Hutton (eLife), Bianca Kramer, Toby Green (Coherent Digital), Andy Byers (OLH), Colin Adcock (APS), Wendy Patterson (Beilstein), Paula Kennedy (ULP), Renisha Winston (i-manager), Kora Korzec (Crossref). Per-dimension leaderboards (PR #8): Bianca Kramer.
Tech Stack
- Next.js 16, React 19, TypeScript, Turborepo, Tailwind CSS 4
- Crossref REST API, OpenAlex, ORCID, ROR, Unpaywall, Reducto
Links
- Live: https://nexus-score.vercel.app
- Methodology: INSIGHTS.md
- License: AGPL-3.0