How DJD calculates trust
DJD Agent Score produces a 0–100 trust score for wallets on Base and then packages that signal into certification, evaluator, directory, and monitoring surfaces. The model is intentionally inspectable: no hidden manual overrides, no social proof scraping, and no black-box human opinions in the score itself.
Scoring pipeline
When an app calls DJD with a wallet address, the engine runs a five-phase pipeline:

1. Fetch on-chain data — transaction history, USDC transfers, balances, basename, GitHub verification, and Insumer attestations pulled from verifiable sources.
2. Run sybil and gaming detection — behavioral checks identify fake wallet networks, circular funding, wash-trading, and timing anomalies before scoring finishes.
3. Calculate five dimensions — each dimension produces a 0–100 sub-score from explicit on-chain signals.
4. Compose the final score — weights, trajectory, confidence dampening, and integrity penalties turn the dimension set into a single output.
5. Package explainability — confidence, improvement guidance, top contributors, and top detractors travel with the score.
The full pipeline runs against live blockchain state. Scores are cached briefly for performance, and background jobs continuously refresh stale wallets as the network evolves.
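The five phases can be pictured as a simple orchestration function. This is an illustrative sketch only, not DJD's actual implementation: the phase functions, their stubbed return values, and the output shape are all assumptions.

```python
def fetch_onchain_data(wallet):
    # Phase 1 (stubbed): in practice this pulls tx history, USDC transfers,
    # balances, basename, GitHub verification, and attestations.
    return {"wallet": wallet, "txs": 120, "usdc_balance": 500.0}

def detect_sybil_and_gaming(data):
    # Phase 2 (stubbed): behavioral integrity checks run before scoring.
    return {"sybil": False, "gaming": False}

def calculate_dimensions(data):
    # Phase 3 (stubbed): each dimension is a 0-100 sub-score.
    return {"reliability": 80, "viability": 70, "identity": 60,
            "behavior": 75, "capability": 50}

def compose_score(dims, flags):
    # Phase 4 (stubbed): weighted blend; the real engine also applies
    # trajectory, integrity, and confidence layers (see formula below).
    weights = {"reliability": 0.30, "viability": 0.25, "identity": 0.20,
               "behavior": 0.15, "capability": 0.10}
    raw = sum(dims[k] * w for k, w in weights.items())
    return 0.0 if flags["sybil"] else raw

def score_wallet(wallet):
    data = fetch_onchain_data(wallet)       # phase 1
    flags = detect_sybil_and_gaming(data)   # phase 2
    dims = calculate_dimensions(data)       # phase 3
    final = compose_score(dims, flags)      # phase 4
    # Phase 5: package everything into an explainable payload.
    return {"wallet": wallet, "score": final,
            "dimensions": dims, "flags": flags}
```

A caller gets back the score plus the evidence behind it, which is what lets downstream surfaces explain rather than just assert a number.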
The five dimensions
Each dimension maps to a question an operator or payment system would naturally ask before trusting a wallet.
Does this wallet consistently execute transactions? Reliability measures transaction success rate, total volume, nonce alignment, uptime estimation, and recency of activity.
Can this wallet actually pay? Viability looks at ETH and USDC balances, income-to-spend ratio, wallet age, balance trends, and whether the wallet routinely collapses to zero.
Has this wallet established a verifiable identity? Identity checks agent registration, Base Name ownership, GitHub verification with activity signals, and attestations from Insumer.
Does this wallet behave like a legitimate actor or a bot? Behavior scores timing variance, hourly entropy, and suspicious inactivity gaps that often show up in manufactured identities.
Is this wallet actively providing services in the agent economy? Capability tracks x402 service endpoints, revenue earned, counterparty breadth, and service longevity.
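To make the idea of a dimension concrete, here is a hypothetical Reliability sub-score built from just two of the signals named above (success rate and recency). The signal mix, the 70/30 split, and the 30-day decay constant are illustrative assumptions, not DJD's published formula.

```python
import math

def reliability_subscore(successes: int, failures: int,
                         last_tx_age_days: float) -> float:
    """Hypothetical 0-100 Reliability sub-score from transaction
    success rate and recency of activity (assumed weighting)."""
    total = successes + failures
    if total == 0:
        return 0.0  # no evidence -> no score
    success_rate = successes / total                # 0..1
    recency = math.exp(-last_tx_age_days / 30.0)    # decays over ~a month
    return 100.0 * (0.7 * success_rate + 0.3 * recency)
```

The real engine blends more signals per dimension, but the shape is the same: explicit inputs in, a bounded 0–100 number out.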
Composite score formula
The final score is not just a weighted average. Three additional layers keep the output aligned with real-world trust decisions.
```
raw = Reliability*0.30 + Viability*0.25 + Identity*0.20
    + Behavior*0.15 + Capability*0.10

adjusted = raw + trajectoryModifier
final = adjusted * integrityMultiplier
output = clamp(0, 100, final)
```

Trajectory modifier adds or subtracts up to five points based on sustained improvement or decline over time.
Integrity multiplier compounds penalties from sybil indicators, gaming flags, and fraud pressure instead of letting one clean-looking dimension mask deeper issues.
Confidence dampening keeps mature wallets stable and lets new wallets move more as fresh evidence arrives.
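A literal reading of the composite formula looks like this. The weights, the ±5 trajectory bound, and the clamp come from the text above; the function signature and the idea of pre-clamping the modifier are assumptions for the sketch.

```python
# Dimension weights as published in the formula above.
WEIGHTS = {"reliability": 0.30, "viability": 0.25, "identity": 0.20,
           "behavior": 0.15, "capability": 0.10}

def compose(dims: dict, trajectory_modifier: float,
            integrity_multiplier: float) -> float:
    """dims: five 0-100 sub-scores keyed like WEIGHTS.
    trajectory_modifier: sustained trend signal, bounded to -5..+5.
    integrity_multiplier: 1.0 for clean wallets, dampened toward 0
    when sybil/gaming/fraud flags compound."""
    raw = sum(dims[k] * w for k, w in WEIGHTS.items())
    adjusted = raw + max(-5.0, min(5.0, trajectory_modifier))
    final = adjusted * integrity_multiplier
    return max(0.0, min(100.0, final))  # clamp(0, 100, final)
```

Note how the multiplier acts after the trajectory bonus: a flagged wallet cannot buy its way back up with a good trend, which matches the intent of the integrity layer.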
Tier model
+| Tier | Score range | Meaning |
|---|---|---|
| Elite | 90 – 100 | Exceptional track record across the full trust surface. |
| Trusted | 75 – 89 | Reliable actor with verified identity and consistent operating history. |
| Established | 50 – 74 | Active wallet with reasonable history but some dimensions still developing. |
| Emerging | 25 – 49 | Limited history or mixed signals; useful, but not yet high-trust. |
| Unverified | 0 – 24 | Insufficient evidence or significant red flags. |
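The tier boundaries in the table translate directly into a threshold lookup; a minimal sketch (function name is illustrative):

```python
def tier_for(score: float) -> str:
    """Map a 0-100 composite score to its tier, per the table above."""
    if score >= 90:
        return "Elite"
    if score >= 75:
        return "Trusted"
    if score >= 50:
        return "Established"
    if score >= 25:
        return "Emerging"
    return "Unverified"
```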
Sybil and gaming defense
A high score is meaningless if wallets can fake their way into it, so the engine defends the model before the score ships.
Sybil detection
DJD identifies suspicious wallet networks using the interaction graph stored in SQLite. It looks for circular funding patterns, shared funding sources, tightly synchronized timing, low-diversity counterparties, and other topology signals that show up in manufactured identity farms.
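One of those topology signals, circular funding, reduces to cycle detection on the who-funded-whom graph. This sketch uses a standard depth-first search with coloring; the edge-list representation is an assumption about how the interaction graph might be queried, not DJD's actual schema.

```python
def has_funding_cycle(edges: list[tuple[str, str]]) -> bool:
    """Detect circular funding: a directed cycle in the funding graph.
    edges: (funder, recipient) pairs from the interaction graph."""
    graph: dict[str, list[str]] = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color: dict[str, int] = {}

    def dfs(node: str) -> bool:
        color[node] = GRAY
        for nxt in graph.get(node, []):
            state = color.get(nxt, WHITE)
            if state == GRAY:            # back edge -> cycle found
                return True
            if state == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in graph)
```

In practice a production check would also weight edge amounts and timing, since a tiny incidental loop is not the same signal as money circulating through a fresh wallet cluster.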
Gaming detection
The engine also catches wallets inflating their stats through temporary balance window dressing, wash-trading, or query-sensitive behavior. Gaming penalties are applied directly to the composite score and also feed the integrity multiplier.
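A simple wash-trading signal is the share of a wallet's outgoing volume that flows back to itself or into its own suspected cluster. The transfer-record shape and threshold usage here are hypothetical, but the ratio itself is a common heuristic.

```python
def wash_trade_share(transfers: list[dict], wallet: str,
                     cluster: set[str]) -> float:
    """Fraction of this wallet's outgoing volume sent to itself or to
    wallets in its suspected cluster (hypothetical record shape:
    {"from": ..., "to": ..., "amount": ...})."""
    outgoing = [t for t in transfers if t["from"] == wallet]
    total = sum(t["amount"] for t in outgoing)
    if total == 0:
        return 0.0
    suspect = sum(t["amount"] for t in outgoing
                  if t["to"] == wallet or t["to"] in cluster)
    return suspect / total
```

A high share on its own is not proof of gaming, which is why such signals feed a compounding integrity multiplier rather than a single hard cutoff.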
Data sources
Every signal comes from verifiable on-chain or explicitly linked identity data on Base:
- Base RPC data — transaction history, nonces, balances, and contract interactions.
- USDC transfer events — indexed from live event logs.
- Base Name Service — for name ownership.
- GitHub API — repository verification and activity for registered agents.
- Insumer Model — multi-chain attestations for linked identity context.
- Internal indexer — a continuous Base block indexer that builds the local relationship graph and feature store.
No social clout metrics, manual score overrides, or pay-to-improve backdoors influence the score. If it is not on-chain or verifiably linked to the wallet, it does not count.
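For context on the "Base RPC data" source: Base is an EVM chain, so raw reads like a balance check are standard Ethereum JSON-RPC calls. The sketch below builds and parses an `eth_getBalance` exchange without making a network call; the function names are illustrative, and the RPC endpoint URL is assumed to come from your own configuration.

```python
import json

def balance_request(address: str, request_id: int = 1) -> str:
    """Build an eth_getBalance JSON-RPC 2.0 request body for an EVM
    chain like Base. POST this to your RPC endpoint (URL not shown)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "eth_getBalance",
        "params": [address, "latest"],
    })

def parse_balance(response_body: str) -> int:
    """Decode the hex-encoded wei balance from a JSON-RPC response."""
    return int(json.loads(response_body)["result"], 16)
```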
Outcome tracking
Scores only matter if they predict real behavior. DJD tracks post-score outcomes such as payment follow-through, fraud pressure, and subsequent on-chain activity to understand whether the model is becoming more or less predictive over time.
That outcome data feeds the recalibration system, which adjusts weights and tier thresholds as the network matures.
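A minimal way to measure whether scores predict outcomes is the correlation between issued scores and later observed behavior. This sketch uses plain Pearson correlation as the health metric; treating outcomes as binary (1 = followed through, 0 = did not) and using this particular statistic are assumptions, not DJD's stated recalibration method.

```python
def score_outcome_correlation(scores: list[float],
                              outcomes: list[int]) -> float:
    """Pearson correlation between issued scores and observed outcomes
    (1 = good, e.g. payment followed through; 0 = bad). A drifting
    correlation is one trigger for recalibrating weights/thresholds."""
    n = len(scores)
    mean_s = sum(scores) / n
    mean_o = sum(outcomes) / n
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(scores, outcomes))
    var_s = sum((s - mean_s) ** 2 for s in scores)
    var_o = sum((o - mean_o) ** 2 for o in outcomes)
    if var_s == 0 or var_o == 0:
        return 0.0  # degenerate data: no predictive signal measurable
    return cov / (var_s * var_o) ** 0.5
```

A value near +1 means high scores tracked good outcomes; a value drifting toward 0 or negative says the weights no longer reflect reality and recalibration is due.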
What DJD does not do

- No manual score overrides.
- No pay-to-boost mechanics. Certification can package and verify trust, but it does not purchase a better score.
- No hidden off-chain reputation sources such as social followings or vague community claims.
- No “secret sauce” positioning that prevents buyers from understanding what the model is measuring.
See the model in the product
Use the free lookup to score a real wallet, then follow that same wallet into Certify, evaluator preview, and directory surfaces.

Model version 2.5.0. This methodology is a living document and will change as the scoring engine evolves. Questions or feedback? Email ${SUPPORT_EMAIL}.