Gartner metrics #39
Description
Discussed in #33
Originally posted by silrenan May 29, 2025
Hails to thee,
This document.pdf, authored by the analysts at Gartner, has intersected with my spatiotemporal cognition—its contents resonating like a signal through the noise of operational entropy. To test the parameters it defines, I submit to thee the following principle:
Axiom: Code quality is the minimal sustained cost required to ensure a system remains fully compliant with its specifications—no more, no less.
This is not a preference. It is a necessity. In the architecture of enduring systems, quality is not ornamental; it is structural. And with that in mind, I present thee with:
Axiom-Driven Metrics for Engineering Suplex Clarity
“Measure what matters. Automate what endures.”
- Flow
- Quality
- Team
- Value
Flow
- Mean Time to Dev Feedback
  Definition: Time from code commit to first automated feedback (e.g., a CI result).
  Extraction: In GitHub Actions, measure the time between a commit and the first CI status on its PR. Use the GitHub API to list PRs, then extract the commit timestamp and the first CI check timestamp.
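As a sketch of that extraction, the timestamp math can live in a pure function fed with JSON from the GitHub API. The endpoints named in the docstring are real, but the exact field paths should be double-checked against the API response:

```python
from datetime import datetime

def _iso(ts: str) -> datetime:
    # GitHub API timestamps look like "2025-05-29T12:00:00Z"
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def feedback_seconds(commit_ts: str, first_check_ts: str) -> float:
    """Seconds from a commit to its first CI feedback.

    commit_ts: `commit.committer.date` from GET /repos/{owner}/{repo}/commits/{sha}
    first_check_ts: earliest `started_at` in
        GET /repos/{owner}/{repo}/commits/{sha}/check-runs
    """
    return (_iso(first_check_ts) - _iso(commit_ts)).total_seconds()
```

For example, `feedback_seconds("2025-05-29T12:00:00Z", "2025-05-29T12:03:30Z")` gives `210.0` (3.5 minutes).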
- Change Lead Time
  Definition: Time from code commit to deployment in production.
  Extraction: In GitHub Actions, track from commit to the merge/deploy event. Combine git log with the GitHub API for PR merge and deployment timestamps.
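A minimal sketch of the aggregation step, assuming the commit/deploy timestamp pairs have already been collected (e.g. from `git log --format=%cI` plus the `created_at` field of GET /repos/{owner}/{repo}/deployments):

```python
from datetime import datetime
from statistics import median

def median_lead_time_hours(changes) -> float:
    """Median hours from commit to production deploy.

    changes: iterable of (commit_ts, deploy_ts) ISO-8601 string pairs,
    one pair per shipped change.
    """
    def iso(ts: str) -> datetime:
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return median((iso(d) - iso(c)).total_seconds() / 3600 for c, d in changes)
```

The median is usually preferred over the mean here, so one stuck deployment does not dominate the number.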
- Test Cycle Time
  Definition: Time taken to run all tests for a change.
  Extraction: Duration of test jobs in GitHub Actions. Parse GitHub Actions run times for the test jobs.
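For the parsing, GET /repos/{owner}/{repo}/actions/runs returns `run_started_at` and `updated_at` per run; a small helper (a sketch — filtering down to test-only workflows, e.g. by the run's `name`, is left out) turns that into minutes:

```python
from datetime import datetime

def run_duration_minutes(run: dict) -> float:
    """Duration of one GitHub Actions workflow run, from its API JSON object."""
    def iso(ts: str) -> datetime:
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return (iso(run["updated_at"]) - iso(run["run_started_at"])).total_seconds() / 60
```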
Quality
- Code Quality
  Definition: Maintainability and complexity of code, as measured by static analysis.
  Extraction: SonarQube: fetch metrics such as cyclomatic/cognitive complexity, code smells, and duplications via the SonarQube API. Alternatively, run Radon or another analysis tool in GitHub Actions.
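SonarQube exposes these numbers through its web API (`/api/measures/component`, taking a `component` project key and a `metricKeys` list). A sketch of flattening that response — the host and project key in the docstring are placeholders:

```python
def parse_sonar_measures(payload: dict) -> dict:
    """Flatten a SonarQube /api/measures/component response into {metric: value}.

    The payload would come from, e.g. (hypothetical host and key):
      GET https://sonar.example.com/api/measures/component
          ?component=my-project
          &metricKeys=complexity,cognitive_complexity,code_smells
    """
    return {m["metric"]: float(m["value"]) for m in payload["component"]["measures"]}
```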
- Performance
  Definition: How fast the software runs under expected load.
  Extraction: Run simple benchmarks or performance tests locally or in CI. Use open-source tools (e.g., JMH for Java, pytest-benchmark for Python) and store the results in the repo or as CI artifacts.
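Where pytest-benchmark is not available, even the standard library gives a crude but repeatable number. A sketch, with `sorted` on a small list standing in for the real workload:

```python
import timeit

def best_of(func, *args, repeats: int = 5, number: int = 100) -> float:
    """Best-of-N wall time (seconds) for `number` calls of func(*args).

    Taking the min of the repeats is the usual choice: it is the least
    noisy estimate of the achievable run time on the current machine.
    """
    return min(timeit.repeat(lambda: func(*args), repeat=repeats, number=number))

# Example: time sorting a small list (stand-in for the code under test)
elapsed = best_of(sorted, list(range(500)))
```

Results are only comparable across CI runs if the runner hardware is stable, so treat trends rather than single numbers as the signal.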
- Security
  Definition: Number and severity of known vulnerabilities in the code and its dependencies.
  Extraction: Use GitHub Dependabot alerts (free) via the repo's Security tab, or the GitHub API to fetch open security advisories. Use Qodana if it is available to the team.
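The Dependabot data is queryable as JSON (GET /repos/{owner}/{repo}/dependabot/alerts?state=open, with a suitably scoped token); grouping by severity is then a one-liner:

```python
from collections import Counter

def severity_counts(alerts) -> Counter:
    """Open Dependabot alerts grouped by severity.

    alerts: the JSON list from GET /repos/{owner}/{repo}/dependabot/alerts,
    where each item carries `security_advisory.severity`
    (low / medium / high / critical).
    """
    return Counter(a["security_advisory"]["severity"] for a in alerts)
```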
Team
- Team Health
  Definition: How happy and productive the team feels.
  Extraction: Use anonymous surveys (Google Forms, etc.); there is no code-only way.
- Software Quality Skills
  Definition: Distribution of testing and quality skills across the team.
  Extraction: Use self-assessment surveys or a skills matrix; this is a manual process.
Value
- Topline/Bottom-line Contributor
  Definition: How much the software contributes to revenue or cost savings.
  Extraction: Requires business data that is not available via code or tooling; needs business input. We can mock it here.
A few main reflections:
- All extraction methods are junior-friendly, require no extra teams, and use only Git, GitHub, or SonarQube/Qodana.
- For “flow” and “quality,” we can automate most metrics with scripts using public APIs.
- For “team” and “value” metrics, automation is not possible without external input; surveys or business data are needed, which can be mocked on the fly.
- The next step (for another discussion) would be application metrics with Prometheus.
@pedr0limpio and @Cintyabio, what dost thou think?