From e0dac845cade30da68888c907ec89417e4fa56b3 Mon Sep 17 00:00:00 2001
From: itay mendelawy
Date: Tue, 30 Dec 2025 16:38:52 +0000
Subject: [PATCH 1/2] fix odd readme phrasing

---
 README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 54f102c..c6007f4 100644
--- a/README.md
+++ b/README.md
@@ -28,7 +28,12 @@ Unfortunately none of the researchers suggests a single-metric similar to how Cy
 ```
 Σ(depth²) / lineCount
 ```
-Deeper nesting contributes exponentially more to the score. Default thresholds:
+I chose depth squared weighting because:
+- Mirrors cognitive complexity principles (nested code is harder to understand)
+- Normalized by line count for cross-file comparison
+- Shallow-but-wide code scores low; deep code scores high
+
+Default thresholds:
 
 | Level    | Score     |
 |----------|-----------|
@@ -36,11 +41,6 @@
 | `medium` | 4 - 10    |
 | `high`   | ≥ 10      |
 
-We chose depth squared weighting because:
-- Mirrors cognitive complexity principles (nested code is harder to understand)
-- Normalized by line count for cross-file comparison
-- Shallow-but-wide code scores low; deep code scores high
-
 `verbose: true` returns all research-backed metrics for you to experiment and explore:
 
 | Metric | Description |

From 75080188b8559a58f4101427679abbf23bd30af8 Mon Sep 17 00:00:00 2001
From: itay mendelawy
Date: Tue, 30 Dec 2025 16:40:19 +0000
Subject: [PATCH 2/2] formatting

---
 README.md    | 29 +++++++++++++++--------------
 src/index.ts |  1 -
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/README.md b/README.md
index c6007f4..60a65ac 100644
--- a/README.md
+++ b/README.md
@@ -13,11 +13,11 @@ import { analyzeComplexity, analyzeDiffComplexity } from 'indent-complexity';
 
 // Pass any code snippet, function of complete file to get complexity assessment:
 const codeComplexity = analyzeComplexity(codeSnippet);
-console.log(codeComplexity.score)
+console.log(codeComplexity.score);
 
 // Also supports `git diff` output to evaluate complexity for added lines (by default):
 const diffComplexity = analyzeDiffComplexity(gitDiff);
-console.log(diffComplexity.score)
+console.log(diffComplexity.score);
 ```
 
 ## The Score
@@ -29,27 +29,28 @@ Unfortunately none of the researchers suggests a single-metric similar to how Cy
 Σ(depth²) / lineCount
 ```
 I chose depth squared weighting because:
+
 - Mirrors cognitive complexity principles (nested code is harder to understand)
 - Normalized by line count for cross-file comparison
 - Shallow-but-wide code scores low; deep code scores high
 
 Default thresholds:
 
-| Level    | Score     |
-|----------|-----------|
-| `low`    | < 4       |
-| `medium` | 4 - 10    |
-| `high`   | ≥ 10      |
+| Level    | Score  |
+| -------- | ------ |
+| `low`    | < 4    |
+| `medium` | 4 - 10 |
+| `high`   | ≥ 10   |
 
 `verbose: true` returns all research-backed metrics for you to experiment and explore:
 
-| Metric | Description |
-|--------|-------------|
-| `variance` | Correlates with McCabe cyclomatic complexity |
-| `max` | Deepest nesting level |
-| `mean` | Average depth per line |
-| `lineCount` | Lines analyzed (excluding comments/blanks) |
-| `depthHistogram` | Distribution of depths |
+| Metric           | Description                                  |
+| ---------------- | -------------------------------------------- |
+| `variance`       | Correlates with McCabe cyclomatic complexity |
+| `max`            | Deepest nesting level                        |
+| `mean`           | Average depth per line                       |
+| `lineCount`      | Lines analyzed (excluding comments/blanks)   |
+| `depthHistogram` | Distribution of depths                       |
 
 ## License
 
diff --git a/src/index.ts b/src/index.ts
index 812042f..7baa06c 100644
--- a/src/index.ts
+++ b/src/index.ts
@@ -23,4 +23,3 @@ export type {
   AnalyzeOptions,
   DiffOptions,
 } from './types.js';
-
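The README text being patched above describes the score as Σ(depth²) / lineCount over indentation depth. As a rough, self-contained sketch of that formula only — `indentScore`, its `indentSize` parameter, and the naive comment filter are illustrative inventions, not the package's actual internals:

```typescript
// Hypothetical sketch of a Σ(depth²) / lineCount indentation score.
// depth = indentation level of each non-blank, non-comment line.
function indentScore(code: string, indentSize = 2): number {
  // Keep analyzable lines; a naive `//` check stands in for real comment handling.
  const lines = code
    .split("\n")
    .filter((l) => l.trim() !== "" && !l.trim().startsWith("//"));
  if (lines.length === 0) return 0;
  const sumSquares = lines.reduce((acc, line) => {
    const leading = line.length - line.trimStart().length;
    const depth = Math.floor(leading / indentSize);
    return acc + depth * depth; // squaring penalizes deep nesting
  }, 0);
  return sumSquares / lines.length;
}

// Flat code scores 0; the same line count with nesting scores higher:
const flat = "a\nb\nc";
const deep = "a\n  b\n    c"; // depths 0, 1, 2 → (0 + 1 + 4) / 3
console.log(indentScore(flat));
console.log(indentScore(deep));
```

This illustrates the motivation stated in the patched README: shallow-but-wide code stays low while deep code grows quadratically with nesting.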