
Conversation

@acgetchell
Owner

- Introduce canonical `EdgeKey` and read-only topology traversal helpers on `Triangulation`
- Add opt-in `AdjacencyIndex` builder for faster repeated adjacency queries
- Add integration tests for topology traversal and adjacency index invariants
- Refresh repo tooling/CI configs and supporting scripts/tests
@coderabbitai

coderabbitai bot commented Jan 2, 2026

Walkthrough

Adds a public topology traversal API (EdgeKey, AdjacencyIndex, traversal/accessor methods, build_adjacency_index), renames triangulation()/triangulation_mut() → as_triangulation()/as_triangulation_mut(), integrates new validation invariants, expands tests/examples/benches, and applies broad tooling/config updates and scripting refactors.

Changes

- **Core topology types** (`src/core/edge.rs`, `src/core/adjacency.rs`): Add `EdgeKey`, `AdjacencyIndex`, `AdjacencyIndexBuildError`, read-only query methods, docs, and unit tests.
- **Triangulation API & forwarding** (`src/core/triangulation.rs`, `src/core/delaunay_triangulation.rs`, `src/core/triangulation_data_structure.rs`): Add traversal/accessor methods (edges, incident/adjacent queries, `cell_vertices`, `vertex_coords`, counts) and `build_adjacency_index()`, rename `triangulation*()` to `as_triangulation*()`, and add the `CellVertexKeys` and `VertexIncidence` invariants to validation.
- **Crate exports & prelude** (`src/lib.rs`): Re-export the new modules and expose the `prelude::query` read-only namespace, including `EdgeKey` and `AdjacencyIndex`.
- **Topology helpers & validation** (`src/topology/characteristics/*.rs`, `src/topology/characteristics/validation.rs`): Reuse the facet-to-cells map; add `count_simplices_with_facet_to_cells_map` and `validate_triangulation_euler_with_facet_to_cells_map` to avoid recomputation and centralize Euler checks.
- **Examples / benches / tests** (`examples/*`, `benches/*`, `tests/*`, `tests/public_topology_api.rs`): Update imports to `delaunay::prelude::query::*`, switch call sites to `as_triangulation()`, and add integration tests exercising the adjacency/index APIs and validating round-trips.
- **Geometry & utilities** (`src/geometry/quality.rs`, `src/core/util.rs`): Update call sites to use `as_triangulation()` and expose zero-allocation accessors.
- **Scripts & tooling** (`scripts/changelog_utils.py`, `scripts/benchmark_utils.py`, `scripts/hardware_utils.py`, `scripts/enhance_commits.py`, `scripts/tests/*`): Major refactors: modular changelog post-processing, new exceptions and public helpers, TYPE_CHECKING-aware imports, hardened parsing/typing, updated test fixtures, and new changelog/hardware tests.
- **CI / workflows / justfile** (`.github/workflows/*.yml`, `justfile`): Standardize quoting/formatting, pin tool versions, add install/verification steps (actionlint, taplo, shfmt), and expand justfile targets (fix/check plus TOML/YAML/Markdown tooling).
- **Lint / format / config** (`.coderabbit.yml`, `.codacy.yml`, `.taplo.toml`, `.yamllint`, `ty.toml`, `cspell.json`, `pyproject.toml`, `rust-toolchain.toml`): Add richer review config and path instructions, TOML formatting defaults, a YAML comment-spacing rule, and spelling additions; set the rust-toolchain `profile = "default"`.
- **Docs & contributor guidance** (`CONTRIBUTING.md`, `README.md`, `WARP.md`, `docs/*`, `tests/README.md`): Update developer commands (fix/check/ci-slow), CI guidance, examples, and docs to reflect the API renames and workflow changes.
- **Manifests & repo metadata** (`.gitignore`, `Cargo.toml`, workflow YAMLs, other manifests): Remove the `uv.lock` ignore (now tracked), add explicit bench harness flags in `Cargo.toml`, and apply many cosmetic whitespace/quoting normalizations across manifests.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant DelaunayTriangulation
    participant Triangulation
    participant AdjacencyBuilder
    participant AdjacencyIndex

    User->>DelaunayTriangulation: build_adjacency_index()
    activate DelaunayTriangulation
    DelaunayTriangulation->>Triangulation: as_triangulation().build_adjacency_index()
    activate Triangulation

    Triangulation->>AdjacencyBuilder: iterate cells & vertices
    activate AdjacencyBuilder
    AdjacencyBuilder->>AdjacencyBuilder: collect edges, vertex_to_cells, cell_to_neighbors
    AdjacencyBuilder-->>Triangulation: Result<AdjacencyIndex, AdjacencyIndexBuildError>
    deactivate AdjacencyBuilder

    alt Success
        Triangulation-->>DelaunayTriangulation: Ok(AdjacencyIndex)
        DelaunayTriangulation-->>User: Ok(AdjacencyIndex)
    else Failure
        Triangulation-->>DelaunayTriangulation: Err(MissingVertexKey | MissingNeighborCell)
        DelaunayTriangulation-->>User: Err(AdjacencyIndexBuildError)
    end
    deactivate Triangulation
    deactivate DelaunayTriangulation

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I hopped through edges and mapped each line,
Canonical endpoints snug, neighbors align.
I built an index tidy, immutable and neat,
Tests blinked green, CI hummed — a developer treat.
— a rabbit, delighted with topology complete

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
- **Title check**: ✅ Passed. The title 'feat: add public topology traversal API' clearly and specifically describes the main change: introducing a public topology traversal API. This is directly supported by the comprehensive changes across the codebase.
- **Description check**: ✅ Passed. The description directly relates to the changeset by outlining the four key additions: EdgeKey, topology traversal helpers, AdjacencyIndex builder, and integration tests, with mention of the repo tooling refresh.
- **Docstring coverage**: ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.
✨ Finishing touches
  • 📝 Generate docstrings

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot added documentation Improvements or additions to documentation geometry Geometry-related issues rust Pull requests that update rust code labels Jan 2, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
scripts/changelog_utils.py (1)

1218-1273: Broad except Exception in changelog generation conflicts with Ruff BLE001

In _execute_changelog_generation, the final except Exception: branch is intentionally used to guarantee backup restoration and a clean non‑zero exit on any unexpected failure, which is reasonable for a CLI tool. However, Ruff (BLE001) will flag this as a blind catch.

Two options to fix this without changing behavior:

  • Narrow the exception tuple to the concrete types you expect (e.g., OSError, subprocess.CalledProcessError, json.JSONDecodeError, etc.), or
  • Keep the broad catch but explicitly mark it as intentional for Ruff, for example:
-    except Exception:
+    except Exception:  # noqa: BLE001  # restore backup and exit on any unexpected error

Given this is the top-level orchestration point and you’re already handling KeyboardInterrupt explicitly, the second option is likely sufficient and keeps the robustness you want while satisfying the linter.

Also applies to: 1257-1260

🧹 Nitpick comments (5)
src/core/triangulation.rs (3)

733-942: Topology traversal API looks correct and consistent with TDS invariants

The new read‑only methods (edges, number_of_edges, adjacent_cells, cell_neighbors, incident_edges, number_of_incident_edges, cell_vertices, vertex_coords) are all consistent with the underlying Tds behavior:

  • EdgeKey::new plus FastHashSet ensures global edge deduplication, and the vertex‑incidence paths use the same canonicalization, so counts and sets will agree.
  • All query methods gracefully handle missing keys by returning empty iterators / None, which matches their documentation and the new tests.
  • Allocation behavior (per‑call FastHashSet for edge queries) is explicit in the docs and appropriately pushed toward build_adjacency_index for repeated queries.

No correctness issues stand out here. The implementation matches the intended semantics; any further micro‑optimizations (e.g., pre‑sizing collect_incident_edges sets) would be purely optional.

Also applies to: 965-999, 1123-1161
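
For readers skimming the review, the deduplication pattern described above reduces to canonicalizing each vertex pair before inserting it into a set. Below is a minimal standalone sketch of that idea using plain `u64` ids and `std` collections rather than the crate's `VertexKey`, `EdgeKey::new`, and `FastHashSet`, so the names and shapes here are illustrative assumptions, not the actual API:

```rust
use std::collections::HashSet;

/// Canonicalize an undirected edge so (a, b) and (b, a) map to the same key.
fn edge_key(a: u64, b: u64) -> (u64, u64) {
    if a <= b { (a, b) } else { (b, a) }
}

/// Count unique undirected edges over all cells; each cell lists its vertex ids.
fn number_of_edges(cells: &[Vec<u64>]) -> usize {
    let mut seen: HashSet<(u64, u64)> = HashSet::new();
    for cell in cells {
        for i in 0..cell.len() {
            for j in (i + 1)..cell.len() {
                seen.insert(edge_key(cell[i], cell[j]));
            }
        }
    }
    seen.len()
}

fn main() {
    // Two tetrahedra sharing the facet {1, 2, 3}: 6 + 3 = 9 unique edges.
    let cells = vec![vec![0, 1, 2, 3], vec![1, 2, 3, 4]];
    assert_eq!(number_of_edges(&cells), 9);
}
```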


1001-1121: Adjacency index builder is robust; clarify expectations around isolated vertices

build_adjacency_index correctly:

  • Pre‑sizes maps/sets from number_of_vertices / number_of_cells.
  • Detects structural corruption (cells referencing missing vertex or neighbor keys) and returns AdjacencyIndexBuildError early.
  • Deduplicates edges globally via seen_edges and then populates vertex_to_edges symmetrically for both endpoints.

One subtle point to double‑check: vertices are only inserted into vertex_to_cells / vertex_to_edges when they appear in at least one cell. If a Tds ever contains “isolated” vertices (present in vertices() but not in any cell), they will not appear in these maps, while callers might iterate tri.vertices() and then index into vertex_to_cells / vertex_to_edges. The current insertion/validation code tries hard to prevent such vertices, so this is probably an invariant, but it would be good to:

  • Either explicitly document that adjacency indices assume a triangulation with no isolated vertices (and treat any such state as invalid elsewhere), or
  • Treat seeing such a vertex as an AdjacencyIndexBuildError and fail fast.

If you confirm the invariant is guaranteed post‑assign_incident_cells, the current implementation is fine; otherwise, adding a brief check or a doc note would avoid surprising panics for callers.
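
To make the second option concrete, a fail-fast check during index construction could look like the following standalone sketch (plain `u64` ids and `std` maps; the `IsolatedVertex` variant is hypothetical and is not part of the crate's `AdjacencyIndexBuildError`):

```rust
use std::collections::HashMap;

#[derive(Debug)]
enum BuildError {
    // Hypothetical variant for illustration only.
    IsolatedVertex { vertex: u64 },
}

/// Build vertex -> cells, failing fast if any vertex is referenced by no cell.
fn build_vertex_to_cells(
    vertices: &[u64],
    cells: &[(u64, Vec<u64>)], // (cell id, vertex ids)
) -> Result<HashMap<u64, Vec<u64>>, BuildError> {
    let mut vertex_to_cells: HashMap<u64, Vec<u64>> = HashMap::new();
    for (cell_id, cell_vertices) in cells {
        for &v in cell_vertices {
            vertex_to_cells.entry(v).or_default().push(*cell_id);
        }
    }
    // Fail fast instead of silently omitting isolated vertices from the map.
    for &v in vertices {
        if !vertex_to_cells.contains_key(&v) {
            return Err(BuildError::IsolatedVertex { vertex: v });
        }
    }
    Ok(vertex_to_cells)
}

fn main() {
    let vertices = [0, 1, 2, 3, 9]; // vertex 9 appears in no cell
    let cells = [(100, vec![0, 1, 2, 3])];
    assert!(build_vertex_to_cells(&vertices, &cells).is_err());
}
```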


4173-4341: Topology/adjacency tests provide good coverage

The new tests around edges, incident_edges, missing keys, geometry accessors, and build_adjacency_index exercise:

  • Basic edge counting in 2D/3D (triangle, double tetrahedron) including vertex degree checks.
  • Behavior for “null” VertexKey / CellKey (empty iterators / None).
  • Coherence between tri.vertices() and the various maps inside AdjacencyIndex.

They align well with the API contracts and should catch most regressions around traversal and indexing. The only tiny nit (optional) is that several tests match vertices via exact f64 equality on coordinates; given these coordinates originate from the same literals in construction, this is safe, but a tolerance‑based compare could be more future‑proof if predicates or kernels ever change representation.
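
If that nit is ever picked up, a tolerance-based comparison is a one-liner; here is a standalone sketch (the epsilon value and helper name are arbitrary choices, not part of the crate):

```rust
/// Compare two coordinate slices within an absolute tolerance.
fn coords_approx_eq(a: &[f64], b: &[f64], eps: f64) -> bool {
    a.len() == b.len() && a.iter().zip(b).all(|(x, y)| (x - y).abs() <= eps)
}

fn main() {
    assert!(coords_approx_eq(&[1.0, 0.0, 0.0], &[1.0 + 1e-12, 0.0, 0.0], 1e-9));
    assert!(!coords_approx_eq(&[1.0, 0.0, 0.0], &[1.1, 0.0, 0.0], 1e-9));
}
```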

scripts/changelog_utils.py (2)

41-43: Tag-size handling and markdown wrapping helpers behave as intended

  • _GITHUB_TAG_ANNOTATION_LIMIT and its use in _get_changelog_content correctly guard against overly large tag annotations and fall back to a short message linking to CHANGELOG.md with a stable repo URL fallback when origin is unavailable.
  • get_markdown_line_limit sensibly reads MD013.line_length from .markdownlint.json with a robust default (160) and safe failure behavior, and wrap_markdown_line respects the configured limit including indentation.
  • _protect_cron_expressions and wrap_bare_urls correctly avoid mangling quoted crons and existing markdown/code constructs while still shielding asterisks and bare URLs from markdownlint complaints.

Overall, the helpers are defensive and align well with the markdownlint configuration.

Also applies to: 345-401, 624-640, 718-748, 1419-1447


1409-1555: Release-notes post-processor is well-factored; consider a small consistency tweak

The new _ReleaseNotesPostProcessor pipeline and its helpers look thoughtfully structured:

  • _BreakingChangeDetector conservatively promotes clearly breaking items from the “### Changed” section into a dedicated “⚠️ Breaking Changes” section and trims them down to headline + commit link, which should greatly help readers.
  • _EntryConsolidator’s grouping by commit and thresholding (_CONSOLIDATE_ADDED_MIN_ENTRIES) for the “### Added” section is a reasonable heuristic to collapse noisy groups into a parent entry with categorized sub‑bullets.
  • _CommitLinkFormatter keeps commit links on top‑level bullets only, inlining short SHAs where they fit under the markdown limit and dropping stray link‑only lines.
  • _WordingNormalizer is scoped to a tiny, explicit replacements list, minimizing risk of unintended rewrites.

One minor consistency point to verify: _BreakingChangeDetector.extract_breaking_changes currently keys only on the exact header text "### Changed", while elsewhere you treat both "Changes" and "Changed" as the “changes” section (e.g., ChangelogProcessor._handle_section_headers). If older changelog blocks still use "### Changes", you may want to extend _BreakingChangeDetector and _rn_remove_empty_section to recognize that spelling as well so breaking-change promotion works uniformly.

Also applies to: 1573-1671, 1688-1856, 1944-2008, 2026-2031

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 66df9a6 and 5750cc6.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (41)
  • .codacy.yml
  • .coderabbit.yml
  • .github/workflows/audit.yml
  • .github/workflows/benchmarks.yml
  • .github/workflows/ci.yml
  • .github/workflows/codacy.yml
  • .github/workflows/codecov.yml
  • .github/workflows/generate-baseline.yml
  • .github/workflows/profiling-benchmarks.yml
  • .github/workflows/rust-clippy.yml
  • .gitignore
  • .gitleaks.toml
  • .semgrep.yaml
  • .taplo.toml
  • .yamllint
  • CONTRIBUTING.md
  • Cargo.toml
  • README.md
  • WARP.md
  • cspell.json
  • docs/archive/testing.md
  • docs/code_organization.md
  • justfile
  • pyproject.toml
  • rust-toolchain.toml
  • scripts/benchmark_utils.py
  • scripts/changelog_utils.py
  • scripts/enhance_commits.py
  • scripts/hardware_utils.py
  • scripts/tests/test_changelog_generate_workflow.py
  • scripts/tests/test_changelog_utils.py
  • scripts/tests/test_enhance_commits.py
  • scripts/tests/test_hardware_utils.py
  • scripts/tests/test_release_notes_postprocessor.py
  • src/core/adjacency.rs
  • src/core/edge.rs
  • src/core/triangulation.rs
  • src/lib.rs
  • tests/README.md
  • tests/public_topology_api.rs
  • ty.toml
💤 Files with no reviewable changes (1)
  • .gitignore
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py

⚙️ CodeRabbit configuration file

scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).

Files:

  • scripts/enhance_commits.py
  • scripts/hardware_utils.py
  • scripts/benchmark_utils.py
  • scripts/changelog_utils.py
🧠 Learnings (11)
📚 Learning: 2025-11-21T22:40:08.556Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 124
File: justfile:111-112
Timestamp: 2025-11-21T22:40:08.556Z
Learning: In the delaunay repository justfile, the clippy recipe runs cargo clippy twice: once without --all-features to check default code paths, and once with --all-features to check feature-gated code. This is intentional and necessary because the repository has feature-gated code behind count-allocations, dense-slotmap, bench, and test-debug features.

Applied to files:

  • docs/archive/testing.md
  • .github/workflows/rust-clippy.yml
  • Cargo.toml
  • justfile
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • src/core/adjacency.rs
  • src/lib.rs
  • Cargo.toml
  • README.md
  • docs/code_organization.md
📚 Learning: 2025-09-02T20:32:05.985Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 60
File: cspell.json:103-103
Timestamp: 2025-09-02T20:32:05.985Z
Learning: In cspell.json for the delaunay project, the word "itional" is intentionally added to the dictionary because it comes from a regex pattern, not a typo.

Applied to files:

  • cspell.json
📚 Learning: 2025-12-17T07:10:54.838Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 156
File: cspell.json:325-325
Timestamp: 2025-12-17T07:10:54.838Z
Learning: In cspell.json for this project, include both 'retriable' and 'retryable' in the customWords (or equivalent custom dictionary) so both spellings are recognized. Do not remove either; maintain both variants to ensure consistency across code, tests, and documentation.

Applied to files:

  • cspell.json
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/lib.rs
  • Cargo.toml
  • tests/public_topology_api.rs
  • README.md
  • src/core/triangulation.rs
  • docs/code_organization.md
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.

Applied to files:

  • src/lib.rs
  • Cargo.toml
  • README.md
  • docs/code_organization.md
📚 Learning: 2025-08-28T14:12:46.198Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CITATION.cff:5-7
Timestamp: 2025-08-28T14:12:46.198Z
Learning: The DOI 10.5281/zenodo.16931097 in the CITATION.cff file belongs to the Rust delaunay triangulation library by acgetchell, not to any Julia DelaunayTriangulation.jl package.

Applied to files:

  • Cargo.toml
  • README.md
📚 Learning: 2025-08-30T00:51:50.849Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:481-512
Timestamp: 2025-08-30T00:51:50.849Z
Learning: The DOI 10.5281/zenodo.16931097 in the delaunay project documentation is valid and correctly resolves to acgetchell's Rust delaunay triangulation library on Zenodo.

Applied to files:

  • Cargo.toml
  • README.md
📚 Learning: 2025-08-28T03:49:30.582Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 54
File: scripts/generate_changelog.sh:416-435
Timestamp: 2025-08-28T03:49:30.582Z
Learning: The generate_changelog.sh script processes template headers from auto-changelog (### Changes, ### Fixed Issues) and transforms them into Keep a Changelog format sections (### Added, ### Changed, ### Fixed, etc.). When analyzing changelog generation scripts, check both the template and the final output to understand the transformation pipeline.

Applied to files:

  • scripts/tests/test_changelog_generate_workflow.py
  • scripts/changelog_utils.py
📚 Learning: 2025-09-02T03:03:59.550Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 58
File: scripts/README.md:9-14
Timestamp: 2025-09-02T03:03:59.550Z
Learning: The delaunay project requires Python 3.13+ for performance reasons, as specified by the project maintainer acgetchell, even though the code could technically run on earlier Python versions.

Applied to files:

  • pyproject.toml
📚 Learning: 2025-08-28T03:54:34.371Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 54
File: scripts/generate_changelog.sh:417-438
Timestamp: 2025-08-28T03:54:34.371Z
Learning: The generate_changelog.sh script uses a deliberate design pattern where the auto-changelog template uses simple generic headers (### Changes, ### Fixed Issues) and the enhancer function transforms these into Keep a Changelog format sections (### Added, ### Changed, ### Fixed, etc.). This separation keeps the template simple while ensuring standardized output format.

Applied to files:

  • scripts/changelog_utils.py
🧬 Code graph analysis (9)
src/core/adjacency.rs (1)
src/core/triangulation.rs (2)
  • vertices (588-590)
  • cells (561-563)
scripts/tests/test_hardware_utils.py (1)
scripts/hardware_utils.py (2)
  • HardwareComparator (431-631)
  • compare_hardware (484-580)
tests/public_topology_api.rs (2)
src/core/triangulation.rs (10)
  • vertices (588-590)
  • number_of_vertices (609-611)
  • number_of_cells (630-632)
  • edges (767-769)
  • number_of_incident_edges (939-941)
  • cell_neighbors (873-879)
  • vertex_coords (995-999)
  • cell_vertices (965-967)
  • adjacent_cells (833-835)
  • incident_edges (912-914)
src/core/edge.rs (3)
  • new (55-66)
  • v0 (94-96)
  • v1 (124-126)
scripts/enhance_commits.py (1)
scripts/changelog_utils.py (1)
  • ChangelogUtils (61-1093)
scripts/tests/test_changelog_generate_workflow.py (1)
scripts/changelog_utils.py (5)
  • ChangelogError (45-46)
  • _cleanup_final_output (1381-1398)
  • _cleanup_temp_files (2033-2044)
  • _post_process_dates (1349-1357)
  • _execute_changelog_generation (1216-1260)
scripts/tests/test_release_notes_postprocessor.py (1)
scripts/changelog_utils.py (2)
  • _ReleaseNotesPostProcessor (1944-2023)
  • process (1955-1974)
src/core/triangulation.rs (2)
src/core/collections.rs (4)
  • fast_hash_map_with_capacity (1066-1068)
  • fast_hash_map_with_capacity (1193-1193)
  • fast_hash_set_with_capacity (1100-1102)
  • fast_hash_set_with_capacity (1196-1196)
src/core/edge.rs (2)
  • new (55-66)
  • endpoints (155-157)
scripts/benchmark_utils.py (2)
scripts/benchmark_models.py (5)
  • BenchmarkData (14-55)
  • CircumspherePerformanceData (59-65)
  • CircumsphereTestCase (69-100)
  • extract_benchmark_data (214-250)
  • format_benchmark_tables (307-369)
scripts/subprocess_utils.py (5)
  • ProjectRootNotFoundError (268-269)
  • find_project_root (272-288)
  • get_git_commit_hash (168-180)
  • run_cargo_command (109-137)
  • run_git_command (78-106)
scripts/tests/test_changelog_utils.py (1)
scripts/changelog_utils.py (1)
  • get_markdown_line_limit (345-367)
🪛 LanguageTool
CONTRIBUTING.md

[uncategorized] ~396-~396: Did you mean the formatting language “Markdown” (= proper noun)?
Context: ...ating) - just markdown-fix - Auto-fix markdown formatting - just markdown-lint - Lin...

(MARKDOWN_NNP)

🪛 Ruff (0.14.10)
scripts/tests/test_changelog_generate_workflow.py

151-151: Unused noqa directive (non-enabled: SLF001)

Remove unused noqa directive

(RUF100)


201-201: Unused noqa directive (non-enabled: SLF001)

Remove unused noqa directive

(RUF100)

scripts/changelog_utils.py

1257-1257: Do not catch blind exception: Exception

(BLE001)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: Code Coverage
  • GitHub Check: build (windows-latest)
  • GitHub Check: build (macos-latest)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: Codacy Security Scan
  • GitHub Check: performance-regression
🔇 Additional comments (72)
rust-toolchain.toml (3)

7-13: LGTM! Inline comments improve clarity.

The inline comments documenting each component's purpose enhance maintainability and help developers understand the toolchain configuration at a glance.


18-19: LGTM! Target documentation is clear.

The inline comments consistently document each target platform, matching the style of the components section.

Also applies to: 21-21


25-25: LGTM! Explicit profile setting is good practice.

Components are additive with the profile, and explicitly setting the profile prevents rustup from defaulting to an unexpected profile when rust-toolchain.toml exists. This configuration clearly documents the intended profile and ensures consistent behavior across environments.

cspell.json (1)

211-211: LGTM! Appropriate spell-check dictionary additions.

The additions of "excinfo" (pytest fixture/attribute) and "taplo" (TOML formatter tool) align with the PR's tooling refresh objectives.

Also applies to: 375-375

tests/README.md (1)

739-739: LGTM! Documentation updated to reflect CI command rename.

The command name change from commit-check to ci is consistent with the broader CI workflow updates in this PR.

.semgrep.yaml (1)

14-14: LGTM! Formatting consistency improvement.

The inline comment spacing adjustment improves formatting consistency without affecting rule behavior.

.codacy.yml (1)

75-76: LGTM! Consistent inline comment formatting.

The spacing adjustments standardize inline comment formatting throughout the configuration file without affecting any tool behavior or configuration semantics.

Also applies to: 88-90, 147-147, 172-174

.gitleaks.toml (1)

18-20: LGTM! Trailing whitespace cleanup.

Removing trailing spaces before inline comments improves file formatting without affecting the Gitleaks allowlist configuration.

.yamllint (1)

14-16: LGTM! Consistent inline comment spacing.

The new rule aligns YAML linting with Prettier's formatting conventions, ensuring at least one space between content and inline comments for improved readability.

.github/workflows/codacy.yml (1)

28-28: LGTM! Formatting standardization.

The changes standardize quote style and update action version pins, aligning with repository-wide formatting conventions without affecting workflow behavior.

Also applies to: 49-49, 133-133

.github/workflows/benchmarks.yml (2)

16-19: LGTM! Quote and comment formatting standardization.

The changes standardize quote style for path patterns and tighten comment spacing, maintaining consistency across workflow files without affecting benchmark execution logic.

Also applies to: 27-30, 45-45, 50-50


53-53: LGTM! Action version pins updated.

The pinned SHA hashes follow GitHub Actions security best practices. These align with the version updates seen across other workflow files in this PR.

Also applies to: 58-58, 64-64, 73-73, 224-224, 332-332

.github/workflows/codecov.yml (1)

27-27: LGTM! Action version pins enhance security.

The pinned SHA hashes with version comments follow GitHub Actions security best practices by preventing supply chain attacks through tag manipulation.

However, please verify these action versions are current and the SHAs are authentic:

#!/bin/bash
# Verify GitHub Actions are using valid and expected SHA pins

echo "=== Checking action versions in codecov.yml ==="

# Extract action references from the file
actions=(
  "actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8:v6.0.1"
  "actions-rust-lang/setup-rust-toolchain@1780873c7b576612439a134613cc4cc74ce5538c:v1.15.2"
  "actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb:v5.0.1"
  "taiki-e/install-action@4c6723ec9c638cccae824b8957c5085b695c8085:v2.65.7"
  "codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de:v5.5.2"
  "codacy/codacy-coverage-reporter-action@89d6c85cfafaec52c72b6c5e8b2878d33104c699:v1.3.0"
  "actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f:v6.0.0"
)

for action in "${actions[@]}"; do
  IFS=':' read -r repo_and_sha expected_tag <<< "$action"
  IFS='@' read -r repo sha <<< "$repo_and_sha"
  
  echo "Checking $repo @ $expected_tag (SHA: ${sha:0:8}...)"
  
  # Use GitHub API to verify the tag points to the expected SHA
  gh api "repos/$repo/git/ref/tags/$expected_tag" --jq '.object.sha' 2>/dev/null | {
    read actual_sha
    if [ -n "$actual_sha" ]; then
      if [ "$actual_sha" = "$sha" ]; then
        echo "  ✓ SHA matches tag $expected_tag"
      else
        echo "  ✗ WARNING: SHA mismatch for tag $expected_tag"
        echo "    Expected: $sha"
        echo "    Actual:   $actual_sha"
      fi
    else
      echo "  ℹ Could not verify (tag may not exist or API limit reached)"
    fi
  }
done

Also applies to: 32-32, 38-38, 52-52, 96-96, 107-107, 115-115

.github/workflows/audit.yml (1)

8-9: LGTM! Formatting standardization.

The quote standardization (single to double quotes) and comment spacing adjustments improve consistency across workflow files without changing behavior.

Also applies to: 12-13, 16-16, 30-30, 33-33, 39-39, 56-56

.github/workflows/rust-clippy.yml (1)

9-9: LGTM! Formatting improvements.

The quote standardization and comment spacing adjustments maintain consistency with other workflow files without altering functionality.

Also applies to: 27-27, 30-30, 36-36, 61-61

.github/workflows/generate-baseline.yml (1)

10-10: LGTM! Formatting standardization.

The formatting adjustments (quote standardization, comment placement, spacing) improve consistency without changing workflow behavior.

Also applies to: 29-29, 33-33, 35-35, 38-38, 44-44, 92-92, 96-97

scripts/tests/test_hardware_utils.py (2)

54-54: LGTM! Correctly updated to match implementation.

The test now correctly expects capture_output=True and text=True parameters in the run_safe_command call, aligning with the updated function signature in hardware_utils.py.


472-496: LGTM! Correctly reflects percentage-based tolerance.

The test updates properly reflect the implementation change from absolute to percentage-based memory tolerance:

  • The within-tolerance case (15.95 GB vs 16.0 GB) represents ~0.3% difference, well below the 2% threshold
  • The outside-tolerance case (15.6 GB vs 16.0 GB) represents ~2.56% difference, correctly exceeding the 2% threshold

The previous value of 15.8 GB would have been within tolerance (1.27%), so the update to 15.6 GB is necessary for the test to properly verify the tolerance boundary.

scripts/tests/test_release_notes_postprocessor.py (3)

71-97: LGTM! Good coverage for escaped Markdown identifiers.

This test correctly verifies that breaking changes with escaped Markdown identifiers (e.g., insert\_transactional) are properly detected and moved to the Breaking Changes section. The assertions appropriately check:

  • Presence in the Breaking Changes section
  • Removal from the Changed section
  • Trimming of verbose body details

98-120: LGTM! Proper validation of removed public API detection.

This test correctly verifies that entries marked as "Removed public API" are identified as breaking changes and properly moved to the Breaking Changes section while being removed from the Changed section.


121-145: LGTM! Comprehensive coverage for incompatible change detection.

This test correctly validates that entries marked with "Incompatible change" language are properly identified as breaking changes and moved to the appropriate section. The test completes the coverage for the three key breaking change patterns alongside the existing MSRV and escaped identifier tests.

.github/workflows/profiling-benchmarks.yml (1)

40-41: LGTM! Environment variables improve debugging and version consistency.

Adding RUST_BACKTRACE: 1 enables better error diagnostics in CI, and centralizing RUST_TOOLCHAIN: 1.92.0 ensures consistency with the toolchain installation step.

Cargo.toml (1)

58-80: Explicit harness = false declarations improve clarity.

Cargo's default for [[bench]] targets is harness = true, so declaring harness = false explicitly is what lets a custom benchmark harness (such as Criterion) take over; stating it per target also improves self-documentation and prevents potential confusion.

.coderabbit.yml (1)

31-36: LGTM! Path-specific instructions improve review quality.

Adding dedicated guidance for Python scripts in scripts/*.py helps ensure consistent review focus on code quality, maintainability, and adherence to the project's tooling standards (ruff).

scripts/benchmark_utils.py (4)

35-85: LGTM! TYPE_CHECKING pattern improves type safety without runtime overhead.

The conditional import strategy with TYPE_CHECKING and dual-path fallback handling is a best practice that:

  • Provides type hints for static analysis without runtime cost
  • Handles both direct script execution (from benchmark_models import ...) and module imports (from scripts.benchmark_models import ...)
  • Gracefully handles import failures with ModuleNotFoundError fallback

2061-2061: LGTM! Signature update improves flexibility and type safety.

Changing the parameter type from dict[str, str] to Mapping[str, str | None]:

  • Accepts any mapping type (more flexible than dict)
  • Properly handles None values by converting to empty strings
  • Maintains backward compatibility (dict is a Mapping subtype)

2097-2106: LGTM! Enhanced error handling for missing baseline files.

The explicit None check and error handling:

  • Provides clear user feedback when baseline files are missing
  • Sets appropriate environment variables for workflow integration
  • Returns False to signal failure to caller

2228-2235: LGTM! Defensive type checking prevents runtime errors.

The enhanced validation:

  • Checks JSON structure is a dict before accessing keys
  • Validates commit field is a string before pattern matching
  • Prevents runtime AttributeError and TypeError exceptions
  • Good practice when parsing external data sources
pyproject.toml (2)

17-17: LGTM! Appropriate classifier for computational geometry library.

Adding "Intended Audience :: Science/Research" is fitting for a Delaunay triangulation library, which is commonly used in scientific computing and research applications.


200-201: LGTM! Relaxed settings support gradual type adoption strategy.

Adding allow_untyped_calls = true and allow_incomplete_defs = true aligns with the gradual typing adoption approach documented in the comments. This allows incremental improvement of type coverage without blocking development.

scripts/hardware_utils.py (7)

21-31: LGTM! Clean TYPE_CHECKING pattern.

The guarded import pattern correctly supports both script execution and module import modes while avoiding runtime overhead for type checking.


58-64: LGTM! Comprehensive error handling.

The expanded exception handling appropriately covers subprocess failures, OS errors, and value parsing issues with informative debug logging.


215-234: LGTM! Good helper extraction.

Centralizing PowerShell command execution eliminates duplication and provides a single point for PowerShell invocation with consistent flags.


477-479: LGTM! Good defensive key filtering.

Restricting to known keys prevents unexpected entries from polluting the baseline dictionary.


520-580: LGTM! Clean refactoring.

The data-driven approach with helper functions eliminates duplication and improves maintainability. The pattern is consistent across all hardware attribute comparisons.


582-597: LGTM! Clean comparison helper.

The helper appropriately handles "Unknown" values and uses keyword-only arguments for clarity. The skip logic is sensible for baseline compatibility checks.


598-618: LGTM! Robust memory comparison with tolerance.

The 2% tolerance threshold appropriately handles minor memory differences across systems (e.g., 15.8 GB vs 16.0 GB) while still catching significant mismatches. The fallback logic safely handles non-numeric and edge cases.
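
For readers who want the arithmetic spelled out, the percentage-based tolerance is just a relative-difference check. A standalone sketch in Rust (the 2% threshold and the 15.95/15.6 GB figures mirror the values discussed above; the function name is illustrative, not taken from the scripts):

```rust
/// True when `actual` is within `tolerance_pct` percent of `baseline`.
fn within_relative_tolerance(baseline: f64, actual: f64, tolerance_pct: f64) -> bool {
    if baseline == 0.0 {
        return actual == 0.0;
    }
    let relative_diff_pct = ((actual - baseline).abs() / baseline.abs()) * 100.0;
    relative_diff_pct <= tolerance_pct
}

fn main() {
    // 15.95 GB vs 16.0 GB is about 0.3% off: within a 2% tolerance.
    assert!(within_relative_tolerance(16.0, 15.95, 2.0));
    // 15.6 GB vs 16.0 GB is about 2.5% off: outside a 2% tolerance.
    assert!(!within_relative_tolerance(16.0, 15.6, 2.0));
}
```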

docs/archive/testing.md (2)

767-767: LGTM! Documentation updated to reflect new command names.

The change from "commit-check" and "quality" to "ci" and "check" aligns with the broader repository workflow modernization.


783-783: LGTM! Consistent terminology update.

Matches the command name update at line 767, maintaining consistency throughout the documentation.

.taplo.toml (1)

1-20: LGTM! Conservative Taplo configuration.

The configuration appropriately mirrors Cargo's conservative formatting philosophy, disabling opinionated transformations (alignment, reordering, auto-expansion) while maintaining basic consistency (4-space indentation, trailing newlines). The note about linting configuration is helpful.

README.md (2)

94-98: LGTM! Clear workflow command updates.

The updated command names (fix, check, ci, ci-slow) are more intuitive and align with the repository's modernized development workflow. The descriptions clearly distinguish mutating vs. non-mutating operations.


160-165: LGTM! Improved external references.

The updated links provide better accessibility:

  • Spade link now points to the canonical crates.io page
  • Grokipedia links provide clear reference documentation for geometric concepts
scripts/tests/test_enhance_commits.py (1)

946-946: LGTM! Good test strictness improvement.

Adding strict=True to zip() ensures that mismatched input lengths are caught immediately rather than silently ignored. This tightens test validation and prevents subtle bugs.

This is compatible with the project's minimum Python requirement of 3.11+, which fully supports the strict parameter (available since Python 3.10).

scripts/enhance_commits.py (1)

13-23: LGTM! Clean TYPE_CHECKING pattern.

The conditional import strategy correctly uses TYPE_CHECKING to support static type checkers while providing runtime fallback paths for both script and module execution contexts. The comments clearly document the intended use cases.

WARP.md (1)

27-30: LGTM! Documentation updates align with tooling improvements.

The updates correctly reflect the expanded tooling ecosystem introduced in this PR, including new validation commands, integration test requirements, and updated workflow commands. The Rust guidance about requiring crate-level doc comments for integration tests is particularly valuable for maintaining CI compliance with -D warnings.

Also applies to: 36-41, 43-46, 51-51, 57-66, 83-83

scripts/tests/test_changelog_generate_workflow.py (1)

1-109: LGTM! Comprehensive workflow test coverage.

The test structure is well-organized with appropriate use of fixtures, monkeypatching to avoid external dependencies, and clear test boundaries. The tests effectively validate the file-based pipeline, cleanup behavior in both debug and non-debug modes, and edge cases.

tests/public_topology_api.rs (1)

1-109: LGTM! Well-structured integration tests for topology APIs.

The tests comprehensively exercise the new public topology traversal methods (edges(), incident_edges(), cell_neighbors()) and the opt-in AdjacencyIndex builder. The test scenarios properly validate:

  • Edge enumeration and incident edge queries on a single tetrahedron
  • Adjacency relationships in a double tetrahedron configuration
  • Geometry accessor methods for vertices and cells
  • AdjacencyIndex invariants for vertex-to-cells, vertex-to-edges, and cell-to-neighbors mappings

The crate-level doc comment satisfies the integration test requirements documented in WARP.md.

CONTRIBUTING.md (1)

128-131: LGTM! Documentation updates reflect new command structure.

The updates consistently apply the new command nomenclature (fix, check, ci, ci-slow) and reorganized task recipes throughout the contributing guidelines. The changes align with updates in WARP.md and maintain internal consistency across all workflow recommendations.

Also applies to: 352-357, 391-397, 437-445, 460-460

docs/code_organization.md (3)

127-132: LGTM!

The directory tree correctly reflects the new adjacency.rs and edge.rs modules under src/core/, maintaining alphabetical ordering consistent with existing entries.


298-299: LGTM!

The architecture overview accurately documents the purpose of the new modules:

  • edge.rs - Canonical EdgeKey for topology traversal
  • adjacency.rs - Optional AdjacencyIndex builder outputs (opt-in)

This aligns with the implementation in the corresponding source files.


401-414: LGTM!

The development workflow section is updated with clearer formatting and accurate just command descriptions for fast iteration and CI validation.

scripts/tests/test_changelog_utils.py (2)

10-21: LGTM!

Import reorganization is clean - removes unused imports and groups the changelog utility imports together for better readability.


133-142: LGTM!

The test refactoring improves robustness by using actual filesystem operations via pytest fixtures instead of mocking internals:

  • tmp_path provides a clean isolated directory
  • monkeypatch.chdir() ensures the test runs in the temp directory where the config file is written
  • This exercises the real code path in get_markdown_line_limit() which reads from Path(".markdownlint.json")

This approach is more reliable and less brittle than mocking Path.exists and json.load.

src/lib.rs (5)

433-441: LGTM!

The new adjacency and edge modules are properly declared under core with appropriate positioning in the module list.


458-469: LGTM!

The re-exports are correctly added, making EdgeKey and AdjacencyIndex accessible via crate::core::*. The module organization follows the existing pattern.


596-606: LGTM!

The prelude correctly includes adjacency::* and edge::*, making the new topology types conveniently accessible to users via use delaunay::prelude::*.


649-672: LGTM!

The type normality tests are properly updated to verify that EdgeKey and AdjacencyIndex implement the auto traits (Send, Sync, Unpin), ensuring thread-safety for users.


4-5: No action needed. The documentation link to grokipedia.com is valid and references substantive content on simplicial complexes including definitions, geometric realization, algorithms, and computational notes—appropriate for the library's context.

.github/workflows/ci.yml (5)

25-29: LGTM!

Centralizing tool versions as environment variables improves maintainability and ensures consistent versions across all steps. This is a good practice for reproducible CI.


88-155: Well-implemented supply chain hardening for actionlint installation.

The approach correctly:

  • Downloads from official GitHub releases
  • Verifies SHA256 checksums against upstream checksums file
  • Handles both Linux and macOS architectures
  • Uses set -euo pipefail for strict error handling
  • Cleans up temp directory via trap

Minor observation: The verify_sha256 function nicely handles the difference between sha256sum (Linux) and shasum -a 256 (macOS).


157-184: LGTM!

Linux shfmt installation with SHA256 verification follows the same secure pattern as actionlint. The subshell pattern ( cd "$tmpdir"; ... ) correctly scopes the directory change.


186-217: LGTM!

macOS shfmt installation correctly handles arm64/amd64 architecture detection and uses shasum -a 256 for checksum verification. Consistent with the Linux installation pattern.


82-86: LGTM!

Using taiki-e/install-action for taplo-cli is appropriate for Rust-based tools and maintains consistency with the just installation step.

src/core/edge.rs (4)

1-16: LGTM!

Excellent module documentation that clearly explains:

  • The purpose of EdgeKey as a canonical edge identifier
  • The canonicalization of endpoint ordering
  • The determinism caveat regarding cross-process/serialization behavior

The warning about stable vertex UUIDs for deterministic ordering is particularly helpful for users who need reproducible results.


20-66: LGTM!

The EdgeKey implementation is well-designed:

  • Private fields ensure canonicalization is always enforced via new()
  • Using a.data().as_ffi() for ordering provides a stable comparison within a process without relying on external semantics
  • The constructor correctly orders endpoints so (a, b) and (b, a) produce identical keys

The doc examples are comprehensive and demonstrate the equality invariant clearly.
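
The invariant is easy to restate in isolation: the constructor orders the endpoints, so equality and hashing only ever see one canonical form. A standalone sketch with `u64` ids standing in for `VertexKey` (the real type compares via `a.data().as_ffi()` as noted above, and keeps its fields private so `new` is the only constructor):

```rust
use std::collections::HashSet;

/// Canonical undirected edge; mirrors the crate's EdgeKey design in miniature.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct EdgeKey {
    v0: u64,
    v1: u64,
}

impl EdgeKey {
    /// Order the endpoints so (a, b) and (b, a) produce identical keys.
    fn new(a: u64, b: u64) -> Self {
        if a <= b { Self { v0: a, v1: b } } else { Self { v0: b, v1: a } }
    }
}

fn main() {
    assert_eq!(EdgeKey::new(7, 3), EdgeKey::new(3, 7));
    let mut set = HashSet::new();
    set.insert(EdgeKey::new(7, 3));
    set.insert(EdgeKey::new(3, 7));
    assert_eq!(set.len(), 1); // canonicalization deduplicates in hash sets
}
```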


68-158: LGTM!

The accessor methods (v0(), v1(), endpoints()) are appropriately:

  • Marked as const fn for compile-time evaluation
  • Marked with #[inline] for performance
  • Annotated with #[must_use]
  • Documented with clear examples

The From<(VertexKey, VertexKey)> trait implementation provides ergonomic construction from tuples.


167-221: LGTM!

The unit tests provide good coverage:

  • edge_key_is_canonical: Verifies (a, b) and (b, a) produce equal keys with ordering invariant
  • edge_key_endpoints_roundtrip: Verifies accessor consistency and From trait
  • edge_key_is_hashable_and_orderable: Verifies deduplication in HashSet and BTreeSet

These tests validate the core invariants of the canonical edge representation.

src/core/adjacency.rs (4)

1-11: LGTM!

The module documentation clearly establishes the design contract:

  • Opt-in (not stored internally in triangulation)
  • Immutable once built
  • Built from current snapshot (no automatic updates)
  • No interior mutability

This design avoids cache invalidation complexity while giving users control over when to pay the indexing cost.
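
In practice the contract boils down to "build once, query many times, rebuild after mutation". A std-only sketch of that usage pattern (the `Mesh` and `Index` types are stand-ins, not the crate's `Triangulation`/`AdjacencyIndex` API):

```rust
use std::collections::HashMap;

struct Mesh {
    cells: Vec<Vec<u64>>, // each cell lists its vertex ids
}

/// Immutable snapshot of adjacency; never updated in place.
struct Index {
    vertex_to_cells: HashMap<u64, Vec<usize>>,
}

impl Mesh {
    /// Opt-in: callers pay the indexing cost only when they ask for it.
    fn build_index(&self) -> Index {
        let mut vertex_to_cells: HashMap<u64, Vec<usize>> = HashMap::new();
        for (c, cell) in self.cells.iter().enumerate() {
            for &v in cell {
                vertex_to_cells.entry(v).or_default().push(c);
            }
        }
        Index { vertex_to_cells }
    }
}

fn main() {
    let mut mesh = Mesh { cells: vec![vec![0, 1, 2, 3]] };
    let index = mesh.build_index();
    assert_eq!(index.vertex_to_cells[&1].len(), 1);

    // After mutating the mesh the old snapshot is stale; rebuild explicitly.
    mesh.cells.push(vec![1, 2, 3, 4]);
    let index = mesh.build_index();
    assert_eq!(index.vertex_to_cells[&1].len(), 2);
}
```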


19-40: LGTM!

The error type is well-designed:

  • #[non_exhaustive] allows adding variants in future versions without breaking changes
  • Both variants include the relevant keys for debugging (cell_key + the missing key)
  • Error messages via #[error(...)] are informative and include the debug representations

Using thiserror::Error follows the project's established error handling patterns.


42-88: LGTM!

The AdjacencyIndex struct provides a well-documented, performant adjacency cache:

  • #[non_exhaustive] allows adding fields in future versions
  • Public fields give users direct access without accessor overhead
  • SmallBuffer<_, MAX_PRACTICAL_DIMENSION_SIZE> avoids heap allocation for typical dimension counts
  • FastHashMap (FxHasher-backed) provides fast lookups

The doc example demonstrates the intended usage pattern through build_adjacency_index().


90-123: LGTM!

The tests verify:

  1. Error messages are informative and contain expected substrings
  2. AdjacencyIndex implements Send + Sync + Unpin for thread-safety

The auto-traits test ensures the type can be safely shared across threads, which is important for parallel mesh processing use cases.

justfile (1)

15-51: New tooling recipes and CI/fix wiring look solid

The expanded just workflows are coherent:

  • _ensure-* helpers cleanly centralize tool presence checks, and setup’s tool loop matches those helpers and gives concrete install hints (including taplo/yamllint/Node).
  • fix vs check vs ci separation is clear and matches the updated help-workflows text: mutating formatters (toml-fmt, fmt, python-fix, shell-fmt, markdown-fix, yaml-fix) vs non‑mutating group checks (lint -> lint-code/lint-docs/lint-config) and CI (ci, ci-slow) that compose these plus tests/bench compile/examples.
  • Per‑language helpers (python-fix/python-check, shell-fmt/shell-check, markdown-*, yaml-*, TOML commands, validate-json) all handle the “no files found” case gracefully and use NUL‑delimited git ls-files to avoid most path edge cases.

No behavior problems stand out, and the clippy invocations remain intentionally duplicated for different feature sets.

Also applies to: 117-134, 193-201, 245-255, 256-314, 381-399, 424-449, 467-499, 501-523, 559-610

@codacy-production

codacy-production bot commented Jan 2, 2026

Coverage summary from Codacy

See diff coverage on Codacy

| Coverage variation | Diff coverage |
| --- | --- |
| +0.38% (target: -1.00%) | 80.92% |

Coverage variation details

| Commit | Coverable lines | Covered lines | Coverage |
| --- | --- | --- | --- |
| Common ancestor commit (66df9a6) | 4726 | 3349 | 70.86% |
| Head commit (e8d7018) | 4969 (+243) | 3540 (+191) | 71.24% (+0.38%) |

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details
| | Coverable lines | Covered lines | Diff coverage |
| --- | --- | --- | --- |
| Pull request (#164) | 262 | 212 | 80.92% |

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%

See your quality gate settings    Change summary preferences

@codecov

codecov bot commented Jan 2, 2026

Codecov Report

❌ Patch coverage is 80.91603% with 50 lines in your changes missing coverage. Please review.
✅ Project coverage is 71.27%. Comparing base (66df9a6) to head (e8d7018).
⚠️ Report is 1 commit behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/core/triangulation.rs | 80.00% | 30 Missing ⚠️ |
| src/core/triangulation_data_structure.rs | 57.69% | 11 Missing ⚠️ |
| src/topology/characteristics/validation.rs | 76.47% | 4 Missing ⚠️ |
| src/core/edge.rs | 83.33% | 2 Missing ⚠️ |
| src/topology/characteristics/euler.rs | 84.61% | 2 Missing ⚠️ |
| src/core/adjacency.rs | 95.83% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #164      +/-   ##
==========================================
+ Coverage   70.89%   71.27%   +0.37%     
==========================================
  Files          26       28       +2     
  Lines        4724     4967     +243     
==========================================
+ Hits         3349     3540     +191     
- Misses       1375     1427      +52     
| Flag | Coverage Δ |
| --- | --- |
| unittests | 71.27% <80.91%> (+0.37%) ⬆️ |

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

Makes topology traversal APIs public for external use.

Exposes `edges()`, `incident_edges()`, and `cell_neighbors()` on the
`DelaunayTriangulation` struct as convenience wrappers. Updates
documentation, examples, and benchmarks to use new API.

This allows external users to traverse the triangulation's topology
without needing to access internal implementation details.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (8)
scripts/tests/test_changelog_utils.py (2)

10-21: Line-limit tests now exercise real filesystem path; mocking is consistent but could be simplified

  • test_get_markdown_line_limit_with_config using tmp_path + monkeypatch.chdir is a solid improvement: it exercises the actual .markdownlint.json parsing path instead of over-mocking Path/json.load, and makes the test less brittle to implementation details.
  • For test_get_markdown_line_limit_no_config and test_get_markdown_line_limit_invalid_config, the combination of @patch("pathlib.Path.exists") vs patch("changelog_utils.Path.exists"/"Path.open") works because Path is the same class object, but the split patch targets are a little inconsistent and harder to read. Long term you might consider:
    • Either using the same filesystem-based pattern as the config-present test (e.g., writing invalid JSON into a temp .markdownlint.json instead of mocking json.load), or
    • Standardizing patches on the module-level changelog_utils.Path for clarity.
      Functionally everything here looks correct; these are readability-only suggestions.

Also applies to: 133-142, 150-155


589-643: Title-formatting tests give strong coverage; only minor brittleness around exact strings

The various _format_entry_title tests (short/long titles, markdown escaping, edge cases, tiny limits, and typical GitHub lengths) do a good job pinning down:

  • Line-length guarantees,
  • Bold/continuation prefixes,
  • Commit-link splitting behavior,
  • Proper escaping of markdown metacharacters.

The only minor trade-off is that some assertions depend on exact string shapes (e.g., specific prefixes like `"- **"` or `" ["` and the presence of bold markers). That's acceptable here, but future refactors to the visual style (while preserving semantics) will require updating tests. If you expect style churn, you could soften a few of the checks to focus more on invariants (length, presence of escaped content, commit SHA/link intact) rather than the precise decoration. Functionally, current tests look correct.

Also applies to: 666-699, 715-743, 777-796

src/core/triangulation.rs (2)

92-107: Public topology traversal & adjacency index implementation looks correct and matches docs

  • edges() / number_of_edges() and incident_edges() / number_of_incident_edges():

    • Use EdgeKey::new plus FastHashSet to globally deduplicate edges, iterating all vertex pairs per cell and per incident cell. The i < j pairing plus canonical EdgeKey endpoints gives you unique undirected edges as advertised.
    • These methods necessarily allocate sets per call; the docs correctly point users to build_adjacency_index() for high-frequency queries, which is a reasonable trade-off.
  • adjacent_cells() and cell_neighbors():

    • adjacent_cells delegates cleanly to tds.find_cells_containing_vertex_by_key(v) and handles missing keys by construction (empty buffer → empty iterator).
    • cell_neighbors only yields existing neighbor cell keys (filters None), and short-circuits to an empty iterator if the cell key is absent or has no neighbor buffer, which matches the doc comment about omitting boundary facets and returning empty for unknown cells.
  • Zero-allocation geometry/topology accessors:

    • cell_vertices and vertex_coords correctly return slices borrowed from the underlying TDS/cell/vertex storage without cloning. The lifetime is tied to &self, and only read-only slice views are exposed, matching the “zero-allocation accessor” promises.
  • build_adjacency_index:

    • Populates:
      • vertex_to_cells by walking cells and checking contains_vertex_key (failing fast with MissingVertexKey on inconsistencies).
      • cell_to_neighbors by copying only present neighbors and validating each with contains_cell (MissingNeighborCell on inconsistency).
      • vertex_to_edges using a global seen_edges set so each undirected edge appears exactly once and is attached to both endpoints.
    • The final loop over self.tds.vertices() guaranteeing entries for every vertex (even isolated/bootstrapping ones) matches the doc comment: adjacency lists may be empty but the maps are total over vertices.
    • Omitting entries in cell_to_neighbors for cells with zero neighbors is a sensible choice and aligns with the example tests, which only unwrap() for cells that are known to have neighbors.

Overall this API surface is coherent, read-only, and consistent with the new AdjacencyIndex documentation and the intent of a canonical EdgeKey.

Also applies to: 734-1158
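
The None-filtering behavior called out for cell_neighbors() is the standard Option-flattening idiom; a tiny standalone sketch (neighbor slots are `Option<usize>` here, standing in for the crate's optional `CellKey` entries):

```rust
/// Yield only the neighbors that actually exist; boundary facets store `None`.
fn cell_neighbors(neighbor_slots: &[Option<usize>]) -> impl Iterator<Item = usize> + '_ {
    neighbor_slots.iter().copied().flatten()
}

fn main() {
    // A boundary cell: two facets have neighbors, two face the outside.
    let slots = [Some(4), None, Some(7), None];
    let neighbors: Vec<usize> = cell_neighbors(&slots).collect();
    assert_eq!(neighbors, vec![4, 7]);
}
```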


1186-1232: Isolated-vertex validation is sound; doc semantics are slightly stricter now

  • validate_no_isolated_vertices:

    • Builds a FastHashSet<VertexKey> from all vertices referenced in any cell and then walks the vertex store, erroring on any vertex not present in that set.
    • Returns TriangulationValidationError::Tds(TdsValidationError::InconsistentDataStructure { ... }), which is consistent with how other Level‑3 checks surface “structural” topology problems.
    • Fast path for number_of_vertices() == 0 keeps the empty triangulation valid.
  • Integration into is_valid:

    • Now runs after manifold-facet and global connectedness checks and before Euler characteristic, which is the right place: you reject invalid manifolds (dangling 0‑simplices) before doing global χ accounting.
    • The top-level docs were updated to include “No isolated vertices” in the Level‑3 checklist; the error list right below still only names facets/connectedness/Euler/topology errors, but the new invariant is effectively just another TdsValidationError::InconsistentDataStructure case. If you want absolute clarity, you might explicitly mention “isolated vertex detection” there as another reason for a Tds-wrapped error, but behavior is already consistent.
  • New tests:

    • test_is_valid_rejects_bootstrap_phase_with_isolated_vertex and test_is_valid_rejects_isolated_vertex_even_when_cells_exist correctly exercise:
      • A pure-bootstrap triangulation (vertices but no cells), and
      • A constructed TDS with an extra vertex inserted but never referenced by any cell.
    • Both assert on the InconsistentDataStructure variant and the "Isolated vertex detected" substring, which tightly couples tests to the current diagnostic wording but makes the invariant explicit and easy to debug.

Semantically this is a good tightening of Level‑3 validity: anything that calls Triangulation::is_valid() or validate() now gets a true manifold-with-boundary check instead of silently accepting dangling vertices.

Also applies to: 1482-1513, 3402-3457
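
For callers, the observable effect is that Level‑3 validation now fails on such inputs. A rough sketch of the rejection pattern the new tests assert follows; here `tri` is only a placeholder for a triangulation that still carries an isolated vertex, the error fields are elided, and Display/Debug on the error types are assumed:

match tri.is_valid() {
    Err(e @ TriangulationValidationError::Tds(TdsValidationError::InconsistentDataStructure { .. })) => {
        // The diagnostic names the offending vertex.
        assert!(e.to_string().contains("Isolated vertex detected"));
    }
    other => panic!("expected an isolated-vertex rejection, got {other:?}"),
}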

scripts/changelog_utils.py (2)

964-970: Hardcoded fallback URL limits script reusability.

The fallback URL https://github.com/acgetchell/delaunay is project-specific. While acceptable for this repository, it makes the script less portable if it's intended to be reused by other projects in the future.

Consider making the fallback URL configurable via an environment variable if cross-project reuse is a future goal:

try:
    repo_url = ChangelogUtils.get_repository_url()
except GitRepoError:
    # Fallback: keep a stable link for this repository even when running in a
    # minimal test environment without a configured `origin` remote.
    repo_url = os.environ.get("CHANGELOG_FALLBACK_URL", "https://github.com/acgetchell/delaunay")

1218-1260: Error handling suppresses original exceptions.

The pattern checks if file_paths is None and raises SystemExit(1) from None, which suppresses the original exception that prevented initialization. While the defensive checks are appropriate (if _initialize_file_paths() fails before assignment, file_paths remains None), suppressing exceptions makes debugging harder.

Consider logging or re-raising the original exception:

🔎 Proposed improvement
     except (ChangelogError, GitRepoError, VersionError):
         if file_paths is None:
-            raise SystemExit(1) from None
+            # Initialization failed before file paths were set up
+            raise
         _restore_backup_and_exit(file_paths)
     except KeyboardInterrupt:
         if file_paths is None:
-            raise SystemExit(1) from None
+            # User interrupted before file paths were set up
+            sys.exit(1)
         _restore_backup_and_exit(file_paths)
     except Exception:  # restore backup and exit on any unexpected error
         if file_paths is None:
-            raise SystemExit(1) from None
+            # Unexpected error before file paths were set up
+            raise
         _restore_backup_and_exit(file_paths)

Regarding the static analysis hint about catching bare Exception at line 1257: this is acceptable here as it's cleanup code that needs to restore backups regardless of the error type.

src/core/triangulation_data_structure.rs (1)

2897-2927: Vertex incidence validation is well‑scoped; consider exposing it in the diagnostic report.

The new validate_vertex_incidence() plus the is_valid() hook nicely enforce that any non‑None incident_cell is non‑dangling and contains the vertex, while still allowing structurally isolated vertices. The ordering (validate cell vertex keys first, then incidence) is also safe.

If you want incidence violations to show up in TriangulationValidationReport, you might optionally add a dedicated invariant kind and call validate_vertex_incidence() from validation_report() as well; right now these checks are only surfaced via is_valid()/validate(), not the report.

Also applies to: 3042-3091

src/lib.rs (1)

629-673: Consider renaming prelude::io to better reflect its purpose.

The new prelude::io namespace is well-documented and serves a clear purpose as a curated read-only API subset. However, the name "io" might be misleading since it includes topology traversal, adjacency queries, predicates, and test helpers—which go beyond traditional input/output operations.

Alternative names that better convey the read-only, query-oriented nature:

  • prelude::query
  • prelude::readonly
  • prelude::inspect
  • prelude::traversal

That said, the documentation clearly explains the intent, and the current design is functional.
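
If the rename is adopted, the change is essentially a module-level re-export shuffle; a rough sketch (the concrete re-export list should mirror whatever prelude::io currently exposes):

// src/lib.rs (sketch, not the actual diff)
pub mod prelude {
    // ... existing full prelude re-exports ...

    /// Curated read-only subset for topology traversal, adjacency queries, and hull extraction.
    pub mod query {
        pub use crate::core::adjacency::AdjacencyIndex;
        pub use crate::core::edge::EdgeKey;
        // ... plus the hull/predicate/test helpers the current `io` module re-exports ...
    }
}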

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5750cc6 and ec236a2.

📒 Files selected for processing (23)
  • benches/circumsphere_containment.rs
  • benches/microbenchmarks.rs
  • benches/profiling_suite.rs
  • examples/convex_hull_3d_20_points.rs
  • examples/into_from_conversions.rs
  • examples/memory_analysis.rs
  • examples/point_comparison_and_hashing.rs
  • examples/triangulation_3d_20_points.rs
  • examples/zero_allocation_iterator_demo.rs
  • pyproject.toml
  • scripts/changelog_utils.py
  • scripts/tests/test_benchmark_utils.py
  • scripts/tests/test_changelog_generate_workflow.py
  • scripts/tests/test_changelog_tag_size_limit.py
  • scripts/tests/test_changelog_utils.py
  • scripts/tests/test_compare_storage_backends.py
  • scripts/tests/test_hardware_utils.py
  • scripts/tests/test_release_notes_postprocessor.py
  • src/core/delaunay_triangulation.rs
  • src/core/triangulation.rs
  • src/core/triangulation_data_structure.rs
  • src/lib.rs
  • tests/public_topology_api.rs
💤 Files with no reviewable changes (2)
  • scripts/tests/test_benchmark_utils.py
  • scripts/tests/test_compare_storage_backends.py
✅ Files skipped from review due to trivial changes (1)
  • scripts/tests/test_changelog_tag_size_limit.py
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/public_topology_api.rs
  • scripts/tests/test_hardware_utils.py
  • scripts/tests/test_changelog_generate_workflow.py
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py

⚙️ CodeRabbit configuration file

scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).

Files:

  • scripts/changelog_utils.py
🧠 Learnings (10)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • benches/circumsphere_containment.rs
  • examples/zero_allocation_iterator_demo.rs
  • benches/microbenchmarks.rs
  • benches/profiling_suite.rs
  • examples/convex_hull_3d_20_points.rs
  • examples/point_comparison_and_hashing.rs
  • examples/memory_analysis.rs
  • src/core/triangulation.rs
  • examples/triangulation_3d_20_points.rs
  • src/core/delaunay_triangulation.rs
  • src/lib.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • benches/circumsphere_containment.rs
  • examples/zero_allocation_iterator_demo.rs
  • examples/into_from_conversions.rs
  • benches/microbenchmarks.rs
  • benches/profiling_suite.rs
  • examples/convex_hull_3d_20_points.rs
  • examples/point_comparison_and_hashing.rs
  • examples/memory_analysis.rs
  • src/core/triangulation_data_structure.rs
  • examples/triangulation_3d_20_points.rs
  • src/core/delaunay_triangulation.rs
  • src/lib.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.

Applied to files:

  • examples/zero_allocation_iterator_demo.rs
  • benches/microbenchmarks.rs
  • benches/profiling_suite.rs
  • examples/convex_hull_3d_20_points.rs
  • examples/point_comparison_and_hashing.rs
  • examples/memory_analysis.rs
  • examples/triangulation_3d_20_points.rs
  • src/core/delaunay_triangulation.rs
  • src/lib.rs
📚 Learning: 2025-11-21T22:40:08.556Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 124
File: justfile:111-112
Timestamp: 2025-11-21T22:40:08.556Z
Learning: In the delaunay repository justfile, the clippy recipe runs cargo clippy twice: once without --all-features to check default code paths, and once with --all-features to check feature-gated code. This is intentional and necessary because the repository has feature-gated code behind count-allocations, dense-slotmap, bench, and test-debug features.

Applied to files:

  • examples/point_comparison_and_hashing.rs
📚 Learning: 2025-09-02T03:03:59.550Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 58
File: scripts/README.md:9-14
Timestamp: 2025-09-02T03:03:59.550Z
Learning: The delaunay project requires Python 3.13+ for performance reasons, as specified by the project maintainer acgetchell, even though the code could technically run on earlier Python versions.

Applied to files:

  • pyproject.toml
📚 Learning: 2025-08-28T14:12:46.198Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CITATION.cff:5-7
Timestamp: 2025-08-28T14:12:46.198Z
Learning: The DOI 10.5281/zenodo.16931097 in the CITATION.cff file belongs to the Rust delaunay triangulation library by acgetchell, not to any Julia DelaunayTriangulation.jl package.

Applied to files:

  • examples/triangulation_3d_20_points.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • examples/triangulation_3d_20_points.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • examples/triangulation_3d_20_points.rs
📚 Learning: 2025-08-28T03:54:34.371Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 54
File: scripts/generate_changelog.sh:417-438
Timestamp: 2025-08-28T03:54:34.371Z
Learning: The generate_changelog.sh script uses a deliberate design pattern where the auto-changelog template uses simple generic headers (### Changes, ### Fixed Issues) and the enhancer function transforms these into Keep a Changelog format sections (### Added, ### Changed, ### Fixed, etc.). This separation keeps the template simple while ensuring standardized output format.

Applied to files:

  • scripts/changelog_utils.py
📚 Learning: 2025-08-28T03:49:30.582Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 54
File: scripts/generate_changelog.sh:416-435
Timestamp: 2025-08-28T03:49:30.582Z
Learning: The generate_changelog.sh script processes template headers from auto-changelog (### Changes, ### Fixed Issues) and transforms them into Keep a Changelog format sections (### Added, ### Changed, ### Fixed, etc.). When analyzing changelog generation scripts, check both the template and the final output to understand the transformation pipeline.

Applied to files:

  • scripts/changelog_utils.py
🧬 Code graph analysis (3)
scripts/tests/test_release_notes_postprocessor.py (1)
scripts/changelog_utils.py (2)
  • _ReleaseNotesPostProcessor (1947-2026)
  • process (1958-1977)
src/core/delaunay_triangulation.rs (1)
src/core/triangulation.rs (3)
  • edges (768-770)
  • incident_edges (913-915)
  • cell_neighbors (874-880)
scripts/tests/test_changelog_utils.py (1)
scripts/changelog_utils.py (9)
  • ChangelogError (45-46)
  • ChangelogNotFoundError (49-50)
  • ChangelogUtils (61-1093)
  • GitRepoError (53-54)
  • VersionError (57-58)
  • get_markdown_line_limit (345-367)
  • _process_body_line (750-766)
  • _extract_github_anchor (991-1032)
  • _format_entry_title (545-557)
🪛 Ruff (0.14.10)
scripts/changelog_utils.py

1257-1257: Do not catch blind exception: Exception

(BLE001)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: Code Coverage
  • GitHub Check: performance-regression
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: build (windows-latest)
  • GitHub Check: build (macos-latest)
  • GitHub Check: Codacy Security Scan
🔇 Additional comments (27)
pyproject.toml (3)

1-37: LGTM! Metadata and formatting improvements.

The build system requirements, project metadata, and dependency declarations are properly configured. The formatting changes improve readability without altering behavior.


38-181: LGTM! Tool configuration formatting improvements.

The expansion of configuration lists to multi-line format improves readability and makes future diffs cleaner. All tool settings remain functionally equivalent.


182-217: LGTM! Type checking configuration aligned with gradual adoption strategy.

The mypy configuration appropriately relaxes strictness checks (allow_untyped_calls, allow_incomplete_defs) to support gradual type adoption, as documented in the inline comments. This aligns with the project's parallel mypy/ty strategy for incremental migration.

Based on learnings, the ty>=0.0.8 dependency was confirmed valid by the maintainer in a previous review.

examples/point_comparison_and_hashing.rs (1)

19-19: LGTM! Import narrowing improves clarity.

The change from delaunay::prelude::* to delaunay::prelude::io::* narrows the import scope, which is a good practice to reduce namespace pollution and make dependencies more explicit. This aligns with the broader PR pattern of separating IO/topology utilities into a dedicated prelude subset.

benches/microbenchmarks.rs (1)

20-20: LGTM! Consistent import refinement.

The narrowed import maintains consistency with the project-wide refactor to use the io prelude subset for benchmarks and examples.

examples/convex_hull_3d_20_points.rs (1)

31-31: LGTM! Import scope appropriately narrowed.

The change is consistent with the project-wide refactor and appropriate for this example's focus on convex hull extraction and topology analysis.

examples/memory_analysis.rs (1)

6-6: LGTM! Import refinement aligns with example scope.

The narrowed import is appropriate for this memory analysis example and maintains consistency with the project-wide prelude reorganization.

examples/zero_allocation_iterator_demo.rs (1)

8-8: LGTM! Completes consistent import pattern.

This final import narrowing completes the consistent refactor across all examples and benchmarks, improving code organization and clarity.

examples/into_from_conversions.rs (1)

22-22: Import narrowing to prelude::io looks consistent

Switching to delaunay::prelude::io::* keeps the example focused on the I/O-oriented prelude while still bringing the required types/macros into scope, assuming the io prelude re-exports them as intended.

benches/profiling_suite.rs (1)

64-64: Bench prelude import aligned with new io namespace

Using delaunay::prelude::io::* here matches the example changes and keeps the bench depending only on the I/O-facing prelude surface; no issues spotted.

scripts/tests/test_changelog_utils.py (1)

411-415: URL and anchor extraction tests cover key regressions and edge cases well

  • test_wrap_bare_urls_indented_code_block, which validates 4-space-indented lines directly against ChangelogUtils._process_body_line, is a good targeted regression test and matches the current implementation contract from changelog_utils.py.
  • The _extract_github_anchor tests exercise:
    • Standard headings with/without links,
    • Pre-release/build metadata,
    • Missing heading and missing file fallbacks,
    • Ensuring body text occurrences of versions do not spuriously match.
    These align tightly with the helper’s documented algorithm and should catch most future refactors that accidentally broaden the match or change the normalization rules.

Also applies to: 484-577, 528-573

scripts/tests/test_release_notes_postprocessor.py (1)

71-167: Breaking-change promotion tests line up well with post-processor heuristics

The new tests around _ReleaseNotesPostProcessor:

  • Cover both ### Changed and ### Changes headings, ensuring MSRV bumps get promoted and an empty Changes section is removed.
  • Verify that escaped identifiers like insert\\_transactional are still recognized as breaking entries.
  • Check for common wording patterns (“Removed public API”, “Incompatible change”) and assert that those bullets move into ### ⚠️ Breaking Changes and are removed from ### Changed.

Given _ReleaseNotesPostProcessor.process delegates to _BreakingChangeDetector.extract_breaking_changes then reinserts a dedicated section, these tests provide good regression coverage without over-specifying formatting beyond the section header text. No issues from a correctness standpoint.

src/core/triangulation.rs (1)

4260-4456: Topology traversal & adjacency index tests provide solid coverage of the new API

The new tests at the end of the module validate the key behaviors of the read-only topology surface:

  • test_topology_edges_triangle_2d and test_topology_edges_and_incident_edges_double_tetrahedron_3d:

    • Check global edge counts (3 for a triangle, 9 for the two-tetrahedra configuration) and local degrees (base vertex degree 4, apex degree 3), which confirms the combinatorics of both edges() and incident_edges()/number_of_incident_edges().
  • test_topology_queries_missing_keys_are_empty_or_none and test_topology_geometry_accessors_roundtrip:

    • Explicitly assert that unknown VertexKey/CellKey values yield empty iterators or None, and that vertex_coords/cell_vertices round-trip expected coordinate and vertex-key information.
  • test_build_adjacency_index_basic_invariants and test_build_adjacency_index_empty_triangulation_is_empty:

    • Verify that the adjacency index:
      • Produces symmetric neighbor lists for the two tetrahedra,
      • Populates non-empty vertex_to_cells and vertex_to_edges entries for every vertex in a valid triangulation, and
      • Returns fully empty maps when the triangulation is empty.

These tests collectively pin down the contracts you’ve documented for the public topology API and AdjacencyIndex and should catch most future regressions in traversal behavior or index construction.

scripts/changelog_utils.py (4)

41-43: LGTM: Well-defined constant for GitHub tag size limit.

The constant is clearly named and properly scoped as internal (leading underscore). The value 125,000 bytes matches GitHub's documented limit for tag annotations.


626-636: LGTM: Properly typed nested function for cron expression protection.

The type annotation re.Match[str] is correct and the function appropriately handles cron expression detection and protection.


732-745: LGTM: Robust URL wrapping logic with proper type annotation.

The nested function correctly handles edge cases (already wrapped URLs, markdown links, inline code spans) and uses appropriate type hints.


1263-1273: LGTM: Absolute path resolution improves robustness.

Using .resolve() to convert paths to absolute is a good improvement that ensures cleanup works correctly even after directory changes, as explained in the comment at lines 1224-1226.

benches/circumsphere_containment.rs (1)

20-20: LGTM: Import narrowed to io module as part of coordinated refactoring.

The change from delaunay::prelude::* to delaunay::prelude::io::* narrows the import scope, improving clarity and reducing namespace pollution. This aligns with the broader pattern across benchmarks and examples in this PR.

examples/triangulation_3d_20_points.rs (1)

29-29: LGTM: Consistent import narrowing for examples.

The import change from wildcard to io::* is consistent with the refactoring pattern across all examples and benchmarks in this PR, improving code clarity while maintaining functionality.

src/core/triangulation_data_structure.rs (1)

4475-4477: Good regression check for incidence invariants with isolated vertices.

Asserting tds.is_valid().is_ok() here, with a vertex that is now isolated, confirms the new incidence validation treats incident_cell = None as acceptable while still rejecting stale pointers. This is a useful safety net around the removal/repair path.

src/core/delaunay_triangulation.rs (2)

16-16: Topology traversal wrappers on DelaunayTriangulation look correct and minimal.

Importing EdgeKey and forwarding edges(), incident_edges(), and cell_neighbors() straight to self.triangulation() keeps DelaunayTriangulation as a thin façade while exposing the new topology API at the public layer. Signatures and docs are consistent with the underlying Triangulation methods, and the behavior for missing keys (empty iterators) is preserved.

Also applies to: 845-927


2393-2427: Forwarding test provides good coverage for the new public traversal API.

test_topology_traversal_methods_are_forwarded verifies that edges(), incident_edges(), and cell_neighbors() on DelaunayTriangulation exactly match the underlying Triangulation results on a simple tetrahedron, including edge counts and neighbor emptiness. This is a solid regression test for wiring errors in these convenience wrappers.

src/lib.rs (5)

434-434: LGTM! New public modules align with PR objectives.

The addition of adjacency and edge modules to the public API is well-structured and aligns perfectly with the stated goal of introducing topology traversal helpers.

Also applies to: 441-441


460-460: LGTM! Re-exports follow established patterns.

The wildcard re-exports of adjacency::* and edge::* at the core level are consistent with the existing pattern used for other core modules, making the new types conveniently accessible.

Also applies to: 463-463


596-596: LGTM! Prelude additions enhance API discoverability.

Adding adjacency::* and edge::* to the prelude is appropriate for these new public-facing types, making them easily accessible to users who import delaunay::prelude::*.

Also applies to: 599-599


695-698: LGTM! Test coverage for new types is appropriate.

The additions to the test suite properly verify that EdgeKey and AdjacencyIndex implement the required auto traits (Send + Sync + Unpin), consistent with testing patterns for other types in the codebase.

Also applies to: 717-718


4-4: The grokipedia documentation link is valid and accessible (HTTP 200 response confirmed). This is the project's consistent documentation source, used throughout the README for related concepts (Delaunay triangulations, Voronoi diagrams, Convex hulls). No evidence of a change from Wikipedia exists; grokipedia appears to be the intentional documentation strategy for this project. No action required.

Exposes cell and vertex query APIs on `DelaunayTriangulation` for zero-allocation topology traversal.

Also includes internal refactoring to improve vertex incidence
validation and ensure more robust handling of invalid key references.
Now TDS validation detects isolated vertices.

Refs: feat/public-topology-api

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
src/core/triangulation_data_structure.rs (1)

470-493: New CellVertexKeys/VertexIncidence invariants and wiring look sound

  • Extending InvariantKind with CellVertexKeys and VertexIncidence plus the dedicated validate_vertex_incidence helper cleanly separates:
    • cells pointing at invalid/missing vertex keys from
    • vertices whose incident_cell is dangling or inconsistent.
  • Tds::is_valid now runs validate_cell_vertex_keys and then validate_vertex_incidence before duplicate/facet/neighbor checks, which prevents higher‑level validators from operating on obviously corrupt key state while still enforcing incidence consistency in normal cases.
  • validation_report:
    • stops early on mapping inconsistencies (as before),
    • records both CellVertexKeys and VertexIncidence violations when present,
    • and then short‑circuits duplicate/facet/neighbor checks if key validity is broken—this ordering matches the doc comment and avoids noisy follow‑on errors from routines that assume valid keys.
  • Allowing incident_cell == None for vertices that are not referenced by any cell, while deferring “no isolated vertices” to Triangulation::is_valid (see src/core/triangulation.rs:1218‑1245 in the provided snippet), gives a nice separation between raw TDS structural integrity and manifold topology constraints.

One minor suggestion: the “Invariant Enforcement” table and the Level‑2 bullet list near the top of the file don’t yet mention the new CellVertexKeys/VertexIncidence checks; consider adding rows/bullets so the docs stay in sync with the invariants implemented here.

Also applies to: 2901-2931, 3083-3100, 3183-3260

src/core/triangulation.rs (1)

1010-1012: Consider clarifying the documentation about isolated vertices during bootstrap.

The documentation states: "Vertices with no incident cells (e.g., during bootstrap when number_of_cells() == 0) have empty adjacency lists."

This is correct, but it might be clearer to emphasize that this is the expected behavior during bootstrap (vertices are accumulated before the initial simplex is built). Currently, the phrasing could be misinterpreted as describing an error condition.

Suggested clarification
 /// ## Notes
 ///
 /// - No sorted-order guarantees are provided for the values.
 /// - The returned collections are optimized for performance.
 /// - The maps include an entry for every vertex currently stored in the triangulation.
-///   Vertices with no incident cells (e.g., during bootstrap when `number_of_cells() == 0`)
-///   have empty adjacency lists.
+///   During the bootstrap phase (before the initial simplex is created), vertices have empty
+///   adjacency lists because no cells exist yet. This is expected and not an error condition.
 /// - Isolated vertices (present in the vertex store but not referenced by any cell) are allowed at
 ///   the TDS structural layer, but violate the Level 3 manifold invariants checked by
 ///   [`Triangulation::is_valid`](Self::is_valid). When present, their adjacency lists are empty.
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ec236a2 and ab3e589.

📒 Files selected for processing (16)
  • benches/circumsphere_containment.rs
  • benches/microbenchmarks.rs
  • benches/profiling_suite.rs
  • examples/convex_hull_3d_20_points.rs
  • examples/into_from_conversions.rs
  • examples/memory_analysis.rs
  • examples/point_comparison_and_hashing.rs
  • examples/triangulation_3d_20_points.rs
  • examples/zero_allocation_iterator_demo.rs
  • scripts/changelog_utils.py
  • scripts/tests/test_changelog_utils.py
  • src/core/delaunay_triangulation.rs
  • src/core/triangulation.rs
  • src/core/triangulation_data_structure.rs
  • src/lib.rs
  • tests/public_topology_api.rs
🚧 Files skipped from review as they are similar to previous changes (4)
  • examples/zero_allocation_iterator_demo.rs
  • examples/convex_hull_3d_20_points.rs
  • benches/profiling_suite.rs
  • benches/circumsphere_containment.rs
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py

⚙️ CodeRabbit configuration file

scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).

Files:

  • scripts/changelog_utils.py
🧠 Learnings (7)
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • benches/microbenchmarks.rs
  • examples/into_from_conversions.rs
  • src/core/triangulation_data_structure.rs
  • src/lib.rs
  • src/core/delaunay_triangulation.rs
  • examples/point_comparison_and_hashing.rs
  • examples/memory_analysis.rs
  • examples/triangulation_3d_20_points.rs
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • benches/microbenchmarks.rs
  • examples/into_from_conversions.rs
  • tests/public_topology_api.rs
  • src/lib.rs
  • src/core/triangulation.rs
  • src/core/delaunay_triangulation.rs
  • examples/point_comparison_and_hashing.rs
  • examples/memory_analysis.rs
  • examples/triangulation_3d_20_points.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.

Applied to files:

  • examples/into_from_conversions.rs
  • src/lib.rs
  • src/core/delaunay_triangulation.rs
  • examples/point_comparison_and_hashing.rs
  • examples/memory_analysis.rs
  • examples/triangulation_3d_20_points.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • examples/triangulation_3d_20_points.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • examples/triangulation_3d_20_points.rs
📚 Learning: 2025-08-28T03:54:34.371Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 54
File: scripts/generate_changelog.sh:417-438
Timestamp: 2025-08-28T03:54:34.371Z
Learning: The generate_changelog.sh script uses a deliberate design pattern where the auto-changelog template uses simple generic headers (### Changes, ### Fixed Issues) and the enhancer function transforms these into Keep a Changelog format sections (### Added, ### Changed, ### Fixed, etc.). This separation keeps the template simple while ensuring standardized output format.

Applied to files:

  • scripts/changelog_utils.py
📚 Learning: 2025-08-28T03:49:30.582Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 54
File: scripts/generate_changelog.sh:416-435
Timestamp: 2025-08-28T03:49:30.582Z
Learning: The generate_changelog.sh script processes template headers from auto-changelog (### Changes, ### Fixed Issues) and transforms them into Keep a Changelog format sections (### Added, ### Changed, ### Fixed, etc.). When analyzing changelog generation scripts, check both the template and the final output to understand the transformation pipeline.

Applied to files:

  • scripts/changelog_utils.py
🧬 Code graph analysis (4)
src/core/triangulation_data_structure.rs (3)
src/core/triangulation.rs (1)
  • is_valid (1219-1246)
src/core/cell.rs (1)
  • is_valid (1039-1079)
src/core/vertex.rs (1)
  • is_valid (585-603)
src/core/triangulation.rs (4)
src/core/collections.rs (4)
  • fast_hash_map_with_capacity (1066-1068)
  • fast_hash_map_with_capacity (1193-1193)
  • fast_hash_set_with_capacity (1100-1102)
  • fast_hash_set_with_capacity (1196-1196)
src/core/triangulation_data_structure.rs (5)
  • vertices (927-929)
  • default (274-276)
  • number_of_vertices (1020-1022)
  • cells (860-862)
  • cells (3699-3708)
src/core/edge.rs (2)
  • new (55-66)
  • endpoints (155-157)
src/core/vertex.rs (1)
  • uuid (502-504)
scripts/tests/test_changelog_utils.py (1)
scripts/changelog_utils.py (9)
  • ChangelogError (45-46)
  • ChangelogNotFoundError (49-50)
  • ChangelogUtils (61-1093)
  • GitRepoError (53-54)
  • VersionError (57-58)
  • get_markdown_line_limit (345-367)
  • _process_body_line (750-766)
  • _extract_github_anchor (991-1032)
  • _format_entry_title (545-557)
src/core/delaunay_triangulation.rs (2)
src/core/triangulation.rs (6)
  • edges (768-770)
  • incident_edges (913-915)
  • cell_neighbors (874-880)
  • cell_vertices (966-968)
  • vertex_coords (996-1000)
  • vertices (589-591)
src/core/edge.rs (1)
  • v0 (94-96)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: build (windows-latest)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: build (macos-latest)
  • GitHub Check: Codacy Security Scan
  • GitHub Check: performance-regression
  • GitHub Check: Code Coverage
🔇 Additional comments (31)
examples/triangulation_3d_20_points.rs (1)

29-29: Scoped prelude import matches query‑only usage

Switching to delaunay::prelude::query::* still covers all symbols used in this example while tightening the namespace, and is consistent with the PR’s query‑oriented prelude pattern.

examples/point_comparison_and_hashing.rs (1)

19-19: Import narrowing to prelude::query is safe here

The example’s usage (Point + helpers) is fully satisfied by the query prelude; this keeps the import surface smaller without changing behavior.

examples/memory_analysis.rs (1)

6-6: Query‑only prelude import fits this example

Memory analysis relies on triangulation/query types that are re‑exported via prelude::query, so this scoped import matches the new public surface with no functional impact.

examples/into_from_conversions.rs (1)

22-22: Conversion example correctly scoped to query prelude

Using delaunay::prelude::query::* still brings in Vertex, Point, and vertex! while avoiding a broader prelude import; behavior stays the same.

src/core/triangulation_data_structure.rs (1)

4466-4515: Tests accurately capture new structural‑vs‑topological semantics

  • test_remove_cells_by_keys_repairs_incident_cells_for_affected_vertices now explicitly asserts tds.is_valid().is_ok() in a state with an isolated vertex but remaining cells, which confirms that TDS‑level validation allows isolated vertices while still requiring non‑dangling incident_cell pointers.
  • test_validate_cell_vertex_keys_detects_missing_vertices verifies both that validate_cell_vertex_keys flags a corrupted cell and that is_valid() surfaces the same inconsistency first, ensuring the new early‑exit path for bad vertex keys behaves as intended.

This is a good level of coverage around the new invariants and their error prioritization.

Also applies to: 5257-5284

src/lib.rs (3)

434-434: LGTM: Public module additions follow existing patterns.

The new adjacency and edge modules are properly declared as public and re-exported consistently with existing modules like cell, facet, and vertex.

Also applies to: 441-441, 460-460, 463-463, 596-596, 599-599


629-675: Well-designed focused API surface.

The new prelude::query submodule provides a thoughtful, narrower alternative to importing prelude::*. It clearly documents its read-only focus and includes the essential types for topology traversal, convex hull extraction, and testing workflows.


697-698: LGTM: Type safety tests are comprehensive.

The normality checks for EdgeKey and AdjacencyIndex ensure these types satisfy the required auto traits (Send, Sync, Unpin), consistent with existing tests for other core types.

Also applies to: 719-720

benches/microbenchmarks.rs (1)

20-20: LGTM: Narrowed import surface is appropriate.

The change from delaunay::prelude::* to delaunay::prelude::query::* appropriately narrows the import surface while still providing all necessary types. The explicit imports on lines 17-19 correctly handle types not included in the query submodule.

tests/public_topology_api.rs (2)

12-51: LGTM: Comprehensive test of topology traversal APIs.

The test correctly validates:

  • 6 unique edges for a tetrahedron (C(4,2) = 6)
  • Degree 3 for each vertex (expected in a tetrahedron)
  • No cell neighbors for a single cell
  • Zero-allocation geometry accessors working correctly
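
For readers who want the shape of these checks without opening the test file, a compressed sketch follows; the setup is assumed rather than copied from tests/public_topology_api.rs, and exact signatures may differ:

use delaunay::prelude::query::*;

fn main() {
    let vertices = vec![
        vertex!([0.0, 0.0, 0.0]),
        vertex!([1.0, 0.0, 0.0]),
        vertex!([0.0, 1.0, 0.0]),
        vertex!([0.0, 0.0, 1.0]),
    ];
    let dt = DelaunayTriangulation::new(&vertices).expect("tetrahedron should triangulate");

    // C(4,2) = 6 unique canonical edges; every corner of a tetrahedron has degree 3.
    assert_eq!(dt.edges().count(), 6);
    for edge in dt.edges() {
        assert_eq!(dt.incident_edges(edge.v0()).count(), 3);
    }

    // Zero-allocation accessor: a borrowed view of the vertex's coordinates, no cloning.
    if let Some(edge) = dt.edges().next() {
        let coords = dt.vertex_coords(edge.v0()).expect("known vertex key");
        assert_eq!(coords.len(), 3);
    }
}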

54-113: LGTM: Thorough validation of adjacency index construction.

The test effectively validates:

  • Correct topology for two tetrahedra sharing a facet
  • Adjacency queries returning expected neighbor counts
  • AdjacencyIndex construction and invariants (canonical edges, vertex-to-cells, cell-to-neighbors mappings)
src/core/delaunay_triangulation.rs (8)

16-16: LGTM: Import is necessary for new edge-related methods.


845-869: LGTM: Clean forwarding method with clear documentation.

The edges() method correctly forwards to the underlying Triangulation and includes a helpful example demonstrating expected behavior.


871-898: LGTM: Consistent forwarding with appropriate documentation.

The incident_edges() method correctly forwards vertex-specific edge queries and documents the empty-iterator behavior for missing vertices.


900-927: LGTM: Cell neighbor traversal correctly forwarded.

The method appropriately delegates to the underlying triangulation and documents the filtering of boundary facets.


929-958: LGTM: Zero-allocation accessor with appropriate constraints.

The cell_vertices() method correctly forwards to the triangulation while maintaining the zero-allocation design through slice returns.


960-993: LGTM: Consistent zero-allocation coordinate accessor.

The vertex_coords() method maintains the zero-allocation design pattern and includes a practical example of finding vertices by coordinates.


2410-2437: LGTM: Test validates vertex incidence violation detection.

The test correctly corrupts a vertex's incident cell pointer and verifies that the validation report captures this as a VertexIncidence invariant violation.


2488-2536: LGTM: Comprehensive test of topology API forwarding.

The test thoroughly validates that all topology traversal methods correctly forward to the underlying Triangulation, including:

  • Edge enumeration
  • Incident edge queries
  • Cell neighbor traversal
  • Geometry accessors (vertex coordinates, cell vertices)
  • Missing key handling
scripts/tests/test_changelog_utils.py (3)

10-10: LGTM! Clean import organization.

The explicit imports from changelog_utils improve test readability and make dependencies clear.

Also applies to: 14-21


133-161: LGTM! Improved test isolation with filesystem fixtures.

The migration from mocks to filesystem-based tests using tmp_path and monkeypatch.chdir provides better isolation and tests actual file I/O behavior.


595-807: LGTM! Comprehensive test coverage for title formatting.

The test suite thoroughly covers:

  • Single-line and multi-line formatting
  • Markdown character escaping
  • Edge cases (empty, whitespace, tiny limits)
  • Regression scenarios from actual bugs
  • Line length limit enforcement

The assertions correctly verify both formatting correctness and markdown lint compliance.

scripts/changelog_utils.py (9)

41-42: LGTM! Centralized magic number with clear documentation.

Good practice to define the GitHub tag annotation limit as a named constant. The underscore prefix correctly indicates this is internal to the module.


626-626: LGTM! Type annotations improve code clarity.

Adding re.Match[str] type hints to these nested functions enhances type safety and IDE support.

Also applies to: 732-732


964-970: LGTM! Consistent fallback URL handling with environment override.

The fallback logic provides graceful degradation when repository URL detection fails:

  • Allows override via CHANGELOG_FALLBACK_URL environment variable
  • Provides reasonable default for the project
  • Logs warning to inform users

The pattern is consistently applied in both _get_changelog_content and _get_repository_url.

Also applies to: 1318-1329


1218-1218: LGTM! Robust error handling with proper initialization guards.

The changes correctly handle early failures:

  • file_paths initialized to None to detect uninitialized state
  • .resolve() ensures absolute paths survive CWD changes
  • Exception handlers check for None before attempting restoration
  • Prevents AttributeError when exceptions occur during setup

Also applies to: 1249-1260, 1265-1265


1410-1448: LGTM! Well-organized module-level configuration.

The design is clean:

  • Clear documentation explaining the refactoring rationale
  • Module-level regex compilation for performance
  • Environment variable (CHANGELOG_BREAKING_API_TOKENS) allows project-specific customization
  • Sensible defaults with override capability

1450-1558: LGTM! Well-designed helper functions.

The helper functions are:

  • Small and focused on single responsibilities
  • Properly namespaced with _rn_ prefix
  • Pure functions with clear inputs/outputs
  • Correctly handle edge cases (empty entries, boundary detection)

1560-1690: LGTM! Comprehensive breaking-change detection with conservative heuristics.

The implementation is well-thought-out:

  • Patterns cover MSRV bumps, API changes, signature modifications, and removals
  • Conservative approach scopes patterns to public-facing APIs to reduce false positives
  • Project-specific tokens are configurable via environment variable
  • Proper handling of markdown escaping for pattern matching
  • Clear comments explain design decisions

1692-1860: LGTM! Smart entry consolidation reduces changelog verbosity.

The consolidation logic is well-designed:

  • Only consolidates groups of 4+ entries from the same commit
  • Preserves entry order and deduplicates subtitles
  • Categorizes into meaningful buckets (API, Behavior, Tests, Other)
  • Respects markdown line length limits
  • Chooses appropriate parent entry (prefers conventional commit prefixes)

1862-2027: LGTM! Well-orchestrated post-processing pipeline.

The post-processing implementation is solid:

_CommitLinkFormatter:

  • Inlines commit links on top-level bullets when they fit
  • Shortens SHAs to 7 characters and simplifies URLs
  • Drops standalone commit link lines
  • Preserves nested bullet structure

_WordingNormalizer:

  • Simple, focused word replacements for readability
  • Uses word boundaries to avoid partial matches

_ReleaseNotesPostProcessor:

  • Clear orchestration of transformations in logical sequence
  • Per-release block processing maintains document structure
  • Removes non-user-facing content (test metrics)
  • Promotes breaking changes to dedicated section
  • Reduces verbosity through consolidation
  • Well-documented with comments explaining each step

The modular design makes the code testable and maintainable.

Exposes public topology traversal API with optional,
opt-in adjacency index for fast repeated topology queries.

Introduces `AdjacencyIndex` for efficient access to
vertex-edge, vertex-cell, and cell-neighbor relationships.
Adds methods to `DelaunayTriangulation` for building and
utilizing the adjacency index, providing iterators and
counts for edges, incident edges, and cell neighbors.

This allows downstream code to perform mesh analysis, FEM
assembly, and graph algorithms without per-call allocations.
- Rename DelaunayTriangulation::{triangulation, triangulation_mut} -> {as_triangulation, as_triangulation_mut}
- Update all in-tree call sites (core, geometry, tests, docs, examples, benches) to use the new names
- Refresh validation/docs snippets to match the new API

BREAKING CHANGE: DelaunayTriangulation::triangulation() is now as_triangulation(); triangulation_mut() is now as_triangulation_mut().
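
Downstream migration is mechanical; a minimal before/after sketch, where `dt` stands in for an existing DelaunayTriangulation value:

// Before this PR:
//     let tri = dt.triangulation();
// After:
let tri = dt.as_triangulation();
tri.validate().expect("triangulation should still validate"); // behavior unchanged; only the accessor name moved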

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
src/core/triangulation_data_structure.rs (1)

2890-2903: Vertex incidence validation integrates cleanly with structural checks

validate_vertex_incidence correctly enforces that any Vertex::incident_cell present:

  • points to an existing cell key, and
  • that cell actually contains the vertex key.

Tds::is_valid() now sequences validation as:

  1. UUID↔key mappings,
  2. cell→vertex key existence (validate_cell_vertex_keys),
  3. vertex incidence (validate_vertex_incidence),
  4. duplicate cells, facet sharing, and neighbor consistency.

validation_report() records:

  • CellVertexKeys violations when cells reference missing vertex keys, and
  • VertexIncidence violations when incident_cell is dangling or inconsistent,

then short‑circuits higher‑level invariants once cell vertex‑key validity is known to be broken. This ordering matches the comments and avoids cascading, misleading secondary errors while still surfacing both key‑reference and incidence issues.

You may consider explicitly documenting in the type‑level docs (e.g., at the Vertex or TDS level) that None incident_cell is allowed structurally but will be tightened at the Triangulation level (no isolated vertices) to make the Level‑2 vs Level‑3 split even more explicit.

Also applies to: 2905-2935, 3050-3052, 3097-3099, 3172-3233

src/core/adjacency.rs (1)

19-40: AdjacencyIndex API and helpers are well‑structured and correct

  • AdjacencyIndexBuildError variants cleanly encode the only two structural failures that can occur during index construction (missing vertex keys or neighbor cell keys), and the Display messages are verified in tests.
  • The internal maps (vertex_to_edges, vertex_to_cells, cell_to_neighbors) and their query helpers (adjacent_cells, incident_edges, cell_neighbors and the corresponding number_of_* methods) correctly:
    • return empty iterators / zero counts for missing keys,
    • avoid allocations by iterating over SmallBuffers in place, and
    • respect immutability (no interior mutability exposed).
  • edges() emitting each edge only when edge.v0() == vk is consistent with EdgeKey’s canonical ordering, so global edge iteration is deduplicated without extra sets.
  • Tests cover:
    • error formatting,
    • auto‑traits (Send + Sync + Unpin),
    • equality of iterator counts vs number_of_* helpers, and
    • behavior for missing keys.

One possible micro‑optimization (non‑blocking) would be to cache the global edge count at build time if number_of_edges() is expected to be called very frequently, but the current edges().count() implementation is perfectly acceptable as a starting point.

Also applies to: 78-88, 90-331, 338-365, 368-428
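
A sketch of that caching option, with field and method names chosen for illustration only (the real struct keeps its existing maps and builder):

pub struct AdjacencyIndex {
    // ... existing vertex_to_edges / vertex_to_cells / cell_to_neighbors maps ...
    edge_count: usize, // populated once while build_adjacency_index() walks the cells
}

impl AdjacencyIndex {
    pub fn number_of_edges(&self) -> usize {
        self.edge_count // O(1) instead of the current edges().count()
    }
}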

src/lib.rs (1)

629-676: Consider future refinement of query module scope.

The new prelude::query module provides a convenient namespace for read-only topology operations. The documentation clearly describes the included types and use cases.

Minor consideration for future refinement: The module mixes topology queries (edges, adjacency), convex hull extraction, geometric predicates, and test/example helpers (generate_random_*). This broad scope is reasonable for a "common patterns" module, but if the API grows, you might consider splitting test/benchmark helpers into a separate prelude::testing or prelude::generators module to keep query focused purely on read-only structural operations.

For now, the design is well-documented and the opt-in nature (alongside the full prelude::*) provides flexibility.

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3b3d356 and b54f7a6.

📒 Files selected for processing (23)
  • README.md
  • benches/large_scale_performance.rs
  • benches/microbenchmarks.rs
  • benches/profiling_suite.rs
  • docs/validation.md
  • examples/convex_hull_3d_20_points.rs
  • examples/memory_analysis.rs
  • examples/triangulation_3d_20_points.rs
  • src/core/adjacency.rs
  • src/core/delaunay_triangulation.rs
  • src/core/edge.rs
  • src/core/triangulation.rs
  • src/core/triangulation_data_structure.rs
  • src/core/util.rs
  • src/geometry/algorithms/convex_hull.rs
  • src/geometry/quality.rs
  • src/lib.rs
  • tests/check_perturbation_stats.rs
  • tests/delaunay_edge_cases.rs
  • tests/proptest_convex_hull.rs
  • tests/proptest_delaunay_triangulation.rs
  • tests/proptest_triangulation.rs
  • tests/public_topology_api.rs
🚧 Files skipped from review as they are similar to previous changes (5)
  • tests/public_topology_api.rs
  • benches/microbenchmarks.rs
  • README.md
  • src/core/edge.rs
  • examples/memory_analysis.rs
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • tests/proptest_delaunay_triangulation.rs
  • examples/convex_hull_3d_20_points.rs
  • benches/large_scale_performance.rs
  • tests/check_perturbation_stats.rs
  • docs/validation.md
  • tests/proptest_convex_hull.rs
  • tests/proptest_triangulation.rs
  • src/core/triangulation_data_structure.rs
  • src/core/util.rs
  • benches/profiling_suite.rs
  • src/geometry/quality.rs
  • tests/delaunay_edge_cases.rs
  • examples/triangulation_3d_20_points.rs
  • src/lib.rs
  • src/core/delaunay_triangulation.rs
  • src/core/triangulation.rs
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • tests/proptest_delaunay_triangulation.rs
  • examples/convex_hull_3d_20_points.rs
  • benches/large_scale_performance.rs
  • docs/validation.md
  • tests/proptest_convex_hull.rs
  • tests/proptest_triangulation.rs
  • src/core/triangulation_data_structure.rs
  • src/core/util.rs
  • benches/profiling_suite.rs
  • src/geometry/quality.rs
  • tests/delaunay_edge_cases.rs
  • examples/triangulation_3d_20_points.rs
  • src/lib.rs
  • src/core/delaunay_triangulation.rs
  • src/core/triangulation.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.

Applied to files:

  • tests/proptest_delaunay_triangulation.rs
  • examples/convex_hull_3d_20_points.rs
  • benches/large_scale_performance.rs
  • tests/check_perturbation_stats.rs
  • docs/validation.md
  • tests/proptest_convex_hull.rs
  • tests/proptest_triangulation.rs
  • src/core/util.rs
  • benches/profiling_suite.rs
  • src/geometry/quality.rs
  • tests/delaunay_edge_cases.rs
  • examples/triangulation_3d_20_points.rs
  • src/lib.rs
  • src/core/delaunay_triangulation.rs
  • src/core/triangulation.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • examples/triangulation_3d_20_points.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • examples/triangulation_3d_20_points.rs
📚 Learning: 2025-08-30T00:51:50.849Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:481-512
Timestamp: 2025-08-30T00:51:50.849Z
Learning: The DOI 10.5281/zenodo.16931097 in the delaunay project documentation is valid and correctly resolves to acgetchell's Rust delaunay triangulation library on Zenodo.

Applied to files:

  • src/lib.rs
🧬 Code graph analysis (12)
tests/proptest_delaunay_triangulation.rs (2)
src/core/delaunay_triangulation.rs (2)
  • as_triangulation (659-661)
  • validate (1448-1454)
src/core/triangulation.rs (1)
  • validate (1545-1548)
tests/proptest_convex_hull.rs (3)
src/geometry/algorithms/convex_hull.rs (2)
  • from_triangulation (683-729)
  • is_valid_for_triangulation (523-529)
src/core/delaunay_triangulation.rs (1)
  • as_triangulation (659-661)
src/core/util.rs (1)
  • extract_hull_facet_set (1059-1081)
tests/proptest_triangulation.rs (1)
src/core/delaunay_triangulation.rs (2)
  • as_triangulation (659-661)
  • as_triangulation_mut (720-722)
src/core/triangulation_data_structure.rs (3)
src/core/triangulation.rs (2)
  • is_valid (1481-1512)
  • cells (562-564)
src/core/cell.rs (3)
  • is_valid (1039-1079)
  • uuid (765-767)
  • cell (3585-3587)
src/core/vertex.rs (2)
  • is_valid (585-603)
  • uuid (502-504)
src/core/adjacency.rs (4)
src/core/triangulation.rs (4)
  • adjacent_cells (907-909)
  • cells (562-564)
  • incident_edges (1114-1116)
  • edges (785-787)
src/core/delaunay_triangulation.rs (4)
  • cells (556-558)
  • incident_edges (962-964)
  • edges (901-903)
  • new (168-172)
src/core/triangulation_data_structure.rs (2)
  • cells (864-866)
  • cells (3703-3712)
src/core/edge.rs (3)
  • new (55-66)
  • v0 (94-96)
  • v1 (124-126)
src/core/util.rs (1)
src/core/delaunay_triangulation.rs (1)
  • tds (617-619)
benches/profiling_suite.rs (1)
src/geometry/algorithms/convex_hull.rs (1)
  • from_triangulation (683-729)
src/geometry/quality.rs (1)
src/core/delaunay_triangulation.rs (1)
  • as_triangulation (659-661)
examples/triangulation_3d_20_points.rs (1)
src/core/edge.rs (1)
  • v0 (94-96)
src/lib.rs (2)
src/geometry/algorithms/convex_hull.rs (1)
  • from_triangulation (683-729)
src/core/edge.rs (1)
  • new (55-66)
src/core/delaunay_triangulation.rs (3)
src/core/triangulation.rs (11)
  • build_adjacency_index (1312-1400)
  • edges (785-787)
  • edges_with_index (812-818)
  • incident_edges (1114-1116)
  • incident_edges_with_index (1139-1146)
  • cell_neighbors (1013-1019)
  • cell_neighbors_with_index (1045-1052)
  • cell_vertices (1228-1230)
  • vertex_coords (1258-1262)
  • vertices (589-591)
  • default (344-346)
src/core/adjacency.rs (3)
  • edges (301-306)
  • incident_edges (184-189)
  • cell_neighbors (242-247)
src/core/edge.rs (2)
  • new (55-66)
  • v0 (94-96)
src/core/triangulation.rs (4)
src/lib.rs (2)
  • fast_hash_map_with_capacity (742-742)
  • fast_hash_set_with_capacity (745-745)
src/topology/characteristics/validation.rs (2)
  • validate_triangulation_euler_with_facet_to_cells_map (157-198)
  • validate_triangulation_euler (133-155)
src/core/delaunay_triangulation.rs (8)
  • tds (617-619)
  • edges (901-903)
  • incident_edges (962-964)
  • build_adjacency_index (875-877)
  • new (168-172)
  • as_triangulation (659-661)
  • validate (1448-1454)
  • cells (556-558)
src/core/adjacency.rs (5)
  • edges (301-306)
  • number_of_edges (328-330)
  • adjacent_cells (123-128)
  • incident_edges (184-189)
  • number_of_incident_edges (213-215)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: build (windows-latest)
  • GitHub Check: build (macos-latest)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: Code Coverage
  • GitHub Check: Codacy Security Scan
  • GitHub Check: performance-regression
🔇 Additional comments (45)
src/geometry/quality.rs (1)

219-219: LGTM! Consistent mechanical refactoring across documentation and tests.

All call sites have been systematically updated from dt.triangulation() to dt.as_triangulation() throughout the documentation examples and test suite. The changes are purely mechanical, preserve all functionality, and align with the breaking API change documented in the PR objectives.

Also applies to: 321-321, 435-435, 443-443, 463-464, 467-468, 487-488, 491-492, 592-592, 604-604, 668-669, 671-672, 697-697, 702-702, 730-731, 755-755, 773-774, 801-801, 805-805, 823-823, 904-904, 907-907, 974-980, 1009-1009, 1021-1021, 1050-1051, 1077-1077, 1093-1093, 1173-1173, 1191-1191, 1213-1213, 1250-1250, 1298-1298

examples/triangulation_3d_20_points.rs (4)

29-29: LGTM: Import scope narrowed to query module.

Narrowing the import from delaunay::prelude::* to delaunay::prelude::query::* reduces namespace pollution and makes dependencies more explicit. This is a good practice.


105-105: LGTM: CoordinateScalar trait bounds correctly added.

The CoordinateScalar trait bounds are necessary for the new topology traversal API methods (number_of_edges_with_index, number_of_incident_edges_with_index). The bounds are correctly placed on both analysis and validation functions.

Also applies to: 184-184


115-128: LGTM: Clear demonstration of the new topology traversal API.

This code effectively demonstrates the opt-in AdjacencyIndex pattern for efficient repeated adjacency queries. The example shows:

  • Building the index via build_adjacency_index()
  • Using it for edge counting and degree queries
  • Avoiding per-call allocations

This is a helpful addition to the example.
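
For reference, a minimal sketch of the opt-in pattern (the generic parameters, argument order, and iterator item type below are assumptions; only the method names come from this PR):

use delaunay::prelude::query::*;

// Sketch only: the exact signatures of the *_with_index methods are assumptions.
fn summarize_adjacency(
    dt: &DelaunayTriangulation<f64, Option<()>, Option<()>, 3>,
    vertex: VertexKey,
) {
    // Build the index once, then reuse it for repeated queries to avoid
    // per-call allocations.
    let index = dt
        .build_adjacency_index()
        .expect("adjacency index build should succeed for a consistent triangulation");

    println!("edges: {}", dt.number_of_edges_with_index(&index));
    println!("degree: {}", dt.number_of_incident_edges_with_index(&index, vertex));

    // Canonical EdgeKeys expose their endpoints via v0()/v1().
    for edge in dt.edges_with_index(&index) {
        let _endpoints = (edge.v0(), edge.v1());
    }
}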


193-193: LGTM: API migration to as_triangulation() is correct.

The update from dt.triangulation().validate() to dt.as_triangulation().validate() correctly aligns with the breaking API change documented in the PR objectives.

benches/large_scale_performance.rs (1)

249-249: LGTM: API migration to as_triangulation() is correct.

The update from dt.triangulation() to dt.as_triangulation() in the validation benchmark correctly aligns with the breaking API change. The rest of the benchmark logic remains unchanged.

benches/profiling_suite.rs (2)

64-64: LGTM: Import scope narrowed to query module.

Consistent with the example file changes, narrowing the import to delaunay::prelude::query::* reduces namespace pollution and improves code clarity.


789-789: LGTM: API migration in ConvexHull construction is correct.

The update from dt.triangulation() to dt.as_triangulation() in the ConvexHull::from_triangulation call correctly aligns with the breaking API change.

tests/proptest_delaunay_triangulation.rs (1)

311-311: LGTM: API migrations in property tests are correct.

All validation call sites are correctly updated from triangulation().validate() to as_triangulation().validate(), including:

  • The prop_assert_levels_1_to_3_valid! macro (line 311)
  • Four test validation sites (lines 754, 792, 865, 873)

The updates are consistent and maintain the existing test logic.

Also applies to: 754-754, 792-792, 865-865, 873-873

tests/check_perturbation_stats.rs (1)

25-25: LGTM: Both accessor variants correctly updated.

The updates demonstrate correct usage of both the mutable and immutable accessor variants:

  • dt.triangulation_mut()dt.as_triangulation_mut() (line 25)
  • dt.triangulation()dt.as_triangulation() (line 73)

The API migration is complete and correct.

Also applies to: 73-73

tests/delaunay_edge_cases.rs (1)

224-224: LGTM! Clean API migration to as_triangulation().

The method rename from triangulation() to as_triangulation() is applied consistently in validation contexts. The changes preserve the same semantics and correctly obtain a triangulation reference for validation.

Also applies to: 261-261

tests/proptest_convex_hull.rs (1)

49-49: LGTM! Comprehensive and consistent API migration.

All convex hull property tests (dimensions 2–5) have been updated to use as_triangulation() instead of triangulation(). The changes are applied consistently across:

  • ConvexHull::from_triangulation() calls
  • hull.is_valid_for_triangulation() checks
  • extract_hull_facet_set() calls

The test semantics and coverage remain unchanged—this is a clean accessor rename.

Also applies to: 60-60, 84-84, 135-135, 138-138, 152-152, 158-158, 167-167, 186-186, 236-236, 258-259, 270-271, 278-283

src/core/util.rs (1)

1054-1056: LGTM! Documentation examples updated correctly.

The doc examples for extract_hull_facet_set have been updated to use the new as_triangulation() accessor, keeping documentation in sync with the API changes.

examples/convex_hull_3d_20_points.rs (2)

31-31: LGTM! Import scope appropriately narrowed.

The import has been narrowed from delaunay::prelude::* to delaunay::prelude::query::*, which aligns with this example's focus on read-only topology queries and convex hull operations.


135-135: LGTM! Consistent API migration across the example.

All convex hull operations have been updated to use as_triangulation():

  • Hull construction (ConvexHull::from_triangulation)
  • Hull validation (hull.validate)
  • Point containment tests (hull.is_point_outside)
  • Visible facet queries (hull.find_visible_facets, hull.find_nearest_visible_facet)

The example functionality is preserved while using the new API surface.

Also applies to: 156-156, 206-206, 276-276, 304-304, 328-328, 365-365, 401-401, 418-418, 424-424, 442-442
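
A small sketch of the updated call shape (the generic parameters are assumptions; only from_triangulation and the as_triangulation() rename are taken from this PR):

use delaunay::prelude::query::*;

fn build_hull(dt: &DelaunayTriangulation<f64, Option<()>, Option<()>, 3>) {
    // Hull construction now borrows the triangulation view via as_triangulation().
    let _hull = ConvexHull::from_triangulation(dt.as_triangulation());
    // The follow-up read-only queries listed above (validate, is_point_outside,
    // find_visible_facets, find_nearest_visible_facet) receive dt.as_triangulation()
    // the same way.
}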

docs/validation.md (1)

284-284: LGTM! Documentation updated consistently.

All validation examples and patterns have been updated to use as_triangulation(), maintaining consistency with the API changes. The validation level concepts and usage patterns remain clear and accurate.

Also applies to: 367-367, 422-422, 439-439, 452-452

tests/proptest_triangulation.rs (1)

221-235: Accessor rename usage looks correct and preserves semantics

The replacements of dt.triangulation() / dt.triangulation_mut() with dt.as_triangulation() / dt.as_triangulation_mut() are all in read‑only vs mutable contexts that match the new API, and there’s no change to the surrounding test logic or invariants. This keeps the property tests aligned with the public API rename without altering behavior.

Also applies to: 327-357, 395-407, 463-491, 507-539, 627-642, 653-675, 687-697

src/core/triangulation_data_structure.rs (2)

61-71: New invariant kinds and docs are consistent with implementation

The added documentation for “Cell Vertex Keys” and “Vertex Incidence” invariants matches the corresponding InvariantKind variants and the validation helpers wired into Tds::is_valid() / validation_report(). This keeps the validation contract clear and aligned with the actual checks.

Also applies to: 81-92, 470-497


4010-4049: Tests correctly exercise new invariants via as_triangulation().tds

The updated tests now:

  • Access the underlying Tds through dt.as_triangulation().tds instead of the old triangulation() accessor, in both validity checks and helper calls.
  • Explicitly assert tds.is_valid().is_ok() after vertex insertions/removals and after remove_cells_by_keys, ensuring the new CellVertexKeys and VertexIncidence invariants are preserved by these mutating operations.
  • Add direct tests for validate_cell_vertex_keys() and for the early failure path in Tds::is_valid() when cells reference non‑existent vertex keys.

These changes keep the test suite aligned with the new API surface while giving good coverage of the strengthened structural invariants.

Also applies to: 4089-4097, 4180-4220, 4245-4308, 4518-4520, 5287-5295

src/lib.rs (6)

66-66: LGTM: Consistent API rename throughout examples.

The triangulation()as_triangulation() rename is consistently applied across all documentation examples. The as_ prefix better follows Rust conventions for zero-cost reference conversions (similar to as_ref(), as_bytes(), etc.).

Also applies to: 74-75, 78-78, 91-91, 143-144, 192-192


434-434: LGTM: Clean module organization for new topology API.

The new adjacency and edge modules are properly declared and re-exported following the existing pattern used for other core modules. This provides a clear public API surface for topology traversal functionality.

Also applies to: 441-441, 460-460, 463-463


596-606: LGTM: Prelude re-exports follow established pattern.

The adjacency::* and edge::* re-exports are consistently integrated into the existing prelude structure alongside other core types.


720-721: LGTM: Type safety checks for new public types.

The EdgeKey and AdjacencyIndex types are properly added to the normal_types test, ensuring they implement the required auto traits (Send + Sync + Unpin).


771-774: LGTM: Test updated for API rename.

Test correctly uses the new as_triangulation() method.


852-852: LGTM: All test call sites updated consistently.

All remaining test call sites properly use as_triangulation() following the API rename.

Also applies to: 940-940, 946-947, 952-953

src/core/delaunay_triangulation.rs (14)

12-12: LGTM: Necessary imports for new topology API.

The AdjacencyIndex, AdjacencyIndexBuildError, and EdgeKey imports are required for the forwarding methods added later in the file.

Also applies to: 17-17


659-661: LGTM: API rename follows Rust conventions.

The as_triangulation() rename improves API consistency with Rust's idiomatic as_* prefix for zero-cost reference conversions. The implementation remains const and unchanged.


720-722: LGTM: Mutable accessor renamed consistently.

The as_triangulation_mut() rename is consistent with the immutable variant and maintains the comprehensive safety warnings in the documentation.


846-877: LGTM: Clean forwarding to underlying triangulation.

The build_adjacency_index() method properly delegates to the Triangulation layer with clear documentation and examples.


879-903: LGTM: Edge iteration forwarding with correct example.

The edges() forwarding is clean and the documentation example correctly shows 6 edges for a tetrahedron.


905-935: LGTM: Index-based edge iteration with proper lifetimes.

The lifetime parameter 'a is correctly propagated from the AdjacencyIndex reference to the returned iterator.


937-964: LGTM: Incident edge iteration with graceful missing-key handling.

The method correctly delegates and documents the empty iterator behavior for missing vertex keys. The tetrahedron example (degree 3) is mathematically correct.


966-998: LGTM: Index-based incident edges with consistent design.

This method maintains consistency with the edges_with_index() pattern and correctly handles lifetimes.


1000-1027: LGTM: Cell neighbor iteration with correct boundary example.

The single tetrahedron example correctly shows 0 neighbors (all facets are on the boundary).


1029-1064: LGTM: Index-based cell neighbors with realistic example.

The two-tetrahedra example (sharing a triangular facet) correctly demonstrates the neighbor relationship.


1066-1095: LGTM: Zero-allocation accessors with proper trait bounds.

Both cell_vertices() and vertex_coords() provide efficient slice views with appropriate CoordinateScalar bounds and graceful None returns for missing keys.

Also applies to: 1097-1130
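
A sketch of how the two accessors compose (return types and generic parameters are assumptions; the None-on-missing-key behavior is as described above):

use delaunay::prelude::query::*;

fn inspect_cell(
    dt: &DelaunayTriangulation<f64, Option<()>, Option<()>, 3>,
    cell: CellKey,
) {
    // Both accessors return None for unknown keys instead of panicking.
    if let Some(vertex_keys) = dt.cell_vertices(cell) {
        println!("cell has {} vertices", vertex_keys.len());
        if let Some(&first) = vertex_keys.first() {
            if let Some(coords) = dt.vertex_coords(first) {
                println!("first vertex coordinates: {coords:?}");
            }
        }
    }
}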


2547-2574: LGTM: Validation report test for vertex incidence.

This test properly verifies that VertexIncidence violations are detected and reported. The intentional corruption of incident_cell is an appropriate way to test this error path.


2625-2691: LGTM: Comprehensive topology traversal forwarding tests.

Excellent test coverage comparing DelaunayTriangulation forwarding methods against direct Triangulation calls. The test verifies:

  • All edge iteration variants (with/without index)
  • Incident edge queries (with/without index)
  • Cell neighbor queries (with/without index)
  • Zero-allocation accessors (cell_vertices, vertex_coords)
  • Missing key behavior

This ensures the forwarding layer maintains perfect consistency with the underlying implementation.


1329-1329: LGTM: Example updated for API rename.

Consistent use of as_triangulation() in the documentation example.

src/core/triangulation.rs (6)

734-1441: Excellent public topology API design!

The new read-only topology traversal API is well-designed and thoroughly documented:

  • Consistent pattern: Each query has a base method (e.g., edges()), an index-accelerated variant (edges_with_index()), and a count method (number_of_edges()).
  • Clear performance characteristics: Documentation explains allocation behavior and recommends building an AdjacencyIndex for repeated queries.
  • Defensive programming: debug_assert_adjacency_index_matches() catches index mismatches in debug builds; missing keys return empty iterators/None instead of panicking.
  • Comprehensive examples: Every method includes a compilable doctest.
  • Proper validation: build_adjacency_index() validates that all referenced keys exist and returns descriptive errors.

The implementation is efficient (e.g., zero-allocation slice accessors at lines 1228-1262, correct edge-count formula at line 1328) and the test coverage is thorough (lines 4540-4786).


1481-1511: Good optimization: facet map is built once and reused.

Building the facet_to_cells_map once at line 1485 and passing it to both validate_manifold_facets_with_map() (line 1486) and validate_triangulation_euler_with_facet_to_cells_map() (line 1499) avoids redundant O(N·D) scans. This is a clean performance improvement.

The ordering of checks is also logical:

  1. Manifold facets
  2. Connectedness
  3. No isolated vertices (new)
  4. Euler characteristic

1757-1788: Isolated vertex check correctly enforces Level 3 manifold invariant.

The new validate_no_isolated_vertices() method correctly rejects vertices that are not incident to any cell. This prevents the triangulation from representing a non-manifold topology where vertices exist but are not part of the simplicial complex.

Key design decisions:

  • Bootstrap phase rejection is correct: A triangulation with vertices but no cells (bootstrap phase) is not a valid manifold and correctly fails Level 3 validation (verified by test at line 3678).
  • Complexity is acceptable: O(N_cells·D + N_vertices) is appropriate for Level 3 validation, which is already expensive.
  • Error message is clear: Lines 1778-1781 include both the vertex UUID and key for debugging.

This complements the existing connectedness check (line 1492) and strengthens the manifold-with-boundary guarantee.
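
A schematic version of the check, written over plain indices instead of the crate's VertexKey/CellKey types (the real validate_no_isolated_vertices() walks the TDS storage directly):

use std::collections::HashSet;

// Collect every vertex referenced by a cell, then flag vertices that never
// appear: O(N_cells · D + N_vertices), matching the complexity noted above.
fn find_isolated_vertices(vertex_ids: &[usize], cells: &[Vec<usize>]) -> Vec<usize> {
    let used: HashSet<usize> = cells.iter().flatten().copied().collect();
    vertex_ids
        .iter()
        .copied()
        .filter(|v| !used.contains(v))
        .collect()
}

fn main() {
    // A single triangle plus a dangling vertex (id 3): only vertex 3 is isolated.
    let vertices = [0, 1, 2, 3];
    let cells = vec![vec![0, 1, 2]];
    assert_eq!(find_isolated_vertices(&vertices, &cells), vec![3]);
}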


1625-1629: Good refactoring: test-only method avoids duplicate work.

Marking validate_manifold_facets() as #[cfg(test)] ensures production code uses the optimized validate_manifold_facets_with_map() path that accepts a precomputed facet map. This prevents accidental double-builds of the O(N·D) facet map.


4540-4786: Comprehensive test coverage for the new topology API.

The test suite thoroughly exercises the new public API:

  • Dimensional coverage: 2D triangle (line 4541) and 3D double-tetrahedron (line 4572) configurations.
  • Edge cases: Missing keys (line 4653), empty triangulation (line 4777).
  • Index invariants: Consistency checks for cell neighbors, vertex-edge incidence (line 4733).
  • Accessor round-trips: Verifies cell_vertices() and vertex_coords() correctness (line 4709).

All assertions verify expected counts and relationships (e.g., line 4554: triangle has 3 edges, line 4592: double-tetrahedron has 9 edges). Excellent coverage.


3678-3733: Isolated vertex tests correctly verify Level 3 manifold enforcement.

Two tests confirm the new validation check works correctly:

  1. Bootstrap phase (line 3678): Vertices without cells are correctly rejected with an "Isolated vertex detected" error.
  2. Manual insertion (line 3702): A vertex inserted directly into the TDS without incident cells is correctly caught during validation.

Both tests verify the error message content, ensuring users receive actionable diagnostics.

Updates documentation to clarify how isolated vertices are handled across
triangulation data structure (TDS) validation levels.

Also enhances API documentation for the public topology API and revises
descriptions for clarity.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
src/core/triangulation_data_structure.rs (1)

485-499: Public InvariantKind growth is a breaking surface; consider #[non_exhaustive]

Adding CellVertexKeys and VertexIncidence is reasonable, but because InvariantKind is public and not marked #[non_exhaustive], any downstream exhaustive match will break on upgrade. If you expect this enum to keep evolving, consider adding #[non_exhaustive] (and documenting it) to reduce future breakage.
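
For illustration, the shape of that change (variant names mirror the review; this is not the crate's actual definition):

#[non_exhaustive]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum InvariantKind {
    CellVertexKeys,
    VertexIncidence,
}

// In a downstream crate, matches on a #[non_exhaustive] enum must include a
// wildcard arm, so adding variants later is no longer a breaking change:
//
//     match kind {
//         InvariantKind::CellVertexKeys => { /* ... */ }
//         InvariantKind::VertexIncidence => { /* ... */ }
//         _ => { /* future variants */ }
//     }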

src/lib.rs (1)

629-681: prelude::query is a well‑scoped, read‑only convenience surface

The new prelude::query module cleanly packages:

  • Read‑only traversal types/APIs (DelaunayTriangulation, Triangulation, EdgeKey, AdjacencyIndex, CellKey, VertexKey, FacetView),
  • Lightweight predicates/convex‑hull extraction, and
  • Common generators/instrumentation/macros,

so consumers can use delaunay::prelude::query::*; without pulling in the full prelude. The selection is tight and focused on read‑only workflows as advertised; this should make docs/examples easier to mirror in downstream code.
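
A tiny downstream sketch of what that looks like in practice (the generic parameters and the assumption that edges() returns an iterator are mine, not confirmed by this PR):

use delaunay::prelude::query::*;

// A helper that only needs the read-only query surface.
fn edge_count(dt: &DelaunayTriangulation<f64, Option<()>, Option<()>, 3>) -> usize {
    dt.edges().count()
}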

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b54f7a6 and e8d7018.

📒 Files selected for processing (2)
  • src/core/triangulation_data_structure.rs
  • src/lib.rs
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • src/core/triangulation_data_structure.rs
  • src/lib.rs
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/triangulation_data_structure.rs
  • src/lib.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.

Applied to files:

  • src/lib.rs
📚 Learning: 2025-08-30T00:51:50.849Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:481-512
Timestamp: 2025-08-30T00:51:50.849Z
Learning: The DOI 10.5281/zenodo.16931097 in the delaunay project documentation is valid and correctly resolves to acgetchell's Rust delaunay triangulation library on Zenodo.

Applied to files:

  • src/lib.rs
🧬 Code graph analysis (1)
src/lib.rs (1)
src/core/edge.rs (1)
  • new (55-66)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: build (macos-latest)
  • GitHub Check: build (windows-latest)
  • GitHub Check: Code Coverage
  • GitHub Check: Codacy Security Scan
  • GitHub Check: performance-regression
🔇 Additional comments (21)
src/core/triangulation_data_structure.rs (12)

61-96: Docs for new Level‑2/3 invariants match implementation

The additions for “Cell Vertex Keys” and “Vertex Incidence” in the tables and hierarchy bullets accurately describe what validate_cell_vertex_keys() and validate_vertex_incidence() enforce, including the allowance of isolated vertices at Level 2 and rejection at Level 3. No mismatches spotted between docs and code.


2887-2905: validate_cell_vertex_keys is correct and defensive

This pass cleanly detects cells that reference non‑existent VertexKeys and reports a precise InconsistentDataStructure error with cell UUID, key and position. It also complements the earlier insertion‑time checks in insert_cell_with_mapping and get_cell_vertices(). The duplicated logic is acceptable here for clarity and the more specific diagnostics.


2907-2941: Vertex incidence validation matches Level‑2 semantics

validate_vertex_incidence() correctly enforces that any incident_cell pointer:

  • refers to an existing cell, and
  • that cell actually contains the vertex key.

It intentionally allows incident_cell == None even when the vertex is used by cells (Level‑2 allows isolated or unseeded vertices), which aligns with the surrounding docs and with incremental/repair paths that may not seed every vertex. Complexity is proportional to #vertices × D, which is fine for validation‑only code.
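
A schematic of the rule over plain integer ids (the real check operates on the crate's VertexKey/CellKey storage):

use std::collections::HashMap;

// When incident_cell is set it must name an existing cell that actually
// contains the vertex; None is allowed at this validation level.
fn incidence_ok(vertex: u32, incident_cell: Option<u32>, cells: &HashMap<u32, Vec<u32>>) -> bool {
    match incident_cell {
        None => true, // isolated/unseeded vertices pass Level-2 validation
        Some(c) => cells.get(&c).is_some_and(|vs| vs.contains(&vertex)),
    }
}

fn main() {
    let cells = HashMap::from([(0, vec![1, 2, 3])]);
    assert!(incidence_ok(1, Some(0), &cells));  // cell 0 exists and contains vertex 1
    assert!(!incidence_ok(9, Some(0), &cells)); // cell 0 does not contain vertex 9
    assert!(!incidence_ok(1, Some(7), &cells)); // cell 7 does not exist
    assert!(incidence_ok(9, None, &cells));     // no incident cell recorded: allowed here
}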


3053-3110: is_valid() ordering of new invariants is sound

The structural validation now runs in a good dependency order:

  1. Vertex/Cell UUID↔Key mappings
  2. Cell→vertex key validity (validate_cell_vertex_keys)
  3. Vertex incidence consistency (validate_vertex_incidence)
  4. Higher‑level invariants (duplicates, facets, neighbors)

This ensures that derived checks only run after basic key/mapping integrity, avoiding confusing secondary errors. No issues found with the added calls.


3178-3240: validation_report() short‑circuit behavior for new invariants is consistent

The report now:

  1. Fails fast on mapping inconsistencies (returns mapping violations only).
  2. Independently records CellVertexKeys and VertexIncidence violations.
  3. If cell→vertex keys are bad, skips duplicate/facet/neighbor checks that assume key validity.

Running validate_vertex_incidence() even when key references fail is safe because it only depends on vertices and cells, not cell→vertex links, and matches the doc comment about surfacing both key and incidence problems before short‑circuiting.


4016-4055: as_triangulation() use in vertex‑addition test keeps the new invariants exercised

Switching the validation in test_add_vertex_comprehensive to:

assert!(dt.as_triangulation().tds.is_valid().is_ok());

ensures the TDS‑level checks (including the new vertex‑incidence and cell‑vertex‑key invariants) are exercised after incremental insertion. The test still validates both counts and structural soundness; no issues here.


4070-4134: Vertex‑removal topology test meaningfully stresses new invariants

test_remove_vertex_maintains_topology_consistency now:

  • Verifies no dangling neighbor keys (using cells.contains_key),
  • Uses as_triangulation().tds instead of older accessors, and
  • Ends with tds.is_valid().is_ok(), which includes the new VertexIncidence and CellVertexKeys checks.

This is a solid regression test for the local‑repair logic in remove_vertex/remove_cells_by_keys; it should catch most regressions in the new invariants.


4246-4316: test_remove_vertex_no_dangling_references thoroughly covers incidence + cell membership

The test now asserts:

  • No remaining cell contains the removed vertex key,
  • The removed vertex is absent from both storage and UUID→key maps, and
  • All remaining vertices with incident_cell set point to an existing cell that actually contains them, followed by tds.is_valid().is_ok().

This directly exercises validate_vertex_incidence() and validate_cell_vertex_keys() on a non‑trivial mutation path; coverage looks excellent.


4483-4526: remove_cells_by_keys test correctly encodes Level‑2 stance on isolated vertices

After removing one of two adjacent triangles, the test now asserts tds.is_valid().is_ok() and comments that isolated vertices are allowed at the TDS structural level. This matches the Level‑2 semantics documented earlier and confirms that the incremental incidence repair is compatible with the new invariants.


5191-5225: Facet‑index overflow test is a good corruption guardrail

test_build_facet_to_cells_map_errors_on_u8_facet_index_overflow intentionally corrupts a cell’s vertex buffer to force facet indices beyond u8::MAX and verifies that build_facet_to_cells_map() fails with InconsistentDataStructure. This is an appropriate defensive check for malformed TDS state (e.g., unsafe internal mutation or bad deserialization) and aligns with the usize_to_u8 contract.


5228-5272: Mapping‑consistency tests robustly cover both directions of the maps

test_validate_vertex_and_cell_mappings_detect_inconsistencies uses:

  • Length mismatches, and
  • Wrong UUID→Key entries

to trigger MappingInconsistency for both vertices and cells. This directly exercises the optimized vertex/cell mapping validators and their error classification, giving good confidence in the new Level‑2 mapping invariants.


5275-5301: Cell‑vertex‑key tests confirm is_valid() now fails early on stale keys

test_validate_cell_vertex_keys_detects_missing_vertices first checks the direct helper, then asserts that tds.is_valid() fails with an InconsistentDataStructure message about a non‑existent vertex key. This ensures the new key‑sanity invariant is wired into the main structural validator as intended.

src/lib.rs (9)

1-96: Top‑level docs align with new API (including as_triangulation)

The crate‑level documentation now consistently uses dt.as_triangulation() in examples (e.g., convex hull extraction and 4D hull) and describes the validation hierarchy in terms of Tds/Triangulation/DelaunayTriangulation exactly as implemented. The Grokipedia simplicial‑complex link is retained; given your earlier clarification that this is intentional, I see no action needed here. Based on learnings, this respects your chosen reference source.


140-147: Validation‑hierarchy section correctly reflects as_triangulation plumbing

The bullets now point users to:

  • dt.tds().is_valid()/validate() for Level 2,
  • dt.as_triangulation().is_valid()/validate() for Level 3, and
  • dt.is_valid()/validate() for Level 4,

which matches the underlying API structure after the accessor rename. This is important for discoverability and looks consistent.
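
Putting the three entry points side by side (the generic parameters below are assumptions; the method names follow the documented hierarchy):

use delaunay::prelude::query::*;

fn check_levels_2_to_4(dt: &DelaunayTriangulation<f64, Option<()>, Option<()>, 3>) {
    assert!(dt.tds().is_valid().is_ok(), "Level 2: TDS structural invariants");
    assert!(dt.as_triangulation().is_valid().is_ok(), "Level 3: topology invariants");
    assert!(dt.is_valid().is_ok(), "Level 4: full Delaunay validation");
}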


434-469: New core::adjacency and core::edge modules and re‑exports are coherent

Adding pub mod adjacency; and pub mod edge; inside core, then re‑exporting them (pub use adjacency::*;, pub use edge::*;) makes EdgeKey, AdjacencyIndex, etc. first‑class public APIs alongside existing core types. This matches the PR’s topology‑traversal goals and keeps collections intentionally out of the re‑exports to avoid namespace noise. No structural issues spotted.


595-627: Prelude: adding adjacency::* and edge::* keeps the “kitchen‑sink” surface current

Including adjacency::* and edge::* in prelude matches the existing pattern of exposing the main core types (cells, vertices, triangulations) via a single import. For users already comfortable with delaunay::prelude::*, this makes the new topology and edge APIs immediately available without additional imports. Looks consistent with the crate’s ergonomics.


591-607: Prelude exports stay internally consistent with core and geometry modules

The prelude’s combination of:

  • Core types/traits (cell, delaunay_triangulation, triangulation, Tds, boundary_analysis, DataType, etc.),
  • Core algorithms (locate, incremental insertion), and
  • Geometry/kernel/predicate utilities,

still composes cleanly with the new adjacency/edge additions. The collections re‑exports remain opt‑in but handy for advanced users. No conflicts or missing pieces noticed.


700-726: is_normal type‑safety tests now cover EdgeKey and AdjacencyIndex

Extending normal_types to assert is_normal::<EdgeKey>() and is_normal::<AdjacencyIndex>() ensures both new core topology types satisfy the expected auto traits (Sized + Send + Sync + Unpin). This is a good smoke test for their internal representation and future changes.
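
The usual shape of such a smoke test, sketched for reference (the crate's actual helper may differ):

// Compile-time auto-trait check: the call only compiles if T is
// Sized + Send + Sync + Unpin.
fn is_normal<T: Sized + Send + Sync + Unpin>() -> bool {
    true
}

fn main() {
    use delaunay::prelude::query::{AdjacencyIndex, EdgeKey};
    assert!(is_normal::<EdgeKey>());
    assert!(is_normal::<AdjacencyIndex>());
}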


759-780: Quality‑metric tests updated to as_triangulation() keep signatures consistent

radius_ratio and normalized_volume are now called as:

let ratio = radius_ratio(dt.as_triangulation(), cell_key).unwrap();
let norm_vol = normalized_volume(dt.as_triangulation(), cell_key).unwrap();

which matches the read‑only &Triangulation API and aligns with the rest of the crate/docs using as_triangulation(). The assertions remain the same; the change is purely API‑surface alignment.


855-861: Prelude core‑types test correctly exercises as_triangulation and TDS access

In test_prelude_core_types, switching to:

let tri = dt.as_triangulation();
let tds = &tri.tds;

is consistent with the new accessor naming and keeps the checks on number_of_vertices/number_of_cells and get_cell intact. Accessing tri.tds directly from inside the crate remains appropriate for this internal test.


931-959: Convex‑hull prelude test now mirrors the convex‑hull docs

Using ConvexHull::from_triangulation(dt.as_triangulation()) and passing dt.as_triangulation() into is_point_outside aligns the test with the updated crate‑level convex‑hull example. This ensures the read‑only hull API is exercised through the same pattern users see in documentation.

@acgetchell acgetchell merged commit 3748ebb into main Jan 3, 2026
13 checks passed
@acgetchell acgetchell deleted the feat/public-topology-api branch January 3, 2026 07:16

Labels

  • documentation: Improvements or additions to documentation
  • geometry: Geometry-related issues
  • rust: Pull requests that update rust code
