docs: add multiple languages (Rust/JS/Python) to book #405
Conversation
Starting with the installation page. This PR uses mdbook-langtabs to turn the code blocks into tabs, one per language. The code examples are tested against the latest version of the libraries in CI.
Walkthrough

Adds an mdBook language-tabs UI (CSS/JS) and config, converts many docs to multi-language langtabs, adds a Python test runner to extract and execute Rust/JS/Python examples, extends Justfile targets, and adds a GitHub Actions workflow to run and verify documentation examples.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Developer
    participant GH as GitHub Actions
    participant Runner as CI Runner
    participant Script as test_examples.py
    participant Rust as Cargo/Rust
    participant Node as Node/npm
    participant Py as Python
    participant Artifacts as Artifact Storage
    Developer->>GH: push / PR triggers workflow
    GH->>Runner: start job (test-examples)
    Runner->>Runner: checkout repo, setup Rust/Node/Python, install wasm-pack, mdbook-langtabs
    Runner->>Script: execute book/scripts/test_examples.py
    Script->>Runner: scan docs -> extract langtabs code blocks
    Script->>Rust: scaffold/run cargo check (batch or per-block)
    Script->>Node: setup JS test env, run JS snippets
    Script->>Py: run Python snippets (venv or interpreter)
    Rust-->>Script: return per-block results
    Node-->>Script: return per-block results
    Py-->>Script: return per-block results
    Script->>Runner: aggregate logs/results, write artifacts on failure
    Runner->>Artifacts: upload artifacts (on failure)
    Runner-->>GH: job complete
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Poem
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (4)
.github/workflows/test-book-examples.yml (2)
30-30: Consider pinning action versions by SHA.

The workflow uses semantic versioning for actions (e.g., `@v4`, `@v5`) rather than pinning to specific commit SHAs. While this is common practice, pinning to SHAs provides better security guarantees against supply chain attacks.

For example:

```diff
-uses: dtolnay/rust-toolchain@stable
+uses: dtolnay/rust-toolchain@<SHA>
```

This is an optional security hardening measure. The current approach is acceptable for most projects.
Also applies to: 35-35, 177-177
184-213: Complex inline Python script in workflow.

The inline Python script for extracting and testing Rust examples is functional but quite complex for a workflow file. Consider moving this logic to a dedicated script or function in `test_examples.py` to improve maintainability.

For example, add a CLI option to `test_examples.py`:

```python
# In test_examples.py
if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('--lang', help='Filter by language')
    args = parser.parse_args()
    # ... use args.lang to filter
```

Then simplify the workflow to:

```yaml
- name: Extract and test Rust examples
  run: |
    cd book
    python3 scripts/test_examples.py --lang rust
```

book/src/getting-started/quick-start.md (1)
304-305: Clarify Python clustering availability.

The comment suggests clustering may not be available in Python bindings yet. If this feature is not implemented, consider tracking this with a TODO comment that references a specific issue.

Example:

```diff
-# Note: Clustering may not be available in Python bindings yet
+# TODO(#<issue_number>): Clustering not yet available in Python bindings
```
book/langtabs.js (1)

1-7: CDN dependency for Devicon CSS.

The script loads Devicon CSS from a CDN at runtime. This introduces an external dependency that could fail in offline environments or corporate networks with strict CSP policies. Consider documenting this requirement or providing a fallback.
Alternatively, you could bundle the Devicon assets locally:
- Download the devicon CSS and fonts
- Include them in the book's static assets
- Reference them locally instead of via CDN
This is optional—the current approach works fine for most deployments.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- .github/workflows/test-book-examples.yml (1 hunks)
- book/book.toml (1 hunks)
- book/justfile (1 hunks)
- book/langtabs.css (1 hunks)
- book/langtabs.js (1 hunks)
- book/scripts/test_examples.py (1 hunks)
- book/src/getting-started/installation.md (2 hunks)
- book/src/getting-started/quick-start.md (5 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2024-10-08T16:07:31.990Z
Learnt from: sd2k
Repo: grafana/augurs PR: 118
File: crates/augurs-prophet/src/lib.rs:14-14
Timestamp: 2024-10-08T16:07:31.990Z
Learning: The `optimizer` module in `crates/augurs-prophet/src/optimizer.rs` already has inline module-level documentation.
Applied to files:
- book/src/getting-started/quick-start.md
- book/src/getting-started/installation.md
📚 Learning: 2025-01-12T13:54:18.009Z
Learnt from: sd2k
Repo: grafana/augurs PR: 0
File: :0-0
Timestamp: 2025-01-12T13:54:18.009Z
Learning: In the augurs clustering implementation, the `DbscanCluster` enum with `Cluster(NonZeroU32)` and `Noise` variants is preferred over using `isize` with `-1` for noise points, as it provides better type safety and self-documentation despite being more verbose.
Applied to files:
book/src/getting-started/quick-start.md
🪛 GitHub Check: zizmor
.github/workflows/test-book-examples.yml
[failure] 30-30:
unpinned action reference
[failure] 35-35:
unpinned action reference
[failure] 177-177:
unpinned action reference
🪛 Ruff (0.14.6)
book/scripts/test_examples.py
1-1: Shebang is present but file is not executable
(EXE001)
134-134: subprocess call: check for execution of untrusted input
(S603)
137-142: Consider moving this statement to an else block
(TRY300)
150-150: Do not catch blind exception: Exception
(BLE001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (21)
- GitHub Check: Run benchmarks
- GitHub Check: Test Book
- GitHub Check: Tests
- GitHub Check: Check
- GitHub Check: Clippy
- GitHub Check: JS tests
- GitHub Check: linux (ubuntu-22.04, s390x)
- GitHub Check: linux (ubuntu-22.04, x86)
- GitHub Check: windows (windows-2022, x86)
- GitHub Check: linux (ubuntu-22.04, x86_64)
- GitHub Check: musllinux (ubuntu-22.04, x86)
- GitHub Check: macos (macos-14, aarch64)
- GitHub Check: linux (ubuntu-22.04, ppc64le)
- GitHub Check: linux (ubuntu-22.04, armv7)
- GitHub Check: musllinux (ubuntu-22.04, x86_64)
- GitHub Check: windows (windows-2022, x64)
- GitHub Check: macos (macos-13, x86_64)
- GitHub Check: linux (ubuntu-22.04, aarch64)
- GitHub Check: musllinux (ubuntu-22.04, aarch64)
- GitHub Check: musllinux (ubuntu-22.04, armv7)
- GitHub Check: Prophet WASMStan component
🔇 Additional comments (12)
book/justfile (1)
10-27: LGTM! Well-structured test targets.

The new test targets provide a clean interface for testing code examples across different languages. The separation of language-specific tests (test-rust, test-javascript, test-python) alongside a comprehensive test-all target is a good design choice.
book/book.toml (1)
10-15: LGTM! Clean preprocessor configuration.

The langtabs preprocessor and HTML output configuration is properly set up to integrate the multi-language code block functionality.
book/langtabs.css (1)
1-136: LGTM! Well-structured and responsive CSS.

The CSS implementation for language tabs is clean and comprehensive:
- Proper use of CSS variables for theming
- Browser compatibility considerations (webkit-overflow-scrolling, scrollbar hiding)
- Appropriate print media styles
- Mobile responsiveness
book/langtabs.js (2)
42-86: LGTM! Robust localStorage handling and initialization.

The localStorage helpers properly handle errors, and the initialization logic includes appropriate fallbacks. The approach to activate saved language or default to the first tab is well-designed.
106-150: Excellent fallback logic for language availability.

The `activateLanguageInContainer` function properly handles the case where a requested language isn't available in a specific container by falling back to the first available language. This prevents UI breakage when containers have different language subsets.

book/scripts/test_examples.py (5)
50-121: LGTM! Robust code block extraction.

The `extract_code_blocks` function correctly parses markdown files to extract code blocks within langtabs sections. The state machine approach with proper handling of fence markers is well-implemented.
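For readers who have not opened the script, here is a simplified sketch of the state-machine shape described above. It is not the script's actual implementation; the class and function names are illustrative only.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class ExampleBlock:
    language: str
    code: str
    line: int

def extract_langtabs_blocks(path: Path) -> list[ExampleBlock]:
    """Collect fenced code blocks that sit between langtabs-start/end markers."""
    blocks: list[ExampleBlock] = []
    in_langtabs = False        # are we between the HTML comment markers?
    fence_lang = None          # language of the currently open ``` fence, if any
    buffer: list[str] = []
    start_line = 0

    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        stripped = line.strip()
        if stripped == "<!-- langtabs-start -->":
            in_langtabs = True
        elif stripped == "<!-- langtabs-end -->":
            in_langtabs = False
        elif in_langtabs and stripped.startswith("```"):
            if fence_lang is None:
                # opening fence: remember the language and start collecting
                fence_lang = stripped[3:].strip() or "text"
                buffer, start_line = [], lineno
            else:
                # closing fence: emit the collected block
                blocks.append(ExampleBlock(fence_lang, "\n".join(buffer), start_line))
                fence_lang = None
        elif fence_lang is not None:
            buffer.append(line)
    return blocks
```

The real function in the PR additionally tracks block attributes (such as `no_test`), which this sketch omits.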
129-151: Appropriate error handling in `run_command`.

The function properly handles subprocess execution with timeout and error catching. The broad exception handling on line 150 is appropriate here since we want to capture any execution errors and return them in a structured format.
194-260: Well-designed JavaScript test with WASM initialization.

The JavaScript test function properly handles WASM module initialization by detecting imports and adding appropriate `initSync` calls. The handling of top-level await is also correct.
295-349: Comprehensive Rust test setup.

The Rust test function creates a proper Cargo project structure with local path dependencies. Using `cargo check` instead of `cargo build` is a good performance optimization.
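Roughly, the scaffolding described here looks like the following. This is a compressed sketch under stated assumptions, not the script's exact code: the feature list is trimmed and the helper name is invented for illustration.

```python
import subprocess
from pathlib import Path

def check_rust_example(code: str, project_dir: Path, augurs_path: Path) -> bool:
    """Scaffold a throwaway Cargo project around one example and type-check it."""
    (project_dir / "src").mkdir(parents=True, exist_ok=True)
    # Cargo.toml points at the local augurs crate so examples track the workspace.
    (project_dir / "Cargo.toml").write_text(
        "[package]\n"
        'name = "doc_example"\nversion = "0.1.0"\nedition = "2021"\n\n'
        "[dependencies]\n"
        f'augurs = {{ path = "{augurs_path}", features = ["mstl", "ets", "forecaster"] }}\n\n'
        "[workspace]\n",
        encoding="utf-8",
    )
    # Wrap bare snippets in fn main() so they form a complete crate.
    body = code if "fn main" in code else f"fn main() {{\n{code}\n}}"
    (project_dir / "src" / "main.rs").write_text(body, encoding="utf-8")
    # cargo check is enough: it type-checks without producing a binary.
    result = subprocess.run(
        ["cargo", "check", "--quiet"],
        cwd=project_dir,
        capture_output=True,
        timeout=120,
    )
    return result.returncode == 0
```

The actual script goes further by batching all Rust examples into one project as modules, which amortizes the dependency build across examples.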
376-473: Well-structured main function with clear output.

The main function provides excellent user feedback with colored output and a comprehensive summary. The error reporting with truncated error messages is user-friendly.
book/src/getting-started/installation.md (1)
144-149: I'll help you verify the Python API usage for `aug.MSTL.ets([2])`. Let me search the codebase for the actual Python bindings implementation.

```bash
#!/bin/bash
# Search for MSTL class definition in the pyaugurs crate
rg -nP --type=py -C5 'class MSTL' crates/pyaugurs/
```

book/src/getting-started/quick-start.md (1)
232-234: Based on my verification, I've found clear evidence about Python outlier detection availability.

Key findings:

- Outlier detection IS implemented in the main Rust crate (crates/augurs-outlier/) with MADDetector and DbscanDetector
- Outlier detection IS exposed in JavaScript bindings (js/augurs-outlier-js/src/lib.rs)
- Outlier detection IS tested in integration tests
- Outlier detection is NOT exposed in Python bindings: the augurs.pyi file (the authoritative Python API) contains no outlier detection methods, classes, or imports
- The Python bindings in crates/pyaugurs/ have no outlier detection implementation (no corresponding .rs source file or tests)

The review comment is accurate: outlier detection is not available in Python bindings yet. The guidance to check augurs.pyi is appropriate.
Clarify Python outlier detection availability.
The comment accurately reflects the current state: outlier detection is not yet available in Python bindings, though it is implemented in the main Rust crate and exposed in JavaScript bindings. The guidance to check `augurs.pyi` is appropriate for confirming available methods. Consider adding a tracking issue reference if one exists, or remove the "coming soon" placeholder and replace it with a note redirecting users to the JavaScript or Rust implementations.
```
@@ -0,0 +1,477 @@
#!/usr/bin/env python3
```
Make the script executable.
The shebang is present but the file is not marked as executable. This should be fixed for consistency.
Run the following command to make the script executable:
```bash
#!/bin/bash
chmod +x book/scripts/test_examples.py
```

🧰 Tools
🪛 Ruff (0.14.6)
1-1: Shebang is present but file is not executable
(EXE001)
🤖 Prompt for AI Agents
In book/scripts/test_examples.py around line 1, the file has a shebang but is
not executable; mark the file executable by setting the executable bit (e.g.,
run chmod +x on book/scripts/test_examples.py) so the script can be run directly
from the shell.
```
# Requires Python 3.8 or higher
# Check your Python version:
python --version
```
Inconsistent Python command reference.
The Python version check shows python --version, but the installation instructions and other examples consistently use python3. Consider using python3 --version for consistency.
Apply this diff:
```diff
-python --version
+python3 --version
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 # Requires Python 3.8 or higher
 # Check your Python version:
-python --version
+python3 --version
```
🤖 Prompt for AI Agents
In book/src/getting-started/installation.md around lines 169 to 172, the Python
version check uses "python --version" which is inconsistent with the rest of the
document that uses "python3"; update the command to "python3 --version" so all
instructions consistently reference python3 and avoid platform ambiguity.
Actionable comments posted: 1
🧹 Nitpick comments (1)
.github/workflows/test-book-examples.yml (1)
168-194: Extract duplicated inline Python code to reduce complexity.

The test extraction and execution logic is repeated identically across three test jobs (Rust, JavaScript, Python), with only the language filter changing. This duplication increases maintenance burden and readability complexity.
Consider refactoring by either:
- Creating a parametrized reusable workflow that accepts the language filter as input.
- Extracting the inline Python to a new utility script (e.g., book/scripts/run_language_tests.py) that accepts language as an argument.

This would reduce the workflow file from ~300 lines to ~150 lines and centralize test logic.
Also applies to: 223-249, 274-300
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/rust.yml (1 hunks)
- .github/workflows/test-book-examples.yml (1 hunks)
🧰 Additional context used
🪛 GitHub Check: zizmor
.github/workflows/test-book-examples.yml
[failure] 30-30:
unpinned action reference
[failure] 35-35:
unpinned action reference
[failure] 57-57:
unpinned action reference
[failure] 158-158:
unpinned action reference
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (21)
- GitHub Check: macos (macos-13, x86_64)
- GitHub Check: musllinux (ubuntu-22.04, armv7)
- GitHub Check: macos (macos-14, aarch64)
- GitHub Check: musllinux (ubuntu-22.04, aarch64)
- GitHub Check: linux (ubuntu-22.04, aarch64)
- GitHub Check: musllinux (ubuntu-22.04, x86_64)
- GitHub Check: musllinux (ubuntu-22.04, x86)
- GitHub Check: linux (ubuntu-22.04, s390x)
- GitHub Check: windows (windows-2022, x64)
- GitHub Check: windows (windows-2022, x86)
- GitHub Check: linux (ubuntu-22.04, x86_64)
- GitHub Check: linux (ubuntu-22.04, ppc64le)
- GitHub Check: linux (ubuntu-22.04, x86)
- GitHub Check: linux (ubuntu-22.04, armv7)
- GitHub Check: Clippy
- GitHub Check: Test Book
- GitHub Check: Run benchmarks
- GitHub Check: Tests
- GitHub Check: Check
- GitHub Check: JS tests
- GitHub Check: Prophet WASMStan component
🔇 Additional comments (3)
.github/workflows/rust.yml (1)
85-85: Appropriate tooling addition for mdBook integration.

Installing mdbook-langtabs alongside mdbook and other essential tools is the right approach for enabling language-tabbed documentation builds in CI.
.github/workflows/test-book-examples.yml (2)
65-65: Clarify intent of `continue-on-error` on package builds.

Lines 65 and 73 use `continue-on-error: true` when building augurs for JavaScript and Python. If these builds fail, the subsequent test steps may execute against missing or incomplete packages, producing false negatives.

Verify whether this is intentional (e.g., to allow partial testing when builds are unavailable) or whether it should fail the workflow on build errors.
If the intent is to allow graceful degradation, consider using conditional job dependencies instead so that test jobs skip when their dependencies fail, rather than silently running against missing packages.
Also applies to: 73-73
99-146: Example coverage verification is well-designed.

The `verify-coverage` job cleanly validates that key documentation files contain langtabs markers and have code examples across all three languages. The warning for imbalanced language coverage provides useful guidance for documentation maintainers.
Actionable comments posted: 0
🧹 Nitpick comments (2)
.github/workflows/test-book-examples.yml (2)
135-186: Verify-coverage job provides helpful non-blocking validation.

The coverage verification approach is sound: it checks for code example markers and language balance without blocking the workflow. The script correctly uses triple-backtick patterns to count markdown code blocks.
One minor consideration: the FILES array is hardcoded (lines 150–153), so future documentation will need manual addition to this list to receive coverage checks. Consider adding a comment noting this.
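If the coverage check ever moves into the Python test harness, the hardcoded list could be replaced by discovery. A hedged sketch, assuming the convention that every covered page contains a langtabs marker (variable name FILES mirrors the workflow script quoted above):

```python
from pathlib import Path

# Discover every book page that opts into langtabs instead of hardcoding the list.
DOCS_ROOT = Path("book/src")
FILES = sorted(
    str(page)
    for page in DOCS_ROOT.rglob("*.md")
    if "<!-- langtabs-start -->" in page.read_text(encoding="utf-8")
)
```

Whether discovery or a maintained list is preferable depends on how strict the coverage guarantee should be; a comment in the workflow would be the lighter-weight fix.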
3-37: workflow_run trigger flagged as insecure: mitigations in place, but document the trade-off.

Static analysis correctly identifies `workflow_run` as a security concern. While the code has reasonable mitigations (success-gate on line 37, conditional artifact handling), this pattern carries inherent supply-chain risk when pulling artifacts from potentially-modified upstream workflows.

The current approach is acceptable for the stated use case (testing documentation examples), but consider adding a comment documenting this security trade-off and the rationale for accepting it. If this pattern becomes widespread in the repo, consider reviewing GitHub's guidance on securing workflow_run events.

Example comment to add at line 4–5:

```yaml
# Security note: workflow_run is used to coordinate cross-workflow artifact testing.
# Gated to successful completions only. Review GitHub docs for workflow_run security best practices.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/js.yml (1 hunks)
- .github/workflows/test-book-examples.yml (1 hunks)
🧰 Additional context used
🪛 GitHub Check: zizmor
.github/workflows/test-book-examples.yml
[failure] 3-24:
use of fundamentally insecure workflow trigger
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (22)
- GitHub Check: linux (ubuntu-22.04, aarch64)
- GitHub Check: linux (ubuntu-22.04, s390x)
- GitHub Check: Clippy
- GitHub Check: windows (windows-2022, x86)
- GitHub Check: windows (windows-2022, x64)
- GitHub Check: linux (ubuntu-22.04, armv7)
- GitHub Check: linux (ubuntu-22.04, ppc64le)
- GitHub Check: linux (ubuntu-22.04, x86)
- GitHub Check: linux (ubuntu-22.04, x86_64)
- GitHub Check: musllinux (ubuntu-22.04, armv7)
- GitHub Check: JS tests
- GitHub Check: macos (macos-13, x86_64)
- GitHub Check: Test Book
- GitHub Check: Test Documentation Examples
- GitHub Check: musllinux (ubuntu-22.04, x86_64)
- GitHub Check: Run benchmarks
- GitHub Check: musllinux (ubuntu-22.04, x86)
- GitHub Check: musllinux (ubuntu-22.04, aarch64)
- GitHub Check: Check
- GitHub Check: macos (macos-14, aarch64)
- GitHub Check: Tests
- GitHub Check: Prophet WASMStan component
🔇 Additional comments (2)
.github/workflows/js.yml (1)
35-39: Artifact upload correctly configured and pinned.

The new Upload JS packages step is properly integrated with correct action pinning and placement. This aligns well with the cross-workflow artifact sharing pattern for multi-language testing.
.github/workflows/test-book-examples.yml (1)
102-117: Python installation silently continues on failure; consider explicit error messaging.

The Python package installation uses `continue-on-error: true` globally, allowing the test script to run even if setup fails. While the fallback logic (wheel → source build) is sound, a failed installation should ideally fail the entire job to avoid confusing test errors later.

Consider either:

- Removing `continue-on-error: true` to fail early and clearly
- Adding explicit validation before running tests (e.g., check that the augurs module imports successfully; see the sketch below)
- Providing clear error context in the test script when dependencies are missing
This is acceptable as-is if the test script has robust error handling for missing dependencies.
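A minimal sketch of such a pre-flight check, assuming the bindings are importable as `augurs` (the exact module layout is not confirmed here):

```python
# preflight_check.py - fail fast if the augurs bindings did not install correctly.
import sys

try:
    import augurs  # the Python bindings built earlier in the workflow
except ImportError as exc:
    print(f"augurs bindings are missing or broken: {exc}", file=sys.stderr)
    sys.exit(1)

print("augurs bindings import OK")
```

Running this as its own workflow step, without `continue-on-error`, would surface installation failures before the example tests execute.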
e8301fa to 34d5c79 (Compare)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
crates/pyaugurs/augurs.pyi (1)
81-108: Fix type stub: `Dtw.distance_matrix()` returns `DistanceMatrix`, not `NDArray`

The `distance_matrix()` stub currently declares a return type of `npt.NDArray[np.float64]`, but the Rust implementation returns `PyResult<DistanceMatrix>`. The `DistanceMatrix` class is a proper `#[pyclass]` exposed to Python and is intentionally opaque (it can be converted to NumPy via `.to_numpy()` or passed to other augurs functions).

Add `DistanceMatrix` to the stub file with its public methods, update the `distance_matrix()` return type, and consider updating `Dbscan.fit()` to accept `DistanceMatrix` as an input type alongside the current `npt.NDArray[np.float64] | list[list[float]]`.

Additionally, move the docstrings on lines 98–100 and 106–108 into the method signatures:

```python
def distance(self, a: npt.NDArray[np.float64], b: npt.NDArray[np.float64]) -> float:
    """Compute the distance between two time series using DTW."""
    ...

def distance_matrix(self, series: list[npt.NDArray[np.float64]]) -> DistanceMatrix:
    """Compute the pairwise distance matrix between a list of time series using DTW."""
    ...
```

book/src/tutorials/forecasting-with-prophet.md (1)
173-275: Mismatched langtabs markers around extra JS customization example

In the “Customizing the Model” section, there’s a second `<!-- langtabs-end -->` after the standalone JavaScript example, but no matching `<!-- langtabs-start -->` for that block. That extra `langtabs-end` is likely unintended and could confuse the langtabs preprocessor and your example extractor.

Consider either:

- wrapping the second JS block in its own `<!-- langtabs-start --> ... <!-- langtabs-end -->`, or
- removing the trailing `<!-- langtabs-end -->` if it’s meant to be a plain code block.
book/src/getting-started/installation.md (1)
169-171: Align Python version check withpython3usage (prior comment)The requirements section still uses
python --version, while the rest of the docs and tooling assumepython3. For consistency and to avoid ambiguity on systems wherepythonis Python 2 or absent, consider switching topython3 --version.-# Check your Python version: -python --version +# Check your Python version: +python3 --versionPlease verify this matches the Python invocation used in your CI and user environments.
book/scripts/test_examples.py (1)
1-1: Script has shebang but is not marked executable (Ruff EXE001)Since this file has a shebang, it should also be marked executable so it can be run directly (
./test_examples.py).You can fix this by running (once, outside of Git):
chmod +x book/scripts/test_examples.py
🧹 Nitpick comments (1)
book/src/getting-started/installation.md (1)
8-12: Use TOML-style comments for copy-paste friendlinessThe
tomlblock uses// Add to your Cargo.toml:; in TOML, comments are conventionally# .... Switching to#avoids confusion when users copy/paste directly intoCargo.toml.-```toml -// Add to your Cargo.toml: +```toml +# Add to your Cargo.toml:
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- .github/workflows/js.yml (1 hunks)
- book/.gitignore (1 hunks)
- book/justfile (1 hunks)
- book/scripts/test_examples.py (1 hunks)
- book/src/getting-started/installation.md (2 hunks)
- book/src/getting-started/quick-start.md (5 hunks)
- book/src/tutorials/clustering.md (2 hunks)
- book/src/tutorials/forecasting-with-prophet.md (5 hunks)
- book/src/tutorials/outlier-detection.md (6 hunks)
- crates/pyaugurs/.gitignore (1 hunks)
- crates/pyaugurs/augurs.pyi (5 hunks)
✅ Files skipped from review due to trivial changes (1)
- book/.gitignore
🚧 Files skipped from review as they are similar to previous changes (2)
- .github/workflows/js.yml
- book/justfile
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2024-10-08T16:07:31.990Z
Learnt from: sd2k
Repo: grafana/augurs PR: 118
File: crates/augurs-prophet/src/lib.rs:14-14
Timestamp: 2024-10-08T16:07:31.990Z
Learning: The `optimizer` module in `crates/augurs-prophet/src/optimizer.rs` already has inline module-level documentation.
Applied to files:
- book/src/getting-started/installation.md
- book/src/tutorials/forecasting-with-prophet.md
- book/src/getting-started/quick-start.md
📚 Learning: 2025-01-12T13:54:18.009Z
Learnt from: sd2k
Repo: grafana/augurs PR: 0
File: :0-0
Timestamp: 2025-01-12T13:54:18.009Z
Learning: In the augurs clustering implementation, the `DbscanCluster` enum with `Cluster(NonZeroU32)` and `Noise` variants is preferred over using `isize` with `-1` for noise points, as it provides better type safety and self-documentation despite being more verbose.
Applied to files:
- book/src/tutorials/outlier-detection.md
- book/src/tutorials/clustering.md
- book/src/getting-started/quick-start.md
📚 Learning: 2025-03-05T13:29:11.470Z
Learnt from: sd2k
Repo: grafana/augurs PR: 255
File: js/augurs-prophet-js/src/lib.rs:180-186
Timestamp: 2025-03-05T13:29:11.470Z
Learning: The `add_regressor` method in `augurs_prophet::Prophet` returns `&mut Self` (not a `Result`), allowing for method chaining. When wrapping this in JS bindings, there's no need to use the `?` operator as the inner function call doesn't return a Result that needs propagation.
Applied to files:
book/src/tutorials/forecasting-with-prophet.md
📚 Learning: 2025-03-05T13:29:11.469Z
Learnt from: sd2k
Repo: grafana/augurs PR: 255
File: js/augurs-prophet-js/src/lib.rs:180-186
Timestamp: 2025-03-05T13:29:11.469Z
Learning: The `add_regressor` method in `augurs_prophet::Prophet` doesn't return a `Result` type, so it doesn't need error handling with the `?` operator.
Applied to files:
book/src/tutorials/forecasting-with-prophet.md
🧬 Code graph analysis (1)
crates/pyaugurs/augurs.pyi (3)
crates/pyaugurs/src/clustering.rs (1)
- fit (81-95)

crates/pyaugurs/src/dtw.rs (3)
- distance_matrix (60-65)
- distance_matrix (192-201)
- series (196-199)

crates/augurs-dtw/src/lib.rs (2)
- distance_matrix (471-510)
- distance_matrix (588-601)
🪛 LanguageTool
book/src/tutorials/forecasting-with-prophet.md
[style] ~355-~355: As an alternative to the over-used intensifier ‘very’, consider replacing this phrase.
Context: ...ling your target variable if values are very large - Sort your data by timestamp ### 2...
(EN_WEAK_ADJECTIVE)
🪛 Ruff (0.14.6)
book/scripts/test_examples.py
1-1: Shebang is present but file is not executable
(EXE001)
153-153: subprocess call: check for execution of untrusted input
(S603)
156-161: Consider moving this statement to an else block
(TRY300)
169-169: Do not catch blind exception: Exception
(BLE001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (23)
- GitHub Check: windows (windows-2022, x86)
- GitHub Check: windows (windows-2022, x64)
- GitHub Check: musllinux (ubuntu-22.04, armv7)
- GitHub Check: macos (macos-13, x86_64)
- GitHub Check: musllinux (ubuntu-22.04, x86_64)
- GitHub Check: musllinux (ubuntu-22.04, aarch64)
- GitHub Check: musllinux (ubuntu-22.04, x86)
- GitHub Check: macos (macos-14, aarch64)
- GitHub Check: linux (ubuntu-22.04, s390x)
- GitHub Check: linux (ubuntu-22.04, ppc64le)
- GitHub Check: linux (ubuntu-22.04, armv7)
- GitHub Check: linux (ubuntu-22.04, aarch64)
- GitHub Check: linux (ubuntu-22.04, x86_64)
- GitHub Check: linux (ubuntu-22.04, x86)
- GitHub Check: JS tests
- GitHub Check: Test Documentation Examples
- GitHub Check: Rustfmt
- GitHub Check: Prophet WASMStan component
- GitHub Check: Clippy
- GitHub Check: Test Book
- GitHub Check: Check
- GitHub Check: Tests
- GitHub Check: Run benchmarks
🔇 Additional comments (6)
crates/pyaugurs/augurs.pyi (2)
41-51: `Forecast` typing and API surface look consistent

Constructor and accessors (`point`, `lower`, `upper`) are clearly typed and line up with how `Forecast` is referenced elsewhere in this stub (`TrendModel`, `MSTL`, `AutoETS`). No issues from a typing/API standpoint.
54-78: MSTL / AutoETS / `seasonalities` signatures are clear and coherent

The new `MSTL.custom_trend` classmethod, `AutoETS` methods, and `seasonalities` helper all have precise `NDArray` and `Sequence[int]` annotations, and the use of `Forecast` as the return type keeps the API surface consistent. I don't see any mismatches or obvious omissions here.

crates/pyaugurs/.gitignore (1)
1-1: Ignoring `.coverage` is appropriate

Adding `.coverage` to `.gitignore` is standard for Python coverage artifacts and avoids polluting the repo with local test data.
9-172: Cross-language MAD/DBSCAN examples look consistent and accurate

The Rust and JavaScript examples for MAD/DBSCAN, handling results, and real-time monitoring are consistent with each other, use the expected detector APIs, and clearly call out that Python bindings don't yet support outlier detection. The langtabs structure is also well-formed.
Also applies to: 185-193, 198-265
book/src/getting-started/quick-start.md (1)
7-24: Quick-start multi-language examples are coherent and respect feature gaps

The Rust/JS/Python quick-start examples are consistent across sections, and the Python notes correctly indicate where outlier detection and clustering are not yet supported. Langtabs usage is clean, and the examples are suitable for automated testing via the new harness.
Also applies to: 30-95, 101-176, 181-236, 242-306
book/src/tutorials/clustering.md (1)
16-52: Clustering examples correctly use `DbscanCluster` in Rust and document noise handling

The Rust examples use the `DbscanCluster` enum rather than raw integer labels, which matches the preferred, type-safe pattern. The JS and Python examples consistently treat `-1` as noise and document cluster membership clearly, keeping cross-language behavior aligned.

Based on learnings, this is the desired representation for Rust while acknowledging the simpler numeric scheme in bindings.
Also applies to: 55-88, 90-121, 355-390, 393-440, 442-476
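For reference, a small illustration of the bindings-side convention described above. This is a hedged sketch with made-up data; it assumes `Dbscan(epsilon, min_cluster_size).fit(...)` returns integer labels with -1 marking noise, as in the quoted type stub.

```python
import numpy as np
import augurs as aug

# A tiny, hand-made square distance matrix (normally computed via DTW).
distance_matrix = np.array(
    [[0.0, 1.0, 9.0],
     [1.0, 0.0, 9.5],
     [9.0, 9.5, 0.0]]
)

clusterer = aug.Dbscan(epsilon=2.0, min_cluster_size=2)
labels = clusterer.fit(distance_matrix)

noise_points = int((labels == -1).sum())
cluster_ids = sorted(set(labels.tolist()) - {-1})
print(f"{len(cluster_ids)} clusters, {noise_points} noise points")
```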
```python
def test_rust_batch(
    blocks: list[tuple[int, CodeBlock]], temp_dir: Path
) -> list[TestResult]:
    """Test all Rust code blocks together in a single Cargo project using modules."""
```
Align test_rust_batch type annotations with actual return shape
test_rust_batch is annotated to return list[TestResult], but it actually returns a list of (original_idx, TestResult) tuples, which is how callers consume it. This mismatch can confuse type checkers and future maintainers.
Consider updating the signature and type hints to reflect the true structure:
```diff
-def test_rust_batch(
-    blocks: list[tuple[int, CodeBlock]], temp_dir: Path
-) -> list[TestResult]:
+def test_rust_batch(
+    blocks: list[tuple[int, CodeBlock]], temp_dir: Path
+) -> list[tuple[int, TestResult]]:
```

The rest of the function already builds and returns (idx, TestResult) pairs, so no behavioral change is needed.
Also applies to: 463-477, 479-533
🤖 Prompt for AI Agents
In book/scripts/test_examples.py around lines 354-357 (and also apply similar
fixes in the other ranges noted: 463-477 and 479-533), the function signature
for test_rust_batch declares a return type of list[TestResult] but the function
actually returns a list of (original_idx, TestResult) tuples; update the type
annotations to reflect the real return shape (e.g., list[tuple[int, TestResult]]
or Sequence[tuple[int, TestResult]]), update any local variable/type hints that
assume TestResult-only, and adjust the docstring or comment to state that the
function returns (index, TestResult) pairs; make no behavioral changes.
```python
def test_code_block(
    block: CodeBlock,
    temp_dirs: Dict[str, Path],
    index: int,
    python_cmd: str = "python3",
) -> TestResult:
    """Test a single code block."""
    language = block.language.lower()

    if language in ("javascript", "js"):
        return test_javascript(block, temp_dirs["js"], index)
    elif language in ("python", "py"):
        return test_python(block, temp_dirs["python"], index, python_cmd)
    elif language in ("rust", "rs"):
        # Rust tests are batched, handled separately
        return None
    else:
```
Have test_code_block always return a TestResult (avoid None special-case)
test_code_block is annotated to return TestResult but returns None for Rust blocks, relying on callers to never pass Rust in. This works today but is brittle for future refactors.
You can make it total and type-safe by returning a skipped TestResult for Rust, e.g.:
```diff
-    elif language in ("rust", "rs"):
-        # Rust tests are batched, handled separately
-        return None
+    elif language in ("rust", "rs"):
+        # Rust tests are handled in a separate batch pass
+        return TestResult(
+            success=None,
+            block=block,
+            skipped=True,
+            reason="Rust examples are tested via batched cargo check",
+        )
```

Callers can then treat all languages uniformly without needing to guard against None.
Also applies to: 611-616, 637-645
🤖 Prompt for AI Agents
In book/scripts/test_examples.py around lines 536-552 (and similarly at 611-616
and 637-645), test_code_block is annotated to return TestResult but returns None
for Rust branches; replace the None return with a constructed skipped TestResult
for Rust (including a clear reason like "rust tests are batched/skipped here")
so the function is total and always returns a TestResult; update any helper
callers that expect a TestResult accordingly (no other behavioral change).
Then import in your Python code:

```python
from augurs import MSTLModel, AutoETS
from augurs.outlier import MADDetector
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the augurs package directory
find . -type d -name "augurs" | head -20

Repository: grafana/augurs
Length of output: 172
🌐 Web query:
augurs Python API MSTLModel AutoETS import documentation
💡 Result:
Short answer — in Python use the augurs binding (it exposes an ETS-backed MSTL helper):
-
Install: pip install augurs. [1]
-
Example import / usage:
import augurs as aug
periods = [24, 168] # example seasonal periods
model = aug.MSTL.ets(periods) # creates an MSTL model using AutoETS for the trend
model.fit(y)
out = model.predict(10, level=0.95)
This usage is shown in the augurs PyPI docs; the underlying Rust crates show AutoETS is the ETS implementation used for trend models. [1][2]
Sources:
[1] augurs on PyPI — usage (MSTL.ets example).
[2] augurs (mstl/ets) docs (Rust crate examples showing AutoETS as trend model).
🌐 Web query:
augurs MADDetector outlier import Python API
💡 Result:
Short answer and example — use the augurs Python bindings (pip package "augurs") and call the outlier MAD detector from the outlier submodule. Example:
-
Install: pip install augurs. [1]
-
Minimal Python snippet:
import augurs as aug
create MAD detector (sensitivity between 0.0 and 1.0)
detector = aug.outlier.MADDetector.with_sensitivity(0.5)
prepare data as list/array of series (each series = 1D array-like)
series = [ [1.0,2.0,1.5,2.3],
[1.9,2.2,1.2,2.4],
[1.5,2.1,6.4,8.5] ]
processed = detector.preprocess(series)
out = detector.detect(processed)
print("outlying indices:", out.outlying_series)
for i, r in enumerate(out.series_results):
print(i, r.is_outlier, r.scores)
References: augurs PyPI (Python bindings) and augurs outlier-detection tutorial. [1][2]
Sources
[1] augurs — PyPI (Python bindings).
[2] Automated Outlier Detection — augurs documentation (MADDetector example).
Update import statements to match the canonical augurs Python API
The snippet uses direct imports (from augurs import MSTLModel, AutoETS and from augurs.outlier import MADDetector), but the official PyPI documentation shows the canonical API requires import augurs as aug followed by method calls like aug.MSTL.ets() and aug.outlier.MADDetector.with_sensitivity(). Update this example to use the recommended import style for consistency with official documentation.
🤖 Prompt for AI Agents
In book/src/getting-started/installation.md around lines 114-118, the example
uses direct imports (from augurs import MSTLModel, AutoETS and from
augurs.outlier import MADDetector); update it to use the canonical API by
replacing those lines with a single import (import augurs as aug) and show usage
examples using aug.MSTL.ets() for the ETS model and
aug.outlier.MADDetector.with_sensitivity(...) (or the appropriate factory
method) to construct the detector, ensuring names and method calls match the
PyPI docs.
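For illustration, a small sketch of the recommended import style, based on the usage quoted in the web-query results above (the seasonal period and data values are made up for the example):

```python
import numpy as np
import augurs as aug

# Synthetic series with a period-24 seasonality; values are illustrative only.
y = np.sin(np.linspace(0.0, 20.0, 200)) + 2.0

# MSTL with an AutoETS trend model, matching the PyPI usage quoted above.
model = aug.MSTL.ets([24])
model.fit(y)
out = model.predict(10, level=0.95)
```

Keeping the book's installation page on this `import augurs as aug` style would match both the PyPI documentation and the type stub reviewed elsewhere in this PR.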
```javascript
import { Prophet } from '@bsull/augurs/prophet';
import { optimizer } from '@bsull/augurs-prophet-wasmstan';

const options = {
  nChangepoints: 10, // Reduce from default 25
  changepointRange: 0.8,
};

const prophet = new Prophet({ optimizer, ...options });

const start = 1704067200;
const day = 86400;
const ds = Array.from({ length: 20 }, (_, i) => start + i * day);
const y = Array.from({ length: 20 }, (_, i) => i * 0.5 + 1.0);

prophet.fit({ ds, y });
console.log("Model fitted with custom options!");
```
🧩 Analysis chain
🏁 Script executed:
# Find and read the tutorial file to check earlier examples
wc -l book/src/tutorials/forecasting-with-prophet.mdRepository: grafana/augurs
Length of output: 108
🏁 Script executed:
# Read the entire tutorial file to see earlier examples and the flagged code
cat -n book/src/tutorials/forecasting-with-prophet.md | head -500

Repository: grafana/augurs
Length of output: 19251
🏁 Script executed:
# Search for all Prophet imports and usage patterns in the file
rg -n "import.*Prophet" book/src/tutorials/forecasting-with-prophet.md -A 5Repository: grafana/augurs
Length of output: 1466
🏁 Script executed:
# Look for any package.json or documentation about initialization
fd package.json | head -5

Repository: grafana/augurs
Length of output: 268
🏁 Script executed:
# Search the codebase for Prophet initialization patterns
rg "initProphet" --type markdown --type js --type tsRepository: grafana/augurs
Length of output: 1456
Add missing initProphet() initialization to match other examples
Lines 474–492 omit the WASM module initialization that appears consistently across all other JavaScript Prophet examples in this tutorial. Update to:
```javascript
import initProphet, { Prophet } from '@bsull/augurs/prophet';
import { optimizer } from '@bsull/augurs-prophet-wasmstan';

await initProphet();

const options = {
  nChangepoints: 10, // Reduce from default 25
  changepointRange: 0.8,
};

const prophet = new Prophet({ optimizer, ...options });

const start = 1704067200;
const day = 86400;
const ds = Array.from({ length: 20 }, (_, i) => start + i * day);
const y = Array.from({ length: 20 }, (_, i) => i * 0.5 + 1.0);

prophet.fit({ ds, y });
console.log("Model fitted with custom options!");
```

🤖 Prompt for AI Agents
In book/src/tutorials/forecasting-with-prophet.md around lines 474 to 492, the
example omits the WASM module initialization used in other JS Prophet examples;
add the default initProphet import and call and await initProphet() before
instantiating Prophet so the WASM optimizer loads correctly. Specifically,
import initProphet alongside Prophet from '@bsull/augurs/prophet' and insert an
awaited initProphet() call prior to creating the new Prophet instance, keeping
the rest of the example unchanged.
```python
class Dbscan:
    """
    DBSCAN clustering algorithm.
    """

    def __init__(self, epsilon: float, min_cluster_size: int) -> None:
        """
        Create a new DBSCAN clusterer.

        :param epsilon: the maximum distance between two samples for one to be considered
            as in the neighborhood of the other.
        :param min_cluster_size: the number of samples in a neighborhood for a point to be
            considered as a core point.
        """
        ...

    def fit(
        self, distance_matrix: npt.NDArray[np.float64] | list[list[float]]
    ) -> npt.NDArray[np.int32]:
        """
        Fit the DBSCAN clustering algorithm to the given distance matrix.

        The distance matrix can be obtained using the `Dtw` class or provided directly.

        :param distance_matrix: square distance matrix between samples. Can be either:
            - a 2D square numpy array
            - a list of lists of floats
        :return: cluster assignments, with -1 indicating noise points.
        """
        ...
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for InputDistanceMatrix definition and usage
echo "=== InputDistanceMatrix ==="
rg -n "InputDistanceMatrix" crates/pyaugurs -C3
echo -e "\n=== Dbscan bindings ==="
rg -n "Dbscan" crates/pyaugurs -C5
echo -e "\n=== DistanceMatrix in Python bindings ==="
rg -n "DistanceMatrix" crates/pyaugurs -C3
echo -e "\n=== Dtw class and distance_matrix method ==="
rg -n "fn distance_matrix" crates/pyaugurs -C5
rg -n "\.distance_matrix" crates/pyaugurs -C2Repository: grafana/augurs
Length of output: 41912
🏁 Script executed:
#!/bin/bash
# Check the Dtw class and distance_matrix return type in the stub
echo "=== Dtw class and distance_matrix in augurs.pyi ==="
rg -n "class Dtw" crates/pyaugurs/augurs.pyi -A 30
echo -e "\n=== Full Dbscan.fit signature in stub ==="
sed -n '111,140p' crates/pyaugurs/augurs.pyi

Repository: grafana/augurs
Length of output: 2040
Update type stubs: Dtw.distance_matrix() returns DistanceMatrix, and Dbscan.fit() accepts it
The stub annotations contain type mismatches with the actual implementation:
- `Dtw.distance_matrix()` (line 102–104) is annotated to return `npt.NDArray[np.float64]`, but actually returns a `DistanceMatrix` object.
- `Dbscan.fit()` (line 128) is annotated to accept only `npt.NDArray[np.float64] | list[list[float]]`, but the Rust binding via `InputDistanceMatrix` accepts a third variant: `DistanceMatrix` (as validated in `clustering.rs` lines 12–19 and confirmed by test usage in `test_clustering.py` lines 116–120).
Update the stubs to reflect these accepted types so type checkers and users see the correct API.
🤖 Prompt for AI Agents
In crates/pyaugurs/augurs.pyi around lines 111 to 140, the type stubs are
inaccurate: update Dtw.distance_matrix() to return the DistanceMatrix type (not
npt.NDArray[np.float64]) and update Dbscan.fit() to accept DistanceMatrix in its
accepted input union (npt.NDArray[np.float64] | list[list[float]] |
DistanceMatrix). Add or import a forward declaration for DistanceMatrix in this
module if not present so the name resolves for type checkers. Ensure the
docstring remains accurate and run type checks to verify the new union is
accepted.
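A minimal sketch of what the corrected stub entries could look like. This is illustrative only: the exact `DistanceMatrix` method surface (here only `to_numpy`) would need to be confirmed against the Rust bindings before committing, and the surrounding imports are elided.

```python
# augurs.pyi (sketch) - assumes `import numpy as np` and `import numpy.typing as npt`.

class DistanceMatrix:
    def to_numpy(self) -> npt.NDArray[np.float64]: ...

class Dtw:
    def distance_matrix(self, series: list[npt.NDArray[np.float64]]) -> DistanceMatrix:
        """Compute the pairwise distance matrix between a list of time series using DTW."""
        ...

class Dbscan:
    def fit(
        self,
        distance_matrix: npt.NDArray[np.float64] | list[list[float]] | DistanceMatrix,
    ) -> npt.NDArray[np.int32]:
        """Fit DBSCAN to the given distance matrix; -1 marks noise points."""
        ...
```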
Actionable comments posted: 1
♻️ Duplicate comments (2)
book/scripts/test_examples.py (2)
541-563: Have `test_code_block` always return a `TestResult` for Rust blocks

`test_code_block` is annotated to return `TestResult` but returns `None` in the Rust branch. Today `main()` never calls this function for Rust (it uses the batched path), so it works in practice, but the partial behavior is brittle and contradicts the signature.

You can make this total and future‑proof by returning a skipped `TestResult` instead of `None`:

```diff
     elif language in ("rust", "rs"):
-        # Rust tests are batched, handled separately
-        return None
+        # Rust tests are handled in a separate batch pass
+        return TestResult(
+            success=None,
+            block=block,
+            skipped=True,
+            reason="Rust examples are tested via batched cargo check",
+        )
```

Callers can then uniformly treat `test_code_block` as returning a `TestResult` regardless of language.
test_code_blockas returning aTestResultregardless of language.
1-1: Script has shebang but isn’t marked executable

Static analysis (EXE001) still applies: since the file has a shebang, it should be marked executable so it can be run directly.
You can fix this on your branch with:
chmod +x book/scripts/test_examples.py
🧹 Nitpick comments (1)
book/scripts/test_examples.py (1)
148-171: Broad `except Exception` in `run_command` hides useful failure details

`run_command` currently swallows all exceptions with a broad `except Exception` and normalizes them into a generic error dict. That's OK for a harness script, but it can make debugging non‑subprocess issues (e.g., missing binary, permission errors) harder.

If you want slightly clearer behavior without changing callers:

```diff
 def run_command(
     cmd: List[str], cwd: Optional[Path] = None, timeout: int = 30
 ) -> Dict[str, Any]:
@@
     except subprocess.TimeoutExpired:
         return {
             "success": False,
             "stdout": "",
             "stderr": "Command timed out",
             "returncode": -1,
         }
-    except Exception as e:
-        return {"success": False, "stdout": "", "stderr": str(e), "returncode": -1}
+    except OSError as e:
+        # e.g. command not found, permission denied
+        return {
+            "success": False,
+            "stdout": "",
+            "stderr": str(e),
+            "returncode": -1,
+        }
```

This keeps the "never raise" contract but narrows the catch to the most relevant error family.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
book/scripts/test_examples.py(1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.6)
book/scripts/test_examples.py
1-1: Shebang is present but file is not executable
(EXE001)
153-153: subprocess call: check for execution of untrusted input
(S603)
156-161: Consider moving this statement to an else block
(TRY300)
169-169: Do not catch blind exception: Exception
(BLE001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (22)
- GitHub Check: Clippy
- GitHub Check: Run benchmarks
- GitHub Check: Test Book
- GitHub Check: Check
- GitHub Check: Tests
- GitHub Check: macos (macos-13, x86_64)
- GitHub Check: linux (ubuntu-22.04, s390x)
- GitHub Check: linux (ubuntu-22.04, aarch64)
- GitHub Check: linux (ubuntu-22.04, ppc64le)
- GitHub Check: linux (ubuntu-22.04, armv7)
- GitHub Check: linux (ubuntu-22.04, x86)
- GitHub Check: macos (macos-14, aarch64)
- GitHub Check: linux (ubuntu-22.04, x86_64)
- GitHub Check: Prophet WASMStan component
- GitHub Check: musllinux (ubuntu-22.04, armv7)
- GitHub Check: musllinux (ubuntu-22.04, x86_64)
- GitHub Check: windows (windows-2022, x64)
- GitHub Check: musllinux (ubuntu-22.04, x86)
- GitHub Check: musllinux (ubuntu-22.04, aarch64)
- GitHub Check: windows (windows-2022, x86)
- GitHub Check: JS tests
- GitHub Check: Test Documentation Examples
| def test_rust_batch( | ||
| blocks: list[tuple[int, CodeBlock]], temp_dir: Path | ||
| ) -> list[TestResult]: | ||
| """Test all Rust code blocks together in a single Cargo project using modules.""" | ||
| if not blocks: | ||
| return [] | ||
|
|
||
| # Create a single Cargo project | ||
| project_dir = temp_dir / "rust_batch_tests" | ||
| project_dir.mkdir(exist_ok=True) | ||
|
|
||
| src_dir = project_dir / "src" | ||
| src_dir.mkdir(exist_ok=True) | ||
|
|
||
| # Track which modules to create | ||
| modules = [] | ||
| skipped_results = [] | ||
|
|
||
| for idx, (original_idx, block) in enumerate(blocks): | ||
| code = block.code.strip() | ||
|
|
||
| # Skip if marked with no_test attribute | ||
| if block.attributes and "no_test" in block.attributes: | ||
| skipped_results.append( | ||
| ( | ||
| original_idx, | ||
| TestResult( | ||
| success=None, | ||
| block=block, | ||
| skipped=True, | ||
| reason="Marked with no_test", | ||
| ), | ||
| ) | ||
| ) | ||
| continue | ||
|
|
||
| # Skip if it's just a comment or TOML config | ||
| if (code.startswith("//") or "[dependencies]" in code) and len( | ||
| code.split("\n") | ||
| ) <= 5: | ||
| skipped_results.append( | ||
| ( | ||
| original_idx, | ||
| TestResult( | ||
| success=None, | ||
| block=block, | ||
| skipped=True, | ||
| reason="Installation/comment only", | ||
| ), | ||
| ) | ||
| ) | ||
| continue | ||
|
|
||
| # Create a module file for this example | ||
| module_name = f"test_{idx}" | ||
| modules.append(module_name) | ||
| module_file = src_dir / f"{module_name}.rs" | ||
|
|
||
| # If code has fn main, rename it to run | ||
| # Otherwise wrap in a run function | ||
| if "fn main" in code: | ||
| # Replace main with run | ||
| code = code.replace("fn main()", "pub fn run()") | ||
| code = code.replace("fn main() {", "pub fn run() {") | ||
| code = code.replace("fn main() -> Result<", "pub fn run() -> Result<") | ||
| else: | ||
| # Wrap in run function | ||
| code = f"#[allow(dead_code, unused_variables, unused_imports)]\npub fn run() {{\n{code}\n}}" | ||
|
|
||
| # Add allow attributes at the top | ||
| code = f"#![allow(dead_code, unused_variables, unused_imports)]\n\n{code}" | ||
|
|
||
| module_file.write_text(code, encoding="utf-8") | ||
|
|
||
| # Create main.rs that declares all modules and calls them | ||
| main_code = ["// Auto-generated test runner", ""] | ||
| for module_name in modules: | ||
| main_code.append(f"mod {module_name};") | ||
| main_code.append("") | ||
| main_code.append("fn main() {") | ||
| for module_name in modules: | ||
| main_code.append(f" {module_name}::run();") | ||
| main_code.append("}") | ||
|
|
||
| main_rs = src_dir / "main.rs" | ||
| main_rs.write_text("\n".join(main_code), encoding="utf-8") | ||
|
|
||
| # Create Cargo.toml | ||
| script_dir = Path(__file__).parent.resolve() | ||
| repo_root = script_dir.parent.parent | ||
| augurs_path = repo_root / "crates" / "augurs" | ||
|
|
||
| cargo_toml = f"""[package] | ||
| name = "rust_batch_tests" | ||
| version = "0.1.0" | ||
| edition = "2021" | ||
|
|
||
| [dependencies] | ||
| augurs = {{ path = "{augurs_path}", features = ["mstl", "ets", "forecaster", "outlier", "clustering", "dtw", "seasons", "prophet", "prophet-wasmstan"] }} | ||
|
|
||
| [workspace] | ||
| """ | ||
|
|
||
| cargo_toml_path = project_dir / "Cargo.toml" | ||
| cargo_toml_path.write_text(cargo_toml, encoding="utf-8") | ||
|
|
||
| # Run cargo check once for all tests | ||
| result = run_command(["cargo", "check", "--quiet"], cwd=project_dir, timeout=120) | ||
|
|
||
| results_list = [] | ||
|
|
||
| # If batch compilation succeeded, all tests pass! | ||
| if result["success"]: | ||
| for original_idx, block in blocks: | ||
| skip_result = next( | ||
| (r for i, r in skipped_results if i == original_idx), None | ||
| ) | ||
| if skip_result: | ||
| results_list.append((original_idx, skip_result)) | ||
| else: | ||
| results_list.append( | ||
| (original_idx, TestResult(success=True, block=block)) | ||
| ) | ||
| return results_list | ||
|
|
||
| # Batch compilation failed - test each module individually to get specific errors | ||
| print( | ||
| f" {Colors.YELLOW}Batch compilation failed, testing individually...{Colors.RESET}" | ||
| ) | ||
|
|
||
| module_idx = 0 | ||
| for original_idx, block in blocks: | ||
| # Check if this was skipped | ||
| skip_result = next((r for i, r in skipped_results if i == original_idx), None) | ||
| if skip_result: | ||
| results_list.append((original_idx, skip_result)) | ||
| continue | ||
|
|
||
| module_name = f"test_{module_idx}" | ||
| module_idx += 1 | ||
|
|
||
| # Create a temporary main.rs that only calls this one module | ||
| temp_main = f"""// Test single module | ||
| mod {module_name}; | ||
|
|
||
| fn main() {{ | ||
| {module_name}::run(); | ||
| }} | ||
| """ | ||
| temp_main_path = src_dir / "main.rs.tmp" | ||
| temp_main_path.write_text(temp_main, encoding="utf-8") | ||
|
|
||
| # Swap main.rs temporarily | ||
| original_main = main_rs.read_text() | ||
| main_rs.write_text(temp_main, encoding="utf-8") | ||
|
|
||
| # Check just this module | ||
| individual_result = run_command( | ||
| ["cargo", "check", "--quiet"], cwd=project_dir, timeout=30 | ||
| ) | ||
|
|
||
| # Restore original main.rs | ||
| main_rs.write_text(original_main, encoding="utf-8") | ||
|
|
||
| if individual_result["success"]: | ||
| results_list.append((original_idx, TestResult(success=True, block=block))) | ||
| else: | ||
| results_list.append( | ||
| ( | ||
| original_idx, | ||
| TestResult( | ||
| success=False, | ||
| block=block, | ||
| error=individual_result["stderr"] | ||
| or individual_result["stdout"], | ||
| ), | ||
| ) | ||
| ) | ||
|
|
||
| return results_list | ||
|
|
🧩 Analysis chain
🏁 Script executed:
# First, let's check the file and read the function signature
wc -l book/scripts/test_examples.py

Repository: grafana/augurs
Length of output: 91
🏁 Script executed:
# Read the test_rust_batch function to verify the return type and implementation
sed -n '359,539p' book/scripts/test_examples.py | head -100

Repository: grafana/augurs
Length of output: 3430
🏁 Script executed:
# Search for how test_rust_batch is called/used to verify the actual return type usage
rg "test_rust_batch" book/scripts/test_examples.py -B 2 -A 5Repository: grafana/augurs
Length of output: 666
🏁 Script executed:
# Read the rest of the function to see the complete logic
sed -n '359,539p' book/scripts/test_examples.py | tail -150

Repository: grafana/augurs
Length of output: 5003
🏁 Script executed:
# Let's check the entire function to understand module indexing
sed -n '359,539p' book/scripts/test_examples.py

Repository: grafana/augurs
Length of output: 5986
Fix test_rust_batch return type and per-module fallback indexing
Two confirmed issues:
-
Type annotation mismatch
The function is annotated as-> list[TestResult], but it returns a list of(original_idx, TestResult)tuples. Callers rely on this tuple structure (rust_results_map = {idx: result for idx, result in rust_results_list}), making the annotation incorrect. -
Fallback per-module check uses inconsistent module indices
- During module generation you use
for idx, (original_idx, block) in enumerate(blocks):and name filestest_{idx}whereidxis the enumerate index over all blocks. When blocks are skipped (viacontinue), enumerate still increments, so module names may have gaps (e.g.,test_0,test_2if block at enumerate position 1 is skipped). - In the fallback path (after batch
cargo checkfails), you iterate overblocksagain with a separatemodule_idxcounter that increments only for non-skipped blocks, yielding consecutive indices (e.g.,test_0,test_1). - This mismatch causes the fallback to check the wrong module files when blocks are skipped, producing confusing cargo errors instead of the real example failures.
- During module generation you use
Address both by establishing a stable original_idx → module_name mapping and correcting the return type:
-def test_rust_batch(
- blocks: list[tuple[int, CodeBlock]], temp_dir: Path
-) -> list[TestResult]:
+def test_rust_batch(
+ blocks: list[tuple[int, CodeBlock]], temp_dir: Path
+) -> list[tuple[int, TestResult]]:
"""Test all Rust code blocks together in a single Cargo project using modules."""
if not blocks:
return []
# ... project setup code ...
- # Track which modules to create
- modules = []
- skipped_results = []
-
- for idx, (original_idx, block) in enumerate(blocks):
+ # Track which modules to create
+ modules: list[str] = []
+ skipped_results: list[tuple[int, TestResult]] = []
+ module_map: dict[int, str] = {}
+
+ module_idx = 0
+ for original_idx, block in blocks:Then in the module generation section:
# Create a module file for this example
- module_name = f"test_{idx}"
+ module_name = f"test_{module_idx}"
+ module_idx += 1
modules.append(module_name)
+ module_map[original_idx] = module_name
module_file = src_dir / f"{module_name}.rs"And in the fallback path, replace the problematic module_idx initialization and lookup:
- module_idx = 0
for original_idx, block in blocks:
# Check if this was skipped
skip_result = next((r for i, r in skipped_results if i == original_idx), None)
if skip_result:
results_list.append((original_idx, skip_result))
continue
- module_name = f"test_{module_idx}"
- module_idx += 1
+ module_name = module_map.get(original_idx)
+ if module_name is None:
+ # No compiled module for this example; mark as skipped defensively
+ results_list.append(
+ (
+ original_idx,
+ TestResult(
+ success=None,
+ block=block,
+ skipped=True,
+ reason="No compiled module created for this example",
+ ),
+ )
+ )
+ continueThis ensures module names are consistent across both paths and aligns the function signature with actual usage.
Starting with the installation page.
This PR uses mdbook-langtabs to turn the code blocks into tabs,
one per language.
The code examples are tested against the latest version of the
libraries in CI.
Summary by CodeRabbit
New Features
Documentation
Tests
Chores