Fix doc build outputs: replace epub with pdf, silence asciinema for non-HTML#1100
Conversation
…on-HTML
- .readthedocs.yml: remove epub format, add pdf format
- asciinema.py: guard copy_asset_files to HTML builder only
- asciinema.py: silently skip asciinema nodes in non-HTML builders (epub, latex, etc.) instead of emitting a warning that can break PDF builds
- asciinema.py: add epub to _NODE_VISITORS so epub builds don't fail with unregistered node type

https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
- docs.yml: add pdf job that installs LaTeX, builds with latexpdf, uploads as artifact, and uploads to release when release_id is set
- docs.yml: accept release_id input (mirrors how build.yml works)
- ci.yml: pass release_id to docs workflow and grant contents: write

https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
- Dockerfile: install texlive-latex-recommended, texlive-fonts-recommended, texlive-latex-extra, latexmk so the devcontainer can generate PDFs
- docs.yml: remove redundant 'pip install sphinx_design' from all jobs; sphinx_design is already declared in [project.optional-dependencies.docs] so 'pip install .[docs]' is sufficient
- dfetch.code-workspace: add 'Build PDF Docs' task (make latexpdf)

https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
Walkthrough
Adds PDF build and release steps for documentation: LaTeX packages added to devcontainer, CI/docs workflows updated to build and optionally publish PDFs, ReadTheDocs formats and Sphinx LaTeX configs adjusted, a VS Code task added, and small docs/extension changes for non-HTML builders.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Dev as Developer
participant GH as GitHub Actions
participant Runner as Runner
participant Artifact as Artifact Storage
participant Release as GitHub Release
Dev->>GH: push / open PR
GH->>Runner: run jobs (prep-release -> docs -> pdf)
Runner->>Runner: checkout, setup Python, apt-get LaTeX
Runner->>Runner: run `make -C doc latexpdf`
Runner->>Artifact: upload PDF artifact
alt inputs.release_id != ''
Runner->>Release: upload PDF to draft release (release_id)
end
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
📝 Generate docstrings
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/docs.yml:
- Around line 92-95: The "Install LaTeX" GitHub Actions step runs apt-get
install without updating the package index; modify the step named "Install
LaTeX" in the run block so it executes a package index refresh (e.g., run `sudo
apt-get update`) immediately before the existing `sudo apt-get install -y
texlive-latex-recommended ... latexmk` command to ensure package resolution
succeeds on GitHub-hosted runners.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: c865569f-cce9-4f93-8bb7-6f4e90915318
📒 Files selected for processing (6)
.devcontainer/Dockerfile
.github/workflows/ci.yml
.github/workflows/docs.yml
.readthedocs.yml
dfetch.code-workspace
doc/_ext/sphinxcontrib_asciinema/asciinema.py
- name: Install LaTeX
  run: |
    sudo apt-get install -y texlive-latex-recommended texlive-fonts-recommended \
      texlive-latex-extra latexmk
Add apt-get update before installing LaTeX packages.
The apt-get install runs without a preceding apt-get update. On GitHub-hosted runners, the package cache can become stale, potentially causing package resolution failures.
Proposed fix
  - name: Install LaTeX
    run: |
+     sudo apt-get update
      sudo apt-get install -y texlive-latex-recommended texlive-fonts-recommended \
        texlive-latex-extra latexmk

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- name: Install LaTeX
  run: |
    sudo apt-get update
    sudo apt-get install -y texlive-latex-recommended texlive-fonts-recommended \
      texlive-latex-extra latexmk
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/docs.yml around lines 92 - 95, The "Install LaTeX" GitHub
Actions step runs apt-get install without updating the package index; modify the
step named "Install LaTeX" in the run block so it executes a package index
refresh (e.g., run `sudo apt-get update`) immediately before the existing `sudo
apt-get install -y texlive-latex-recommended ... latexmk` command to ensure
package resolution succeeds on GitHub-hosted runners.
The Codespaces badge references an external .svg URL which pdflatex cannot include, causing a fatal error. Wrap the badge in '.. only:: html' and provide a plain hyperlink for non-HTML builders. https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
texlive packages follow the TeX Live release cycle and their Debian package versions change frequently. Pinning to a specific version causes the build to fail when the index is updated. Other system packages (ccache, ruby, etc.) remain pinned as they are stable. https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
The previous fix used '.. only:: html' around the substitution, but docutils resolves substitution references before the only-node transform runs, so the image node still ended up in the LaTeX doctree and caused a pdflatex fatal error. '.. raw:: html' is processed exclusively by the HTML builder and is completely invisible to the LaTeX builder, making it the correct tool for format-specific markup that must not appear in PDF output. https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
plantweb defaults to SVG format, which pdflatex cannot include, causing a fatal 'Unknown graphics extension: .svg' error in the LaTeX build. Sphinx adds the builder name to 'tags' before executing conf.py, so we can override plantweb's defaults cache to use PNG for any non-HTML builder (latex, man, texinfo, etc.). HTML builds continue to use SVG for crisp vector output. https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.devcontainer/Dockerfile:
- Around line 18-25: Add documentation recording the intentional decision to
install TeX Live packages unpinned (texlive-latex-recommended,
texlive-fonts-recommended, texlive-latex-extra, latexmk) and the reproducibility
tradeoff, and note this as an explicit Hadolint DL3008 exception; update the
project README (or create an ADR) with a short section that states why
year-specific TeX Live packages can't be pinned on Debian, that builds may vary
over time, and that the Dockerfile's unpinned apt installs are a deliberate
choice to avoid build failures.
In `@doc/conf.py`:
- Around line 23-30: The plantweb PNG override runs too early using
tags.has("html") in doc/conf.py and therefore forces PNG for all builders; move
this logic into a builder-inited hook: create a setup(app) (or add to existing
setup) that calls app.connect("builder-inited", on_builder_inited) and implement
on_builder_inited(app) which reads plantweb.defaults via
_plantweb_defaults.read_defaults(), sets _pd["format"]="png" only when
app.builder.format is not "html" (or when app.builder.name indicates non-HTML
builders), and assigns the cache back to _plantweb_defaults.read_defaults.cache;
this mirrors the asciinema extension pattern using app.connect and checks
app.builder.format at initialization time.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: ad932f59-86aa-47b1-88b1-bd97a262becf
📒 Files selected for processing (3)
.devcontainer/Dockerfile
doc/conf.py
doc/howto/contributing.rst
# Install LaTeX for PDF documentation generation
# texlive packages follow the TeX Live release cycle and are not pinned
RUN apt-get update && apt-get install --no-install-recommends -y \
    texlive-latex-recommended \
    texlive-fonts-recommended \
    texlive-latex-extra \
    latexmk && \
    rm -rf /var/lib/apt/lists/*
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if Debian provides year-specific TeX Live packages that could be pinned
# Search for available texlive packages in Debian Trixie repositories
curl -s "https://packages.debian.org/search?keywords=texlive&searchon=names&suite=trixie&section=all" | \
  grep -oP 'texlive-[0-9]{4}' | sort -u || echo "No year-specific TeX Live packages found"

# Alternative: Check if the current approach causes reproducibility issues
echo "Current approach: TeX Live packages unpinned to avoid version conflicts"
echo "Tradeoff: Builds may differ between environments/times"

Repository: dfetch-org/dfetch
Length of output: 187
The unpinned TeX Live packages approach is justified by Debian's constraints.
The LaTeX packages are intentionally installed without version pins to avoid build failures when Debian TeX Live package versions change. This is documented in the inline comment and consistent with the CI workflow. However, Debian does not provide year-specific TeX Live packages (e.g., texlive-2023) that could be pinned as an alternative.
Given this limitation, the current approach is reasonable. Consider documenting the reproducibility tradeoff in the project README or an Architecture Decision Record (ADR) so contributors understand that builds may vary slightly between environments or over time due to TeX Live updates.
Hadolint's DL3008 warning is valid for general container best practices, but this is an intentional exception justified by the TeX Live release cycle.
🧰 Tools
🪛 Hadolint (2.14.0)
[warning] 20-20: Pin versions in apt get install. Instead of apt-get install <package> use apt-get install <package>=<version>
(DL3008)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.devcontainer/Dockerfile around lines 18 - 25, Add documentation recording
the intentional decision to install TeX Live packages unpinned
(texlive-latex-recommended, texlive-fonts-recommended, texlive-latex-extra,
latexmk) and the reproducibility tradeoff, and note this as an explicit Hadolint
DL3008 exception; update the project README (or create an ADR) with a short
section that states why year-specific TeX Live packages can't be pinned on
Debian, that builds may vary over time, and that the Dockerfile's unpinned apt
installs are a deliberate choice to avoid build failures.
doc/conf.py
Outdated

# plantweb renders PlantUML diagrams as SVG by default, which pdflatex cannot
# include. Override to PNG for all non-HTML builders (e.g. LaTeX/PDF).
if not tags.has("html"):  # type: ignore[name-defined]  # noqa: F821
    import plantweb.defaults as _plantweb_defaults
    _pd = _plantweb_defaults.read_defaults()
    _pd["format"] = "png"
    _plantweb_defaults.read_defaults.cache = _pd
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== places using Sphinx tags/builder checks =="
rg -n "tags\\.has\\(\"html\"\\)|app\\.builder\\.format|builder-inited|only::\\s+html|only::\\s+not html"
echo
echo "== build entrypoints and explicit tag flags (if any) =="
rg -n "latexpdf|sphinx-build|SPHINXOPTS|-t\\s+html|-t\\s+latex"

Repository: dfetch-org/dfetch
Length of output: 1972
🏁 Script executed:
# Check how asciinema extension is initialized/configured
cat -n doc/_ext/sphinxcontrib_asciinema/asciinema.py | head -30

Repository: dfetch-org/dfetch
Length of output: 1099
🏁 Script executed:
# Check if there's already a setup function or builder-inited hook in conf.py
cat -n doc/conf.py | grep -A5 -B5 "def setup\|builder-inited\|connect"

Repository: dfetch-org/dfetch
Length of output: 43
🏁 Script executed:
# Check when exactly tags.has("html") is evaluated and how tags are populated
rg -A3 -B3 "tags\." doc/conf.py

Repository: dfetch-org/dfetch
Length of output: 592
🏁 Script executed:
# Check full conf.py to see setup function or hook registration
wc -l doc/conf.py

Repository: dfetch-org/dfetch
Length of output: 76
🏁 Script executed:
# Search for any explicit tag setup or app.config.tags in the repo
rg -n "app.config.tags|tags.add\(" doc/

Repository: dfetch-org/dfetch
Length of output: 43
🏁 Script executed:
# Check Makefile for any -t flags being passed
cat doc/Makefile | grep -E "^[A-Za-z].*html|^[A-Za-z].*latex|SPHINXOPTS"

Repository: dfetch-org/dfetch
Length of output: 290
🏁 Script executed:
# Read full conf.py to check for any setup() or hooks
cat -n doc/conf.py

Repository: dfetch-org/dfetch
Length of output: 9512
🏁 Script executed:
# Check Sphinx documentation or behavior: does tags.has("html") work at config-load time?
# Let's search for how plantweb is used and whether the current code actually works
rg -n "plantweb" doc/conf.py -A2 -B2

Repository: dfetch-org/dfetch
Length of output: 728
🏁 Script executed:
# Check if there are any RST files that explicitly test the HTML-only behavior
rg -l "only:: html|only:: not html" doc/

Repository: dfetch-org/dfetch
Length of output: 87
🏁 Script executed:
# Look at how the asciinema extension registers its hook
cat -n doc/_ext/sphinxcontrib_asciinema/__init__.py

Repository: dfetch-org/dfetch
Length of output: 891
Move the plantweb format override into a builder-inited hook.
The current code at line 25 uses tags.has("html") at config-load time, which always evaluates to False because the html tag is not automatically set during configuration parsing. This causes the PNG override to apply to all builders, including HTML, which is the opposite of the intended behavior—HTML builds should use SVG, while PDF builds need PNG.
The asciinema extension in this repo (line 12 of doc/_ext/sphinxcontrib_asciinema/__init__.py) already demonstrates the correct pattern: register a hook using app.connect() and check app.builder.format inside the hook when the builder is fully initialized.
Proposed fix
-# plantweb renders PlantUML diagrams as SVG by default, which pdflatex cannot
-# include. Override to PNG for all non-HTML builders (e.g. LaTeX/PDF).
-if not tags.has("html"): # type: ignore[name-defined] # noqa: F821
- import plantweb.defaults as _plantweb_defaults
- _pd = _plantweb_defaults.read_defaults()
- _pd["format"] = "png"
- _plantweb_defaults.read_defaults.cache = _pd
+# plantweb renders PlantUML diagrams as SVG by default, which pdflatex cannot
+# include. Override to PNG for non-HTML builders (e.g. LaTeX/PDF).
+def _configure_plantweb_for_builder(app):
+    if app.builder.format == "html":
+        return
+    import plantweb.defaults as _plantweb_defaults
+
+    _pd = _plantweb_defaults.read_defaults()
+    _pd["format"] = "png"
+    _plantweb_defaults.read_defaults.cache = _pd
+
+
+def setup(app):
+    app.connect("builder-inited", _configure_plantweb_for_builder)

🤖 Prompt for AI Agents
+ app.connect("config-inited", _configure_plantweb_for_builder)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@doc/conf.py` around lines 23 - 30, The plantweb PNG override runs too early
using tags.has("html") in doc/conf.py and therefore forces PNG for all builders;
move this logic into a builder-inited hook: create a setup(app) (or add to
existing setup) that calls app.connect("builder-inited", on_builder_inited) and
implement on_builder_inited(app) which reads plantweb.defaults via
_plantweb_defaults.read_defaults(), sets _pd["format"]="png" only when
app.builder.format is not "html" (or when app.builder.name indicates non-HTML
builders), and assigns the cache back to _plantweb_defaults.read_defaults.cache;
this mirrors the asciinema extension pattern using app.connect and checks
app.builder.format at initialization time.
The conf.py cache-patch approach proved unreliable because Sphinx 8.2.3 loads conf.py before adding the builder name to 'tags' (confirmed in application.py:100-108), and there were timing/subprocess concerns with patching the internal cache. .plantwebrc at the repo root is plantweb's own config mechanism: the 'git://.plantwebrc' provider (last = highest priority in DEFAULTS_PROVIDERS) runs 'git rev-parse --show-toplevel' and reads this file. This works reliably for all builders without any Sphinx internals knowledge. HTML builds now also render PlantUML as PNG, which is acceptable quality for documentation and avoids any format mismatch between builders. https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
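The provider chain described above can be sketched in plain Python. This is an illustration only: the provider name `git://.plantwebrc` and the merge order come from the paragraph above, and the defaults subset shown here is assumed, not copied from plantweb's source.

```python
import json
import subprocess

# Illustrative subset of built-in defaults (plantweb's real table is larger)
DEFAULTS = {"engine": "plantuml", "format": "svg"}

def read_repo_defaults():
    """Mimic the 'git://.plantwebrc' provider: find the repository root
    with 'git rev-parse --show-toplevel' and read .plantwebrc from it."""
    root = subprocess.run(
        ["git", "rev-parse", "--show-toplevel"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    try:
        with open(f"{root}/.plantwebrc", encoding="utf-8") as fh:
            return json.load(fh)
    except FileNotFoundError:
        return {}

# The last provider wins, so a repo-level {"format": "png"} overrides the default
effective = {**DEFAULTS, **json.loads('{"format": "png"}')}
print(effective["format"])  # png
```

Because the override lives in a plain file at the repo root, it applies identically to every builder and every subprocess, which is exactly the property the conf.py cache patch lacked.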
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.plantwebrc:
- Line 1: The project-level config (.plantwebrc) currently forces PNG output via
{"format":"png"}, which breaks hardcoded SVG references like
static/patch-diagram.svg in doc/landing-page/index.rst; either remove or change
the global format setting in .plantwebrc so it doesn't force PNG (e.g., delete
the {"format":"png"} entry or set a builder-aware option), or if you must keep
PNG globally then update the hardcoded reference in doc/landing-page/index.rst
from static/patch-diagram.svg to static/patch-diagram.png so the asset path
matches the generated output.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 85b16e79-a2a4-4ca4-af2d-26ec89605921
📒 Files selected for processing (1)
.plantwebrc
@@ -0,0 +1 @@
{"format": "png"}
Global PNG output can break existing hardcoded .svg doc references.
This forces PlantWeb artifacts to .png, but doc/landing-page/index.rst (Line 374) still references static/patch-diagram.svg, which will become a broken asset path.
Suggested fix (update hardcoded extension)
--- a/doc/landing-page/index.rst
+++ b/doc/landing-page/index.rst
@@
-:file: static/patch-diagram.svg
+:file: static/patch-diagram.png

If you need mixed outputs by builder, consider making image references builder-aware instead of hardcoding one extension.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.plantwebrc at line 1, The project-level config (.plantwebrc) currently
forces PNG output via {"format":"png"}, which breaks hardcoded SVG references
like static/patch-diagram.svg in doc/landing-page/index.rst; either remove or
change the global format setting in .plantwebrc so it doesn't force PNG (e.g.,
delete the {"format":"png"} entry or set a builder-aware option), or if you must
keep PNG globally then update the hardcoded reference in
doc/landing-page/index.rst from static/patch-diagram.svg to
static/patch-diagram.png so the asset path matches the generated output.
The finding is a false positive — no fix needed.
static/patch-diagram.svg is a committed static asset in the repo. It's included via .. raw:: html :file: which just inlines the file content at build time — plantweb never touches it.
.plantwebrc only affects diagrams defined with .. uml:: directives. The landing page has no uml/plantweb directives at all (confirmed by grep). The two are completely independent: plantweb can be forced to PNG without affecting any hardcoded SVG references
@spoorcc, you're absolutely right — I apologize for the false positive. The distinction is clear:
- `.plantwebrc` only governs the output format for diagrams rendered from `.. uml::` directives via plantweb.
- `.. raw:: html :file: static/patch-diagram.svg` simply inlines a committed static asset at build time and is entirely unaffected by plantweb's configuration.
Since the landing page contains no .. uml:: directives, the two are fully independent. No fix is needed.
(^• ω •^) 🐰
✏️ Learnings added
Learnt from: spoorcc
Repo: dfetch-org/dfetch PR: 1100
File: .plantwebrc:1-1
Timestamp: 2026-04-01T16:18:39.670Z
Learning: In dfetch's Sphinx documentation, `.plantwebrc` (e.g., `{"format": "png"}`) only controls the output format of diagrams generated from `.. uml::` directives via plantweb. It has no effect on committed static SVG assets included via `.. raw:: html :file: path/to/file.svg`, which are simply inlined at build time. These two mechanisms are completely independent — changing `.plantwebrc` does not break hardcoded SVG references in `.rst` files.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
explanation/alternatives.rst uses ✔ (U+2714) and ✘ (U+2718) throughout
its comparison table. pdflatex aborts with 'Unicode character not set up
for use with LaTeX' when it encounters them.
Add a LaTeX preamble that maps both characters to pifont dingbats
(\ding{51} and \ding{55}) via the newunicodechar package. Both pifont
and newunicodechar are part of texlive-latex-extra, already installed.
https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
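In Sphinx, a preamble like the one described is normally injected through `latex_elements` in conf.py. A minimal sketch; the fragment is illustrative, since the thread does not show dfetch's actual conf.py keys:

```python
# conf.py fragment (illustrative): map ✔ (U+2714) and ✘ (U+2718) to
# pifont dingbats so pdflatex can typeset them.
latex_elements = {
    "preamble": r"""
\usepackage{pifont}
\usepackage{newunicodechar}
\newunicodechar{✔}{\ding{51}}
\newunicodechar{✘}{\ding{55}}
""",
}
```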
- Set latex_logo to dfetch_logo.png so \sphinxlogo renders on the title page
- Switch body font to Helvetica (helvet) matching the Inter/sans-serif design intent
- Define design-token colours (dfprimary #c2620a, dfaccent #4e7fa0, etc.) via xcolor
- Use sphinxsetup to apply tokens to chapter titles, links, code blocks and admonitions
- Add custom maketitle with amber header bar, centred logo, styled title/subtitle and accent footer rule

https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
\py@release contains @ which is only a letter inside \makeatletter scope. Without the guard pdflatex aborts with "Missing number, treated as zero". https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
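A hedged sketch of the guard as a conf.py fragment; the PR's actual title page is more elaborate (logo, rules, colours), so only the `\makeatletter` scoping is shown here, with placeholder content:

```python
# conf.py fragment (illustrative): \py@release contains '@', which TeX only
# treats as a letter between \makeatletter and \makeatother, so any custom
# maketitle that reads it must be wrapped in that scope.
latex_elements = {
    "maketitle": r"""\makeatletter
\begin{titlepage}
  \centering
  {\Huge dfetch \py@release\par}
\end{titlepage}
\makeatother
""",
}
```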
- Set release = __version__ so \py@release renders the version on the title page
- Use \includegraphics[width=0.35\linewidth] instead of \sphinxlogo to constrain
the logo size; \sphinxlogo has no width limit and caused the title page to spill
across 3 pages when the PNG natural resolution was large
- Replace double \vfill with \vspace*{\fill} so the two rules and centered block
sit correctly within the single titlepage environment
- Include version in latex_documents filename: dfetch-<version>.pdf
- Update docs.yml artifact/release paths to dfetch-*.pdf glob
https://claude.ai/code/session_014hVQ5UrK1B4N9ar4UfYpm4
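The versioned-filename change above can be sketched as follows; the document title, author, and version literal are placeholders, and the real conf.py imports `__version__` from the dfetch package:

```python
# conf.py fragment (illustrative). The second tuple element is the target
# .tex name; Sphinx's latexpdf then produces dfetch-<version>.pdf from it.
__version__ = "3.1.0"  # stand-in for the value imported from the package

release = __version__  # lets \py@release render the version on the title page
latex_documents = [
    ("index", f"dfetch-{__version__}.tex", "Dfetch Documentation",
     "dfetch contributors", "manual"),
]
print(latex_documents[0][1])  # dfetch-3.1.0.tex
```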
@coderabbitai resume

✅ Actions performed
Reviews resumed.
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/docs.yml (1)
92-95: ⚠️ Potential issue | 🟡 Minor
Still missing apt-get update before the TeX install step.
This unresolved package-index issue is still present in the new pdf job.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/docs.yml around lines 92 - 95, The "Install LaTeX" step in the pdf job is missing an apt-get update, causing package-index failures; modify the pdf job's step named "Install LaTeX" (the block that currently runs sudo apt-get install -y texlive-*) to first run sudo apt-get update (or a combined run that does sudo apt-get update && sudo apt-get install -y ...) so the package index is refreshed before installing LaTeX packages.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/docs.yml:
- Around line 5-9: Rename the workflow input named release_id to release_tag and
update all places that reference it (e.g., any if conditions and action inputs
that pass it as tag_name to softprops/action-gh-release); specifically change
the input key release_id to release_tag, replace uses of inputs.release_id with
inputs.release_tag, and change condition checks from if: ${{ inputs.release_id
}} to if: ${{ inputs.release_tag != '' }} (and do the same for the second
occurrence noted around lines 106-110).
---
Duplicate comments:
In @.github/workflows/docs.yml:
- Around line 92-95: The "Install LaTeX" step in the pdf job is missing an
apt-get update, causing package-index failures; modify the pdf job's step named
"Install LaTeX" (the block that currently runs sudo apt-get install -y
texlive-*) to first run sudo apt-get update (or a combined run that does sudo
apt-get update && sudo apt-get install -y ...) so the package index is refreshed
before installing LaTeX packages.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 857305c2-467e-49c7-b4b6-b9182eca86d3
📒 Files selected for processing (2)
.github/workflows/docs.yml
doc/conf.py
inputs:
  release_id:
    required: false
    type: string
    default: ''
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== docs workflow callers =="
rg -n -C3 'uses:\s*\.\/\.github\/workflows\/docs\.yml|release_id|prep-release' .github/workflows
echo
echo "== release_id producer =="
rg -n -C3 'outputs:|release_id|tag_name|upload_url|gh release|create[-_]release|action-gh-release' .github/workflows

Repository: dfetch-org/dfetch
Length of output: 248
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Files in .github/workflows =="
ls -la .github/workflows/ | head -20
echo
echo "== Content of docs.yml (full) =="
cat .github/workflows/docs.yml
echo
echo "== Search for callers of docs.yml =="
grep -r "docs.yml" .github/workflows/ || echo "No callers found"
echo
echo "== Search for prep-release workflow =="
if [ -f .github/workflows/prep-release.yml ]; then
cat .github/workflows/prep-release.yml
else
echo "prep-release.yml not found"
fi

Repository: dfetch-org/dfetch
Length of output: 4446
🏁 Script executed:
cat .github/workflows/ci.yml

Repository: dfetch-org/dfetch
Length of output: 1076
🏁 Script executed:
cat .github/workflows/release.yml

Repository: dfetch-org/dfetch
Length of output: 2784
🌐 Web query:
softprops/action-gh-release tag_name parameter documentation
💡 Result:
The tag_name parameter for the softprops/action-gh-release GitHub Action is an optional string input that specifies the name of the tag for the GitHub release. Description: "Name of a tag. defaults to github.ref_name. refs/tags/ values are normalized to ." When omitted, it defaults to the current GitHub ref name (e.g., the tag name from a push event). It can be used to create releases for custom or existing tags, such as in examples like tag_name: stage-2026-03-15. This is documented in the official README inputs table.
Citations:
- 1: https://github.com/softprops/action-gh-release
- 2: https://git.seepine.com/softprops/action-gh-release/src/commit/d5382d3e6f2fa7bd53cb749d33091853d4985daf/README.md
- 3: https://togithub.com/softprops/action-gh-release
- 4: https://github.com/marketplace/actions/gh-release
Rename release_id to release_tag for clarity, since the value is a tag string not a numeric ID.
The variable named release_id in release.yml actually outputs a tag string (${{ steps.release_info.outputs.tag }}), not a numeric release ID. Using it as tag_name in the softprops/action-gh-release action is correct. However, the input should be renamed to release_tag to accurately reflect what it contains and avoid future confusion.
Also improve the condition from if: ${{ inputs.release_id }} to if: ${{ inputs.release_id != '' }} for clarity.
Suggested shape
- release_id:
+ release_tag:
required: false
type: string
default: ''
…
- if: ${{ inputs.release_id }}
+ if: ${{ inputs.release_tag != '' }}
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2.5.0
with:
- tag_name: ${{ inputs.release_id }}
+ tag_name: ${{ inputs.release_tag }}

Also applies to: lines 106-110
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/docs.yml around lines 5 - 9, Rename the workflow input
named release_id to release_tag and update all places that reference it (e.g.,
any if conditions and action inputs that pass it as tag_name to
softprops/action-gh-release); specifically change the input key release_id to
release_tag, replace uses of inputs.release_id with inputs.release_tag, and
change condition checks from if: ${{ inputs.release_id }} to if: ${{
inputs.release_tag != '' }} (and do the same for the second occurrence noted
around lines 106-110).
Summary by CodeRabbit
New Features
Documentation
Chores
Editor Tasks