AVRO-4232: [Python] Modernize the Python tooling #35
martin-augment wants to merge 9 commits into main
Conversation
Use `uv` for management and `ruff` for linting and formatting
Test 3.14 instead
Walkthrough
This pull request modernizes the Python and Java build infrastructure by replacing tox and pip-based workflows with uv-based tooling across test workflows, build scripts, and project configuration. Changes include updating GitHub Actions workflows for Java and Python to use the uv package manager and setuptools, reformatting Python code for style consistency (conditional expressions, dictionary and schema formatting), removing tox and setup.cfg configuration files, consolidating project metadata into pyproject.toml with updated tooling configurations, and updating the Maven RAT exclusion rule from tox directories to uv.lock. Exception handling and conditional logic are refactored to use explicit "is not None" patterns.
✨ Finishing Touches
Summary of Changes
Hello @martin-augment, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly modernizes the Python development and build ecosystem within the project. The primary goal is to streamline workflows by adopting contemporary tools like `uv` and `ruff`.
Highlights
🤖 Augment PR Summary
Summary: Modernizes the Python tooling for Avro and updates CI workflows to use `uv`.
Changes: …
Technical Notes: CI now relies on lockfile-driven environments and uses …
```diff
-          python3 -m pip install --break-system-packages --upgrade pip setuptools tox
-          python3 -m pip install --break-system-packages python-snappy zstandard
+      - name: "Install Python Dependencies"
+        run: uv sync
```
Because this workflow sets defaults.run.working-directory: lang/java, this uv sync will run in lang/java and likely won’t find the Python pyproject.toml/uv.lock under lang/py. Was the intent to sync the Python environment used by the later lang/py interop-data generation step?
Severity: high
value:useful; category:bug; feedback: The Augment AI reviewer is correct! uv would work only for lang/py/ because it uses pyproject.toml. To fix the CI job's step the current working directory should be set to lang/py/
```diff
       - name: Test
-        run: python3 -m tox -e py
+        run: uv run ./build.sh test
```
lang/py/build.sh test currently runs format, lint, and typechecks in addition to tests/coverage, so these will execute for every matrix Python (including PyPy) despite the conditional Lint/Typechecks steps above. If the intent was to limit lint/typechecks to a single version, this may lead to unexpected failures or duplicated work.
Severity: medium
value:valid-but-wont-fix; category:bug; feedback: The Augment AI reviewer is correct! The proposed changes will run all checks for all Python versions. The checks are fast enough, so this is not a problem. The idea is to make sure that the Python SDK works fine on all supported Python versions.
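For projects that do want to restrict lint and typechecks to a single interpreter, the usual GitHub Actions pattern is a conditional step — a sketch only, assuming the matrix key is named `python` (the key name is an assumption, not taken from this PR):

```yaml
# Hypothetical workflow fragment: run lint/typechecks on one matrix entry only.
- name: Lint
  if: matrix.python == '3.12'   # matrix key and version are illustrative
  run: uv run ./build.sh lint
```

The maintainer's choice here — running all checks on every supported version — trades a little duplicated work for coverage of version-specific breakage.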
```sh
    lint) lint;;
    format) format;;
    test) test_;;
    coverage) coverage ;;
```
The script now supports coverage/typechecks targets, but usage() still only advertises {clean|dist|doc|interop-data-generate|interop-data-test|lint|test}. That can be confusing when invoking the new commands directly.
Severity: low
value:good-to-have; category:documentation; feedback: The Augment AI reviewer is correct! The "usage" help text should be updated to mention the new script arguments
```toml
# Minimal pip build requirements:
# https://pip.pypa.io/en/stable/reference/pip/#pep-517-and-518-support

[project]
```
With setup.cfg removed, pyproject.toml doesn’t currently declare packaging details that were previously in setup.cfg (package discovery, package data like HandshakeRequest.avsc, console entry point avro=avro.__main__:main, optional deps). This may produce incomplete wheels/sdists even if local tests still pass.
Severity: high
value:good-to-have; category:bug; feedback: The Augment AI reviewer is correct! The setup.cfg file provided configuration for setuptools. Any tool supporting PEP 621 (like uv) should support these config options declared in pyproject.toml.
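As a sketch of what that PEP 621 migration could look like — the entry-point, package-data, and extras names below are taken from the review comments in this thread, not verified against the repository:

```toml
# Illustrative pyproject.toml fragments replacing setup.cfg sections.
[project.scripts]
avro = "avro.__main__:main"          # console entry point formerly in setup.cfg

[project.optional-dependencies]      # extras formerly declared in setup.cfg
snappy = ["python-snappy"]
zstandard = ["zstandard"]

[tool.setuptools.package-data]       # bundle schema resources into the wheel
avro = ["*.avsc"]
```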
test |
PR Review: AVRO-4232 — Modernize Python Tooling (tox → uv)

Overall this is a solid modernization effort. The migration from tox/pip to uv is well-structured, and many of the code cleanups are genuine improvements. I have a few concerns and nits to raise.

Bugs / Correctness Issues

1. The PR changes …

2. ```sh
test_() {
    format
    lint
    typechecks
    coverage
}
```
The `test_` target now runs formatting, linting, and type checks on every invocation, not only the tests.

3. ```sh
uv build --out-dir $destination                                   # in dist()
mkdir -p $TEST_INTEROP_DIR $SCRIPT_DIR/../../build/interop/data   # in coverage()
cp -r $SCRIPT_DIR/../../build/interop/data $TEST_INTEROP_DIR
```
These variables should be double-quoted. With paths containing spaces (or under `set -u`), the unquoted expansions will word-split or fail.

Missing / Incomplete Migration Items

4. The missing …

5. ```toml
"typechecks>=0.1.0",
```
This looks like a stale/incorrect dependency. The …

6. ```diff
-<exclude>lang/py/.tox/**</exclude>
+<exclude>lang/py/uv.lock</exclude>
```
Correct change — the RAT exclusion now tracks the new tooling.

Code Quality

7. The old code intentionally split this string across two lines for readability. The new version is within the configured line length.

8. Good fix. Prefixing unused variables with `_` is the idiomatic way to mark an intentional discard.

9. ```python
return ({"first": first, "last": last, "type": type} for first, last, type in LOONIES)
```
The loop variable `type` shadows the built-in of the same name.

10. Both are good improvements. The …

CI / Workflow Changes

11. Python 3.14 added to test matrix — Python 3.14 is currently in pre-release/alpha. Adding it to the CI matrix may cause spurious failures due to interpreter unavailability or compatibility issues. Consider marking it as experimental with `continue-on-error: true`.

12. Interop matrix reduced — the interop test matrix was narrowed from …

13. In the Java workflow:
```yaml
- name: "Generate Interop Data using C#"
  working-directory: lang/csharp
  run: uv run ./build.sh interop-data-generate
```
C# build scripts have no dependency on the Python environment. Prepending `uv run` here is unnecessary.

Summary

The core migration from tox to uv is clean and the direction is right. The missing …
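The quoting and temp-directory concerns raised for the build script can be addressed with the usual shell pattern — a minimal standalone sketch, independent of the Avro `build.sh`:

```shell
#!/usr/bin/env sh
set -eu

# Quote every expansion and clean up the temp dir on exit via trap.
TEST_INTEROP_DIR="$(mktemp -d)"
trap 'rm -rf "${TEST_INTEROP_DIR}"' EXIT

mkdir -p "${TEST_INTEROP_DIR}/data"          # quoted: safe with spaces in paths
printf 'stub' > "${TEST_INTEROP_DIR}/data/py.avro"
ls "${TEST_INTEROP_DIR}/data"
```

The `trap … EXIT` guarantees the temporary directory is removed even when `set -e` aborts the script partway through.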
Code Review
This pull request modernizes the Python tooling by updating dependencies, migrating configuration from setup.cfg and tox.ini to pyproject.toml, and introducing uv for dependency management and ruff for linting and formatting, aiming to streamline the development workflow and align with modern Python best practices. A medium-severity information leak vulnerability was found where a full exception traceback is sent over IPC, potentially exposing sensitive internal details. The PR also includes minor code style adjustments for readability and consistency, such as explicit avro.schema.AvroException and http.server.HTTPServer imports, and improved if ... is not None checks.
lang/py/setup.cfg (1-84)
The setup.cfg file has been removed. This signifies a complete migration of project metadata and configuration to pyproject.toml, aligning with modern Python packaging standards (PEP 621). This simplifies project setup and maintenance by consolidating configuration in one place.
lang/py/tox.ini (1-96)
The tox.ini file has been removed. This indicates a shift away from tox for managing test environments and running commands, likely in favor of uv and direct script execution. This simplifies the project's testing infrastructure and reduces overhead.
lang/py/avro/tether/tether_task_runner.py (87-89)
The application catches a broad exception and sends the full traceback to the parent process via the self.task.fail() method. This can leak sensitive information about the application's internal workings, such as file paths, library versions, and code structure, which could be useful to an attacker for planning further attacks.
```python
self.log.error("Error occurred while processing message: %s", message.name, exc_info=True)
self.task.fail(f"An unexpected error occurred while processing message '{message.name}'.")
```
lang/py/build.sh (69-73)
The test_ function has been updated to include format, lint, typechecks, and coverage steps. This comprehensive approach ensures that all code quality checks are performed as part of the standard test suite, improving overall code health and reliability.
```sh
test_() {
  format
  lint
  typechecks
  coverage
}
```
lang/py/build.sh (75-85)
A new coverage function has been added to manage code coverage reporting. It uses uv run coverage to execute tests and generate coverage reports, ensuring that code coverage is consistently measured and reported. This is a significant improvement for tracking code quality.
```sh
coverage() {
  SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
  TEST_INTEROP_DIR=$(mktemp -d)
  mkdir -p ${TEST_INTEROP_DIR} ${SCRIPT_DIR}/../../build/interop/data
  cp -r ${TEST_INTEROP_DIR}/data ${SCRIPT_DIR}/../../build/interop/data
  uv run coverage run -pm avro.test.gen_interop_data avro/interop.avsc ${TEST_INTEROP_DIR}/data/py.avro
  cp -r ${TEST_INTEROP_DIR}/data ${SCRIPT_DIR}/../../build/interop
  uv run coverage run -pm unittest discover --buffer --failfast
  uv run coverage combine --append
  uv run coverage report
}
```
lang/py/build.sh (101-105)
The main function's case statement has been updated to include the new format, coverage, and typechecks commands. This ensures that these new code quality tools are integrated into the main build script and can be easily invoked.
```sh
    lint) lint;;
    format) format;;
    test) test_;;
    coverage) coverage ;;
    typechecks) typechecks ;;
```
lang/py/build.sh (89-98)
The _tox function has been removed. This is a significant change, indicating a shift away from tox for managing virtual environments and running commands, likely in favor of the newly introduced uv tool. This simplifies the build script and reduces the number of tools required.
lang/py/pyproject.toml (23-36)
The [tool.black], [tool.isort], and [tool.autoflake] sections have been removed. This indicates a migration away from these individual tools for formatting and linting, likely in favor of ruff, which combines their functionalities into a single, faster tool.
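A hedged sketch of how the removed black/isort settings map onto ruff configuration — the values below are illustrative, not taken from this PR:

```toml
[tool.ruff]
line-length = 150        # replaces black's line-length setting (value illustrative)

[tool.ruff.lint]
extend-select = ["I"]    # the "I" rule group provides isort-style import sorting
```

One advantage of this consolidation is that a single `[tool.ruff]` section is the only place formatting and lint behavior is configured.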
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lang/py/build.sh (1)
20-23: ⚠️ Potential issue | 🟡 Minor — Usage string is missing newly added targets.

The usage message doesn't list `format`, `coverage`, or `typechecks`, which are all valid targets in `main()`.

Proposed fix

```diff
 usage() {
-  echo "Usage: $0 {clean|dist|doc|interop-data-generate|interop-data-test|lint|test}"
+  echo "Usage: $0 {clean|coverage|dist|doc|format|interop-data-generate|interop-data-test|lint|test|typechecks}"
   exit 1
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lang/py/build.sh` around lines 20 - 23, The usage() string is out of sync with valid targets in main(): update the echo in usage() to list the newly added targets "format", "coverage", and "typechecks" (e.g., include them alongside clean|dist|doc|interop-data-generate|interop-data-test|lint|test) so the usage output accurately reflects all supported targets defined by main().
🧹 Nitpick comments (5)
lang/py/avro/test/test_io.py (1)
323-323: LGTM — the `_actual_warnings` rename correctly signals intentional discard.

The underscore prefix is the idiomatic Python convention for an unused variable and correctly suppresses linter noise.

That said, none of these six `catch_warnings(record=True)` blocks ever assert anything about the captured list — unexpected or missing deprecation warnings would go entirely undetected. If warning coverage matters for these code paths, consider adding at least a `self.assertEqual([], _actual_warnings)` (or a positive assertion) in the future.

Also applies to: 347-347, 388-388, 400-400, 439-439, 468-468
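The positive assertion this nitpick suggests can be sketched outside the Avro test suite like this (standalone example, not the project's code):

```python
import warnings

def deprecated_call():
    # Stand-in for an API that emits a deprecation warning.
    warnings.warn("old API", DeprecationWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # ensure the warning is actually recorded
    deprecated_call()

# Assert on the captured list instead of silently discarding it.
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```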
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lang/py/avro/test/test_io.py` at line 323, the test uses six `with warnings.catch_warnings(record=True) as _actual_warnings` blocks but never asserts on the captured warnings, so add assertions to verify warning expectations; locate each `with warnings.catch_warnings(record=True) as _actual_warnings` (occurrences at the current block plus the ones referenced later) and add either `self.assertEqual([], _actual_warnings)` when you expect no warnings or an appropriate positive assertion (e.g., `assert any(w.category is DeprecationWarning for w in _actual_warnings)` or `self.assertTrue(len(_actual_warnings) > 0)`) to make the test verify the warning behavior.

.github/workflows/test-lang-py.yml (2)
73-74: `uv sync` without `--frozen` allows silent lockfile mutations in CI.

Without `--frozen` (or `--locked`), uv will update `uv.lock` in place if it's stale, meaning a CI run can silently diverge from the committed lockfile. In reproducible CI pipelines the convention is to fail fast if the lockfile is out of sync:

🔧 Proposed fix (both `test` and `interop` jobs)

```diff
-        run: uv sync
+        run: uv sync --frozen
```

This applies to Line 74 (`test` job) and Line 120 (`interop` job).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test-lang-py.yml around lines 73 - 74, Replace the non-frozen uv sync invocations with a frozen/locked mode to prevent CI from mutating uv.lock: locate the two occurrences of the command "uv sync" (in the test job and the interop job) and change them to use the frozen flag (e.g., "uv sync --frozen" or "uv sync --locked") so the workflow fails if the lockfile is out of date rather than updating it silently.
62-63: `astral-sh/setup-uv@v6` is one major version behind — current latest is v7.

The current recommended version in the official `astral-sh/setup-uv` documentation is `@v7`. Using the latest major tag also pulls in any bug fixes and support for newer uv/Python versions.

🔧 Proposed update (both occurrences)

```diff
-        uses: astral-sh/setup-uv@v6
+        uses: astral-sh/setup-uv@v7
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test-lang-py.yml around lines 62-63, update the GitHub Actions workflow to use the current major tag for the setup action: replace all occurrences of the action reference "uses: astral-sh/setup-uv@v6" with "uses: astral-sh/setup-uv@v7" so the workflow picks up the latest v7 fixes and compatibility; ensure both occurrences are updated consistently in the file where "uses: astral-sh/setup-uv@v6" appears.

lang/py/pyproject.toml (1)
20-24: Missing `license` and other PEP 621 metadata previously in `setup.cfg`.

The `[project]` table has only the four fields required to build. For a published Apache-licensed package, the `license`, `authors`, `classifiers`, and `readme` fields should be present so PyPI picks them up correctly. E.g.:

📦 Suggested additions to `[project]`

```diff
 [project]
 name = "avro"
 description = "Avro is a serialization and RPC framework."
 version = "1.13.0"
 requires-python = ">=3.10"
+license = { text = "Apache-2.0" }
+readme = "README.md"
+authors = [{ name = "Apache Avro", email = "dev@avro.apache.org" }]
+classifiers = [
+    "License :: OSI Approved :: Apache Software License",
+    "Programming Language :: Python :: 3",
+]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `lang/py/pyproject.toml` around lines 20-24, add the missing PEP 621 metadata to the [project] table: include a license entry (e.g., license = {text = "Apache-2.0"}), authors (authors = [{name = "...", email = "..."}]), classifiers (a list including license and Python version trove classifiers), and a readme entry (readme = "README.md" and content-type if needed) so PyPI displays correct package metadata; update the existing project table in pyproject.toml (the block containing name, description, version, requires-python) to include these fields.

lang/py/build.sh (1)
60-66: Consider pinning the ruff version for reproducible results.

`uvx ruff` will use whatever version is resolved at runtime. Different environments or CI runs may pick up different ruff versions, causing inconsistent lint/format results. You can pin it with `uvx ruff@<version>`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lang/py/build.sh` around lines 60 - 66, The lint() and format() shell functions currently invoke uvx ruff without a fixed version, which can lead to non-reproducible linting/formatting; update both functions (lint() and format()) to call uvx ruff@<version> (choose and pin a specific ruff version) so the commands become uvx ruff@<version> check . and uvx ruff@<version> format ., respectively, ensuring the chosen version is documented or kept in sync with project CI.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/test-lang-java.yml:
- Around line 139-140: The "Install Python Dependencies" step currently runs `uv
sync` under the workflow default working-directory (lang/java) so it can't find
the Python lockfile; update that step (identified by the step name "Install
Python Dependencies" and the command `uv sync`) to set working-directory:
lang/py (or otherwise point to the directory containing pyproject.toml/uv.lock)
so the command runs in the Python repo and succeeds.
- Around line 158-160: The step named "Generate Interop Data using C#"
incorrectly calls the script via "uv run ./build.sh interop-data-generate" (a
Python venv wrapper); remove the "uv run" wrapper and invoke the C# build script
directly (e.g., replace with "bash ./build.sh interop-data-generate" or the
script's direct invocation that runs "dotnet run" with its .NET args) so the
"interop-data-generate" target is executed using .NET tooling rather than a
Python virtualenv helper.
- Around line 128-129: Replace the outdated action usage astral-sh/setup-uv@v6
with the v7 release (e.g., astral-sh/setup-uv@v7) and add an explicit version
input to pin the uv tool (use the action's input named version:) so CI uses a
fixed uv release; alternatively ensure required-version is set in uv.toml or
pyproject.toml and pass that same value into the action's version input to keep
CI and local dev consistent.
In `@lang/py/avro/ipc.py`:
- Line 325: Change the incorrect exception reference in the except clause from
avro.schema.AvroException to avro.errors.AvroException; locate the except
avro.schema.AvroException as e clause (the one that differs from other handlers)
and update it to except avro.errors.AvroException as e so it matches the other
references and prevents AttributeError at runtime.
In `@lang/py/build.sh`:
- Line 42: The assignment of local version masks the exit status of the cat
call; change the single combined declaration/assignment into two steps in
build.sh: first declare the local variable (local version) and then assign it
from the file (e.g., use command substitution or the faster shell redirection)
so that failures from reading ../../share/VERSION.txt propagate and trigger set
-e; update the code that references the version variable accordingly.
- Line 37: The uv build invocation uses an unquoted variable ${destination},
which can break if the resolved path contains spaces or when set -u is enabled;
update the command that calls uv build --out-dir ${destination} to quote the
variable (use "${destination}") so the path is treated as a single argument and
to avoid word-splitting errors.
- Around line 68-73: The test_ target currently runs format which mutates the
working tree; update test_ to verify formatting instead of rewriting by
replacing the call to format with a non-mutating check (e.g., run ruff format
--check . or ruff check .) while leaving the existing format target (which runs
ruff format .) intact for intentional reformatting; adjust any output/exit
handling in test_ to fail the job when the check reports changes.
- Around line 75-85: In the coverage() function, quote all variable expansions
(e.g., "${TEST_INTEROP_DIR}", "${SCRIPT_DIR}", and any uses in cp/mkdir
commands) to prevent word-splitting on paths with spaces, and add a trap to
remove the temporary directory created by TEST_INTEROP_DIR on EXIT (or ERR) so
the temp dir is always cleaned up; update references inside the function
(coverage(), TEST_INTEROP_DIR, SCRIPT_DIR, the cp/mkdir/uv commands) to use the
quoted variables and register a cleanup like a trap that runs rm -rf
"${TEST_INTEROP_DIR}" before returning.
In `@lang/py/pyproject.toml`:
- Around line 33-34: The mypy configuration sets python_version = "3.11" under
[tool.mypy], which conflicts with the package metadata requires-python =
">=3.10"; change the python_version value to "3.10" (or remove the key to
default to project interpreter) so static type-checking targets the declared
minimum runtime, updating the python_version entry in the [tool.mypy] section
accordingly.
- Around line 20-55: Add the missing setuptools and project metadata that
replaced setup.cfg: in pyproject.toml add a [tool.setuptools.package-data]
section mapping the packages avro, avro.test, and avro.tether to include
resource patterns (e.g. *.avsc, *.avpr, and other data files) so package data is
bundled; add a [project.scripts] entry for the console entry point "avro =
avro.__main__:main"; and add [project.optional-dependencies] entries for the
extras (e.g. "snappy" and "zstandard") matching the previously declared extra
names so those optional dependencies are available via extras. Ensure the
package names and patterns exactly reference avro, avro.test, avro.tether and
the entry point avro.__main__:main.
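The `ipc.py` fix above (referencing `avro.errors.AvroException` instead of the nonexistent `avro.schema.AvroException`) illustrates a Python subtlety worth knowing: the expression in an `except` clause is only evaluated when an exception actually reaches it, so a bad attribute lookup there hides until runtime. A standalone sketch — the class names are stand-ins, not the Avro code:

```python
class errors:                       # stand-in for avro.errors
    class AvroException(Exception):
        pass

class schema:                       # stand-in for avro.schema: no AvroException here
    pass

def handle(fn):
    try:
        fn()
    except schema.AvroException:    # lookup happens only if fn() raised
        return "caught"
    return "ok"

print(handle(lambda: None))         # except clause never evaluated, returns "ok"

def boom():
    raise errors.AvroException()

try:
    handle(boom)
except AttributeError:
    print("bug surfaces only when an exception is actually raised")
```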
```yaml
      - name: Setup uv
        uses: astral-sh/setup-uv@v6
```
🧩 Analysis chain
🌐 Web query:
astral-sh/setup-uv latest version GitHub releases 2025
💡 Result:
- astral-sh/setup-uv: the latest 2025 GitHub Release is v7.1.6 (released Dec 13, “add OS version to cache key…”, marked Latest). [1]
- astral-sh/uv (the tool that `setup-uv` installs): the latest 2025 GitHub Release is 0.9.21 (released 2025-12-30, marked Latest). [2]
Sources: [1] [2]
🌐 Web query:
astral-sh/setup-uv v7 changelog working-directory support improvements
💡 Result:
In astral-sh/setup-uv v7, the notable working-directory improvement landed in v7.1.1 (Oct 19, 2024): it fixed a bug where the working-directory input was not used when detecting an empty working directory (“empty workdir detection”). [1]
As documented for v7, working-directory is intended to control where the action looks for pyproject.toml, uv.toml, and .python-version (used to decide which uv/Python to install), and it also controls where the venv gets created. [2]
Sources: [1] [2]
🏁 Script executed:

```shell
cat -n .github/workflows/test-lang-java.yml | sed -n '120,140p'
```

Repository: martin-augment/avro
Length of output: 979
🌐 Web query:
astral-sh/setup-uv action recommended usage version input parameter best practice
💡 Result:
Best practice: pin uv to a specific version in CI (don’t rely on “latest”) by setting the action’s version: input. [1]
```yaml
- uses: astral-sh/setup-uv@v7
  with:
    version: "0.10.4"
```

[1]
Recommended ways to manage the uv version (highest → lowest “policy as code”):
- Prefer declaring the required uv version in-repo (so local dev + CI match). If you omit `with: version`, the action will look for `required-version` in `uv.toml` or `pyproject.toml` (repo root); if none is found, it installs the latest uv. [2]
- If you need CI to read the version from a specific file/location, use `version-file:` (otherwise it defaults to searching `uv.toml` then `pyproject.toml`). [2]
- If you use a version range, control how it resolves with `resolution-strategy: "highest"` (default) or `"lowest"`. [2]
astral-sh/setup-uv@v6 is outdated — upgrade to @v7.
The latest version is v7.1.6 (Dec 13, 2025). Additionally, best practice is to explicitly pin the uv tool version via the version: input. This can be set directly in the action or declared in uv.toml/pyproject.toml under required-version so that CI and local development use the same version:
⬆️ Proposed update
```diff
-      uses: astral-sh/setup-uv@v6
+      uses: astral-sh/setup-uv@v7
+      with:
+        version: "0.9.21"  # match your pinned uv release version
```
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/test-lang-java.yml around lines 128 - 129, Replace the
outdated action usage astral-sh/setup-uv@v6 with the v7 release (e.g.,
astral-sh/setup-uv@v7) and add an explicit version input to pin the uv tool (use
the action's input named version:) so CI uses a fixed uv release; alternatively
ensure required-version is set in uv.toml or pyproject.toml and pass that same
value into the action's version input to keep CI and local dev consistent.
value:good-to-have; category:bug; feedback: The CodeRabbit AI reviewer is correct! The Github Action has a newer version that should be used instead of v6. v7 has some bug fixes and several improvements over v6.
```yaml
      - name: "Install Python Dependencies"
        run: uv sync
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Find all pyproject.toml and uv.lock files in the repository
echo "=== Searching for pyproject.toml and uv.lock files ==="
find . -type f \( -name "pyproject.toml" -o -name "uv.lock" \) | sort
```

Repository: martin-augment/avro
Length of output: 160
🏁 Script executed:

```shell
# Inspect the workflow file to verify the default working directory and the step in question
echo "=== Workflow file: lines 25-35 (workflow defaults) ==="
sed -n '25,35p' .github/workflows/test-lang-java.yml
echo ""
echo "=== Workflow file: lines 135-145 (the uv sync step) ==="
sed -n '135,145p' .github/workflows/test-lang-java.yml
echo ""
echo "=== Workflow file: lines 150-165 (context for working-directory usage) ==="
sed -n '150,165p' .github/workflows/test-lang-java.yml
```

Repository: martin-augment/avro
Length of output: 1371
🏁 Script executed:

```shell
# Check if lang/java directory exists and what's in it
echo "=== Listing contents of lang/java ==="
if [ -d "lang/java" ]; then
  ls -la lang/java | head -20
else
  echo "lang/java directory does not exist"
fi
echo ""
echo "=== Listing contents of lang/py ==="
if [ -d "lang/py" ]; then
  ls -la lang/py | head -20
else
  echo "lang/py directory does not exist"
fi
```

Repository: martin-augment/avro
Length of output: 2106
uv sync runs in lang/java due to the workflow-level default — the Python lockfile is not there.
The defaults.run.working-directory: lang/java (line 30) applies to every step that doesn't provide its own working-directory. This step has no override, so uv sync will look for pyproject.toml / uv.lock inside lang/java instead of lang/py, causing an immediate failure. Compare with the Python interop-data step at line 154, which correctly sets working-directory: lang/py.
Proposed fix

```diff
       - name: "Install Python Dependencies"
+        working-directory: lang/py
         run: uv sync
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| - name: "Install Python Dependencies" | |
| run: uv sync | |
| - name: "Install Python Dependencies" | |
| working-directory: lang/py | |
| run: uv sync |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/test-lang-java.yml around lines 139 - 140, The "Install
Python Dependencies" step currently runs `uv sync` under the workflow default
working-directory (lang/java) so it can't find the Python lockfile; update that
step (identified by the step name "Install Python Dependencies" and the command
`uv sync`) to set working-directory: lang/py (or otherwise point to the
directory containing pyproject.toml/uv.lock) so the command runs in the Python
repo and succeeds.
value:useful; category:bug; feedback: The CodeRabbit AI reviewer is correct! uv would work only for lang/py/ because it uses pyproject.toml. To fix the CI job's step the current working directory should be set to lang/py/
  - name: "Generate Interop Data using C#"
    working-directory: lang/csharp
-   run: ./build.sh interop-data-generate
+   run: uv run ./build.sh interop-data-generate
🧩 Analysis chain
🏁 Script executed:
# Check if C# build script references Python or uv
fd -g "build.sh" --search-path lang/csharp --exec grep -n -i "python\|uv run\|uv sync" {} +
🏁 Script executed:
# Also check the workflow file structure to compare Python and C# steps
cat -n .github/workflows/test-lang-java.yml | sed -n '150,165p'
🏁 Script executed:
# Examine the C# build script to understand its purpose
fd -g "build.sh" --search-path lang/csharp --exec head -50 {} +
🏁 Script executed:
# Check if C# build script references Python or uv
cat lang/csharp/build.sh | grep -n -i "python\|uv run\|uv sync" || echo "No matches found"
🏁 Script executed:
# Check the full C# build script, specifically the interop-data-generate target
cat lang/csharp/build.sh | tail -100
🏁 Script executed:
# Search for interop-data-generate target definition in C# build script
rg "interop-data-generate" lang/csharp/
uv run wrapping a C# build script is a copy-paste error.
The C# interop-data-generate target uses only dotnet run with .NET-specific arguments—there is no dependency on Python tooling or virtual environment utilities. Unlike the Python step, which legitimately needs uv run, the C# build script should be invoked directly.
Proposed fix
- name: "Generate Interop Data using C#"
working-directory: lang/csharp
- run: uv run ./build.sh interop-data-generate
+ run: ./build.sh interop-data-generate
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/test-lang-java.yml around lines 158 - 160, The step named
"Generate Interop Data using C#" incorrectly calls the script via "uv run
./build.sh interop-data-generate" (a Python venv wrapper); remove the "uv run"
wrapper and invoke the C# build script directly (e.g., replace with "bash
./build.sh interop-data-generate" or the script's direct invocation that runs
"dotnet run" with its .NET args) so the "interop-data-generate" target is
executed using .NET tooling rather than a Python virtualenv helper.
There was a problem hiding this comment.
value:useful; category:bug; feedback: The CodeRabbit AI reviewer is correct! The changes in the C# CI job should be reverted to not use uv since there is no pyproject.toml in lang/csharp/
  writers_schema = local_message.errors
  self.write_error(writers_schema, error, buffer_encoder)
- except schema.AvroException as e:
+ except avro.schema.AvroException as e:
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if AvroException is defined or re-exported in avro/schema.py
echo "=== Definitions of AvroException in avro/schema.py ==="
rg -n "AvroException" --type py lang/py/avro/schema.py
echo ""
echo "=== All AvroException references in lang/py/avro/ipc.py ==="
rg -n "AvroException" --type py lang/py/avro/ipc.py
echo ""
echo "=== AvroException defined in avro/errors.py ==="
rg -n "class AvroException" --type py lang/py/avro/errors.py
echo ""
echo "=== Check imports at top of ipc.py ==="
head -35 lang/py/avro/ipc.py | grep -E "^(import|from)"
Change avro.schema.AvroException to avro.errors.AvroException — line 325 is the only mismatched reference in the file
Every other AvroException reference in this file (lines 158, 176, 183, 189, 207, 212, 299, 303) uses avro.errors.AvroException. The codebase does not define or re-export AvroException in avro.schema — it exists only in avro.errors. Line 325 will raise AttributeError at runtime when Python evaluates the except clause and finds that avro.schema has no AvroException attribute, causing the exception to propagate uncaught.
Fix
- except avro.schema.AvroException as e:
+ except avro.errors.AvroException as e:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lang/py/avro/ipc.py` at line 325, Change the incorrect exception reference in
the except clause from avro.schema.AvroException to avro.errors.AvroException;
locate the except avro.schema.AvroException as e clause (the one that differs
from other handlers) and update it to except avro.errors.AvroException as e so
it matches the other references and prevents AttributeError at runtime.
value:useful; category:bug; feedback: The CodeRabbit AI reviewer is correct! The package name of the AvroException is wrong and it should be fixed to use avro.errors package. The previous usage of avro.schema.AvroException would have thrown UnknownType error at runtime.
- python3 -m venv "$virtualenv"
- "$virtualenv/bin/python3" -m pip install build
- "$virtualenv/bin/python3" -m build --outdir "$destination"
+ uv build --out-dir ${destination}
Quote ${destination} to prevent word splitting.
If the resolved path ever contains spaces, this will break. Consistent quoting is especially important now that set -u is enforced.
Proposed fix
- uv build --out-dir ${destination}
+ uv build --out-dir "${destination}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lang/py/build.sh` at line 37, The uv build invocation uses an unquoted
variable ${destination}, which can break if the resolved path contains spaces or
when set -u is enabled; update the command that calls uv build --out-dir
${destination} to quote the variable (use "${destination}") so the path is
treated as a single argument and to avoid word-splitting errors.
value:good-to-have; category:bug; feedback: The CodeRabbit AI reviewer is correct! Wrapping the Bash variable in double quotes would prevent errors when the destination is a path with empty spaces in the folder/file names
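The failure mode is easy to reproduce; `destination` below is a hypothetical path standing in for the one `build.sh` resolves:

```shell
# Hypothetical destination containing a space (not taken from build.sh).
destination="/tmp/avro dist"

# Unquoted expansion: the shell word-splits the value, so a command
# would receive two arguments instead of one path.
set -- ${destination}
echo "unquoted: $# arguments"

# Quoted expansion: the path survives as a single argument.
set -- "${destination}"
echo "quoted: $# arguments"
```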
doc() {
  local doc_dir
  local version=$(cat ../../share/VERSION.txt)
Declare and assign separately to avoid masking the return value of cat.
local always returns 0, so if cat ../../share/VERSION.txt fails, set -e won't trigger. Split the declaration and assignment. (Flagged by ShellCheck SC2155.)
Proposed fix
- local version=$(cat ../../share/VERSION.txt)
+ local version
+ version=$(cat ../../share/VERSION.txt)
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 42-42: Declare and assign separately to avoid masking return values.
(SC2155)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lang/py/build.sh` at line 42, The assignment of local version masks the exit
status of the cat call; change the single combined declaration/assignment into
two steps in build.sh: first declare the local variable (local version) and then
assign it from the file (e.g., use command substitution or the faster shell
redirection) so that failures from reading ../../share/VERSION.txt propagate and
trigger set -e; update the code that references the version variable
accordingly.
value:useful; category:bug; feedback: The CodeRabbit AI reviewer is correct! The declaration and the initialization of the local variable should be done in two steps - first declare and then initialize, to properly detect the returned error code from the cat program.
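The masking is observable in a short repro, with `false` standing in for a `cat` that fails:

```shell
# `false` stands in for `cat ../../share/VERSION.txt` failing.
combined() {
  local version=$(false)   # `local` itself exits 0, masking the failure
  echo "combined form sees status $?"
}

split() {
  local version
  version=$(false)         # plain assignment: $? is the substitution's status
  echo "split form sees status $?"
}

combined
split
```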
  test_() {
-   TOX_SKIP_ENV=lint _tox --skip-missing-interpreters
+   format
+   lint
+   typechecks
+   coverage
  }
test_ mutates the working tree by running the formatter.
Calling format (which applies ruff format .) as part of the test target silently rewrites files. In CI this modifies the checkout; locally it surprises developers who just want to run tests. A test target should verify formatting without modifying code.
Consider using ruff format --check . in the test flow instead, and keeping the current format target for intentional reformatting.
Proposed fix
+format_check() {
+ uvx ruff format --check .
+}
+
test_() {
- format
+ format_check
lint
typechecks
coverage
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lang/py/build.sh` around lines 68 - 73, The test_ target currently runs
format which mutates the working tree; update test_ to verify formatting instead
of rewriting by replacing the call to format with a non-mutating check (e.g.,
run ruff format --check . or ruff check .) while leaving the existing format
target (which runs ruff format .) intact for intentional reformatting; adjust
any output/exit handling in test_ to fail the job when the check reports
changes.
value:useful; category:bug; feedback: The CodeRabbit AI reviewer is correct! The format script target should use the ruff's --check or --diff arguments, so that it checks the formatting and report error if the source files are not formatted according to the rules in pyproject.toml.
coverage() {
  SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
  TEST_INTEROP_DIR=$(mktemp -d)
  mkdir -p ${TEST_INTEROP_DIR} ${SCRIPT_DIR}/../../build/interop/data
  cp -r ${SCRIPT_DIR}/../../build/interop/data ${TEST_INTEROP_DIR}
  uv run coverage run -pm avro.test.gen_interop_data avro/interop.avsc ${TEST_INTEROP_DIR}/data/py.avro
  cp -r ${TEST_INTEROP_DIR}/data ${SCRIPT_DIR}/../../build/interop
  uv run coverage run -pm unittest discover --buffer --failfast
  uv run coverage combine --append
  uv run coverage report
}
Unquoted variable expansions and missing temp-directory cleanup.
- ${TEST_INTEROP_DIR} and ${SCRIPT_DIR} are unquoted on lines 78–81, risking word-splitting on paths with spaces.
- The temp directory created on line 77 is never removed. Add a trap to clean it up on exit or error.
Proposed fix
coverage() {
- SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
- TEST_INTEROP_DIR=$(mktemp -d)
- mkdir -p ${TEST_INTEROP_DIR} ${SCRIPT_DIR}/../../build/interop/data
- cp -r ${SCRIPT_DIR}/../../build/interop/data ${TEST_INTEROP_DIR}
- uv run coverage run -pm avro.test.gen_interop_data avro/interop.avsc ${TEST_INTEROP_DIR}/data/py.avro
- cp -r ${TEST_INTEROP_DIR}/data ${SCRIPT_DIR}/../../build/interop
+ local script_dir test_interop_dir
+ script_dir=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+ test_interop_dir=$(mktemp -d)
+ trap 'rm -rf "${test_interop_dir}"' EXIT
+ mkdir -p "${test_interop_dir}" "${script_dir}/../../build/interop/data"
+ cp -r "${script_dir}/../../build/interop/data" "${test_interop_dir}"
+ uv run coverage run -pm avro.test.gen_interop_data avro/interop.avsc "${test_interop_dir}/data/py.avro"
+ cp -r "${test_interop_dir}/data" "${script_dir}/../../build/interop"
uv run coverage run -pm unittest discover --buffer --failfast
uv run coverage combine --append
uv run coverage report
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lang/py/build.sh` around lines 75 - 85, In the coverage() function, quote all
variable expansions (e.g., "${TEST_INTEROP_DIR}", "${SCRIPT_DIR}", and any uses
in cp/mkdir commands) to prevent word-splitting on paths with spaces, and add a
trap to remove the temporary directory created by TEST_INTEROP_DIR on EXIT (or
ERR) so the temp dir is always cleaned up; update references inside the function
(coverage(), TEST_INTEROP_DIR, SCRIPT_DIR, the cp/mkdir/uv commands) to use the
quoted variables and register a cleanup like a trap that runs rm -rf
"${TEST_INTEROP_DIR}" before returning.
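A sketch of the suggested cleanup, assuming the temp-dir work can run in a subshell; `with_tempdir` and its body are illustrative, not code from build.sh:

```shell
with_tempdir() (
  # Subshell body: the EXIT trap fires when the subshell ends,
  # whether the work succeeds or fails.
  tmpdir=$(mktemp -d)
  trap 'rm -rf "${tmpdir}"' EXIT
  touch "${tmpdir}/py.avro"   # stand-in for the interop steps
  echo "${tmpdir}"            # report the path so callers can verify
)

dir=$(with_tempdir)
if [ -d "${dir}" ]; then
  echo "temp dir leaked: ${dir}"
else
  echo "temp dir cleaned up"
fi
```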
+[project]
+name = "avro"
+description = "Avro is a serialization and RPC framework."
+version = "1.13.0"
+requires-python = ">=3.10"

 [build-system]
-requires = ["setuptools>=40.8.0", "wheel"]
+requires = ["setuptools>=78.1.0", "wheel"]
 build-backend = "setuptools.build_meta"

-[tool.black]
+[tool.ruff]
 line-length = 150

-[tool.isort]
-profile = 'black'
+[tool.mypy]
+python_version = "3.11"
+strict = true
+warn_return_any = true
+warn_unused_configs = true
+disallow_untyped_defs = true
+disallow_any_generics = true
+check_untyped_defs = true
+no_implicit_reexport = true
+warn_redundant_casts = true
+warn_unused_ignores = true

-[tool.autoflake]
-expand-star-imports = true
-recursive = true
-# Put a name in __all_ to explicitly export something a module imports.
-# This is clearer and will keep autoflake from trying to remove it.
-remove-all-unused-imports = true
-remove-duplicate-keys = true
-remove-unused-variables = true
+[dependency-groups]
+dev = [
+    "coverage>=7.13.4",
+    "mypy<1.19.0",
+    "python-snappy>=0.7.3",
+    "ruff>=0.15.1",
+    "setuptools>=82.0.0",
+    "sphinx>=8.1.3",
+    "typechecks>=0.1.0",
+    "zstandard>=0.25.0",
+]
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for setup.cfg and its package configuration
fd -e cfg setup --exec cat {} \;
🏁 Script executed:
#!/bin/bash
# Check what packages/modules exist under lang/py that setuptools would discover
find lang/py -type f -name "__init__.py" | sed 's|/[^/]*$||' | sort -u
🏁 Script executed:
#!/bin/bash
# Look for setup.py
fd -e py -path "*/setup.py"
# Check git log for setup.cfg references
git log --oneline --all -- "*setup.cfg" 2>/dev/null | head -20
🏁 Script executed:
#!/bin/bash
# Look for any existing tool.setuptools configuration in pyproject.toml
cat lang/py/pyproject.toml | grep -A 20 "\[tool.setuptools\]"
🏁 Script executed:
#!/bin/bash
# Check the full pyproject.toml to see the complete configuration
cat lang/py/pyproject.toml
🏁 Script executed:
#!/bin/bash
# Find the last commit that had setup.cfg and show its content
git show c1dd47e9:lang/py/setup.cfg 2>/dev/null | head -100
🏁 Script executed:
#!/bin/bash
# Search for any recent commits that modified setup.cfg in lang/py
git log -p --all -- "lang/py/setup.cfg" 2>/dev/null | head -150
Setuptools auto-discovery will find all packages, but critical configuration from setup.cfg is missing.
The three packages (avro, avro.test, avro.tether) will be auto-discovered since they all contain __init__.py. However, the migration is incomplete: pyproject.toml lacks essential configuration that was in setup.cfg:
- No [tool.setuptools.package-data] for including .avsc, .avpr, and other resource files
- No [project.scripts] for the avro console entry point (avro = avro.__main__:main)
- No [project.optional-dependencies] for snappy and zstandard extras
Add [tool.setuptools] with package-data, [project.scripts], and [project.optional-dependencies] sections to fully replace the removed setup.cfg.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lang/py/pyproject.toml` around lines 20 - 55, Add the missing setuptools and
project metadata that replaced setup.cfg: in pyproject.toml add a
[tool.setuptools.package-data] section mapping the packages avro, avro.test, and
avro.tether to include resource patterns (e.g. *.avsc, *.avpr, and other data
files) so package data is bundled; add a [project.scripts] entry for the console
entry point "avro = avro.__main__:main"; and add [project.optional-dependencies]
entries for the extras (e.g. "snappy" and "zstandard") matching the previously
declared extra names so those optional dependencies are available via extras.
Ensure the package names and patterns exactly reference avro, avro.test,
avro.tether and the entry point avro.__main__:main.
value:good-to-have; category:bug; feedback: The CodeRabbit AI reviewer is correct! The setup.cfg file provided configuration for setuptools. Any tool supporting PEP 621 (like uv) should support these config options declared in pyproject.toml.
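A sketch of the sections that would restore the removed setup.cfg behavior; the entry point and extras names come from the review comment, while the resource patterns and pins are assumptions:

```toml
# Hypothetical additions to lang/py/pyproject.toml.
[project.scripts]
avro = "avro.__main__:main"

[project.optional-dependencies]
snappy = ["python-snappy"]
zstandard = ["zstandard"]

[tool.setuptools.package-data]
avro = ["*.avsc", "*.avpr"]
"avro.test" = ["*.avsc", "*.avpr"]
```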
[tool.mypy]
python_version = "3.11"
python_version = "3.11" is inconsistent with requires-python = ">=3.10".
Mypy will type-check against Python 3.11, so code that uses 3.11-only APIs will pass static analysis even when running on the declared minimum of 3.10. Aligning these avoids silent compatibility gaps:
🔧 Proposed fix
[tool.mypy]
-python_version = "3.11"
+python_version = "3.10"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lang/py/pyproject.toml` around lines 33 - 34, The mypy configuration sets
python_version = "3.11" under [tool.mypy], which conflicts with the package
metadata requires-python = ">=3.10"; change the python_version value to "3.10"
(or remove the key to default to project interpreter) so static type-checking
targets the declared minimum runtime, updating the python_version entry in the
[tool.mypy] section accordingly.
value:good-to-have; category:bug; feedback: The CodeRabbit AI reviewer is correct! The specified Python version should be 3.10 - the minimum supported Python version at the moment
value:useful; category:bug; feedback: The Claude AI reviewer is correct! The |
3658: To review by AI