Update release-from-fbc for multiple operator FBC #2451

rayfordj wants to merge 2 commits into openshift-eng:main
Conversation
rayfordj commented on Feb 6, 2026
- Update release-from-fbc for multiple operator FBC (cluster-logging-operator, loki-operator)
- Filter external dependency related-images (kube-rbac-proxy)
- Add retries for errors (502 Bad Gateway); see the retry sketch below
- job/build%252Frelease-from-fbc/30
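As a rough illustration of the retry behavior mentioned above: the helper below is a hypothetical standalone sketch, not the pipeline's actual retry code; the function name, attempt count, delay, and broad exception handling are all assumptions.

```python
import asyncio
import logging

LOGGER = logging.getLogger(__name__)


async def with_retries(coro_factory, attempts=3, delay=5.0):
    """Retry a coroutine factory on transient failures such as 502 Bad Gateway.

    `coro_factory` must be a zero-argument callable returning a fresh coroutine,
    so every attempt issues a new request (e.g. a registry image-info call).
    """
    for attempt in range(1, attempts + 1):
        try:
            return await coro_factory()
        except Exception as err:  # the real code would match 502/5xx errors more narrowly
            if attempt == attempts:
                raise
            LOGGER.warning("attempt %d/%d failed (%s); retrying in %.0fs", attempt, attempts, err, delay)
            await asyncio.sleep(delay)
```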
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/hold
Walkthrough

Asynchronous label extraction and retry-enabled image info retrieval were added to the release_from_fbc pipeline. FBC pullspecs are now grouped by operator.
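To make the grouping idea concrete, here is a minimal sketch; it assumes labels have already been fetched per pullspec and that the operator key comes from an image label, which is an assumption, not the pipeline's actual lookup.

```python
from collections import defaultdict
from typing import Dict, List


def group_pullspecs_by_operator(
    fbc_pullspecs: List[str],
    labels_by_pullspec: Dict[str, Dict[str, str]],
) -> Dict[str, List[str]]:
    """Bucket FBC pullspecs by an operator key read from their image labels.

    The label name 'operator-key' is hypothetical; the real pipeline derives
    a doozer key from the FBC image metadata instead.
    """
    groups: Dict[str, List[str]] = defaultdict(list)
    for pullspec in fbc_pullspecs:
        labels = labels_by_pullspec.get(pullspec, {})
        groups[labels.get("operator-key", pullspec)].append(pullspec)
    return dict(groups)
```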
Changes

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

✨ Finishing touches
🧪 Generate unit tests (beta)
No actionable comments were generated in the recent review. 🎉

🧹 Recent nitpick comments
/label tide/merge-method-squash
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@pyartcd/pyartcd/pipelines/release_from_fbc.py`:
- Around line 618-631: The current fallback assigns the literal 'unknown' to
fbc_to_doozer_key for any FBC without a __doozer_key or '-fbc-' tag, which
groups unrelated FBCs together; instead, make the "unknown" group unique per FBC
by incorporating a unique identifier derived from the FBC (e.g., the
fbc_pullspec or a sanitized/hash of it) when setting fbc_to_doozer_key for that
branch; update the block that inspects tag_part and the else branch that
currently sets fbc_to_doozer_key[fbc_pullspec] = 'unknown' so it assigns a
per-FBC unique string (referencing fbc_pullspec, tag_part, operator_name, and
fbc_to_doozer_key) to avoid false grouping. A sketch of one such fallback key appears after this list.
- Around line 327-343: The current startswith check using assembly_prefix can
misclassify versions like "6.30" as matching "6.3"; update the matching logic in
the method using assembly_prefix and nvr_version (variables assembly_prefix,
nvr_version and self.assembly) to be delimiter-aware: when
nvr_version.startswith(assembly_prefix) ensure the match is either the entire
string or the next character after the prefix is a delimiter such as '.' or '-'
(i.e., only accept the prefix if len(nvr_version) == len(assembly_prefix) or
nvr_version[len(assembly_prefix)] in {'.','-'}); keep the existing exact
self.assembly startswith check as a fallback.
- Around line 611-613: The loop mapping FBC pullspecs to doozer keys uses
zip(fbc_pullspecs, label_results) which triggers Ruff B905; change the iteration
in the function that defines fbc_to_doozer_key to use zip(fbc_pullspecs,
label_results, strict=True) so mismatched lengths raise immediately (no extra
imports needed) — update the loop header where fbc_pullspecs and label_results
are zipped (the for fbc_pullspec, labels in ... line) to include strict=True.
🧹 Nitpick comments (1)
pyartcd/pyartcd/pipelines/release_from_fbc.py (1)
641-645: Consider parallelizing related-image extraction across FBCs.

Now that multiple FBCs are supported, sequential `await` can add noticeable latency. A single `gather` keeps behavior but speeds up the slow path.

♻️ Proposed refactor
```diff
 fbc_related_images: Dict[str, List[str]] = {}
-for fbc_pullspec in fbc_pullspecs:
-    related_nvrs = await extract_nvrs_from_fbc(fbc_pullspec, self.product)
-    fbc_related_images[fbc_pullspec] = sorted(related_nvrs)
+related_tasks = [extract_nvrs_from_fbc(fbc_pullspec, self.product) for fbc_pullspec in fbc_pullspecs]
+related_results = await asyncio.gather(*related_tasks)
+for fbc_pullspec, related_nvrs in zip(fbc_pullspecs, related_results):
+    fbc_related_images[fbc_pullspec] = sorted(related_nvrs)
```
```python
# Extract major.minor.patch from assembly (e.g., "6.3.3" -> "6.3")
# to allow matching point releases
assembly_parts = self.assembly.split('.')
if len(assembly_parts) >= 2:
    assembly_prefix = f"{assembly_parts[0]}.{assembly_parts[1]}"
else:
    assembly_prefix = self.assembly

# Check if NVR version starts with the assembly prefix
# This handles cases like "6.3.3-..." matching assembly "6.3.3"
# and "6.3.3.202602032201..." (metadata containers) matching assembly "6.3.3"
if nvr_version.startswith(assembly_prefix):
    return True

# Also check if NVR version starts with full assembly (for exact matches)
if nvr_version.startswith(self.assembly):
    return True
```
Avoid prefix matching that can misclassify versions like 6.30 as 6.3.
startswith(assembly_prefix) will treat 6.30.x as in-group for 6.3. That can incorrectly keep external dependencies and skew shipment content.
🛠️ Proposed fix (add a delimiter-aware prefix check)
```diff
-if nvr_version.startswith(assembly_prefix):
+if (
+    nvr_version == assembly_prefix
+    or nvr_version.startswith(f"{assembly_prefix}.")
+    or nvr_version.startswith(f"{assembly_prefix}-")
+):
     return True
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
# Extract major.minor.patch from assembly (e.g., "6.3.3" -> "6.3")
# to allow matching point releases
assembly_parts = self.assembly.split('.')
if len(assembly_parts) >= 2:
    assembly_prefix = f"{assembly_parts[0]}.{assembly_parts[1]}"
else:
    assembly_prefix = self.assembly

# Check if NVR version starts with the assembly prefix
# This handles cases like "6.3.3-..." matching assembly "6.3.3"
# and "6.3.3.202602032201..." (metadata containers) matching assembly "6.3.3"
if (
    nvr_version == assembly_prefix
    or nvr_version.startswith(f"{assembly_prefix}.")
    or nvr_version.startswith(f"{assembly_prefix}-")
):
    return True

# Also check if NVR version starts with full assembly (for exact matches)
if nvr_version.startswith(self.assembly):
    return True
```
🤖 Prompt for AI Agents
In `@pyartcd/pyartcd/pipelines/release_from_fbc.py` around lines 327 - 343, The
current startswith check using assembly_prefix can misclassify versions like
"6.30" as matching "6.3"; update the matching logic in the method using
assembly_prefix and nvr_version (variables assembly_prefix, nvr_version and
self.assembly) to be delimiter-aware: when
nvr_version.startswith(assembly_prefix) ensure the match is either the entire
string or the next character after the prefix is a delimiter such as '.' or '-'
(i.e., only accept the prefix if len(nvr_version) == len(assembly_prefix) or
nvr_version[len(assembly_prefix)] in {'.','-'}); keep the existing exact
self.assembly startswith check as a fallback.
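A small standalone check (assumed function name, not the pipeline's method) demonstrates the delimiter-aware behavior the prompt asks for:

```python
def matches_assembly_prefix(nvr_version: str, assembly_prefix: str) -> bool:
    """True only for exact matches or matches followed by a '.' or '-' delimiter."""
    if nvr_version == assembly_prefix:
        return True
    return nvr_version.startswith(f"{assembly_prefix}.") or nvr_version.startswith(f"{assembly_prefix}-")


assert matches_assembly_prefix("6.3", "6.3")
assert matches_assembly_prefix("6.3.3-202602032201", "6.3")
assert not matches_assembly_prefix("6.30.1", "6.3")  # no longer misclassified
```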
```python
# Map FBC pullspecs to their doozer_key
fbc_to_doozer_key: Dict[str, str] = {}
for fbc_pullspec, labels in zip(fbc_pullspecs, label_results):
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, locate and verify the file exists
find . -name "release_from_fbc.py" -type f
```

Repository: openshift-eng/art-tools
Length of output: 114
🏁 Script executed:

```shell
# Read the specific lines mentioned to verify the snippet
if [ -f "pyartcd/pyartcd/pipelines/release_from_fbc.py" ]; then
  sed -n '609,615p' pyartcd/pyartcd/pipelines/release_from_fbc.py
fi
```

Repository: openshift-eng/art-tools
Length of output: 369
🏁 Script executed:

```shell
# Check if there's a ruff configuration or rules file
find . -name "ruff.toml" -o -name ".ruff.toml" -o -name "pyproject.toml" | head -5
```

Repository: openshift-eng/art-tools
Length of output: 95
🌐 Web query:
Ruff B905 rule lint strict zip parameter
💡 Result:
Ruff rule B905 (from flake8-bugbear) flags any call to zip(...) that doesn’t explicitly pass strict= when:
- there are 2+ iterables, or
- the call includes any starred argument (`*iterable`). (docs.astral.sh)
Why it matters
zip truncates silently to the shortest iterable by default, which can hide bugs. Adding strict=True makes it raise ValueError on length mismatch; adding strict=False documents that truncation is intended. (docs.astral.sh)
How to fix
- If lengths must match:
for a, b in zip(xs, ys, strict=True): ...
- If truncation is intended:
for a, b in zip(xs, ys, strict=False): ...
Ruff can autofix this rule (it will add an explicit strict=...). (docs.astral.sh)
Citations:
- https://docs.astral.sh/ruff/rules/zip-without-explicit-strict/?utm_source=openai
🏁 Script executed:

```shell
# Check Python version requirement in pyproject.toml
cat pyproject.toml | grep -A 10 "python"
```

Repository: openshift-eng/art-tools
Length of output: 1264
🏁 Script executed:

```shell
# Also check ruff configuration to understand the rules being enforced
cat ruff.toml
```

Repository: openshift-eng/art-tools
Length of output: 973
Add explicit strict=True to zip() to satisfy Ruff B905.
This removes the lint warning and adds a guard against accidental length mismatches between fbc_pullspecs and label_results.
Proposed fix
```diff
-for fbc_pullspec, labels in zip(fbc_pullspecs, label_results):
+for fbc_pullspec, labels in zip(fbc_pullspecs, label_results, strict=True):
```

🧰 Tools
🪛 Ruff (0.14.14)
[warning] 613-613: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
🤖 Prompt for AI Agents
In `@pyartcd/pyartcd/pipelines/release_from_fbc.py` around lines 611 - 613, The
loop mapping FBC pullspecs to doozer keys uses zip(fbc_pullspecs, label_results)
which triggers Ruff B905; change the iteration in the function that defines
fbc_to_doozer_key to use zip(fbc_pullspecs, label_results, strict=True) so
mismatched lengths raise immediately (no extra imports needed) — update the loop
header where fbc_pullspecs and label_results are zipped (the for fbc_pullspec,
labels in ... line) to include strict=True.
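The effect of `strict=True` is easy to see in isolation (generic Python 3.10+ example, not pipeline code):

```python
pullspecs = ["registry.example/fbc-a", "registry.example/fbc-b"]
label_results = [{"name": "a"}]  # one result short

try:
    for pullspec, labels in zip(pullspecs, label_results, strict=True):
        print(pullspec, labels)
except ValueError as err:
    # strict=True raises instead of silently truncating to the shorter iterable
    print(f"length mismatch detected: {err}")
```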
@rayfordj: The following test failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Filter external dependency related-images (kube-rbac-proxy); add retries for errors (502 Bad Gateway)

fd50bf3 to 38d11e2 (Compare)