feat: enable toggling of hub log forwarding #41
Nightknight3000 wants to merge 35 commits into main
Conversation
Co-authored-by: antidodo <albin2993@gmail.com>
# Conflicts: # src/status/status.py
Implement Startup Error Logs
Addressing NoneType Error when retrieving status of deleted analysis
Co-authored-by: Nightknight3000 <alexander.roehl@uni-tuebingen.de>
# Conflicts:
#   src/resources/database/entity.py
#   src/resources/log/entity.py
#   src/resources/utils.py
#   src/status/status.py
#   src/utils/hub_client.py
… k8s.utils
…logic
# Conflicts:
#   src/resources/utils.py
#   src/status/status.py
#   src/utils/hub_client.py
📝 Walkthrough
This pull request refactors Kubernetes resource management utilities, integrates Hub logging configuration throughout the codebase, reorganizes imports to consolidate utility locations, updates function signatures to support Hub logging flags, removes Docker image validation utilities, and applies consistent string formatting with single quotes across multiple files.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: 2 passed, 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/k8s/kubernetes.py (1)
253-320: ⚠️ Potential issue | 🟠 Major
Normalize `find_k8s_resources` results before using them as single names.
These lookups can return `None` or `list[str]`, but this block uses them as single pod/service names. That can break `read_namespaced_pod(...)` and generate invalid nginx upstream targets.
🛠️ Proposed fix (example pattern)

```diff
 message_broker_service_name = find_k8s_resources('service', 'label', 'component=flame-message-broker', namespace=namespace)
+if message_broker_service_name is None:
+    raise ValueError("Could not resolve message broker service")
+if isinstance(message_broker_service_name, list):
+    if len(message_broker_service_name) != 1:
+        raise ValueError(f"Expected one message broker service, found {message_broker_service_name}")
+    message_broker_service_name = message_broker_service_name[0]
@@
 message_broker_pod_name = find_k8s_resources('pod', 'label', 'component=flame-message-broker', namespace=namespace)
+if message_broker_pod_name is None:
+    raise ValueError("Could not resolve message broker pod")
+if isinstance(message_broker_pod_name, list):
+    if len(message_broker_pod_name) != 1:
+        raise ValueError(f"Expected one message broker pod, found {message_broker_pod_name}")
+    message_broker_pod_name = message_broker_pod_name[0]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/k8s/kubernetes.py` around lines 253 - 320, find_k8s_resources can return None or list[str], but the code treats results as single names; normalize each result before use (message_broker_service_name, message_broker_pod_name, po_service_name, pod_orchestration_name, hub_adapter_service_name, kong_proxy_name, storage_service_name) by: if the returned value is a list take the first element (and ensure it's a str), if it's None raise a clear ValueError, and only then call core_client.read_namespaced_pod / use the name in downstream logic; also ensure the pod-list branch that sets analysis_ip similarly validates items[0].status.pod_ip exists before using it.

src/k8s/utils.py (1)
61-64: ⚠️ Potential issue | 🔴 Critical
Guard empty manual selector results to prevent `IndexError`.
If `manual_name_selector` matches nothing, `resource_names[0]` will crash.
🐛 Proposed fix

```diff
 if manual_name_selector is not None:
     resource_names = [name for name in resource_names if manual_name_selector in name]
-return resource_names if len(resource_names) > 1 else resource_names[0]
+if len(resource_names) == 0:
+    return None
+return resource_names if len(resource_names) > 1 else resource_names[0]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/k8s/utils.py` around lines 61 - 64, The filtered branch using manual_name_selector can produce an empty list and then accessing resource_names[0] will raise IndexError; after filtering (the line assigning resource_names = [name for name in resource_names if manual_name_selector in name]) add a guard like "if not resource_names: return None" (or return an empty list per the calling convention) before the existing "return resource_names if len(resource_names) > 1 else resource_names[0]" so manual_name_selector and resource_names are handled safely.
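To make the guard concrete, here is a self-contained sketch of the suggested behavior; the function name and the list-in/None-out contract are illustrative assumptions, not the project's actual API:

```python
def select_resource_names(resource_names, manual_name_selector=None):
    """Filter resource names by an optional substring selector.

    Returns None when nothing matches, a single name for exactly one
    match, and the full list when there are multiple matches.
    """
    if manual_name_selector is not None:
        resource_names = [name for name in resource_names
                          if manual_name_selector in name]
    if not resource_names:
        return None  # guard: avoids IndexError on resource_names[0]
    return resource_names if len(resource_names) > 1 else resource_names[0]
```

Callers then have to handle the `None` case explicitly, which is exactly the normalization the kubernetes.py comment above asks for.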
🧹 Nitpick comments (1)
src/api/api.py (1)
28-37: Skip hub client initialization when hub logging is disabled.
Line 30 stores `enable_hub_logging`, but lines 31-37 still initialize and authenticate the hub client unconditionally. If logging is toggled off, this unnecessary external dependency can be avoided.
🔧 Proposed refactor

```diff
 self.enable_hub_logging = enable_hub_logging
-self.hub_core_client = init_hub_client_with_robot(robot_id,
-                                                  robot_secret,
-                                                  hub_url_core,
-                                                  hub_auth,
-                                                  http_proxy,
-                                                  https_proxy)
-self.node_id = get_node_id_by_robot(self.hub_core_client, robot_id) if self.hub_core_client else None
+self.hub_core_client = None
+self.node_id = None
+if self.enable_hub_logging:
+    self.hub_core_client = init_hub_client_with_robot(robot_id,
+                                                      robot_secret,
+                                                      hub_url_core,
+                                                      hub_auth,
+                                                      http_proxy,
+                                                      https_proxy)
+    self.node_id = get_node_id_by_robot(self.hub_core_client, robot_id) if self.hub_core_client else None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/api.py` around lines 28 - 37, The code currently calls init_hub_client_with_robot and get_node_id_by_robot regardless of enable_hub_logging; change the logic so after calling extract_hub_envs() you only call init_hub_client_with_robot and get_node_id_by_robot when enable_hub_logging is truthy, and otherwise set hub_core_client and node_id to None (or leave uninitialized) to avoid initializing/authenticating the hub when logging is disabled; update references to enable_hub_logging, init_hub_client_with_robot, hub_core_client, and get_node_id_by_robot accordingly.
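The refactor above amounts to gating client setup on the flag. A minimal sketch, assuming the client factory is any zero-argument callable (a stand-in for `init_hub_client_with_robot`, kept dict-shaped only so the sketch stays runnable):

```python
def init_hub_state(enable_hub_logging, client_factory):
    """Initialize the hub client and node id only when hub logging is enabled.

    Returns (client, node_id); both stay None when logging is disabled,
    so no external authentication call is ever made.
    """
    if not enable_hub_logging:
        return None, None
    client = client_factory()
    node_id = client.get('node_id') if client else None
    return client, node_id
```

A real integration would pass the robot credentials through the factory; the point of the sketch is that the factory is simply never invoked when the flag is off.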
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/k8s/utils.py`:
- Around line 85-122: The exception handling in all delete branches
(delete_namespaced_deployment, delete_namespaced_service, delete_namespaced_pod,
delete_namespaced_config_map, delete_namespaced_network_policy,
delete_namespaced_job) is inverted: it currently prints "Error: Not Found" for
non-404 errors and ignores actual 404s. Change each except
client.exceptions.ApiException as e block to treat 404/Not Found as the expected
ignore case (e.status == 404 or e.reason == 'Not Found') and otherwise log or
re-raise the error with the real exception details (include e or str(e) in the
message) so real failures (permission, connection, timeout) are surfaced.
In `@src/resources/database/entity.py`:
- Around line 19-20: The startup currently prints the full connection string
(conn_uri) which exposes POSTGRES_USER and POSTGRES_PASSWORD; remove the
print(conn_uri) and, if you need runtime visibility, log a redacted/safe version
instead (e.g., only include host, port, and database or replace user/password
with "***") where the code that builds conn_uri is located; ensure any logging
uses the module's logger instead of printing and never emits the raw conn_uri or
credential values.
In `@src/resources/utils.py`:
- Around line 206-219: find_k8s_resources('pod', ...) can return a list or None,
so don't pass its result directly to delete_k8s_resource; instead check
message_broker_pod_name and storage_service_name for None or list and handle
each case: if None, skip delete and set response_content[cleanup_type] to
indicate no pod found; if a list, iterate over each pod name and call
delete_k8s_resource(name, 'pod', namespace) for each (or pick the single string
if it's not a list); update response_content[cleanup_type] accordingly. Ensure
you reference find_k8s_resources, delete_k8s_resource, message_broker_pod_name,
storage_service_name, response_content, cleanup_type, and namespace when making
these checks and calls.
- Line 246: Replace the type check using equality with the proper isinstance
idiom: in the expression that assigns resources (the line using resources =
[resources] if type(resources) == str else resources), change the condition to
use isinstance(resources, str) so it reads as a check for string instances; keep
the rest of the ternary unchanged to preserve behavior.
In `@src/status/status.py`:
- Around line 163-175: The function _get_internal_deployment_status creates
AsyncClient instances without closing them and uses blocking time.sleep; change
the client usage to "async with AsyncClient(base_url=...)" so each request is
made inside an async context manager and resources are cleaned up, and replace
blocking time.sleep(1) with await asyncio.sleep(1) to avoid blocking the event
loop; apply the same AsyncClient/context-manager and asyncio.sleep change to the
other occurrence referenced around the similar block (lines ~207-210) so both
code paths use async with AsyncClient(...) and await asyncio.sleep(1).
- Around line 212-214: The except block that only catches HTTPStatusError when
refreshing the Keycloak token should also catch transport errors (ConnectError
and ConnectTimeout) to match the health-check pattern; update the try/except
around the token refresh (the block that currently references HTTPStatusError
and deployment_name) to add handlers for httpx.ConnectError and
httpx.ConnectTimeout (and ensure those names are imported), and log/report those
exceptions the same way you do for HTTPStatusError so transport failures are
handled consistently.
In `@src/utils/hub_client.py`:
- Around line 63-66: The call to hub_client.find_nodes(...) is indexed with [0]
without checking for an empty list which raises IndexError; modify the code
around node_id_object = hub_client.find_nodes(filter={'robot_id': robot_id})[0]
to first capture the result (e.g., nodes = hub_client.find_nodes(...)), check if
nodes is truthy/has length > 0 and set node_id_object = nodes[0] only then,
otherwise set node_id_object = None (or handle accordingly); alternatively add
IndexError to the except tuple for the existing try/except that currently
catches HTTPStatusError, JSONDecodeError, ConnectTimeout,
flame_hub._exceptions.HubAPIError, AttributeError so the absence of nodes is
properly handled.
In `@src/utils/other.py`:
- Around line 6-13: The return type of extract_hub_envs is too strict because
os.getenv can return None; update the function signature to use Optional[str]
for the string-returning positions (first four and the two proxy positions) and
keep the fifth element as bool, and add the necessary Optional import from
typing; reference the extract_hub_envs function and replace tuple[str, str, str,
str, bool, str, str] with tuple[Optional[str], Optional[str], Optional[str],
Optional[str], bool, Optional[str], Optional[str]].
In `@src/utils/token.py`:
- Around line 82-84: The client_data dict sets 'serviceAccountsEnabled' to the
string 'true' which will serialize as a JSON string; change it to a Python
boolean True so the ClientRepresentation sent to Keycloak uses a JSON boolean.
Locate the client_data assignment (variable name client_data and key
serviceAccountsEnabled) in src/utils/token.py and replace the string value with
the boolean True, and scan nearby client creation code for any other
boolean-like fields to ensure they use Python booleans rather than strings.
- Around line 80-87: The Keycloak HTTP calls in token.py are missing explicit
timeouts and one request payload sets serviceAccountsEnabled as the string
'true' instead of a boolean; update all requests.* calls (e.g., the
requests.post that assigns response and any other requests.get/post used in this
module) to include a sensible timeout argument (e.g., timeout=10) to avoid
hanging, and change client_data['serviceAccountsEnabled'] from the string 'true'
to the boolean True so the Keycloak Admin API receives the correct type; ensure
response.raise_for_status() is kept after the timed request calls (and apply the
same timeout addition to any other request sites referenced: the calls around
the symbols that build headers, client_data, and where response is read).
---
Outside diff comments:
In `@src/k8s/kubernetes.py`:
- Around line 253-320: find_k8s_resources can return None or list[str], but the
code treats results as single names; normalize each result before use
(message_broker_service_name, message_broker_pod_name, po_service_name,
pod_orchestration_name, hub_adapter_service_name, kong_proxy_name,
storage_service_name) by: if the returned value is a list take the first element
(and ensure it's a str), if it's None raise a clear ValueError, and only then
call core_client.read_namespaced_pod / use the name in downstream logic; also
ensure the pod-list branch that sets analysis_ip similarly validates
items[0].status.pod_ip exists before using it.
In `@src/k8s/utils.py`:
- Around line 61-64: The filtered branch using manual_name_selector can produce
an empty list and then accessing resource_names[0] will raise IndexError; after
filtering (the line assigning resource_names = [name for name in resource_names
if manual_name_selector in name]) add a guard like "if not resource_names:
return None" (or return an empty list per the calling convention) before the
existing "return resource_names if len(resource_names) > 1 else
resource_names[0]" so manual_name_selector and resource_names are handled
safely.
---
Nitpick comments:
In `@src/api/api.py`:
- Around line 28-37: The code currently calls init_hub_client_with_robot and
get_node_id_by_robot regardless of enable_hub_logging; change the logic so after
calling extract_hub_envs() you only call init_hub_client_with_robot and
get_node_id_by_robot when enable_hub_logging is truthy, and otherwise set
hub_core_client and node_id to None (or leave uninitialized) to avoid
initializing/authenticating the hub when logging is disabled; update references
to enable_hub_logging, init_hub_client_with_robot, hub_core_client, and
get_node_id_by_robot accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d33f7740-0769-4b8c-b00f-d587be769f50
📒 Files selected for processing (14)
src/api/api.py, src/api/oauth.py, src/k8s/kubernetes.py, src/k8s/utils.py, src/main.py, src/resources/database/entity.py, src/resources/log/entity.py, src/resources/utils.py, src/status/status.py, src/utils/docker.py, src/utils/hub_client.py, src/utils/other.py, src/utils/token.py, tests/test_db.py
💤 Files with no reviewable changes (1)
- src/utils/docker.py
```python
    except client.exceptions.ApiException as e:
        if e.reason != 'Not Found':
            print(f"Error: Not Found {name} deployment")
elif resource_type == 'service':
    try:
        core_client = client.CoreV1Api()
        core_client.delete_namespaced_service(name=name, namespace=namespace)
    except client.exceptions.ApiException as e:
        if e.reason != 'Not Found':
            print(f"Error: Not Found {name} service")
elif resource_type == 'pod':
    try:
        core_client = client.CoreV1Api()
        core_client.delete_namespaced_pod(name=name, namespace=namespace)
    except client.exceptions.ApiException as e:
        if e.reason != 'Not Found':
            print(f"Error: Not Found {name} pod")
elif resource_type == 'configmap':
    try:
        core_client = client.CoreV1Api()
        core_client.delete_namespaced_config_map(name=name, namespace=namespace)
    except client.exceptions.ApiException as e:
        if e.reason != 'Not Found':
            print(f"Error: Not Found {name} configmap")
elif resource_type == 'networkpolicy':
    try:
        network_client = client.NetworkingV1Api()
        network_client.delete_namespaced_network_policy(name=name, namespace=namespace)
    except client.exceptions.ApiException as e:
        if e.reason != 'Not Found':
            print(f"Error: Not Found {name} networkpolicy")
elif resource_type == 'job':
    try:
        batch_client = client.BatchV1Api()
        batch_client.delete_namespaced_job(name=name, namespace=namespace, propagation_policy='Foreground')
    except client.exceptions.ApiException as e:
        if e.reason != 'Not Found':
            print(f"Error: Not Found {name} job")
```
🧩 Analysis chain
🏁 Script executed:
cat -n src/k8s/utils.py | sed -n '75,130p'
Repository: PrivateAIM/node-pod-orchestration
Length of output: 2977
🏁 Script executed:
#!/bin/bash
# Search for the exception handling pattern across the file
rg -nP "except.*ApiException|if e\.reason|Error: Not Found" src/k8s/utils.py -C1
Repository: PrivateAIM/node-pod-orchestration
Length of output: 1863
🌐 Web query:
Kubernetes Python client ApiException e.reason 404 Not Found attribute
💡 Result:
In the Kubernetes Python client, ApiException exposes these fields (set from the HTTP response): status (HTTP status code), reason (HTTP reason phrase), body (response payload), and headers. So for a “404 Not Found” you typically get e.status == 404 and e.reason == "Not Found". [1]
If you need the Kubernetes “Status” object’s details (often includes "reason": "NotFound" and "code": 404), parse e.body (it’s often JSON, sometimes bytes):
```python
from kubernetes.client.rest import ApiException
import json

try:
    ...
except ApiException as e:
    if e.status == 404:
        k8s_status = json.loads(e.body.decode() if isinstance(e.body, (bytes, bytearray)) else e.body)
        # k8s_status["reason"] -> "NotFound"
        # k8s_status["message"] -> e.g. 'pods "x" not found'
```

Real 404 examples show Reason: Not Found plus a JSON body containing "reason":"NotFound" and "code":404. [2]
Fix inverted exception handling across all resource deletion branches.
The condition is backwards: all non-404 errors are printed as "Error: Not Found" while actual 404s are silently ignored. This masks real deletion failures (permissions, connection errors, timeouts) that should either be re-raised or handled explicitly.
Lines affected: 86, 93, 100, 107, 114, 121 (deployment, service, pod, configmap, networkpolicy, job).
Fix pattern
```diff
-except client.exceptions.ApiException as e:
-    if e.reason != 'Not Found':
-        print(f"Error: Not Found {name} deployment")
+except client.exceptions.ApiException as e:
+    if e.status == 404:
+        return
+    raise
```
Verify each finding against the current code and only fix it if needed.
In `@src/k8s/utils.py` around lines 85 - 122, The exception handling in all delete
branches (delete_namespaced_deployment, delete_namespaced_service,
delete_namespaced_pod, delete_namespaced_config_map,
delete_namespaced_network_policy, delete_namespaced_job) is inverted: it
currently prints "Error: Not Found" for non-404 errors and ignores actual 404s.
Change each except client.exceptions.ApiException as e block to treat 404/Not
Found as the expected ignore case (e.status == 404 or e.reason == 'Not Found')
and otherwise log or re-raise the error with the real exception details (include
e or str(e) in the message) so real failures (permission, connection, timeout)
are surfaced.
```python
conn_uri = f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{database}"
print(conn_uri)
```
Remove credential leakage from startup logs.
Line 20 prints the full PostgreSQL URI, which includes secrets from Line 19 (POSTGRES_USER / POSTGRES_PASSWORD). This is a credentials exposure risk.
🔧 Proposed fix
```diff
-conn_uri = f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{database}"
-print(conn_uri)
+conn_uri = f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{database}"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/resources/database/entity.py` around lines 19 - 20, The startup currently
prints the full connection string (conn_uri) which exposes POSTGRES_USER and
POSTGRES_PASSWORD; remove the print(conn_uri) and, if you need runtime
visibility, log a redacted/safe version instead (e.g., only include host, port,
and database or replace user/password with "***") where the code that builds
conn_uri is located; ensure any logging uses the module's logger instead of
printing and never emits the raw conn_uri or credential values.
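A redaction helper along these lines can be built from the stdlib alone; the URI shape matches the `conn_uri` format shown in the diff above:

```python
from urllib.parse import urlsplit


def redact_db_uri(conn_uri):
    """Replace user:password in a DB URI with '***' for safe logging."""
    parts = urlsplit(conn_uri)
    if parts.username is None:
        return conn_uri  # nothing secret to hide
    host_port = parts.hostname or ""
    if parts.port is not None:
        host_port += f":{parts.port}"
    return f"{parts.scheme}://***:***@{host_port}{parts.path}"
```

If SQLAlchemy is already a dependency, `make_url(conn_uri).render_as_string(hide_password=True)` achieves a similar effect without hand-rolled parsing.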
```diff
 message_broker_pod_name = find_k8s_resources('pod',
                                              'label',
-                                             'component=flame-result-service',
+                                             "component=flame-message-broker",
                                              namespace=namespace)
-delete_resource(result_service_name, 'pod', namespace)
-response_content[cleanup_type] = "Reset result service"
+delete_k8s_resource(message_broker_pod_name, 'pod', namespace)
+response_content[cleanup_type] = "Reset message broker"
 if cleanup_type in ['all', 'services', 'rs']:
     # reinitialize storage-service pod
     storage_service_name = find_k8s_resources('pod',
                                               'label',
                                               "component=flame-storage-service",
                                               namespace=namespace)
     delete_k8s_resource(storage_service_name, 'pod', namespace)
     response_content[cleanup_type] = "Reset storage service"
```
Handle multi/empty pod matches before delete calls in cleanup.
find_k8s_resources('pod', ...) can return list[str] or None; passing that directly to delete_k8s_resource(name: str, ...) is unsafe.
🛠️ Proposed fix
- message_broker_pod_name = find_k8s_resources('pod',
- 'label',
- "component=flame-message-broker",
- namespace=namespace)
- delete_k8s_resource(message_broker_pod_name, 'pod', namespace)
+ message_broker_pods = find_k8s_resources('pod',
+ 'label',
+ "component=flame-message-broker",
+ namespace=namespace)
+ message_broker_pods = [message_broker_pods] if isinstance(message_broker_pods, str) else (message_broker_pods or [])
+ for pod_name in message_broker_pods:
+ delete_k8s_resource(pod_name, 'pod', namespace)
@@
- storage_service_name = find_k8s_resources('pod',
- 'label',
- "component=flame-storage-service",
- namespace=namespace)
- delete_k8s_resource(storage_service_name, 'pod', namespace)
+ storage_service_pods = find_k8s_resources('pod',
+ 'label',
+ "component=flame-storage-service",
+ namespace=namespace)
+ storage_service_pods = [storage_service_pods] if isinstance(storage_service_pods, str) else (storage_service_pods or [])
+ for pod_name in storage_service_pods:
+     delete_k8s_resource(pod_name, 'pod', namespace)

📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/resources/utils.py` around lines 206 - 219, find_k8s_resources('pod',
...) can return a list or None, so don't pass its result directly to
delete_k8s_resource; instead check message_broker_pod_name and
storage_service_name for None or list and handle each case: if None, skip delete
and set response_content[cleanup_type] to indicate no pod found; if a list,
iterate over each pod name and call delete_k8s_resource(name, 'pod', namespace)
for each (or pick the single string if it's not a list); update
response_content[cleanup_type] accordingly. Ensure you reference
find_k8s_resources, delete_k8s_resource, message_broker_pod_name,
storage_service_name, response_content, cleanup_type, and namespace when making
these checks and calls.
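The prompt's normalize-and-iterate logic reduces to a couple of small helpers; `delete_fn` here is a stand-in for `delete_k8s_resource`, and the return strings are illustrative status messages, not the project's actual responses:

```python
def as_name_list(result):
    """Normalize a find result (None | str | list[str]) to a list of names."""
    if result is None:
        return []
    return [result] if isinstance(result, str) else list(result)


def delete_pods(result, delete_fn):
    """Delete every pod name found; report when nothing matched."""
    names = as_name_list(result)
    if not names:
        return "no pod found"
    for name in names:
        delete_fn(name)
    return f"deleted {len(names)} pod(s)"
```

Routing both the message-broker and storage-service branches through the same helper keeps the None/str/list contract in one place.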
```diff
 for selector_arg in selector_args:
-    resources = get_k8s_resource_names(res, 'label', selector_arg, namespace=namespace)
+    resources = find_k8s_resources(res, 'label', selector_arg, namespace=namespace)
+    resources = [resources] if type(resources) == str else resources
```
🧩 Analysis chain
🏁 Script executed:
# Verify E721 pattern in src/resources/utils.py at line 246
rg -nP "type\([^)]+\)\s*==\s*str" src/resources/utils.py -C2
Repository: PrivateAIM/node-pod-orchestration
Length of output: 403
🏁 Script executed:
# Check for other similar type() == ... patterns in the file
rg -n "type\([^)]+\)\s*==" src/resources/utils.py
Repository: PrivateAIM/node-pod-orchestration
Length of output: 157
Replace type(...) == str with isinstance(...) (Ruff E721).
This uses the proper Python idiom and resolves the lint error.
Proposed fix
- resources = [resources] if type(resources) == str else resources
+ resources = [resources] if isinstance(resources, str) else resources

📝 Committable suggestion
🧰 Tools
🪛 Ruff (0.15.2)
[error] 246-246: Use is and is not for type comparisons, or isinstance() for isinstance checks
(E721)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/resources/utils.py` at line 246, Replace the type check using equality
with the proper isinstance idiom: in the expression that assigns resources (the
line using resources = [resources] if type(resources) == str else resources),
change the condition to use isinstance(resources, str) so it reads as a check
for string instances; keep the rest of the ternary unchanged to preserve
behavior.
```diff
         response = await (AsyncClient(base_url=f"http://nginx-{deployment_name}:{PORTS['nginx'][0]}")
                           .get("/analysis/healthz", headers=[('Connection', 'close')]))
         response.raise_for_status()
         break
     except HTTPStatusError as e:
-        print(f"\tError getting internal deployment status: {e}")
+        print(f"\tError whilst retrieving internal deployment status: {e}")
+    except ConnectError as e:
+        print(f"\tConnection to http://nginx-{deployment_name}:{PORTS['nginx'][0]} yielded an error: {e}")
+    except ConnectTimeout as e:
+        print(f"\tConnection to http://nginx-{deployment_name}:{PORTS['nginx'][0]} timed out: {e}")
     elapsed_time = time.time() - start_time
     time.sleep(1)
-    if elapsed_time > _INTERNAL_STATUS_TIMEOUT:
+    if elapsed_time > _INTERNAL_STATUS_TIMEOUT:  # TODO: Handle case of this happening for large images
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify async-http client lifecycle and blocking sleeps in src/status/status.py
echo "=== AsyncClient usage patterns ==="
rg -nP "AsyncClient\(" src/status/status.py -C2
echo -e "\n=== time.sleep usage ==="
rg -nP "time\.sleep\(" src/status/status.py -C2
echo -e "\n=== Async with AsyncClient patterns (should be present if properly handled) ==="
rg -nP "async with AsyncClient" src/status/status.py
Repository: PrivateAIM/node-pod-orchestration
Length of output: 1490
🏁 Script executed:
#!/bin/bash
# Get file size first
wc -l src/status/status.py
# Check imports
head -30 src/status/status.py
# Get function signatures around the problematic lines
echo -e "\n=== Function context around line 163 ==="
sed -n '140,180p' src/status/status.py
echo -e "\n=== Function context around line 207 ==="
sed -n '195,220p' src/status/status.py
echo -e "\n=== Function context around line 119 ==="
sed -n '105,130p' src/status/status.py
Repository: PrivateAIM/node-pod-orchestration
Length of output: 6479
Replace AsyncClient instantiation with context manager and use asyncio.sleep in async functions.
AsyncClient is created per request without being closed, creating connection leaks that degrade reliability under load. Additionally, time.sleep(1) at line 174 blocks the event loop inside the async _get_internal_deployment_status function, preventing other concurrent operations from running.
Use async with AsyncClient(...) for proper resource cleanup and await asyncio.sleep(1) instead of blocking sleep in async code.
♻️ Proposed fix

```diff
-        response = await (AsyncClient(base_url=f"http://nginx-{deployment_name}:{PORTS['nginx'][0]}")
-                          .get("/analysis/healthz", headers=[('Connection', 'close')]))
+        async with AsyncClient(
+            base_url=f"http://nginx-{deployment_name}:{PORTS['nginx'][0]}",
+            timeout=10.0
+        ) as client:
+            response = await client.get("/analysis/healthz", headers=[('Connection', 'close')])
@@
-        time.sleep(1)
+        await asyncio.sleep(1)
@@
-        response = await (AsyncClient(base_url=f"http://nginx-{deployment_name}:{PORTS['nginx'][0]}")
-                          .post("/analysis/token_refresh",
-                                json={'token': keycloak_token},
-                                headers=[('Connection', 'close')]))
+        async with AsyncClient(
+            base_url=f"http://nginx-{deployment_name}:{PORTS['nginx'][0]}",
+            timeout=10.0
+        ) as client:
+            response = await client.post("/analysis/token_refresh",
+                                         json={'token': keycloak_token},
+                                         headers=[('Connection', 'close')])
```

Also applies to: 207-210
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/status/status.py` around lines 163 - 175, The function
_get_internal_deployment_status creates AsyncClient instances without closing
them and uses blocking time.sleep; change the client usage to "async with
AsyncClient(base_url=...)" so each request is made inside an async context
manager and resources are cleaned up, and replace blocking time.sleep(1) with
await asyncio.sleep(1) to avoid blocking the event loop; apply the same
AsyncClient/context-manager and asyncio.sleep change to the other occurrence
referenced around the similar block (lines ~207-210) so both code paths use
async with AsyncClient(...) and await asyncio.sleep(1).
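To make the recommended pattern concrete, here is a minimal, runnable sketch. It uses a hypothetical `TrackedClient` stand-in for `httpx.AsyncClient` (so it runs without the real nginx endpoint); the point is the shape of the loop, not the client itself:

```python
import asyncio

class TrackedClient:
    """Hypothetical stand-in for httpx.AsyncClient; records whether close ran."""
    closed_count = 0

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc_info):
        # "async with" guarantees this runs, so connections are never leaked
        TrackedClient.closed_count += 1
        return False

    async def get(self, path: str) -> int:
        await asyncio.sleep(0)  # simulate awaiting the HTTP response
        return 200

async def poll_health(retries: int = 3) -> bool:
    for _ in range(retries):
        async with TrackedClient() as client:  # client is closed every iteration
            if await client.get("/analysis/healthz") == 200:
                return True
        # asyncio.sleep yields to the event loop; time.sleep would block it
        await asyncio.sleep(0.01)
    return False
```

With the real `AsyncClient` substituted in, each retry opens and closes one client, and other coroutines keep running during the wait.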
```diff
 except HTTPStatusError as e:
-    print(f"Failed to refresh keycloak token in deployment {deployment_name}.\n{e}")
+    print(f"Error: Failed to refresh keycloak token in deployment {deployment_name}.\n{e}")
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Get the full context of the _refresh_keycloak_token function and imports
echo "=== Checking imports and HTTPStatusError usage ==="
head -50 src/status/status.py | grep -E "^import|^from"
echo -e "\n=== Full _refresh_keycloak_token function context ==="
rg -n "def _refresh_keycloak_token" src/status/status.py -A 30
```

Length of output: 2599

🏁 Script executed:

```shell
#!/bin/bash
# Check for similar httpx/AsyncClient exception handling patterns
echo "=== Looking for other exception handling of httpx calls ==="
rg -n "except.*HTTPStatusError|except.*ConnectError|except.*ConnectTimeout" src/status/status.py -B 5
echo -e "\n=== Check if other files handle transport exceptions ==="
rg -n "ConnectError|ConnectTimeout" src/ -l
```

Length of output: 1259
Handle transport exceptions during token refresh to match health-check pattern.
Only HTTPStatusError is handled here, but the health-check function in the same file (lines 162-171) correctly handles ConnectError and ConnectTimeout for the same in-cluster nginx endpoint. The token refresh call should handle these transport failures consistently.
🛠️ Proposed fix

```diff
 except HTTPStatusError as e:
     print(f"Error: Failed to refresh keycloak token in deployment {deployment_name}.\n{e}")
+except ConnectError as e:
+    print(f"Error: Failed to refresh keycloak token in deployment {deployment_name}.\n{e}")
+except ConnectTimeout as e:
+    print(f"Error: Failed to refresh keycloak token in deployment {deployment_name}.\n{e}")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
except HTTPStatusError as e:
    print(f"Error: Failed to refresh keycloak token in deployment {deployment_name}.\n{e}")
except ConnectError as e:
    print(f"Error: Failed to refresh keycloak token in deployment {deployment_name}.\n{e}")
except ConnectTimeout as e:
    print(f"Error: Failed to refresh keycloak token in deployment {deployment_name}.\n{e}")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/status/status.py` around lines 212 - 214, The except block that only
catches HTTPStatusError when refreshing the Keycloak token should also catch
transport errors (ConnectError and ConnectTimeout) to match the health-check
pattern; update the try/except around the token refresh (the block that
currently references HTTPStatusError and deployment_name) to add handlers for
httpx.ConnectError and httpx.ConnectTimeout (and ensure those names are
imported), and log/report those exceptions the same way you do for
HTTPStatusError so transport failures are handled consistently.
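An alternative to enumerating each transport exception: httpx documents `ConnectError` and `ConnectTimeout` as both deriving from `httpx.TransportError`, so a single handler for the base class covers both (plus read/write timeouts). The sketch below illustrates the base-class-handler principle with stdlib exceptions (`ConnectionError` standing in for `httpx.TransportError`) so it runs anywhere; the function name is purely illustrative:

```python
def describe_failure(exc: Exception) -> str:
    # Catching a shared base class covers all of its subclasses at once.
    # Here the stdlib's ConnectionError plays the role httpx.TransportError
    # would play for ConnectError / ConnectTimeout.
    try:
        raise exc
    except ValueError:
        return "bad response"       # analogous to the HTTPStatusError branch
    except ConnectionError:
        return "transport failure"  # covers ConnectionRefusedError, ConnectionResetError, ...
```

Whether to keep separate handlers (distinct log messages) or collapse to the base class is a style choice; the review's point is only that transport failures must be caught at all.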
```diff
         node_id_object = hub_client.find_nodes(filter={'robot_id': robot_id})[0]
     except (HTTPStatusError, JSONDecodeError, ConnectTimeout, flame_hub._exceptions.HubAPIError, AttributeError) as e:
-        print(f"Error in hub python client whilst retrieving node id object!\n{e}")
+        print(f"Error: Failed to retrieve node id object from hub python client\n{e}")
         node_id_object = None
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's locate and examine the actual file
find . -name "hub_client.py" -type f
```

Length of output: 102

🏁 Script executed:

```shell
# Let's also search for flame_hub imports and usage in the codebase
rg "flame_hub" -t py --max-count 20
```

Length of output: 1501

🏁 Script executed:

```shell
# Search for find_nodes usage patterns to understand expected behavior
rg "find_nodes" -t py -B 2 -A 2
```

Length of output: 584
🌐 Web query: flame_hub python client CoreClient find_nodes return value empty list documentation

💡 Result: `flame_hub.CoreClient.find_nodes(...)` is documented to return a Python `list[Node]` (optionally a `(list[Node], ResourceListMeta)` tuple if `meta=True`), so when nothing matches your filters/page, the correct "no results" return value is an empty list `[]`, not `None`. This differs from `get_node(...)`, which is documented to return `Node | None` when the specific ID doesn't exist. [1] [2]

Docs locations:
- User guide: generic `find_RESOURCE_NAME_PLURAL(...) -> list[ResourceT]` behavior ("returns matching resources as a list"). [1]
- API reference (Clients → CoreClient): `find_nodes(...) -> list[Node]`. [2]

Sources:
- FLAME Hub Python Client – User guide (Finding resources / method signatures): https://privateaim.github.io/hub-python-client/user_guide.html
- FLAME Hub Python Client – API Reference → Clients → `CoreClient.find_nodes()`: https://privateaim.github.io/hub-python-client/clients_api.html
Guard against empty node results before indexing.
Line 63 calls hub_client.find_nodes(...) which returns an empty list [] when no nodes match the filter. Accessing [0] on an empty list raises IndexError, but the exception handler on line 64 does not catch IndexError. This causes an unhandled exception when a robot has no nodes.
🔧 Proposed fix

```diff
     try:
-        node_id_object = hub_client.find_nodes(filter={'robot_id': robot_id})[0]
+        nodes = hub_client.find_nodes(filter={'robot_id': robot_id})
+        node_id_object = nodes[0] if nodes else None
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
try:
    nodes = hub_client.find_nodes(filter={'robot_id': robot_id})
    node_id_object = nodes[0] if nodes else None
except (HTTPStatusError, JSONDecodeError, ConnectTimeout, flame_hub._exceptions.HubAPIError, AttributeError) as e:
    print(f"Error: Failed to retrieve node id object from hub python client\n{e}")
    node_id_object = None
```
| node_id_object = None |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/hub_client.py` around lines 63 - 66, The call to
hub_client.find_nodes(...) is indexed with [0] without checking for an empty
list which raises IndexError; modify the code around node_id_object =
hub_client.find_nodes(filter={'robot_id': robot_id})[0] to first capture the
result (e.g., nodes = hub_client.find_nodes(...)), check if nodes is truthy/has
length > 0 and set node_id_object = nodes[0] only then, otherwise set
node_id_object = None (or handle accordingly); alternatively add IndexError to
the except tuple for the existing try/except that currently catches
HTTPStatusError, JSONDecodeError, ConnectTimeout,
flame_hub._exceptions.HubAPIError, AttributeError so the absence of nodes is
properly handled.
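The guard the review proposes is a general "first item or None" pattern; a minimal, generic sketch (helper name is illustrative, not from the PR):

```python
from typing import Optional, Sequence, TypeVar

T = TypeVar("T")

def first_or_none(items: Sequence[T]) -> Optional[T]:
    # Avoids IndexError when a find_*-style call returns an empty list,
    # while preserving the first-match semantics of indexing with [0].
    return items[0] if items else None
```

Applied here, `node_id_object = first_or_none(hub_client.find_nodes(filter={'robot_id': robot_id}))` yields `None` instead of raising when the robot has no nodes.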
```python
def extract_hub_envs() -> tuple[str, str, str, str, bool, str, str]:
    return (os.getenv('HUB_ROBOT_USER'),
            os.getenv('HUB_ROBOT_SECRET'),
            os.getenv('HUB_URL_CORE'),
            os.getenv('HUB_URL_AUTH'),
            os.getenv('HUB_LOGGING') in ['True', 'true', '1', 't'],
            os.getenv('PO_HTTP_PROXY'),
            os.getenv('PO_HTTPS_PROXY'))
```
Fix extract_hub_envs return typing to reflect None values.
os.getenv(...) can return None, so the current annotation on Line 6 overstates safety.
🔧 Proposed fix

```diff
+from typing import Optional
@@
-def extract_hub_envs() -> tuple[str, str, str, str, bool, str, str]:
+def extract_hub_envs() -> tuple[Optional[str], Optional[str], Optional[str], Optional[str], bool, Optional[str], Optional[str]]:
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
from typing import Optional

def extract_hub_envs() -> tuple[Optional[str], Optional[str], Optional[str], Optional[str], bool, Optional[str], Optional[str]]:
    return (os.getenv('HUB_ROBOT_USER'),
            os.getenv('HUB_ROBOT_SECRET'),
            os.getenv('HUB_URL_CORE'),
            os.getenv('HUB_URL_AUTH'),
            os.getenv('HUB_LOGGING') in ['True', 'true', '1', 't'],
            os.getenv('PO_HTTP_PROXY'),
            os.getenv('PO_HTTPS_PROXY'))
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/other.py` around lines 6 - 13, The return type of extract_hub_envs
is too strict because os.getenv can return None; update the function signature
to use Optional[str] for the string-returning positions (first four and the two
proxy positions) and keep the fifth element as bool, and add the necessary
Optional import from typing; reference the extract_hub_envs function and replace
tuple[str, str, str, str, bool, str, str] with tuple[Optional[str],
Optional[str], Optional[str], Optional[str], bool, Optional[str],
Optional[str]].
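Both behaviors the review relies on are easy to check in isolation: `os.getenv` returns `None` for unset variables (hence `Optional[str]`), and the `HUB_LOGGING`-style membership test reads an unset variable as `False`. Helper names here are illustrative:

```python
import os
from typing import Optional

def getenv_opt(name: str) -> Optional[str]:
    # Honest annotation: os.getenv returns None when the variable is unset.
    return os.getenv(name)

def getenv_flag(name: str) -> bool:
    # Mirrors the HUB_LOGGING parsing; None (unset) is not in the list,
    # so an unset variable safely reads as False.
    return os.getenv(name) in ['True', 'true', '1', 't']
```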
```python
headers = {'Authorization': f"Bearer {admin_token}",
           'Content-Type': "application/json"}
client_data = {'clientId': f"{analysis_id}",
               'name': f"flame-{analysis_id}",
               'serviceAccountsEnabled': 'true'}

response = requests.post(url_create_client, headers=headers, json=client_data)
response.raise_for_status()
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, locate and examine the src/utils/token.py file
find . -name "token.py" -path "*/utils/*" | head -20
```

Length of output: 97

🏁 Script executed:

```shell
# Once file is found, examine the specific lines and context
if [ -f "src/utils/token.py" ]; then
  echo "=== File found, checking line count ==="
  wc -l src/utils/token.py
  echo ""
  echo "=== Lines 70-110 (context around lines 80-87 and 103-107) ==="
  sed -n '70,110p' src/utils/token.py | cat -n
fi
```

Length of output: 2028

🏁 Script executed:

```shell
# Search for all requests calls in token.py to identify all network calls
rg -n 'requests\.(get|post|put|delete|patch)\(' src/utils/token.py -A 3
```

Length of output: 1262

🏁 Script executed:

```shell
# Check for any existing timeout configurations
rg -n 'timeout' src/utils/token.py
rg -n 'REQUEST_TIMEOUT\|TIMEOUT\|_TIMEOUT' src/ --type=py | head -20
```

Length of output: 59

🏁 Script executed:

```shell
# Check if serviceAccountsEnabled is used elsewhere to understand expected type
rg -n 'serviceAccountsEnabled' . --type=py
```

Length of output: 153

🏁 Script executed:

```shell
# Check for any Keycloak configuration files or documentation
find . -type f \( -name "*keycloak*" -o -name "*config*" \) | grep -E "\.(md|py|json|yaml|yml)$" | head -20
```

Length of output: 102

🏁 Script executed:

```shell
# Look for similar Keycloak API calls or setup examples in the codebase
rg -n 'client_data\|clientId' . --type=py -B 2 -A 2
```

Length of output: 59
🌐 Web query: Keycloak Admin API serviceAccountsEnabled field type boolean or string

💡 Result: In the Keycloak Admin REST API, the `serviceAccountsEnabled` field (in ClientRepresentation) is a boolean (JSON `true`/`false`), not a string. [1], [2]

Example:

```json
{ "serviceAccountsEnabled": true }
```

Sources: Keycloak Admin REST API docs (latest / 26.x) and older version docs showing `serviceAccountsEnabled: boolean`. [1], [2]
Add explicit HTTP timeouts to external Keycloak calls and fix serviceAccountsEnabled field type.
Line 86 and Line 106 perform blocking network calls without timeout. This can hang workers under network degradation.
Additionally, line 84 sets serviceAccountsEnabled to string 'true' instead of the boolean true that the Keycloak Admin API expects, which may cause client creation to fail or behave unexpectedly.
🔧 Proposed fixes

```diff
+REQUEST_TIMEOUT_SECONDS = 10
+
 def _create_keycloak_client(admin_token: str, analysis_id: str) -> None:
     url_create_client = f"{_KEYCLOAK_URL}/admin/realms/{_KEYCLOAK_REALM}/clients"
     headers = {'Authorization': f"Bearer {admin_token}",
                'Content-Type': "application/json"}
     client_data = {'clientId': f"{analysis_id}",
                    'name': f"flame-{analysis_id}",
-                   'serviceAccountsEnabled': 'true'}
+                   'serviceAccountsEnabled': True}
     response = requests.post(
         url_create_client,
         headers=headers,
         json=client_data,
+        timeout=REQUEST_TIMEOUT_SECONDS,
     )
     response.raise_for_status()

 def delete_keycloak_client(analysis_id: str) -> None:
     ...
     response = requests.get(
         url_get_client,
         headers=headers,
+        timeout=REQUEST_TIMEOUT_SECONDS,
     )
     response.raise_for_status()
```

Also applies to: 103-107 and other requests calls at lines 25, 45, 62, 72, 94, 117.
🧰 Tools
🪛 Ruff (0.15.2)
[error] 86-86: Probable use of requests call without timeout
(S113)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/token.py` around lines 80 - 87, The Keycloak HTTP calls in token.py
are missing explicit timeouts and one request payload sets
serviceAccountsEnabled as the string 'true' instead of a boolean; update all
requests.* calls (e.g., the requests.post that assigns response and any other
requests.get/post used in this module) to include a sensible timeout argument
(e.g., timeout=10) to avoid hanging, and change
client_data['serviceAccountsEnabled'] from the string 'true' to the boolean True
so the Keycloak Admin API receives the correct type; ensure
response.raise_for_status() is kept after the timed request calls (and apply the
same timeout addition to any other request sites referenced: the calls around
the symbols that build headers, client_data, and where response is read).
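One way to apply the timeout consistently without repeating `timeout=` at every call site is to bake it in with `functools.partial`. The sketch below uses a hypothetical `fake_post` stand-in for `requests.post` so it runs without network access; in the real module the partials would wrap `requests.get` and `requests.post` directly:

```python
import functools

REQUEST_TIMEOUT_SECONDS = 10

def fake_post(url, json=None, headers=None, timeout=None):
    # Stand-in for requests.post, used only to show which arguments arrive.
    return {"url": url, "timeout": timeout}

# Every call through this partial carries the timeout automatically,
# so no individual call site can forget it.
post_with_timeout = functools.partial(fake_post, timeout=REQUEST_TIMEOUT_SECONDS)
```

A module-level constant plus partials (or a shared `requests.Session` wrapper) keeps the timeout policy in one place instead of seven call sites.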
```python
client_data = {'clientId': f"{analysis_id}",
               'name': f"flame-{analysis_id}",
               'serviceAccountsEnabled': 'true'}
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's read the file to confirm the current state
cat -n src/utils/token.py | sed -n '80,90p'
```

Length of output: 624
🌐 Web query:
Keycloak Admin REST API ClientRepresentation serviceAccountsEnabled type specification
💡 Result: In the Keycloak Admin REST API ClientRepresentation, the field `serviceAccountsEnabled` is optional and its type is `Boolean` (i.e., JSON `true`/`false`). [1]

Example:

```json
{ "serviceAccountsEnabled": true }
```

Source(s): [1] Keycloak Admin REST API (latest) – ClientRepresentation schema/table (shows `serviceAccountsEnabled` → Boolean). (keycloak.org)
Use a boolean for serviceAccountsEnabled, not a string.
On Line 84, 'serviceAccountsEnabled': 'true' is a string that serializes to the JSON string "true", not the JSON boolean true. The Keycloak Admin REST API ClientRepresentation expects serviceAccountsEnabled to be of type Boolean, so passing a string will cause the API request to fail or behave unexpectedly.
🔧 Proposed fix

```diff
-client_data = {'clientId': f"{analysis_id}",
-               'name': f"flame-{analysis_id}",
-               'serviceAccountsEnabled': 'true'}
+client_data = {'clientId': analysis_id,
+               'name': f'flame-{analysis_id}',
+               'serviceAccountsEnabled': True}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
client_data = {'clientId': analysis_id,
               'name': f'flame-{analysis_id}',
               'serviceAccountsEnabled': True}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/token.py` around lines 82 - 84, The client_data dict sets
'serviceAccountsEnabled' to the string 'true' which will serialize as a JSON
string; change it to a Python boolean True so the ClientRepresentation sent to
Keycloak uses a JSON boolean. Locate the client_data assignment (variable name
client_data and key serviceAccountsEnabled) in src/utils/token.py and replace
the string value with the boolean True, and scan nearby client creation code for
any other boolean-like fields to ensure they use Python booleans rather than
strings.
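The on-the-wire difference the review describes is easy to verify with the standard library:

```python
import json

# A Python str value serializes to a JSON string, which Keycloak's Boolean
# field does not accept; a Python bool serializes to a JSON boolean.
as_string = json.dumps({'serviceAccountsEnabled': 'true'})
as_bool = json.dumps({'serviceAccountsEnabled': True})
```

`as_string` is `{"serviceAccountsEnabled": "true"}` while `as_bool` is `{"serviceAccountsEnabled": true}` — only the latter matches the ClientRepresentation schema.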
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Refactor
Chores