PR #2 - Task 4. Compute Traces - Joe Black #98
base: main
Conversation
Walkthrough

Adds Python version pinning and enhances txt2img/img2img backends with dynamic checkpoint discovery/validation, GPU/NVML-based metrics, timing, and CSV inference tracing. Workflows now resolve checkpoints from the filesystem and dump JSON. New test utilities exercise endpoints and base64 handling. The frontend surfaces backend metrics with a badge and CSV export. Frontend tooling updated.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client as Frontend (Txt2Img/Img2Img)
    participant API as Backend Server
    participant FS as Filesystem (Checkpoints)
    participant NVML as NVML
    participant Comfy as ComfyUI
    participant CSV as Inference Traces (CSV)
    Client->>API: POST /api/(txt2img|img2img) {payload}
    API->>FS: List ComfyUI/models/checkpoints (*.ckpt, *.safetensors)
    FS-->>API: Allowed checkpoints
    alt No checkpoints
        API-->>Client: 500 No checkpoints available
    else Invalid/missing ckpt
        API->>API: Select first allowed checkpoint
    end
    API->>NVML: Query GPU name/driver
    NVML-->>API: GPU info (or error -> CPU/N/A)
    API->>Comfy: send_to_comfyui(workflow)
    Comfy-->>API: comfy_response (images, etc.)
    API->>API: Compute elapsed and time_per_image
    API->>CSV: Append inference trace row
    API-->>Client: Response {images, comfy_response{metrics}}
```
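The checkpoint-resolution branch above can be sketched as follows; `resolve_checkpoint` is a hypothetical helper for illustration, not the PR's actual code:

```python
from pathlib import Path


def resolve_checkpoint(requested, checkpoints_dir):
    """Return the requested checkpoint if allowed, else fall back.

    Mirrors the diagram: list *.ckpt / *.safetensors files, raise when
    none exist (the 500 branch), and substitute the first allowed
    checkpoint when the request is invalid or missing.
    """
    allowed = sorted(
        f.name for f in Path(checkpoints_dir).glob("*")
        if f.name.endswith((".safetensors", ".ckpt"))
    )
    if not allowed:
        raise FileNotFoundError("No checkpoints available")
    return requested if requested in allowed else allowed[0]
```

`sorted()` is used so the fallback is deterministic rather than dependent on filesystem order.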
```mermaid
sequenceDiagram
    autonumber
    participant FE as Frontend Page
    participant BE as Backend
    FE->>BE: Generate (txt2img/img2img)
    BE-->>FE: Response {comfy_response.metrics}
    alt Metrics available
        FE->>FE: setMetrics(...)
        FE->>User: Show MetricsBadge (time/img, GPU/driver)
        User->>FE: Click "Download CSV"
        FE->>User: Download inference_trace_(txt2img|img2img).csv
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Reviewer's Guide

Instrumented backend inference endpoints to measure wall-clock time per image, capture GPU/driver info via NVML, and log these metrics to CSV; enriched API responses with metrics; enhanced workflows to dynamically load and validate checkpoint models and dump JSON; updated the frontend to display a compact metrics badge and support CSV export; added integration tests and bumped frontend tooling versions.

Sequence diagram for inference request with metrics logging and response enrichment

```mermaid
sequenceDiagram
    actor User
    participant Frontend
    participant Backend
    participant ComfyUI
    participant GPU
    User->>Frontend: Request image generation
    Frontend->>Backend: POST /api/txt2img or /api/img2img
    Backend->>GPU: Query GPU name & driver (NVML)
    Backend->>Backend: Start timer
    Backend->>ComfyUI: Send workflow
    ComfyUI-->>Backend: Return generated images
    Backend->>Backend: Stop timer, compute metrics
    Backend->>Backend: Log metrics to CSV
    Backend->>Frontend: Respond with images + metrics
    Frontend->>User: Display images and metrics badge
    User->>Frontend: (Optional) Export metrics to CSV
```
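The timer steps in the diagram reduce to a few lines; this is a standalone sketch (the `run_workflow` callable stands in for the ComfyUI call), not the PR's actual implementation:

```python
import time


def timed_inference(run_workflow):
    """Time a workflow call and derive per-image latency."""
    start = time.perf_counter()            # Start timer
    images = run_workflow()                # Send workflow / collect images
    elapsed = time.perf_counter() - start  # Stop timer
    # Guard against division by zero when no images come back
    time_per_image = elapsed / len(images) if images else None
    return images, elapsed, time_per_image
```

The resulting `elapsed` and `time_per_image` values are what the response metrics and CSV rows would carry.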
Class diagram for metrics integration in frontend and backend

```mermaid
classDiagram
    class Metrics {
        +float elapsed_time_sec
        +float time_per_image_sec
        +string gpu
        +string driver_version
    }
    class BackendInferenceLogger {
        +log_inference_trace(...)
        +ensure_csv_exists()
    }
    class Txt2ImgPage {
        +metrics: Metrics | null
        +setMetrics(...)
        +exportTxt2ImgMetricsToCSV()
    }
    class Img2ImgPage {
        +metrics: Metrics | null
        +setMetrics(...)
        +exportImg2ImgMetricsToCSV()
    }
    BackendInferenceLogger --> Metrics
    Txt2ImgPage --> Metrics
    Img2ImgPage --> Metrics
```
Class diagram for dynamic checkpoint model validation in workflow transformation

```mermaid
classDiagram
    class WorkflowTransformer {
        +transform_to_txt2img_workflow(data)
        +transform_to_img2img_workflow(data)
        +get_available_checkpoints()
    }
    class Data {
        +model_name: string
        +ckpt_name: string
        ...
    }
    WorkflowTransformer --> Data
```
File-Level Changes
Hey there - I've reviewed your changes - here's some feedback:
Blocking issues:
- Detected a 'requests' call without a timeout set. By default, 'requests' calls wait until the connection is closed. This means a 'requests' call without a timeout will hang the program if a response is never received. Consider setting a timeout for all 'requests'. (link)
- Detected a 'requests' call without a timeout set. By default, 'requests' calls wait until the connection is closed. This means a 'requests' call without a timeout will hang the program if a response is never received. Consider setting a timeout for all 'requests'. (link)
General comments:
- Consolidate duplicated CSV logging logic into a shared helper to avoid copy-pasting between txt2img and img2img handlers.
- Remove generated artifacts (inference_trace CSVs, workflow dumps, timestamped Vite config) from source control and add them to .gitignore.
- Refactor the checkpoint validation code to simplify the nested conditionals and ensure fallback and error cases are unambiguous.
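The consolidation in the first bullet could look like the following minimal sketch; the helper name and column set are assumptions for illustration, not the PR's actual code:

```python
import csv
import os
import time

# Hypothetical shared column layout for both handlers
TRACE_COLUMNS = ["timestamp", "total_time_s", "images_generated",
                 "time_per_image_s", "gpu_name", "driver_version", "ckpt_name"]


def log_inference_trace(csv_path, elapsed, images_generated,
                        gpu_name, driver_version, ckpt_name):
    """Append one inference trace row, creating the header on first use."""
    time_per_image = elapsed / images_generated if images_generated else None
    write_header = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(TRACE_COLUMNS)
        writer.writerow([
            time.time(),
            round(elapsed, 4),
            images_generated,
            round(time_per_image, 4) if time_per_image is not None else "",
            gpu_name,
            driver_version,
            ckpt_name,
        ])
```

Both the txt2img and img2img handlers would then call `log_inference_trace(...)` with their own CSV path instead of duplicating the logic.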
## Individual Comments
### Comment 1
<location> `dream_layer_backend/txt2img_server.py:171` </location>
<code_context>
+ elapsed = time.perf_counter() - start_time
+
+ # Calculate images generated
+ images_generated = len(comfy_response.get("all_images", []))
+ time_per_image = elapsed / images_generated if images_generated > 0 else None
+
</code_context>
<issue_to_address>
Variable 'total_time' is undefined; should use 'elapsed'.
Using 'total_time' instead of 'elapsed' will cause a runtime error since 'total_time' is not defined. Please replace 'total_time' with 'elapsed' in the calculation.
</issue_to_address>
### Comment 2
<location> `dream_layer_backend/img2img_workflow.py:37` </location>
<code_context>
+ logger.error(f"Checkpoints directory does not exist: {checkpoints_dir}")
+ return []
+
+ models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
+ logger.info(f"Found checkpoint files: {models}")
+ return models
</code_context>
<issue_to_address>
Suffix filtering may miss files with double extensions.
Using 'f.suffix' will not detect files with compound extensions like '.ckpt.safetensors'. 'str.endswith' is recommended for accurate filtering.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
=======
valid_extensions = ('.safetensors', '.ckpt', '.ckpt.safetensors')
models = [f.name for f in checkpoints_dir.glob("*") if any(f.name.endswith(ext) for ext in valid_extensions)]
>>>>>>> REPLACE
</suggested_fix>
### Comment 3
<location> `dream_layer_backend/txt2img_workflow.py:29` </location>
<code_context>
+ logger.error(f"Checkpoints directory does not exist: {checkpoints_dir}")
+ return []
+
+ models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
+ logger.info(f"Found checkpoint files: {models}")
+ return models
</code_context>
<issue_to_address>
Suffix filtering may miss files with double extensions.
Using 'f.suffix' only matches the last extension, so files like '.ckpt.safetensors' will be missed. 'str.endswith' can handle multiple extensions and is more reliable here.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
=======
models = [f.name for f in checkpoints_dir.glob("*") if f.name.endswith(('.safetensors', '.ckpt'))]
>>>>>>> REPLACE
</suggested_fix>
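For context on why the reviewer prefers `str.endswith`: the two filters disagree on edge cases such as dotfiles, which `pathlib` treats as having no suffix at all (standalone illustration, not part of the PR):

```python
from pathlib import PurePosixPath

# A hidden file named exactly ".ckpt" has no pathlib suffix,
# so a suffix-based filter drops it while endswith keeps it.
hidden = PurePosixPath(".ckpt")
print(hidden.suffix)                                    # ''
print(hidden.name.endswith((".safetensors", ".ckpt")))  # True

# For a compound name, .suffix only sees the last component.
compound = PurePosixPath("model.ckpt.safetensors")
print(compound.suffix)                                  # '.safetensors'
```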
### Comment 4
<location> `dream_layer_backend/test_image/test_img_endpoints.py:23` </location>
<code_context>
+COMFYUI_ROOT = Path(__file__).parent.parent.parent / "ComfyUI"
+CHECKPOINTS_DIR = COMFYUI_ROOT / "models" / "checkpoints"
+
+def get_checkpoints():
+ if not CHECKPOINTS_DIR.exists():
+ print(f"Checkpoints directory not found: {CHECKPOINTS_DIR}")
</code_context>
<issue_to_address>
Missing assertions in test_img_endpoints.py.
Please add assertions to check expected status codes, response keys, and metrics, so test failures are detected automatically.
Suggested implementation:
```python
def test_img2img_endpoint(client):
response = client.post(
f"{IMG2IMG_API_HOST}/img2img",
json={"image": get_base64_image(), "prompt": "A cat in a hat"}
)
print("img2img response:", response.json())
# Assert status code
assert response.status_code == 200, f"Expected status 200, got {response.status_code}"
# Assert response contains expected keys
resp_json = response.json()
assert "result" in resp_json, "Missing 'result' key in response"
assert "metrics" in resp_json, "Missing 'metrics' key in response"
# Assert metrics contains expected fields
metrics = resp_json["metrics"]
assert "inference_time" in metrics, "Missing 'inference_time' in metrics"
assert isinstance(metrics["inference_time"], (int, float)), "inference_time should be a number"
```
```python
def test_checkpoints_endpoint(client):
response = client.get(f"{IMG2IMG_API_HOST}/checkpoints")
print("checkpoints response:", response.json())
# Assert status code
assert response.status_code == 200, f"Expected status 200, got {response.status_code}"
# Assert response contains expected keys
resp_json = response.json()
assert "checkpoints" in resp_json, "Missing 'checkpoints' key in response"
# Assert checkpoints is a list
assert isinstance(resp_json["checkpoints"], list), "'checkpoints' should be a list"
```
</issue_to_address>
## Security Issues
### Issue 1
<location> `dream_layer_backend/test_image/test_img_endpoints.py:82` </location>
<issue_to_address>
**security (python.requests.best-practice.use-timeout):** Detected a 'requests' call without a timeout set. By default, 'requests' calls wait until the connection is closed. This means a 'requests' call without a timeout will hang the program if a response is never received. Consider setting a timeout for all 'requests'.
```suggestion
r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload, timeout=30)
```
*Source: opengrep*
</issue_to_address>
### Issue 2
<location> `dream_layer_backend/test_image/test_img_endpoints.py:102` </location>
<issue_to_address>
**security (python.requests.best-practice.use-timeout):** Detected a 'requests' call without a timeout set. By default, 'requests' calls wait until the connection is closed. This means a 'requests' call without a timeout will hang the program if a response is never received. Consider setting a timeout for all 'requests'.
```suggestion
r2 = requests.post(f"{IMG2IMG_API_HOST}/api/img2img", json=img2img_payload, timeout=30)
```
*Source: opengrep*
</issue_to_address>
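One way to make the timeout hard to forget is to bind it once; this is a hypothetical wrapper (assumes the `requests` package is installed), not part of the PR:

```python
import functools

import requests

# Bind a default (connect, read) timeout so test helpers can't omit it;
# callers may still pass timeout= explicitly to override the default.
post_with_timeout = functools.partial(requests.post, timeout=(3.05, 30))

# Usage (drop-in for requests.post):
#   post_with_timeout(f"{TXT2IMG_API_HOST}/api/txt2img", json=payload)
```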
```python
def log_inference_trace(total_time, images_generated, gpu_name, driver_version, ckpt_name):
    """Log inference details to CSV and console."""
    # Checking to see if images were generated, finding the time per image
    time_per_image = None if images_generated == 0 else total_time / images_generated
```
issue (bug_risk): Variable 'total_time' is undefined; should use 'elapsed'.
Using 'total_time' instead of 'elapsed' will cause a runtime error since 'total_time' is not defined. Please replace 'total_time' with 'elapsed' in the calculation.
```python
logger.error(f"Checkpoints directory does not exist: {checkpoints_dir}")
return []

models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
```
suggestion: Suffix filtering may miss files with double extensions.
Using 'f.suffix' will not detect files with compound extensions like '.ckpt.safetensors'. 'str.endswith' is recommended for accurate filtering.
```diff
-models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
+valid_extensions = ('.safetensors', '.ckpt', '.ckpt.safetensors')
+models = [f.name for f in checkpoints_dir.glob("*") if any(f.name.endswith(ext) for ext in valid_extensions)]
```
```python
root_dir = Path(__file__).resolve().parent.parent
checkpoints_dir = root_dir / "ComfyUI" / "models" / "checkpoints"
try:
    models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
```
suggestion (bug_risk): Suffix filtering may miss files with double extensions.
Using 'f.suffix' only matches the last extension, so files like '.ckpt.safetensors' will be missed. 'str.endswith' can handle multiple extensions and is more reliable here.
```diff
-models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
+models = [f.name for f in checkpoints_dir.glob("*") if f.name.endswith(('.safetensors', '.ckpt'))]
```
```python
COMFYUI_ROOT = Path(__file__).parent.parent.parent / "ComfyUI"
CHECKPOINTS_DIR = COMFYUI_ROOT / "models" / "checkpoints"


def get_checkpoints():
```
suggestion (testing): Missing assertions in test_img_endpoints.py.
Please add assertions to check expected status codes, response keys, and metrics, so test failures are detected automatically.
Suggested implementation:
```python
def test_img2img_endpoint(client):
    response = client.post(
        f"{IMG2IMG_API_HOST}/img2img",
        json={"image": get_base64_image(), "prompt": "A cat in a hat"}
    )
    print("img2img response:", response.json())
    # Assert status code
    assert response.status_code == 200, f"Expected status 200, got {response.status_code}"
    # Assert response contains expected keys
    resp_json = response.json()
    assert "result" in resp_json, "Missing 'result' key in response"
    assert "metrics" in resp_json, "Missing 'metrics' key in response"
    # Assert metrics contains expected fields
    metrics = resp_json["metrics"]
    assert "inference_time" in metrics, "Missing 'inference_time' in metrics"
    assert isinstance(metrics["inference_time"], (int, float)), "inference_time should be a number"


def test_checkpoints_endpoint(client):
    response = client.get(f"{IMG2IMG_API_HOST}/checkpoints")
    print("checkpoints response:", response.json())
    # Assert status code
    assert response.status_code == 200, f"Expected status 200, got {response.status_code}"
    # Assert response contains expected keys
    resp_json = response.json()
    assert "checkpoints" in resp_json, "Missing 'checkpoints' key in response"
    # Assert checkpoints is a list
    assert isinstance(resp_json["checkpoints"], list), "'checkpoints' should be a list"
```

```python
    "denoise": 1.0
}


r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload)
```
security (python.requests.best-practice.use-timeout): Detected a 'requests' call without a timeout set. By default, 'requests' calls wait until the connection is closed. This means a 'requests' call without a timeout will hang the program if a response is never received. Consider setting a timeout for all 'requests'.
```diff
-r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload)
+r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload, timeout=30)
```
Source: opengrep
```javascript
var vite_config_default = defineConfig({
  server: {
    host: "::",
    port: 8080
  },
  plugins: [
    react()
  ],
  resolve: {
    alias: {
      "@": path.resolve(__vite_injected_original_dirname, "./src")
    }
  }
});
```
issue (code-quality): Use const or let instead of var. (avoid-using-var)

Explanation: `const` is preferred as it ensures you cannot reassign references (which can lead to buggy and confusing code). `let` may be used if you need to reassign references - it's preferred to `var` because it is block- rather than function-scoped. (From the Airbnb JavaScript Style Guide.)
```python
logger.info(f"Found checkpoint files: {models}")
return models


def transform_to_img2img_workflow(data):
```
issue (code-quality): We've found these issues:
- Replace f-string with no interpolated values with string (remove-redundant-fstring)
- Low code quality found in transform_to_img2img_workflow - 13% (low-code-quality)

Explanation: The quality score for this function is below the quality threshold of 25%. This score is a combination of the method length, cognitive complexity and working memory.

How can you solve this? It might be worth refactoring this function to make it shorter and more readable.
- Reduce the function length by extracting pieces of functionality out into their own functions. This is the most important thing you can do - ideally a function should be less than 10 lines.
- Reduce nesting, perhaps by introducing guard clauses to return early.
- Ensure that variables are tightly scoped, so that code using related concepts sits together within the function rather than being scattered.
```python
checkpoints = get_checkpoints()
if checkpoints:
```
suggestion (code-quality): Use named expression to simplify assignment and conditional (use-named-expression)
```diff
-checkpoints = get_checkpoints()
-if checkpoints:
+if checkpoints := get_checkpoints():
```
```python
models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
return models
```
suggestion (code-quality): Inline variable that is immediately returned (inline-immediately-returned-variable)
```diff
-models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
-return models
+return [
+    f.name
+    for f in checkpoints_dir.glob("*")
+    if f.suffix in ['.safetensors', '.ckpt']
+]
```
```python
logger.error(f"Failed to list checkpoints: {e}")
return []


def transform_to_txt2img_workflow(data):
```
issue (code-quality): We've found these issues:
- Replace f-string with no interpolated values with string [×5] (remove-redundant-fstring)
- Low code quality found in transform_to_txt2img_workflow - 16% (low-code-quality)

Explanation: The quality score for this function is below the quality threshold of 25%. This score is a combination of the method length, cognitive complexity and working memory.

How can you solve this? It might be worth refactoring this function to make it shorter and more readable.
- Reduce the function length by extracting pieces of functionality out into their own functions. This is the most important thing you can do - ideally a function should be less than 10 lines.
- Reduce nesting, perhaps by introducing guard clauses to return early.
- Ensure that variables are tightly scoped, so that code using related concepts sits together within the function rather than being scattered.
Actionable comments posted: 14
🔭 Outside diff range comments (1)
dream_layer_frontend/vite.config.ts.timestamp-1755070710337-0629aa119849b.mjs (1)
1-24: Critical: committed a local, machine-specific compiled Vite config with absolute paths — replace with a standard vite.config.ts
- The file imports from absolute file:// URLs referencing a local user path and embeds a sourcemap; this won’t work on other machines/CI.
- The timestamped filename and injected dirname indicate a built artifact accidentally committed.
Replace the file with a conventional vite.config.ts and remove this artifact from version control. Suggested content:
```diff
-// vite.config.ts
-import { defineConfig } from "file:///Users/clairehudson/DreamLayer/DreamLayer/dream_layer_frontend/node_modules/vite/dist/node/index.js";
-import react from "file:///Users/clairehudson/DreamLayer/DreamLayer/dream_layer_frontend/node_modules/@vitejs/plugin-react-swc/index.mjs";
-import path from "path";
-var __vite_injected_original_dirname = "/Users/clairehudson/DreamLayer/DreamLayer/dream_layer_frontend";
-var vite_config_default = defineConfig({
-  server: {
-    host: "::",
-    port: 8080
-  },
-  plugins: [
-    react()
-  ],
-  resolve: {
-    alias: {
-      "@": path.resolve(__vite_injected_original_dirname, "./src")
-    }
-  }
-});
-export {
-  vite_config_default as default
-};
-//# sourceMappingURL=...
+// vite.config.ts
+import { defineConfig } from "vite";
+import react from "@vitejs/plugin-react-swc";
+import { fileURLToPath, URL } from "node:url";
+
+export default defineConfig({
+  server: {
+    host: true, // binds on all addresses (IPv4/IPv6); change to "localhost" if preferred
+    port: 8080,
+  },
+  plugins: [react()],
+  resolve: {
+    alias: {
+      "@": fileURLToPath(new URL("./src", import.meta.url)),
+    },
+  },
+});
```

Follow-ups:
- Delete the timestamped .mjs file and add a proper vite.config.ts at dream_layer_frontend/vite.config.ts.
- Consider adding patterns to .gitignore to prevent committing build artifacts.
I can open a cleanup PR to add the proper config and gitignore entries if you want.
♻️ Duplicate comments (2)
dream_layer_backend/txt2img_workflow.py (1)
263-273: Consider making the workflow dump behavior configurable.

Similar to img2img_workflow.py, the workflow dumping logic always saves to a fixed location and prints to console. This creates duplicate console output and unnecessary I/O operations in production.
The same configuration approach suggested for img2img_workflow.py should be applied here for consistency. Consider extracting this into a shared utility function to avoid duplication.
dream_layer_backend/img2img_server.py (1)
167-176: Consider extracting the checkpoint discovery function.

This inline function duplicates logic from the workflow files. Since all three files need checkpoint discovery, it should be centralized.

As suggested for the workflow files, this function should be moved to a shared module like `shared_checkpoint_utils.py` to maintain DRY principles and ensure consistency across the codebase.
🧹 Nitpick comments (14)
.python-version (1)
1-1: Action: Confirm Python version pin — .python-version = 3.11.9

Quick check summary: repo search shows install scripts accept Python 3.8+ and the app logs the runtime Python, but I did not find CI workflows, Dockerfiles, or packaging metadata that pin a different Python version. If tests referenced in the PR actually ran on 3.10.18, please either align or document the difference and verify native deps (PyTorch / NVML / CUDA) support 3.11.9.

Current .python-version: 3.11.9

Places to review / that I inspected:
- .python-version (repo root) — currently "3.11.9"
- install_mac_dependencies.sh (~lines 87–99) — accepts 3.8+
- install_linux_dependencies.sh (~lines 94–98) — accepts 3.8+
- install_windows_dependencies.ps1 (~lines 186–204) — accepts 3.8+
- docs/contributing.md (around line 548) — asks contributors to include environment details
- ComfyUI/main.py and ComfyUI/server.py — log Python version at runtime
- No Dockerfile or .github workflow YAML found that pins Python
Recommended next steps (pick one):
- Align .python-version with the Python used in CI/test logs (e.g., set to 3.10.18), or
- Update CI to use 3.11.9 and run full integration tests for native deps, and/or
- Add python_requires in packaging (pyproject/setup.cfg) or a README note listing supported Python versions.
If you want, I can:
- Draft the README/CONTRIBUTING note about supported Python versions, or
- Update .python-version to 3.10.18 (if you confirm that's the CI/test version).
dream_layer_backend/test_image/base64conversion.py (3)
29-31: Fix misleading comment (references "4.txt")

The output file is base64_txt_test_image.txt; the comment says "4.txt".

```diff
-# Save base64 string to 4.txt
+# Save base64 string to base64_txt_test_image.txt
```
11-18: Make input selection deterministic and avoid picking the just-generated file
- Path.iterdir() order is arbitrary; also the newly generated test_image.png can be picked on subsequent runs.
```diff
-# Find first image file in folder
-image_files = [f for f in folder.iterdir() if f.suffix.lower() in img_extensions]
+# Find first image file in folder (stable order), excluding the generated output name
+image_files = sorted(
+    [f for f in folder.iterdir() if f.suffix.lower() in img_extensions and f.name != "test_image.png"]
+)
```
1-43: Wrap as a script entrypoint and simplify I/O with Path helpers
- Prevent accidental execution on import by adding a main guard.
- Prefer Path.read_bytes/write_bytes and write_text for brevity.
Proposed refactor (outside current range diff for clarity):
```python
from pathlib import Path
import base64

img_extensions = {".png", ".jpg", ".jpeg", ".bmp", ".gif"}
folder = Path(__file__).parent
BASE64_TXT = folder / "base64_txt_test_image.txt"
OUTPUT_IMG = folder / "test_image.png"


def main():
    image_files = sorted(
        [f for f in folder.iterdir() if f.suffix.lower() in img_extensions and f.name != OUTPUT_IMG.name]
    )
    if not image_files:
        raise FileNotFoundError(f"No image files found in {folder}")

    input_img_path = image_files[0]
    print(f"Using input image: {input_img_path.name}")

    img_bytes = input_img_path.read_bytes()
    base64_img = base64.b64encode(img_bytes).decode("utf-8")
    BASE64_TXT.write_text(base64_img)
    print(f"Saved base64 string to {BASE64_TXT}")

    decoded_bytes = base64.b64decode(base64_img)
    OUTPUT_IMG.write_bytes(decoded_bytes)
    print(f"Saved decoded image as {OUTPUT_IMG}")


if __name__ == "__main__":
    main()
```

dream_layer_backend/workflow_dumps/last_workflow.json (1)
1-89: Consider excluding runtime-generated workflow dumps from version control
- This looks like a runtime artifact. If it’s just an example, move it to an examples/ folder or document its purpose. Otherwise, ignore via .gitignore to avoid churn/noise.
Want me to add a .gitignore rule (e.g., dream_layer_backend/workflow_dumps/*.json) and move a curated example to docs/examples?
dream_layer_backend/img2img_workflow.py (1)
260-270: Consider making the workflow dump behavior configurable.

The workflow dumping logic always saves to a fixed location and prints to console, which might not be desired in production environments. This could impact performance and create unnecessary I/O operations.
Consider making this behavior configurable via environment variable:
```diff
-    # Ensure dump directory exists
-    dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
-    os.makedirs(dump_dir, exist_ok=True)
-
-    # Save the workflow JSON
-    output_path = os.path.join(dump_dir, "last_workflow.json")
-    with open(output_path, "w") as f:
-        json.dump(workflow, f, indent=2)
-
-    print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
-    print(f"🚀 Workflow JSON saved to {output_path}")
+    # Dump workflow for debugging if enabled
+    if os.getenv("DUMP_WORKFLOWS", "false").lower() == "true":
+        dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
+        os.makedirs(dump_dir, exist_ok=True)
+
+        output_path = os.path.join(dump_dir, "last_workflow.json")
+        with open(output_path, "w") as f:
+            json.dump(workflow, f, indent=2)
+
+        if os.getenv("DEBUG", "false").lower() == "true":
+            print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
+            print(f"🚀 Workflow JSON saved to {output_path}")
```

dream_layer_backend/txt2img_workflow.py (1)
91-91: Consider providing a more helpful error message.

The error message mentions a hardcoded fallback model name that's no longer used in the code.

```diff
-raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory")  # could be changed to juggernautXL_v8Rundiffusion.safetensors
+raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory. Please ensure at least one .safetensors or .ckpt file is available.")
```

dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx (1)
462-475: Consider improving the layout of metrics display elements.

The current implementation nests the CSV download button and MetricsBadge within a fragment alongside conditional rendering, which might cause layout issues. Also, the MetricsBadge is being passed a potentially undefined value.

```diff
 {metrics && (
-  <>
-    <button
-      onClick={exportImg2ImgMetricsToCSV}
-      className="rounded-md border border-input bg-background px-4 py-2 text-sm font-medium transition-colors hover:bg-accent hover:text-accent-foreground"
-    >
-      Download CSV
-    </button>
-    <MetricsBadge
-      elapsedTimeSec={metrics.time_per_image_sec ?? (metrics.elapsed_time_sec / (batchSize * batchCount))}
-      gpu={metrics.gpu}
-      driver={metrics.driver_version}
-    />
-  </>
+  <div className="flex items-center gap-2">
+    <button
+      onClick={exportImg2ImgMetricsToCSV}
+      className="rounded-md border border-input bg-background px-4 py-2 text-sm font-medium transition-colors hover:bg-accent hover:text-accent-foreground"
+    >
+      Download CSV
+    </button>
+    <MetricsBadge
+      elapsedTimeSec={metrics.time_per_image_sec ?? (metrics.elapsed_time_sec / Math.max(1, batchSize * batchCount))}
+      gpu={metrics.gpu}
+      driver={metrics.driver_version}
+    />
+  </div>
 )}
```

dream_layer_backend/img2img_server.py (2)
199-207: Consider adding more robust GPU detection.

The current implementation only handles the first GPU and silently falls back to CPU on any error. This might hide configuration issues.

```diff
 try:
     pynvml.nvmlInit()
-    gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)
-    gpu_name = pynvml.nvmlDeviceGetName(gpu_handle).decode()
-    driver_version = pynvml.nvmlSystemGetDriverVersion().decode()
-except Exception:
+    device_count = pynvml.nvmlDeviceGetCount()
+    if device_count > 0:
+        gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)
+        gpu_name = pynvml.nvmlDeviceGetName(gpu_handle).decode()
+        driver_version = pynvml.nvmlSystemGetDriverVersion().decode()
+        if device_count > 1:
+            logger.info(f"Multiple GPUs detected ({device_count}), using first GPU")
+    else:
+        gpu_name = "CPU"
+        driver_version = "N/A"
+        logger.info("No CUDA GPUs detected, using CPU")
+except Exception as e:
     gpu_name = "CPU"
     driver_version = "N/A"
+    logger.debug(f"GPU detection failed: {e}")
```
234-251: Consider using DictWriter for CSV operations.

Using `csv.DictWriter` would make the CSV operations more maintainable and less error-prone.

```diff
-# Ensure CSV file exists and has header
-if not os.path.exists(TRACE_CSV):
-    with open(TRACE_CSV, "w", newline="") as f:
-        writer = csv.writer(f)
-        writer.writerow(["timestamp", "total_time_s", "images_generated", "time_per_image_s", "gpu_name", "driver_version", "ckpt_name"])
-
-# Append new row to CSV
-with open(TRACE_CSV, "a", newline="") as f:
-    writer = csv.writer(f)
-    writer.writerow([
-        time.time(),
-        round(elapsed, 4),
-        images_generated,
-        round(time_per_image, 4) if time_per_image is not None else "",
-        gpu_name,
-        driver_version,
-        ckpt_name
-    ])
+# Define fieldnames for consistency
+fieldnames = ["timestamp", "total_time_s", "images_generated", "time_per_image_s", "gpu_name", "driver_version", "ckpt_name"]
+
+# Ensure CSV file exists and has header
+file_exists = os.path.exists(TRACE_CSV)
+
+with open(TRACE_CSV, "a", newline="") as f:
+    writer = csv.DictWriter(f, fieldnames=fieldnames)
+
+    # Write header if new file
+    if not file_exists:
+        writer.writeheader()
+
+    # Write data row
+    writer.writerow({
+        "timestamp": time.time(),
+        "total_time_s": round(elapsed, 4),
+        "images_generated": images_generated,
+        "time_per_image_s": round(time_per_image, 4) if time_per_image is not None else "",
+        "gpu_name": gpu_name,
+        "driver_version": driver_version,
+        "ckpt_name": ckpt_name
+    })
```

dream_layer_backend/test_image/test_img_endpoints.py (2)
1-3: Nit: Fix docstring grammar for clarity

Suggest rewording the docstring to be clearer and more idiomatic.

```diff
-"""
-Test file validation logic for testing model endpoint communication
-"""
+"""
+Test utility for validating model endpoint communication
+"""
```
43-52: Helpful response preview with JSON fallback

Good defensive parsing and truncation. Consider logging the full body to a file if deeper diagnostics are needed later.
dream_layer_backend/txt2img_server.py (2)
49-74: Trace logging is useful; consider ISO timestamps for readability

Current CSV stores epoch seconds. Optional: add a human-readable ISO 8601 timestamp column, or replace the current timestamp with ISO to simplify CSV consumption without post-processing.
Example tweak:
```diff
 writer.writerow([
-    time.time(),
+    time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
     round(total_time, 4),
     images_generated,
     round(time_per_image, 4) if time_per_image is not None else "",
     gpu_name,
     driver_version,
     ckpt_name
 ])
```
194-201: Minor duplication of generated_images

`send_to_comfyui` already attaches `all_images` and `generated_images`. This re-assignment is harmless but redundant. Optional to remove to reduce noise.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (8)
- `dream_layer_backend/dream_layer_backend_utils/inference_traces/inference_trace_img2img.csv` is excluded by `!**/*.csv`
- `dream_layer_backend/dream_layer_backend_utils/inference_traces/inference_trace_txt2img.csv` is excluded by `!**/*.csv`
- `dream_layer_backend/inference_traces/inference_trace_img2img.csv` is excluded by `!**/*.csv`
- `dream_layer_backend/inference_traces/inference_trace_text2img.csv` is excluded by `!**/*.csv`
- `dream_layer_backend/inference_traces/inference_trace_txt2img.csv` is excluded by `!**/*.csv`
- `dream_layer_backend/test_image/examjam.png` is excluded by `!**/*.png`
- `dream_layer_backend/test_image/test_image.png` is excluded by `!**/*.png`
- `dream_layer_frontend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (12)
- `.python-version` (1 hunks)
- `dream_layer_backend/img2img_server.py` (3 hunks)
- `dream_layer_backend/img2img_workflow.py` (3 hunks)
- `dream_layer_backend/test_image/base64conversion.py` (1 hunks)
- `dream_layer_backend/test_image/test_img_endpoints.py` (1 hunks)
- `dream_layer_backend/txt2img_server.py` (3 hunks)
- `dream_layer_backend/txt2img_workflow.py` (4 hunks)
- `dream_layer_backend/workflow_dumps/last_workflow.json` (1 hunks)
- `dream_layer_frontend/package.json` (2 hunks)
- `dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx` (6 hunks)
- `dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx` (3 hunks)
- `dream_layer_frontend/vite.config.ts.timestamp-1755070710337-0629aa119849b.mjs` (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (4)
dream_layer_backend/img2img_server.py (3)
dream_layer_backend/txt2img_server.py (1)
- `get_allowed_checkpoints` (120-128)

dream_layer_backend/img2img_workflow.py (1)

- `transform_to_img2img_workflow` (41-271)

dream_layer_backend/shared_utils.py (1)

- `send_to_comfyui` (161-223)
dream_layer_backend/img2img_workflow.py (1)
dream_layer_backend/txt2img_workflow.py (1)
- `get_available_checkpoints` (25-33)
dream_layer_backend/txt2img_workflow.py (1)
dream_layer_backend/img2img_workflow.py (1)
- `get_available_checkpoints` (25-39)
dream_layer_backend/txt2img_server.py (3)
dream_layer_backend/shared_utils.py (1)
- `send_to_comfyui` (161-223)

dream_layer_backend/img2img_server.py (1)

- `get_allowed_checkpoints` (167-175)

dream_layer_backend/txt2img_workflow.py (1)

- `transform_to_txt2img_workflow` (35-280)
🪛 Ruff (0.12.2)
dream_layer_backend/img2img_workflow.py
259-259: f-string without any placeholders
Remove extraneous f prefix
(F541)
dream_layer_backend/txt2img_workflow.py
5-5: Redefinition of unused json from line 1
Remove definition: json
(F811)
dream_layer_backend/txt2img_server.py
172-172: Undefined name total_time
(F821)
🔇 Additional comments (11)
dream_layer_backend/workflow_dumps/last_workflow.json (1)
39-41: Confirm batch_size alignment with “time per image” metric
- batch_size is set to 4. Ensure the “time per image” displayed in the frontend divides by the actual number of images produced per inference.
If you need, I can scan server code to confirm where you compute per-image time and adjust if needed.
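To make the concern concrete: dividing by the request's `batch_size * batch_count` can misreport the metric when a run returns fewer images than requested (or zero). A minimal sketch of a safer per-image calculation — the helper name is hypothetical, not code from this PR — divides by the count of images actually returned and guards against zero:

```python
from typing import Optional


def time_per_image(total_time_s: float, images_generated: int) -> Optional[float]:
    """Per-image latency based on images actually produced.

    Returns None when no images were generated (e.g. a failed run),
    so callers can render "N/A" instead of raising ZeroDivisionError.
    """
    if images_generated <= 0:
        return None
    return total_time_s / images_generated
```

With `batch_size=4` and `batch_count=2`, a 12-second run yields `time_per_image(12.0, 8) == 1.5`, while a failed run (`images_generated=0`) yields `None` rather than an exception.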
dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx (1)
272-288: Good implementation of the metrics badge component.

The MetricsBadge component is well-structured with proper truncation for long GPU names and informative tooltips. This provides a clean and professional UI for displaying performance metrics.
dream_layer_backend/img2img_server.py (1)
160-189: Good implementation of dynamic checkpoint validation.

The checkpoint validation logic properly handles missing or invalid checkpoints with appropriate fallbacks and error handling.
dream_layer_backend/test_image/test_img_endpoints.py (4)
29-42: Base64 cache helper looks good

Reads cached base64 when present and falls back to encoding the PNG, persisting the result. Clear and robust.
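The cache-or-encode pattern used by the test can be sketched roughly as follows (function name and paths are hypothetical, for illustration only):

```python
import base64
from pathlib import Path


def load_image_base64(png_path: Path, cache_path: Path) -> str:
    """Return base64 text for png_path, reusing cache_path when present."""
    if cache_path.exists():
        # Cached: skip re-reading and re-encoding the PNG
        return cache_path.read_text().strip()
    encoded = base64.b64encode(png_path.read_bytes()).decode("ascii")
    cache_path.write_text(encoded)  # persist for the next run
    return encoded
```

The first call pays the encoding cost and writes the text file; subsequent runs read the cached text directly, which is why the smoke test stays fast on repeat invocations.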
56-63: Sane checkpoint fallback behavior

Selecting the first available checkpoint or proceeding without one is pragmatic for a smoke test script.
66-81: TXT2IMG payload aligns with backend expectations

Fields and types match the server-side transformer. Including `ckpt_name` and `denoise` is consistent with the new dynamic checkpoint flow.
86-101: IMG2IMG payload looks correct

Includes base64 input and standard generation params. Mirrors server expectations.
dream_layer_backend/txt2img_server.py (4)
32-37: Inference traces directory and CSV path initialization looks solid

Creates the directory if missing and centralizes the file path. Straightforward and reliable.
41-48: CSV header creation is correct

Ensures the file exists and writes headers exactly once. Good defensive setup.
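The write-header-once pattern can be isolated into a small helper; a rough sketch (names hypothetical), using `csv.DictWriter` as also suggested for the img2img server:

```python
import csv
import os

TRACE_FIELDS = ["timestamp", "total_time_s", "images_generated",
                "time_per_image_s", "gpu_name", "driver_version", "ckpt_name"]


def append_trace(csv_path: str, row: dict) -> None:
    """Append one inference-trace row; write the header only for a new file."""
    is_new = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=TRACE_FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(row)
```

Opening in append mode and checking existence once per call keeps the header from being duplicated across server restarts, which matters because downstream CSV consumers assume a single header line.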
181-187: Metrics augmentation is aligned; will benefit from the `total_time` fix

Once `time_per_image` is fixed above, this payload matches frontend expectations and CSV export goals.
75-87: CORS and logging: looks reasonable for local dev

Permissive origins are constrained to localhost; debug-friendly prints are okay for now.
```python
# Initialize logger
logger = logging.getLogger(__name__)


def get_available_checkpoints():
    logger.info(f"Current __file__ path: {__file__}")
    root_dir = Path(__file__).resolve().parent.parent
    logger.info(f"Resolved root_dir: {root_dir}")

    checkpoints_dir = root_dir / "ComfyUI" / "models" / "checkpoints"
    logger.info(f"Looking for checkpoints in: {checkpoints_dir}")

    if not checkpoints_dir.exists():
        logger.error(f"Checkpoints directory does not exist: {checkpoints_dir}")
        return []

    models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
    logger.info(f"Found checkpoint files: {models}")
    return models
```
🛠️ Refactor suggestion
Consider extracting the shared checkpoint discovery logic.
Both img2img_workflow.py and txt2img_workflow.py have nearly identical get_available_checkpoints() implementations. This violates the DRY principle and could lead to maintenance issues.
Consider moving this function to a shared module:
```python
# In a new file: dream_layer_backend/shared_checkpoint_utils.py
from pathlib import Path
import logging

logger = logging.getLogger(__name__)


def get_available_checkpoints():
    """Get list of available checkpoint files from the models directory."""
    root_dir = Path(__file__).resolve().parent.parent
    checkpoints_dir = root_dir / "ComfyUI" / "models" / "checkpoints"
    logger.info(f"Looking for checkpoints in: {checkpoints_dir}")
    if not checkpoints_dir.exists():
        logger.error(f"Checkpoints directory does not exist: {checkpoints_dir}")
        return []
    try:
        models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
        logger.info(f"Found checkpoint files: {models}")
        return models
    except Exception as e:
        logger.error(f"Failed to list checkpoints: {e}")
        return []
```

Then import and use it in both workflow files.
🤖 Prompt for AI Agents
In dream_layer_backend/img2img_workflow.py around lines 23-39 the
get_available_checkpoints() logic is duplicated across workflows; extract it
into a shared module (e.g., dream_layer_backend/shared_checkpoint_utils.py) that
defines logger, computes root_dir = Path(__file__).resolve().parent.parent, sets
checkpoints_dir = root_dir/"ComfyUI"/"models"/"checkpoints", and returns a list
of filenames filtered by .safetensors and .ckpt while wrapping the directory
listing in a try/except that logs and returns [] on error; then replace the
local function in img2img_workflow.py (and in txt2img_workflow.py) with an
import from that shared module and call the shared get_available_checkpoints()
instead.
```python
workflow = inject_refiner_parameters(workflow, refiner_data)

print(f"✅ Workflow transformation complete")
```
Remove extraneous f-string prefix.
The f-string on line 259 doesn't contain any placeholders, making the f prefix unnecessary.
```diff
-print(f"✅ Workflow transformation complete")
+print("✅ Workflow transformation complete")
```

📝 Committable suggestion
+ print("✅ Workflow transformation complete")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-print(f"✅ Workflow transformation complete")
+print("✅ Workflow transformation complete")
```
🧰 Tools
🪛 Ruff (0.12.2)
259-259: f-string without any placeholders
Remove extraneous f prefix
(F541)
🤖 Prompt for AI Agents
In dream_layer_backend/img2img_workflow.py around line 259, the print statement
uses an unnecessary f-string prefix for a literal string; remove the leading "f"
so the statement prints the same message without treating it as an f-string.
Replace the f-prefixed string with a normal string literal (e.g., change
print(f"...") to print("...")).
```python
# Config
TXT2IMG_API_HOST = "http://127.0.0.1:5001"
IMG2IMG_API_HOST = "http://127.0.0.1:5004"
```
💡 Verification agent
🧩 Analysis chain
Add HTTP timeouts to avoid hanging tests
Network calls can hang if the server is down or slow. Add an explicit timeout to both requests and centralize the timeout value near the host config.
```diff
 TXT2IMG_API_HOST = "http://127.0.0.1:5001"
 IMG2IMG_API_HOST = "http://127.0.0.1:5004"
+REQUEST_TIMEOUT = 60  # seconds

-r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload)
+r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload, timeout=REQUEST_TIMEOUT)

-r2 = requests.post(f"{IMG2IMG_API_HOST}/api/img2img", json=img2img_payload)
+r2 = requests.post(f"{IMG2IMG_API_HOST}/api/img2img", json=img2img_payload, timeout=REQUEST_TIMEOUT)
```

Run this quick check to verify the current file locations of your test assets (ensures the earlier path fix is correct):
Also applies to: 82-84, 102-104
🏁 Script executed:
```bash
#!/bin/bash
fd -a test_image.png -HI
fd -a base64_txt_test_image.txt -HI
```

Length of output: 205
Add HTTP timeouts to avoid hanging tests
Centralize a REQUEST_TIMEOUT constant near the host configs and pass it to all requests.post calls in the test to prevent hangs.
- File: dream_layer_backend/test_image/test_img_endpoints.py
- Add REQUEST_TIMEOUT near TXT2IMG_API_HOST / IMG2IMG_API_HOST (around lines 11–14)
- Update requests.post calls at ~lines 82–84 and ~102–104 to include timeout=REQUEST_TIMEOUT
Verified test assets (from your check):
- dream_layer_backend/test_image/test_image.png
- dream_layer_backend/test_image/base64_txt_test_image.txt
Suggested diffs:
```diff
 TXT2IMG_API_HOST = "http://127.0.0.1:5001"
 IMG2IMG_API_HOST = "http://127.0.0.1:5004"
+REQUEST_TIMEOUT = 60  # seconds

-r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload)
+r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload, timeout=REQUEST_TIMEOUT)

-r2 = requests.post(f"{IMG2IMG_API_HOST}/api/img2img", json=img2img_payload)
+r2 = requests.post(f"{IMG2IMG_API_HOST}/api/img2img", json=img2img_payload, timeout=REQUEST_TIMEOUT)
```
🤖 Prompt for AI Agents
In dream_layer_backend/test_image/test_img_endpoints.py around lines 11 to 14,
tests call external endpoints without timeouts which can hang; add a
REQUEST_TIMEOUT = 60 (seconds) constant next to TXT2IMG_API_HOST and
IMG2IMG_API_HOST, then update the two requests.post calls (around lines ~82–84
and ~102–104) to pass timeout=REQUEST_TIMEOUT so each POST uses the centralized
timeout.
```python
TEST_IMAGE_DIR = Path(__file__).parent / "test_image"
PNG_PATH = TEST_IMAGE_DIR / "test_image.png"
BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt"
```
Fix incorrect test image directory path (double “test_image” nesting)
Path(__file__).parent already points to .../test_image/. Appending "test_image" creates a nested non-existent folder (test_image/test_image/...), making file lookups fail.
```diff
-TEST_IMAGE_DIR = Path(__file__).parent / "test_image"
+TEST_IMAGE_DIR = Path(__file__).parent
 PNG_PATH = TEST_IMAGE_DIR / "test_image.png"
 BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-TEST_IMAGE_DIR = Path(__file__).parent / "test_image"
+TEST_IMAGE_DIR = Path(__file__).parent
 PNG_PATH = TEST_IMAGE_DIR / "test_image.png"
 BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt"
```
🤖 Prompt for AI Agents
In dream_layer_backend/test_image/test_img_endpoints.py around lines 15 to 18,
the TEST_IMAGE_DIR incorrectly appends "test_image" to Path(__file__).parent
creating a nested non-existent directory; remove the extra "test_image" so
TEST_IMAGE_DIR is Path(__file__).parent and update PNG_PATH and BASE64_PATH to
be resolved from that directory (e.g., PNG_PATH = TEST_IMAGE_DIR /
"test_image.png", BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt").
```python
import csv
import time
import pynvml
from shared_utils import send_to_comfyui
```
🛠️ Refactor suggestion
Avoid hard dependency on pynvml at import time
Importing pynvml at module import time will crash the server if the package isn’t installed or NVML is unavailable. Move the import into the try-block where you already handle fallback to CPU.
```diff
-import pynvml
 from shared_utils import send_to_comfyui
```

```diff
     try:
+        import pynvml  # lazy import to avoid hard dependency at module load
         pynvml.nvmlInit()
         gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)
         gpu_name = pynvml.nvmlDeviceGetName(gpu_handle).decode()
         driver_version = pynvml.nvmlSystemGetDriverVersion().decode()
     except Exception:
         gpu_name = "CPU"
         driver_version = "N/A"
```

Also applies to: 151-160
```python
# Dynamically determine the model name that's being used and validate
requested_model = data.get("model_name")
available_models = get_available_checkpoints()
if not available_models:
    raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory")  # could be changed to juggernautXL_v8Rundiffusion.safetensors

# Use requested model if valid, else fallback to detected
if requested_model and requested_model in available_models:
    model_name = requested_model
else:
    # fallback to first available checkpoint and log the fallback
    model_name = available_models[0]
    logger.warning(f"Requested model '{requested_model}' not found. Falling back to '{model_name}'.")

#model_name = data.get('model_name', 'juggernautXL_v8Rundiffusion.safetensors')  # was hardcoded

# Check if it's a closed-source model (DALL-E, FLUX, Ideogram, etc.)
closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro', 'flux-dev', 'ideogram-v3']
```
Remove redundant closed_source_models declarations.
The closed_source_models list is defined twice (lines 83-84 and 104), and line 101 contains commented old code that should be removed.
```diff
-# Check if it's a closed-source model (DALL-E, FLUX, Ideogram, Stability AI, etc.)
-closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro',
-                        'flux-dev', 'ideogram-v3', 'stability-sdxl', 'stability-sd-turbo']
-
 # Dynamically determine the model name that's being used and validate
 requested_model = data.get("model_name")
 available_models = get_available_checkpoints()
 if not available_models:
-    raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory")  # could be changed to juggernautXL_v8Rundiffusion.safetensors
+    raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory. Please ensure at least one .safetensors or .ckpt file is available.")

 # Use requested model if valid, else fallback to detected
 if requested_model and requested_model in available_models:
     model_name = requested_model
 else:
     # fallback to first available checkpoint and log the fallback
     model_name = available_models[0]
     logger.warning(f"Requested model '{requested_model}' not found. Falling back to '{model_name}'.")

-#model_name = data.get('model_name', 'juggernautXL_v8Rundiffusion.safetensors')  # was hardcoded

 # Check if it's a closed-source model (DALL-E, FLUX, Ideogram, etc.)
 closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro', 'flux-dev', 'ideogram-v3']
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 # Dynamically determine the model name that's being used and validate
 requested_model = data.get("model_name")
 available_models = get_available_checkpoints()
 if not available_models:
-    raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory")  # could be changed to juggernautXL_v8Rundiffusion.safetensors
+    raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory. Please ensure at least one .safetensors or .ckpt file is available.")

 # Use requested model if valid, else fallback to detected
 if requested_model and requested_model in available_models:
     model_name = requested_model
 else:
     # fallback to first available checkpoint and log the fallback
     model_name = available_models[0]
     logger.warning(f"Requested model '{requested_model}' not found. Falling back to '{model_name}'.")

-#model_name = data.get('model_name', 'juggernautXL_v8Rundiffusion.safetensors')  # was hardcoded

 # Check if it's a closed-source model (DALL-E, FLUX, Ideogram, etc.)
 closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro', 'flux-dev', 'ideogram-v3']
```
🤖 Prompt for AI Agents
In dream_layer_backend/txt2img_workflow.py around lines 86 to 104, there is a
duplicated closed_source_models list and a commented-out old model_name
assignment; remove the redundant declaration and the commented legacy line so
only a single closed_source_models definition remains (move/keep the one that
best fits context), and ensure any usages reference that single list without
changing logic.
```json
"build": "vite build",
"build:dev": "vite build --mode development",
"lint": "eslint .",
"test": "jest",
```
💡 Verification agent
🧩 Analysis chain
Tests won't run without Jest/Vitest setup — consider switching to Vitest (recommended for Vite) or add missing Jest deps/config
- Current script uses Jest, but there are no Jest devDependencies or config. In a Vite project, Vitest integrates better and needs minimal setup.
Option A (recommended): switch to Vitest.
```diff
-"test": "jest",
+"test": "vitest --run",
```

Also add devDeps (apply in the devDependencies block):
+ "test": "vitest --run",Also add devDeps (apply in the devDependencies block):
- vitest
- @vitest/coverage-v8 (optional)
- jsdom (test environment)
- @testing-library/react and @testing-library/user-event (optional, for UI tests)
Option B (keep Jest): add and configure:
- jest, ts-jest, @types/jest, jest-environment-jsdom
- Add a jest.config.ts with ESM/TS transform and "testEnvironment": "jsdom"
Run to verify:
🏁 Script executed:
```bash
#!/bin/bash
# Check if any jest config exists and if vitest is present
rg -n -C2 -e '^\s*"test"\s*:\s*".*"' package.json
rg -n -C1 -e 'jest\.config\.(js|ts)|vitest\.config\.(js|ts)' -e 'defineConfig\('
rg -n -C2 -e '"@vitejs/plugin-react-swc"' dream_layer_frontend/**
```

Length of output: 2040
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "---- repo root package.json exists? ----"
if [ -f package.json ]; then
  echo "root package.json found"
else
  echo "root package.json not found"
fi
echo
echo "---- dream_layer_frontend/package.json (first 240 lines) ----"
sed -n '1,240p' dream_layer_frontend/package.json || true
echo
echo "---- Search for test script across repo ----"
rg -n --hidden '"test"\s*:' || true
echo
echo "---- Search for jest/vitest mentions in package.json files ----"
rg -n --hidden --glob '!**/node_modules/**' '"jest"' || true
rg -n --hidden --glob '!**/node_modules/**' '"vitest"' || true
echo
echo "---- Search for config files or defineConfig usage ----"
rg -n --hidden 'jest\.config|vitest\.config|defineConfig\(' || true
echo
echo "---- Show dream_layer_frontend/vite.config.ts (if present) ----"
sed -n '1,240p' dream_layer_frontend/vite.config.ts || true
```

Length of output: 5070
Tests reference Jest but no Jest deps/config — switch to Vitest (recommended) or add Jest deps/config
Short: dream_layer_frontend/package.json defines "test": "jest" but there are no Jest packages or config in the repo. The project uses Vite (vite.config.ts), so Vitest is the simpler, recommended option.
Files to update
- dream_layer_frontend/package.json — scripts/devDependencies
- (optional) add vitest.config.ts or jest.config.ts at repo root or dream_layer_frontend/
Recommended change (Option A — Vitest, preferred)
- "test": "jest",
+ "test": "vitest --run",Add to devDependencies (example list):
- vitest
- @vitest/coverage-v8 (optional)
- jsdom (or happy-dom) — test environment
- @testing-library/react (optional)
- @testing-library/user-event (optional)
Alternative (Option B — Keep Jest)
- Add devDependencies:
- jest
- ts-jest
- @types/jest
- jest-environment-jsdom
- Add jest.config.ts configured for ESM/TypeScript and "testEnvironment": "jsdom"
Evidence
- dream_layer_frontend/package.json contains "test": "jest" but no jest/vitest deps.
- dream_layer_frontend/vite.config.ts uses defineConfig and @vitejs/plugin-react-swc (Vite project).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-"test": "jest",
+"test": "vitest --run",
```
🤖 Prompt for AI Agents
dream_layer_frontend/package.json lines 11: the "test": "jest" script references
Jest but the repo has no Jest deps/config and uses Vite, so replace with Vitest:
change the test script to use "vitest" (or "vitest --run" for CI) and add
devDependencies: vitest, @vitest/coverage-v8 (optional), jsdom or happy-dom,
@testing-library/react (optional), @testing-library/user-event (optional); then
add a minimal vitest.config.ts in dream_layer_frontend/ (or repo root)
configuring the test environment to jsdom and integrating with Vite;
alternatively, if you must keep Jest, change the script to jest and add
devDependencies jest, ts-jest, @types/jest, jest-environment-jsdom plus a
jest.config.ts configured for ESM/TypeScript and jsdom.
```diff
 "typescript": "^5.5.3",
 "typescript-eslint": "^8.0.1",
-"vite": "^5.4.1"
+"vite": "^7.1.2"
```
💡 Verification agent
🧩 Analysis chain
Vite 7 upgrade: verify plugin compatibility and Node engine alignment
- You bumped vite to ^7.1.2 but kept @vitejs/plugin-react-swc at ^3.5.0. Ensure compatibility (upgrade plugin if needed).
- Confirm your Node version meets Vite 7 engine requirements across dev/CI.
Suggested quick check:
🏁 Script executed:
```bash
#!/bin/bash
# Inspect versions
rg -n '"vite"\s*:' dream_layer_frontend/package.json
rg -n '"@vitejs/plugin-react-swc"\s*:' dream_layer_frontend/package.json
# If you stick with Jest, check it's installed; if Vitest, check it's added
rg -n '"jest"|"vitest"' dream_layer_frontend/package.json
```

Length of output: 217
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "Listing package.json files (repo root and subfolders):"
if command -v fd >/dev/null 2>&1; then
  fd -HI -t f package.json || true
else
  find . -name package.json -print || true
fi
echo "-----"
echo "Showing dream_layer_frontend/package.json (if it exists):"
if [ -f dream_layer_frontend/package.json ]; then
  sed -n '1,240p' dream_layer_frontend/package.json || true
else
  echo "FILE_NOT_FOUND: dream_layer_frontend/package.json"
fi
echo "-----"
echo "Searching repo for @vitejs/plugin-react-swc occurrences:"
rg -n "@vitejs/plugin-react-swc" -S || echo "no matches"
echo "Searching for vite occurrences in repo:"
rg -n '"vite"\s*:' -S || echo "no matches"
echo "Searching for engines or volta/node version hints in dream_layer_frontend/package.json:"
if [ -f dream_layer_frontend/package.json ]; then
  rg -n '"engines"|"volta"|"node"' dream_layer_frontend/package.json -S || true
else
  echo "skip"
fi
echo "Searching CI/workflow files for node-version hints (.github/workflows, Dockerfile, .nvmrc):"
rg -n "node-version|uses: actions/setup-node|FROM node:|\.nvmrc" -S || echo "no matches"
```

Length of output: 5095
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
LOCK="dream_layer_frontend/package-lock.json"
PKG="dream_layer_frontend/package.json"
echo "Printing vite.config.ts:"
sed -n '1,240p' dream_layer_frontend/vite.config.ts || true
echo "-----"
python - <<'PY'
import json, sys

p = "dream_layer_frontend/package-lock.json"
with open(p, 'r', encoding='utf-8') as f:
    data = json.load(f)
print("Top-level lockfileVersion:", data.get("lockfileVersion"))

# Search 'packages' and 'dependencies' for any reference to 'vite' in requires/peerDependencies
def scan(obj, path=""):
    hits = []
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k in ("dependencies", "requires", "peerDependencies", "peerDependenciesMeta"):
                if isinstance(v, dict) and "vite" in v:
                    hits.append((path or "/", k, v.get("vite")))
            # recurse
            hits += scan(v, path + "/" + k)
    elif isinstance(obj, list):
        for i, x in enumerate(obj):
            hits += scan(x, path + f"[{i}]")
    return hits

hits = []
# check packages (if present)
if "packages" in data:
    for pkgpath, pkginfo in data["packages"].items():
        # pkgpath like "node_modules/xxx" or ""
        deps = pkginfo.get("dependencies", {}) or {}
        peer = pkginfo.get("peerDependencies", {}) or {}
        req = pkginfo.get("requires", {}) or {}
        if "vite" in deps or "vite" in peer or "vite" in req:
            val = deps.get("vite") or peer.get("vite") or req.get("vite")
            print(f"packages entry: {pkgpath} -> vite: {val}")
# check top-level dependencies map
if "dependencies" in data:
    for name, info in data["dependencies"].items():
        # info may contain requires/peerDependencies
        req = info.get("requires", {}) or {}
        peer = info.get("peerDependencies", {}) or {}
        if "vite" in req or "vite" in peer:
            val = req.get("vite") or peer.get("vite")
            print(f"dependencies entry: {name} -> vite: {val}")
# show specific @vitejs/plugin-react-swc entry if present
key = "@vitejs/plugin-react-swc"
if "dependencies" in data and key in data["dependencies"]:
    info = data["dependencies"][key]
    print("\n@vitejs/plugin-react-swc dependency entry (top-level):")
    for k in ("version", "resolved", "requires", "peerDependencies", "dependencies"):
        if k in info:
            print(f"{k}: {info[k]}")
else:
    print("\nNo top-level dependencies entry for @vitejs/plugin-react-swc found in lockfile 'dependencies'. Will search 'packages' section.")
    if "packages" in data:
        for pkgpath, pkginfo in data["packages"].items():
            if pkgpath.endswith("/node_modules/@vitejs/plugin-react-swc"):
                print("\nFound packages entry for:", pkgpath)
                for k in ("version", "resolved", "dependencies", "peerDependencies", "requires"):
                    if k in pkginfo:
                        print(f"{k}: {pkginfo[k]}")
                break
PY
```

Length of output: 818
🌐 Web query:
What is the Node.js engine requirement for Vite 7?
💡 Result:
Vite 7 requires Node.js version 20.19 or higher, or 22.12 or higher. Support for Node.js 18 has been dropped, as it reached its end-of-life in April 2025[1][4][5][6].
- Minimum supported versions: Node.js 20.19+ or Node.js 22.12+.
- Attempting to use earlier versions, such as 20.18.1, will produce an EBADENGINE error and will not work[8].
- Node.js 18 is no longer supported as of Vite 7[1][2][5][6].
This requirement ensures compatibility with Vite’s shift to ESM-only distributions and improved browser API targeting[1][5]. If your package manager warns about your Node version, upgrade to at least the required minimum[4].
Citations:
- 1: https://vite.dev/blog/announcing-vite7
- 2: https://syntackle.com/blog/vite-7-is-here/
- 3: https://laracasts.com/discuss/channels/laravel/vite-not-found-in-production
- 4: https://vite.dev/guide/
- 5: https://vite.dev/guide/migration
- 6: https://dev.to/aggarwal_gaurav_1012/vite-70-is-here-rust-powered-speed-smarter-tooling-a-cleaner-build-experience-1k9j
- 7: https://vite.dev/releases
- 8: Warning Report: EBADENGINE when installing vite@7.0.0 with Node.js v20.18.1 (google-gemini/gemini-cli#2870)
- 9: https://nodejs.org/en/about/previous-releases
- 10: https://v3.vitejs.dev/guide/
Action required — Vite 7 bump introduces compatibility & Node engine issues
Short: you bumped vite to "^7.1.2" (dream_layer_frontend/package.json) but the installed @vitejs/plugin-react-swc in the lockfile requires vite "^4 || ^5 || ^6"; plus Vite 7 requires Node >= 20.19 (or >= 22.12). These must be resolved before merging.
What I verified
- package.json: vite => "^7.1.2" and devDependency "@vitejs/plugin-react-swc": "^3.5.0".
- vite.config.ts imports and uses @vitejs/plugin-react-swc.
- package-lock shows node_modules/@vitejs/plugin-react-swc -> vite: ^4 || ^5 || ^6 (plugin not compatible with Vite 7).
- No "engines", .nvmrc, or CI node-version settings found in the repo.
Action items (fix required)
- Update plugin or revert Vite
  - Upgrade @vitejs/plugin-react-swc to a release that explicitly supports Vite 7 (verify on npm/changelog), OR
  - Revert Vite to a 6.x range that matches the plugin (e.g., ^6.x).
  - Files to change: dream_layer_frontend/package.json (devDependencies); then run a dependency install to update the lockfile.
- Pin Node version for dev/CI
  - Ensure dev machines and CI run Node >=20.19 (or >=22.12). Add one or more of:
    - "engines": { "node": ">=20.19" } in dream_layer_frontend/package.json
    - an .nvmrc or Volta config
    - a CI workflow update (actions/setup-node node-version)
- Verify
  - After updating, run install in a clean environment and run the app/tests to confirm no EBADENGINE or peer-dependency errors.
Evidence: package-lock packages entry shows @vitejs/plugin-react-swc requires vite ^4 || ^5 || ^6; Vite 7 requires Node >=20.19 / 22.12.
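As a minimal sketch, the engines pin suggested above could look like the excerpt below (in dream_layer_frontend/package.json; the surrounding fields are assumptions, only the "engines" key is the suggestion):

```json
{
  "name": "dream_layer_frontend",
  "engines": {
    "node": ">=20.19"
  }
}
```

An accompanying .nvmrc containing a single line such as `20.19.0` would pin local dev shells via nvm; CI would still need actions/setup-node set to the same version.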
🤖 Prompt for AI Agents
In dream_layer_frontend/package.json around line 83, Vite was bumped to "^7.1.2"
but the installed @vitejs/plugin-react-swc in the lockfile is incompatible
(supports only ^4 || ^5 || ^6) and Vite 7 also raises Node engine requirements;
fix by either upgrading @vitejs/plugin-react-swc to a version that explicitly
supports Vite 7 (verify on npm/changelog) or revert the vite version to a 6.x
range that matches the plugin, then update devDependencies in
dream_layer_frontend/package.json and regenerate the lockfile by reinstalling
deps; also pin Node for dev/CI by adding an "engines": { "node": ">=20.19" } to
package.json or adding an .nvmrc/volta file and updating CI workflow
(actions/setup-node) to the same Node version, then run a clean install and full
build/tests to confirm no EBADENGINE or peer dependency errors.
```tsx
const exportImg2ImgMetricsToCSV = () => {
  if (!metrics) return;

  const headers = ['elapsed_time_sec', 'time_per_image_sec', 'gpu', 'driver_version'];
  const values = [
    metrics.elapsed_time_sec,
    metrics.time_per_image_sec,
    metrics.gpu,
    metrics.driver_version
  ];

  const csv = [headers.join(','), values.join(',')].join('\n');
  const blob = new Blob([csv], { type: 'text/csv;charset=utf-8;' });

  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.setAttribute('download', `inference_trace_img2img.csv`);
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
};
```
🛠️ Refactor suggestion
Consider extracting the CSV export logic to a shared utility.
Both Img2ImgPage and Txt2ImgPage have nearly identical CSV export functions. This violates the DRY principle.
Create a shared utility function:
```ts
// In a new file: src/utils/metricsExport.ts
export interface InferenceMetrics {
  elapsed_time_sec: number;
  time_per_image_sec: number;
  gpu: string;
  driver_version: string;
}

export const exportMetricsToCSV = (metrics: InferenceMetrics, filename: string) => {
  if (!metrics) return;
  const headers = ['elapsed_time_sec', 'time_per_image_sec', 'gpu', 'driver_version'];
  const values = [
    metrics.elapsed_time_sec,
    metrics.time_per_image_sec,
    metrics.gpu,
    metrics.driver_version
  ];
  const csv = [headers.join(','), values.join(',')].join('\n');
  const blob = new Blob([csv], { type: 'text/csv;charset=utf-8;' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.setAttribute('download', filename);
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
};
```

Then use it in both components:
```ts
import { exportMetricsToCSV } from '@/utils/metricsExport';

// In the component
const handleExportMetrics = () => {
  exportMetricsToCSV(metrics, 'inference_trace_img2img.csv');
};
```

🤖 Prompt for AI Agents
In dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx around lines
290-310, the CSV export logic is duplicated and should be moved to a shared
utility; create a new src/utils/metricsExport.ts that exports an
InferenceMetrics interface and an exportMetricsToCSV(metrics, filename) function
which builds the headers/values, creates the CSV Blob, makes an object URL, and
triggers download, then replace the inline exportImg2ImgMetricsToCSV with a
small wrapper that imports and calls exportMetricsToCSV(metrics,
'inference_trace_img2img.csv') (ensure null-checks for metrics remain).
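One way to take the shared-utility idea a step further is to split the pure CSV construction from the DOM download side effect, so the string building can be unit-tested without a browser. A sketch under that assumption (the `buildMetricsCsv` name is hypothetical, not part of the PR):

```typescript
export interface InferenceMetrics {
  elapsed_time_sec: number;
  time_per_image_sec: number;
  gpu: string;
  driver_version: string;
}

// Pure function: builds the CSV text, easy to assert on in tests.
export const buildMetricsCsv = (metrics: InferenceMetrics): string => {
  const headers = ['elapsed_time_sec', 'time_per_image_sec', 'gpu', 'driver_version'];
  const values = [
    metrics.elapsed_time_sec,
    metrics.time_per_image_sec,
    metrics.gpu,
    metrics.driver_version,
  ];
  return [headers.join(','), values.join(',')].join('\n');
};

// Thin wrapper that performs the browser download side effect.
export const exportMetricsToCSV = (metrics: InferenceMetrics, filename: string): void => {
  const blob = new Blob([buildMetricsCsv(metrics)], { type: 'text/csv;charset=utf-8;' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.setAttribute('download', filename);
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
};
```

With this split, a test can assert on `buildMetricsCsv` directly, and only the thin wrapper touches `document`.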
```tsx
{metrics && (
  <div className="flex items-center space-x-3 p-2 rounded-lg bg-gray-100 dark:bg-gray-800 border border-gray-200 dark:border-gray-700">
    <div className="flex flex-col">
      <span className="font-semibold text-gray-800 dark:text-gray-200">
        {`${metrics.time_per_image_sec?.toFixed(2) ??
          (metrics.elapsed_time_sec / (coreSettings.batch_size * coreSettings.batch_count)).toFixed(2)
        } s per image`}
      </span>
      <span className="text-gray-600 dark:text-gray-400">
        {`${metrics.gpu} (Driver ${metrics.driver_version})`}
      </span>
    </div>
    <button
      onClick={exportTxt2ImgMetricsToCSV}
      className="ml-auto rounded-md border border-input bg-white dark:bg-gray-700 px-3 py-1 text-xs font-medium text-gray-800 dark:text-gray-200 transition-colors hover:bg-gray-200 dark:hover:bg-gray-600"
    >
      Download CSV
    </button>
  </div>
)}
```
🛠️ Refactor suggestion
UI inconsistency between Txt2Img and Img2Img metrics display.
The Txt2ImgPage displays metrics in a more complex nested structure with separate text elements, while Img2ImgPage uses a simpler, cleaner MetricsBadge component. This inconsistency could confuse users.
Consider using the same MetricsBadge component pattern from Img2ImgPage for consistency:
```diff
- {/* Metrics Badge */}
- {metrics && (
-   <div className="flex items-center space-x-3 p-2 rounded-lg bg-gray-100 dark:bg-gray-800 border border-gray-200 dark:border-gray-700">
-     <div className="flex flex-col">
-       <span className="font-semibold text-gray-800 dark:text-gray-200">
-         {`${metrics.time_per_image_sec?.toFixed(2) ??
-           (metrics.elapsed_time_sec / (coreSettings.batch_size * coreSettings.batch_count)).toFixed(2)
-         } s per image`}
-       </span>
-       <span className="text-gray-600 dark:text-gray-400">
-         {`${metrics.gpu} (Driver ${metrics.driver_version})`}
-       </span>
-     </div>
-     <button
-       onClick={exportTxt2ImgMetricsToCSV}
-       className="ml-auto rounded-md border border-input bg-white dark:bg-gray-700 px-3 py-1 text-xs font-medium text-gray-800 dark:text-gray-200 transition-colors hover:bg-gray-200 dark:hover:bg-gray-600"
-     >
-       Download CSV
-     </button>
-   </div>
- )}
+ {metrics && (
+   <div className="flex items-center gap-2">
+     <button
+       onClick={exportTxt2ImgMetricsToCSV}
+       className="rounded-md border border-input bg-background px-4 py-2 text-sm font-medium transition-colors hover:bg-accent hover:text-accent-foreground"
+     >
+       Download CSV
+     </button>
+     <MetricsBadge
+       elapsedTimeSec={metrics.time_per_image_sec ?? (metrics.elapsed_time_sec / Math.max(1, coreSettings.batch_size * coreSettings.batch_count))}
+       gpu={metrics.gpu}
+       driver={metrics.driver_version}
+     />
+   </div>
+ )}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
{metrics && (
  <div className="flex items-center gap-2">
    <button
      onClick={exportTxt2ImgMetricsToCSV}
      className="rounded-md border border-input bg-background px-4 py-2 text-sm font-medium transition-colors hover:bg-accent hover:text-accent-foreground"
    >
      Download CSV
    </button>
    <MetricsBadge
      elapsedTimeSec={metrics.time_per_image_sec ?? (metrics.elapsed_time_sec / Math.max(1, coreSettings.batch_size * coreSettings.batch_count))}
      gpu={metrics.gpu}
      driver={metrics.driver_version}
    />
  </div>
)}
```
🤖 Prompt for AI Agents
In dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx around lines 331 to
350, the metrics UI uses a custom nested div layout which diverges from
Img2ImgPage; replace this block with the same MetricsBadge component used by
Img2ImgPage to ensure consistent styling and behavior, passing the computed
time-per-image (use metrics.time_per_image_sec?.toFixed(2) with the same
fallback to elapsed_time_sec / (coreSettings.batch_size *
coreSettings.batch_count)), gpu and driver_version props, and keep the Download
CSV button as a child or adjacent action in the MetricsBadge pattern; also add
an import for MetricsBadge at the top if not already present.
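The zero-guard on the fallback divide that the suggestion introduces can live in one small helper, so both pages compute the per-image time identically. A sketch (the `perImageSeconds` name is hypothetical):

```typescript
// Per-image time: prefer the backend's value, otherwise derive it from the
// total elapsed time, guarding against a zero batch product.
export const perImageSeconds = (
  timePerImageSec: number | undefined,
  elapsedSec: number,
  batchSize: number,
  batchCount: number,
): number =>
  timePerImageSec ?? elapsedSec / Math.max(1, batchSize * batchCount);
```

The `Math.max(1, …)` keeps the badge from rendering `Infinity` when batch settings are momentarily zero or unset.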
Description
This PR implements Task 4: Compute Traces by capturing wall time per image for the inference block and recording device information such as GPU name and driver version. The frontend now displays a compact metrics badge showing elapsed time per image and GPU details. Additionally, metrics can be exported to CSV for further analysis. Variance tolerance is documented as part of the metrics.
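The trace capture described above can be illustrated with a rough backend sketch (not the PR's actual code; pynvml is an assumed optional dependency, and the helper name is hypothetical):

```python
import time


def run_with_trace(generate, num_images):
    """Time an inference call and attach best-effort device info."""
    start = time.perf_counter()
    images = generate()
    elapsed = time.perf_counter() - start

    gpu, driver = "CPU", "N/A"  # fallback when NVML is unavailable
    try:
        import pynvml  # optional; present only on NVIDIA hosts
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        name = pynvml.nvmlDeviceGetName(handle)
        gpu = name.decode() if isinstance(name, bytes) else name
        drv = pynvml.nvmlSystemGetDriverVersion()
        driver = drv.decode() if isinstance(drv, bytes) else drv
        pynvml.nvmlShutdown()
    except Exception:
        pass  # keep the CPU/N-A fallback

    return images, {
        "elapsed_time_sec": round(elapsed, 3),
        "time_per_image_sec": round(elapsed / max(1, num_images), 3),
        "gpu": gpu,
        "driver_version": driver,
    }
```

The metrics dict mirrors the CSV columns the frontend exports (elapsed_time_sec, time_per_image_sec, gpu, driver_version), so one row per inference can be appended to the trace file.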
Changes Made
- Txt2ImgPage updated to handle and display metrics after image generation.

Evidence Required ✅
UI Screenshot
Generated Image
Logs
Checklist
Summary by CodeRabbit
New Features
Tests
Chores