feat: Command to test saving pipeline results #1033
Conversation
✅ Deploy Preview for antenna-preview canceled.
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds a new Django management command, test_save_results, for manually exercising Pipeline.save_results() with results loaded from a JSON file.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as management command
    participant DB as Database
    participant PipelineSvc as Pipeline model
    participant Validator as PipelineResultsResponse
    User->>CLI: run test_save_results(pipeline_id, json_file, project_id)
    CLI->>DB: load Pipeline(pipeline_id)
    DB-->>CLI: Pipeline instance
    CLI->>CLI: read json_file -> results
    CLI->>Validator: validate(results)
    alt valid
        Validator-->>CLI: validated results
        CLI->>DB: create minimal Job(project_id)
        DB-->>CLI: Job created (job_id)
        CLI->>PipelineSvc: Pipeline.save_results(results, job_id)
        PipelineSvc-->>DB: persist results
        DB-->>PipelineSvc: success
        PipelineSvc-->>CLI: success
        CLI->>User: print success
    else invalid
        Validator-->>CLI: validation errors
        CLI->>User: print validation errors
    end
```
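The diagram maps onto a fairly small command. Below is a minimal sketch of how such a command could be structured; the import paths for Pipeline, Project, and PipelineResultsResponse and the exact Job fields are assumptions and not confirmed by this page, while the file path, the three arguments, and the save_results(results, job_id) call come from the walkthrough and review above.

```python
# Hypothetical sketch of ami/ml/management/commands/test_save_results.py.
# Import paths and Job field names are assumptions; argument list and the
# save_results(results, job_id) call follow the sequence diagram above.
import json

from django.core.management.base import BaseCommand, CommandError
from pydantic import ValidationError

from ami.jobs.models import Job                      # module path confirmed by the review (ami/jobs/models.py)
from ami.main.models import Project                  # assumed location
from ami.ml.models import Pipeline                   # assumed location
from ami.ml.schemas import PipelineResultsResponse   # assumed location


class Command(BaseCommand):
    help = "Manually test Pipeline.save_results() with results loaded from a JSON file."

    def add_arguments(self, parser):
        parser.add_argument("pipeline_id", type=int, help="Pipeline to attribute results to")
        parser.add_argument("json_file", type=str, help="Path to a JSON file with pipeline results")
        parser.add_argument("project_id", type=int, help="Project to create the test job in")

    def handle(self, *args, **options):
        try:
            pipeline = Pipeline.objects.get(pk=options["pipeline_id"])
        except Pipeline.DoesNotExist as err:
            raise CommandError(f"Pipeline {options['pipeline_id']} not found") from err

        with open(options["json_file"]) as f:
            raw = json.load(f)

        # Validate against the schema before touching the database.
        try:
            results = PipelineResultsResponse(**raw)
        except ValidationError as err:
            raise CommandError(f"Results failed validation:\n{err}") from err

        project = Project.objects.get(pk=options["project_id"])
        # Minimal job so save_results() has something to attach to (field names assumed).
        job = Job.objects.create(project=project, pipeline=pipeline, name="test_save_results")

        pipeline.save_results(results, job_id=job.pk)
        self.stdout.write(self.style.SUCCESS(f"Saved results under job {job.pk}"))
```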
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
- ami/ml/management/commands/test_save_results.py

🧰 Additional context used
🧬 Code graph analysis (1)
- ami/ml/management/commands/test_save_results.py (1)

🪛 Ruff (0.14.4)
ami/ml/management/commands/test_save_results.py
- 42-42: Unused method argument (ARG002)
- 51-51: Within an except clause, raise exceptions with raise ... from err or raise ... from None (B904)
- 51-51: Avoid specifying long messages outside the exception class (TRY003)
- 58-58: Avoid specifying long messages outside the exception class (TRY003)
- 67-67: Do not catch blind exception: Exception (BLE001)
- 68-68: Within an except clause, raise exceptions with raise ... from err or raise ... from None (B904)
- 68-68: Avoid specifying long messages outside the exception class (TRY003)
- 80-80: Within an except clause, raise exceptions with raise ... from err or raise ... from None (B904)
- 80-80: Avoid specifying long messages outside the exception class (TRY003)
- 96-96: Do not catch blind exception: Exception (BLE001)
- 97-97: Within an except clause, raise exceptions with raise ... from err or raise ... from None (B904)
- 97-97: Avoid specifying long messages outside the exception class (TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
🔇 Additional comments (3)
Pull Request Overview
This pull request adds a new Django management command test_save_results to facilitate manual testing and debugging of the Pipeline.save_results() method. The command loads pipeline results from a JSON file, validates them against the PipelineResultsResponse schema, creates a test job, and executes save_results() to store the data in the database.
Key Changes
- New management command that accepts pipeline ID, JSON file path, and project ID as arguments (an example invocation is sketched after this list)
- Validates input data using Pydantic schemas before processing
- Creates a test job associated with the specified project and pipeline
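Given the arguments listed above, running the command might look like the following; the positional-argument order, the IDs, and the file path are illustrative assumptions, not taken from the PR.

```python
# Hypothetical invocation; IDs and file path are placeholders.
# Shell equivalent:  python manage.py test_save_results <pipeline_id> <json_file> <project_id>
from django.core.management import call_command

call_command("test_save_results", 42, "pipeline_results.json", 7)
```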
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
ami/ml/management/commands/test_save_results.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
ami/ml/management/commands/test_save_results.py (1)
ami/jobs/models.py (1)
MLJob (316-513)
🪛 Ruff (0.14.4)
ami/ml/management/commands/test_save_results.py
43-43: Unused method argument: args
(ARG002)
52-52: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
52-52: Avoid specifying long messages outside the exception class
(TRY003)
59-59: Avoid specifying long messages outside the exception class
(TRY003)
68-68: Do not catch blind exception: Exception
(BLE001)
69-69: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
69-69: Avoid specifying long messages outside the exception class
(TRY003)
94-94: Do not catch blind exception: Exception
(BLE001)
95-95: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
95-95: Avoid specifying long messages outside the exception class
(TRY003)
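For readers unfamiliar with the B904 finding repeated above, here is a generic illustration of the fix it asks for; the snippet is not taken from this PR's code.

```python
# Generic illustration of Ruff rule B904 (not the actual command code):
# inside an `except` clause, re-raise with `from err` so the original error
# is preserved as the chained cause rather than looking like an unrelated failure.
import json


def load_results(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except json.JSONDecodeError as err:
        # Without `from err`, B904 flags this line.
        raise ValueError("results file is not valid JSON") from err
```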
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Agent
🔇 Additional comments (3)
ami/ml/management/commands/test_save_results.py (3)
12-21: LGTM! Imports are well-organized and all appear necessary for the command's functionality.
26-41: LGTM! Arguments are properly defined with appropriate types and helpful descriptions.
80-87: LGTM! Test job creation is properly configured with all necessary fields.
mihow left a comment
Thanks @carlosgjs! There is a stalled PR in draft (#916) that adds a comprehensive method for importing pipeline results from JSON files, for data that is processed externally. When that PR is resumed, we can probably merge the two implementations!
Summary
This pull request adds a new Django management command to make it easier to test the Pipeline.save_results() method using data from a JSON file. The command loads pipeline results from a JSON file, validates them, creates a test job, and then calls save_results() to store the results in the database.

List of Changes

- Added a test_save_results management command in ami/ml/management/commands/test_save_results.py to load results from a JSON file, validate them using the PipelineResultsResponse schema, create a test job, and call Pipeline.save_results() for manual testing and debugging (a validation-only sketch follows below).
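As a stand-alone illustration of the validation step described above: the schema import path is an assumption and the file name is a placeholder.

```python
# Hypothetical check of a results file against the Pydantic schema, without saving anything.
import json

from pydantic import ValidationError

from ami.ml.schemas import PipelineResultsResponse  # import path assumed

with open("pipeline_results.json") as f:
    raw = json.load(f)

try:
    results = PipelineResultsResponse(**raw)
except ValidationError as err:
    print(err)  # report schema errors instead of calling save_results()
else:
    print("Results are valid and ready for Pipeline.save_results()")
```

Screenshots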
Checklist
Summary by CodeRabbit