Conversation
```
TARSy alert analysis quality. Supports placeholder substitution for:
- {{SESSION_CONVERSATION}}: Full conversation from History Service
- {{ALERT_DATA}}: Original alert data (JSON)
```
Alert data can be anything, not necessarily JSON. It's just text.
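Since the alert payload is opaque text, plain string substitution is enough for these placeholders; a minimal sketch, with `render_prompt` as a hypothetical helper name (not the actual code in this PR):

```python
def render_prompt(template: str, session_conversation: str, alert_data: str) -> str:
    """Fill the prompt template placeholders.

    Alert data is substituted as-is, as opaque text: it is NOT parsed
    as JSON, since the payload can be in any format.
    """
    return (
        template.replace("{{SESSION_CONVERSATION}}", session_conversation)
                .replace("{{ALERT_DATA}}", alert_data)
    )
```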
```
return client
```
```
async def _retry_database_operation_async(
```
I see this service duplicates a lot of functionality from the history service.
"History service" is a legacy name and we should probably rename it, but it's essentially our main DB service.
It works like this:
<Regular Service (like AlertService or ChatService)> -> History Service -> Repo(s) -> DB
The history service encapsulates high-level DB logic such as initialization, retries, etc. All DB operations funnel through the history service. Let's keep it this way and move all of this DB functionality into the history service.
I've created a PR to split the HistoryService into smaller sub-services: #250.
So you can add a new sub-service there (and update the history service facade).
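Centralizing retries in the history service means every caller gets the same policy for free. A minimal sketch of such a helper, assuming exponential backoff (the function name and parameters are illustrative, not the actual #250 API):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def retry_database_operation(operation, attempts: int = 3, base_delay: float = 0.5):
    """Run an async DB operation, retrying with exponential backoff.

    Living in the history service, this gives AlertService, ChatService,
    a scoring service, etc. one shared retry policy instead of each
    service re-implementing its own.
    """
    for attempt in range(1, attempts + 1):
        try:
            return await operation()
        except Exception as e:
            if attempt == attempts:
                raise  # out of attempts: surface the original error
            logger.warning(f"DB operation failed (attempt {attempt}/{attempts}): {e}")
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))
```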
```
extract_analysis = True
logger.info(f"Executing Turn 1: Score evaluation for score {score_id}")
for _ in range(3):  # 3 attempts to get the score
    conversation = await llm_client.generate_response(
```
We need to move most of this functionality into the agent package. See the current structure of agents (base agent and extensions) and the agent controllers.
I would create a scoring agent, very similar to SynthesisAgent, and two controllers for it (very similar to the Synthesis controllers). The rest of the functionality will be delegated to the corresponding LLM clients.
We also need a new configuration for chains, similar to the Synthesis configuration, where we can define things like LLM provider, strategy (react, native-thinking), etc.
Basically, the scoring service becomes an orchestrator: it delegates DB work to the history service, agent/LLM work to the agents/controllers, session history building to the history service, prompt building to the prompt builder, and so on.
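With that split, the orchestrator side of the 3-attempt loop shrinks to "ask the agent, try to parse a score, repeat". A sketch under stated assumptions: `parse_score` and the `Score: N` output format are hypothetical stand-ins for whatever contract the scoring agent actually uses:

```python
import asyncio
import re
from typing import Awaitable, Callable, Optional

def parse_score(text: str) -> Optional[int]:
    """Pull a 0-100 score out of the judge's response.

    The 'Score: N' format is illustrative, not the real output contract.
    """
    m = re.search(r"Score:\s*(\d{1,3})", text)
    if m and 0 <= int(m.group(1)) <= 100:
        return int(m.group(1))
    return None

async def run_scoring_turn(
    generate: Callable[[], Awaitable[str]], attempts: int = 3
) -> Optional[int]:
    """Orchestrator side of Turn 1: retry until the agent's response
    contains a parseable score, up to `attempts` times."""
    for _ in range(attempts):
        score = parse_score(await generate())
        if score is not None:
            return score
    return None  # all attempts exhausted without a valid score
```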
```
logger.error(f"Failed to get scoring repository: {str(e)}")
yield None
```

```
def _format_conversation_messages(
```
I created a new method in the history service that generates the investigation session history: #249.
You can use it as-is now. It's also used by the chat service, so we don't have to duplicate it across services/agents.
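Consuming the shared builder from a service then reduces to injecting the history service and calling through it. Everything below (the `Protocol`, the `build_session_history` name) is a hypothetical shape to illustrate the dependency direction, not the actual #249 API:

```python
from typing import Protocol

class SessionHistoryProvider(Protocol):
    """Hypothetical interface; the real method name in #249 may differ."""
    def build_session_history(self, session_id: str) -> str: ...

class ScoringService:
    """Orchestrator sketch: session history building is delegated to the
    history service, shared with ChatService, so no per-service
    message-formatting code is needed."""

    def __init__(self, history: SessionHistoryProvider):
        self._history = history

    def scoring_context(self, session_id: str) -> str:
        # One call replaces a local _format_conversation_messages helper.
        return self._history.build_session_history(session_id)
```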
This PR stacks on top of #244 and introduces a service that performs the actual conversations with the judge LLMs.
Part of the PR stack: