@oliveregger oliveregger commented Jan 19, 2026

Don't recreate a new context every time; use the cached one instead (the old behavior might be a memory leak?).

Summary by Sourcery

Enhancements:

  • Replace per-call FHIR R5 context creation with a cached context when generating JSON for AI interpretation during validation.

sourcery-ai bot commented Jan 19, 2026

Reviewer's Guide

Switches validation AI analysis serialization to use a cached R5 FhirContext instance instead of constructing a new one on each request, to reduce overhead while acknowledging potential memory implications.
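The pattern behind an accessor like `FhirContext.forR5Cached()` is a lazily populated, version-keyed cache of expensive-to-build context objects. The sketch below is a hypothetical illustration of that pattern only, not HAPI FHIR's actual implementation; `ContextCache` and `Context` are made-up names:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a version-keyed context cache, illustrating the
// pattern behind accessors like FhirContext.forR5Cached(). This is NOT
// HAPI FHIR's actual implementation.
public class ContextCache {

    /** Stand-in for an expensive-to-construct context object. */
    public static final class Context {
        public final String version;
        Context(String version) {
            this.version = version;
            // A real FhirContext scans model classes here, which is costly.
        }
    }

    private static final Map<String, Context> CACHE = new ConcurrentHashMap<>();

    /** Always allocates a fresh context (forR5()-style behavior). */
    public static Context forVersion(String version) {
        return new Context(version);
    }

    /** Returns the shared, lazily created context (forR5Cached()-style behavior). */
    public static Context forVersionCached(String version) {
        return CACHE.computeIfAbsent(version, Context::new);
    }

    public static void main(String[] args) {
        Context a = forVersionCached("R5");
        Context b = forVersionCached("R5");
        Context c = forVersion("R5");
        System.out.println(a == b); // true: cached accessor reuses the instance
        System.out.println(a == c); // false: plain factory allocates a new one
    }
}
```

One consequence of this design is that the cached instance is shared process-wide, which is why a reviewer should check that no call site mutates its configuration.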

Sequence diagram for cached FhirContext usage during AI validation analysis

```mermaid
sequenceDiagram
    participant HttpServletRequest as HttpServletRequest
    participant ValidationProvider as ValidationProvider
    participant LLMConnector as LLMConnector
    participant FhirContext as FhirContext
    participant IParser as IParser

    HttpServletRequest->>ValidationProvider: validate(theRequest)
    activate ValidationProvider
    ValidationProvider->>LLMConnector: getConnector(cliContext)
    activate LLMConnector
    LLMConnector-->>ValidationProvider: connector
    deactivate LLMConnector

    ValidationProvider->>FhirContext: forR5Cached()
    FhirContext-->>ValidationProvider: fhirContext

    ValidationProvider->>FhirContext: newJsonParser()
    FhirContext-->>ValidationProvider: parser

    ValidationProvider->>IParser: setPrettyPrint(true)
    IParser-->>ValidationProvider: parser

    ValidationProvider->>IParser: encodeResourceToString(oo)
    IParser-->>ValidationProvider: json

    ValidationProvider->>LLMConnector: interpretWithMatchbox(contentString, json)
    LLMConnector-->>ValidationProvider: aiResult

    ValidationProvider->>ValidationProvider: addAIIssueToOperationOutcome(oo, aiResult)
    ValidationProvider-->>HttpServletRequest: OperationOutcome
    deactivate ValidationProvider
```

Class diagram for ValidationProvider AI analysis serialization change

```mermaid
classDiagram
    class ValidationProvider {
        +validate(theRequest HttpServletRequest) IBaseResource
        +addAIIssueToOperationOutcome(oo OperationOutcome, aiResult String) OperationOutcome
    }

    class LLMConnector {
        +getConnector(cliContext Object) LLMConnector
        +interpretWithMatchbox(contentString String, json String) String
    }

    class FhirContext {
        +forR5() FhirContext
        +forR5Cached() FhirContext
        +newJsonParser() IParser
    }

    class IParser {
        +setPrettyPrint(prettyPrint boolean) IParser
        +encodeResourceToString(resource Object) String
    }

    ValidationProvider ..> LLMConnector : uses
    ValidationProvider ..> FhirContext : uses
    FhirContext ..> IParser : creates
```

File-Level Changes

Change: Use the cached FhirContext instance for R5 JSON serialization in the AI analysis path of validation.

Details:
  • Replace creation of a new R5 FhirContext with a call to the cached FhirContext accessor when encoding the OperationOutcome to JSON
  • Maintain the existing pretty-print configuration and JSON serialization logic while changing only the context source

Files:
  • matchbox-server/src/main/java/ch/ahdis/matchbox/validation/ValidationProvider.java

Possibly linked issues

  • #NA: PR reuses cached FhirContext to reduce memory growth seen during repeated validations with precached implementation guides.


@sourcery-ai sourcery-ai bot left a comment
Hey - I've left some high level feedback:

  • If there are other call sites in this class or module that still use FhirContext.forR5(), consider switching them to forR5Cached() as well to keep behavior and performance characteristics consistent.
  • Double-check that FhirContext.forR5Cached() is safe to use in this context from a configuration perspective (e.g., any custom settings on the non-cached context are not required here), since changing the factory method can subtly affect behavior.

```diff
 try {
 	LLMConnector openAIConnector = LLMConnector.getConnector(cliContext);
-	String json = FhirContext.forR5().newJsonParser().setPrettyPrint(true).encodeResourceToString(oo);
+	String json = FhirContext.forR5Cached().newJsonParser().setPrettyPrint(true).encodeResourceToString(oo);
```
I don't remember, have we tried disabling pretty-printing? It would result in fewer tokens for the LLM to analyze, but would probably retain all the information.
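The token-saving idea can be illustrated without HAPI at all: compact serialization (the parser's default, i.e. `setPrettyPrint(false)` or simply omitting the call) drops only the indentation and newlines that pretty-printing adds, so the content is unchanged. The JSON below is a made-up OperationOutcome fragment, not output from the actual validator:

```java
// Plain-Java illustration: pretty-printed vs. compact JSON carry the same
// information, but the compact form is shorter (fewer tokens for the LLM).
// The JSON content is a made-up OperationOutcome fragment.
public class PrettyVsCompact {

    /** Roughly what a pretty-printing parser would emit. */
    public static final String PRETTY = String.join("\n",
        "{",
        "  \"resourceType\": \"OperationOutcome\",",
        "  \"issue\": [ {",
        "    \"severity\": \"error\",",
        "    \"code\": \"invalid\"",
        "  } ]",
        "}");

    /** The same content without insignificant whitespace (compact form). */
    public static final String COMPACT =
        "{\"resourceType\":\"OperationOutcome\","
        + "\"issue\":[{\"severity\":\"error\",\"code\":\"invalid\"}]}";

    public static void main(String[] args) {
        // Compact is strictly shorter while encoding the same structure.
        System.out.println(PRETTY.length() + " chars pretty vs "
            + COMPACT.length() + " chars compact");
    }
}
```

The savings grow with nesting depth, so for a large validation OperationOutcome the difference can be substantial.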

@qligier qligier commented Jan 19, 2026

I don't think that would be a memory leak, but it was certainly a heavy memory allocation.
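This point can be made concrete with a small sketch (plain Java; `HeavyContext` is a hypothetical stand-in for a heavyweight object like FhirContext, and the double-checked-locking cache is illustrative, not HAPI's code): constructing per request multiplies allocations linearly with traffic, while a cached accessor constructs exactly once:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: HeavyContext stands in for an expensive object such as
// FhirContext; cached() mimics a forR5Cached()-style accessor.
public class AllocationDemo {

    /** Hypothetical stand-in for an expensive context object. */
    public static final class HeavyContext {
        public static final AtomicInteger CONSTRUCTED = new AtomicInteger();
        public HeavyContext() {
            // Count constructions; a real context would also allocate
            // large internal model caches here.
            CONSTRUCTED.incrementAndGet();
        }
    }

    private static volatile HeavyContext cached;

    /** Lazily creates the shared instance once (double-checked locking). */
    public static HeavyContext cached() {
        HeavyContext c = cached;
        if (c == null) {
            synchronized (AllocationDemo.class) {
                if (cached == null) {
                    cached = new HeavyContext();
                }
                c = cached;
            }
        }
        return c;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            cached(); // 1000 requests, but only the first one constructs
        }
        System.out.println("constructed: " + HeavyContext.CONSTRUCTED.get());

        for (int i = 0; i < 1000; i++) {
            new HeavyContext(); // per-request allocation: 1000 more constructions
        }
        System.out.println("constructed: " + HeavyContext.CONSTRUCTED.get());
    }
}
```

The objects allocated per request are short-lived and collectible, so this is allocation pressure rather than a true leak, which matches the comment above.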
