Sentinel: [HIGH] Fix information exposure through error messages #154

AGI-Corporation wants to merge 1 commit into `main`.
Conversation
- Added global exception handler in backend/main.py to sanitize 500 errors.
- Refactored Mistral agent endpoints to remove detail=str(e).
- Added comprehensive security tests in tests/test_sentinel_errors.py.
- Updated security journal with findings and preventions.

Co-authored-by: AGI-Corporation <186229839+AGI-Corporation@users.noreply.github.com>
Walkthrough

The changes implement a security-focused exception-handling system across the application. A global FastAPI exception handler is introduced to log full error details server-side while returning generic messages to clients. Individual endpoint handlers are updated accordingly, and tests verify that sensitive exception information is not exposed while proper HTTP status codes are maintained.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant FastAPI as FastAPI App
    participant Handler as Global Exception<br/>Handler
    participant Logger
    Client->>FastAPI: Request to endpoint
    FastAPI->>FastAPI: Endpoint raises exception
    FastAPI->>Handler: Global handler catches<br/>exception
    Handler->>Logger: Log full traceback<br/>(exc_info=True)
    Logger-->>Handler: Logged
    Handler->>Client: Return generic response<br/>(500 + "Internal server error")
```
Estimated code review effort: 3 (Moderate) | ~22 minutes
Pre-merge checks: 2 passed, 1 failed (warning)
Nitpick comments (3)
tests/test_sentinel_errors.py (1)
8-13: Avoid shared-app route collisions in repeated test runs.

At Line 8, adding a fixed path to the imported global `app` can accumulate mutable route state if tests are re-run in-process. Prefer a unique temporary path per run.

Minimal isolation tweak:

```diff
+import uuid
 ...
-@app.get("/api/test-error-leakage")
+path = f"/api/test-error-leakage-{uuid.uuid4()}"
+@app.get(path)
 async def trigger_error():
     raise ValueError("Sensitive database connection string: postgresql://user:password@localhost:5432/db")
 ...
-response = await ac.get("/api/test-error-leakage")
+response = await ac.get(path)
```

Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `tests/test_sentinel_errors.py` around lines 8-13: the test registers a fixed route on the shared global app, which can accumulate routes across in-process test reruns; change the route registration for `trigger_error` to use a unique path per run (e.g., append a uuid or timestamp) so that `app.get("/api/test-error-leakage")` becomes a dynamically generated path (or register the handler via `app.add_api_route` with a `unique_path` variable), ensuring no route collisions between runs.

`backend/main.py` (1)
100-100: Use `logger.exception()` with request context for cleaner error logs.

At Line 100, the function has access to the `request` parameter. Using `logger.exception()` instead of `logger.error(..., exc_info=True)` is more idiomatic and automatically captures the traceback. Including the request method and path adds valuable debugging context.

Suggested adjustment:

```diff
-    logger.error(f"Unhandled error: {str(exc)}", exc_info=True)
+    logger.exception("Unhandled error on %s %s", request.method, request.url.path)
```

Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `backend/main.py` at line 100: replace the `logger.error(..., exc_info=True)` call with `logger.exception` to automatically capture the traceback and include the request context; specifically, call `logger.exception` with a message that embeds `request.method` and `request.url.path` (and the exception variable `exc`) so the log reads something like "Unhandled error while handling <METHOD> <PATH>: <exc>", using the existing `logger`, `request`, and `exc` symbols.

`agents/mistral_agent/agent.py` (1)
321-323: Preserve exception causality and use idiomatic logging in exception handlers.

The three exception handlers at lines 321-323, 337-339, and 348-350 re-raise `HTTPException` without exception chaining (`from e`), which drops the causal link in tracebacks. Additionally, the direct `logging.error()` calls with `f"...{str(e)}"` and `exc_info=True` can be simplified using `logger.exception()` on a module-level logger.

Proposed refactor:

```diff
+logger = logging.getLogger(__name__)
+
 MISTRAL_API_KEY = os.getenv("MISTRAL_API_KEY", "")
 ...
-    except Exception as e:
-        logging.error(f"Mistral Gap Analysis Error: {str(e)}", exc_info=True)
-        raise HTTPException(status_code=500, detail="Internal server error")
+    except Exception as e:
+        logger.exception("Mistral Gap Analysis Error")
+        raise HTTPException(status_code=500, detail="Internal server error") from e
 ...
-    except Exception as e:
-        logging.error(f"Mistral Code Review Error: {str(e)}", exc_info=True)
-        raise HTTPException(status_code=500, detail="Internal server error")
+    except Exception as e:
+        logger.exception("Mistral Code Review Error")
+        raise HTTPException(status_code=500, detail="Internal server error") from e
 ...
-    except Exception as e:
-        logging.error(f"Mistral Ask Error: {str(e)}", exc_info=True)
-        raise HTTPException(status_code=500, detail="Internal server error")
+    except Exception as e:
+        logger.exception("Mistral Ask Error")
+        raise HTTPException(status_code=500, detail="Internal server error") from e
```

Also applies to: 337-339, 348-350
Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `agents/mistral_agent/agent.py` around lines 321-323: replace the ad-hoc `logging.error` calls and plain re-raises in the except blocks with a module-level logger and exception chaining: create or use `logger = logging.getLogger(__name__)`, call `logger.exception(...)` instead of `logging.error(f"...{str(e)}", exc_info=True)` in the three except blocks shown, and re-raise the `HTTPException` using `raise HTTPException(...) from e` so the original exception causality is preserved; apply this change to each except block that currently logs and then re-raises.
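The two halves of this refactor can be exercised with plain Python, independent of FastAPI. The sketch below uses a hypothetical `AppError` class as a stand-in for `HTTPException`, and `gap_analysis` is an illustrative name, not the agent's real function:

```python
import logging

logger = logging.getLogger(__name__)

class AppError(Exception):
    """Hypothetical stand-in for FastAPI's HTTPException in this sketch."""
    def __init__(self, status_code: int, detail: str):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail

def gap_analysis():
    # Simulates the endpoint body failing with a sensitive message.
    raise ValueError("postgresql://user:password@localhost:5432/db")

def handler():
    try:
        return gap_analysis()
    except Exception as e:
        # logger.exception is logger.error(..., exc_info=True) in one call;
        # the sensitive message reaches the server log only.
        logger.exception("Mistral Gap Analysis Error")
        # "from e" keeps the original error reachable as __cause__.
        raise AppError(status_code=500, detail="Internal server error") from e
```

Callers that catch `AppError` see only the generic detail, while `err.__cause__` still carries the original `ValueError` for debugging.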
Review info

Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: dc4b4692-fe01-43a0-8c1d-a66e805a1edc
Files selected for processing (4)
- .jules/sentinel.md
- agents/mistral_agent/agent.py
- backend/main.py
- tests/test_sentinel_errors.py
Severity: HIGH

Vulnerability: Raw exception details (`str(e)`) were being returned to the client in HTTP 500 responses across multiple endpoints, potentially leaking sensitive system state, file paths, or credentials.

Impact: An attacker could trigger errors to gain insight into the application's internal architecture, database structure, or environment configuration, facilitating further attacks.

Fix: Implemented a global `@app.exception_handler(Exception)` in `backend/main.py` that logs full tracebacks server-side while returning a generic "Internal server error" to clients. Specifically refactored `agents/mistral_agent/agent.py` to remove direct leakage of exception strings.

Verification: Created `tests/test_sentinel_errors.py`, which verifies that sensitive error messages are masked in 500 responses, while ensuring that standard FastAPI `HTTPException` (e.g., 404) and `RequestValidationError` (422) responses are still passed through to the client.

PR created automatically by Jules for task 13873341334655242364, started by @AGI-Corporation.