ref(insights): replace useRouter with specific hooks in spanSummary sampleList #110119

evanpurkhiser wants to merge 18 commits into master from
Conversation
…ines Add a new REST endpoint that allows driving integration pipelines via JSON API requests instead of the legacy redirect-based flow. The endpoint supports initializing a pipeline (POST with action=initialize), retrieving the current step info (GET), and advancing through steps (POST with step-specific data). Uses ControlSiloOrganizationEndpoint as the base class since integration models live in the control silo, requiring the RPC service layer to resolve organizations from slugs. The endpoint rejects pipelines that don't support API mode with a 400, and returns structured errors for invalid providers, missing sessions, and unsupported pipeline names. Refs VDY-36
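To make the request protocol concrete, here is a minimal in-memory sketch of the initialize/GET/advance sequence the endpoint description implies. All names (`PipelineSessionStub`, the step names, the response shapes) are illustrative assumptions, not Sentry's actual classes or payloads.

```python
class PipelineSessionStub:
    """Minimal stand-in for a session-backed integration pipeline.
    Illustrative only; not Sentry's actual implementation."""

    def __init__(self, steps):
        self.steps = steps
        self.index = 0
        self.active = False

    def handle(self, method, payload=None):
        payload = payload or {}
        if method == "POST" and payload.get("action") == "initialize":
            # POST with action=initialize starts the pipeline session.
            self.active = True
            return {"status": 200, "step": self.steps[self.index]}
        if not self.active:
            # Mirrors the structured "missing session" error described above.
            return {"status": 400, "detail": "No pipeline session found."}
        if method == "GET":
            # GET returns the current step's info.
            return {"status": 200, "step": self.steps[self.index]}
        if method == "POST":
            # POST with step-specific data advances the pipeline.
            self.index += 1
            if self.index >= len(self.steps):
                return {"status": 200, "finished": True}
            return {"status": 200, "step": self.steps[self.index]}
        return {"status": 405, "detail": "Unsupported method."}


pipeline = PipelineSessionStub(["oauth-login", "org-selection"])
assert pipeline.handle("GET")["status"] == 400  # no session yet
assert pipeline.handle("POST", {"action": "initialize"})["step"] == "oauth-login"
assert pipeline.handle("POST", {"code": "abc"})["step"] == "org-selection"
assert pipeline.handle("POST", {"org": "acme"})["finished"] is True
```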
Refs [VDY-32: Migrate integration setup pipelines to API-driven flows](https://linear.app/getsentry/issue/VDY-32/migrate-integration-setup-pipelines-to-api-driven-flows)
Add a generic API-mode step that handles the full OAuth2 authorization code flow in a single step: returns the authorize URL for the frontend to open in a popup, then accepts code/state from the trampoline callback and exchanges it for an access token. Includes a configurable bind_key for controlling where token data is stored in pipeline state. Refs VDY-37
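The two-phase shape of that step can be sketched as below. This is a hypothetical simplification: the class name, `dispatch` signature, and the stubbed token exchange are assumptions; only the authorize-URL phase, the code/state callback phase, and the `bind_key` placement come from the description above.

```python
class OAuth2ApiStepSketch:
    """Hypothetical single-step OAuth2 authorization-code flow.
    Names and return shapes are illustrative, not Sentry's actual code."""

    def __init__(self, authorize_url, bind_key="oauth_data"):
        self.authorize_url = authorize_url
        self.bind_key = bind_key  # where token data lands in pipeline state

    def dispatch(self, state, data):
        if "code" not in data:
            # Phase 1: hand the frontend an authorize URL to open in a popup.
            return {
                "status": "external",
                "url": f"{self.authorize_url}?state={state['csrf']}",
            }
        if data.get("state") != state["csrf"]:
            # Reject mismatched state from the trampoline callback.
            return {"status": "stay", "errors": {"state": "state mismatch"}}
        # Phase 2: exchange the code for an access token (exchange stubbed).
        state[self.bind_key] = {"access_token": f"token:{data['code']}"}
        return {"status": "advance"}


step = OAuth2ApiStepSketch("https://example.com/oauth/authorize")
state = {"csrf": "xyz"}
first = step.dispatch(state, {})                              # returns authorize URL
second = step.dispatch(state, {"code": "abc", "state": "xyz"})  # stores token
```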
Integration providers register callback URLs with external services (e.g. GitHub OAuth redirect). These URLs point to PipelineAdvancerView, which traditionally drives the pipeline server-side by calling pipeline.current_step() on each callback. For the new API-driven pipeline mode, we cannot change the callback URLs already registered with production integrations. Instead, this view now detects when a pipeline was initiated in API mode (api_mode flag in session state) and renders a lightweight trampoline page. The trampoline relays the callback URL query parameters (code, state, installation_id, etc.) back to the opener window via postMessage and closes itself. The frontend pipeline system then continues driving the pipeline via API endpoints. Refs VDY-36
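The branching described above can be summarized in a small sketch. The function name and dict-based return values are hypothetical; the real code lives in `PipelineAdvancerView`.

```python
def handle_provider_callback(session_state, query_params):
    """Sketch of the branch the callback view now takes.
    Hypothetical names; return dicts stand in for HTTP responses."""
    if session_state.get("api_mode"):
        # API-mode pipeline: render a trampoline page that postMessage()s
        # the callback params (code, state, installation_id, ...) back to
        # the opener window and then closes itself.
        return {"render": "trampoline", "relay": dict(query_params)}
    # Legacy flow: keep driving the pipeline server-side,
    # i.e. pipeline.current_step().
    return {"render": "current_step"}
```

Because the external services keep calling the same registered URL, only this server-side branch changes; the provider-facing contract is untouched.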
Adds backend support for completing the GitHub integration setup flow via API endpoints instead of server-rendered Django views. Extracts shared logic from the existing template-driven pipeline views into reusable functions (exchange_github_oauth, validate_github_installation, validate_org_installation_choice, _build_installation_info_with_counts) so both the legacy views and new API steps can use them. Adds OAuthLoginApiStep and GithubOrganizationSelectionApiStep which implement the same flow as the existing views but return structured data for the frontend to render. Adds api_finish_pipeline to IntegrationPipeline for completing the pipeline without server-side redirects. Refactors _finish_pipeline to separate model operations (_execute_finish_pipeline) from HTTP response handling, and extracts initialize_integration_pipeline from OrganizationIntegrationSetupView for reuse by the API endpoint. Refs VDY-38
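The `_finish_pipeline` split can be sketched like this. Everything below is an assumed simplification: the state keys, the dict-shaped "responses", and the stubbed model work are illustrative; only the three function names come from the description above.

```python
def _execute_finish_pipeline(state):
    """Model operations only (create/look up the integration record).
    Stubbed as a plain dict here; field names are assumptions."""
    return {"provider": state["provider"], "external_id": state["external_id"]}


def finish_pipeline(state):
    # Legacy flow: wrap the model work in a server-side redirect response.
    integration = _execute_finish_pipeline(state)
    return {"redirect": f"/settings/integrations/{integration['provider']}/"}


def api_finish_pipeline(state):
    # API mode: same model work, but return structured data for the
    # frontend instead of a redirect.
    integration = _execute_finish_pipeline(state)
    return {"success": True, "integration": integration}
```

Factoring the model operations out lets both response styles share one code path, which is the stated goal of the refactor.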
…line Adds the GitHub-specific pipeline step components (OAuth login and organization selection) and registers the GitHub integration pipeline definition in the frontend registry. Refs VDY-38
Add API pipeline steps for GitLab integration installation: InstallationConfigApiStep collects instance URL, group path, OAuth credentials, and SSL preferences. The OAuth step is late-bound via OAuth2ApiStep using config from the previous step's state. build_integration now reads token data from either the legacy state["identity"]["data"] path or the new state["oauth_data"] path so both flows work during migration. Refs VDY-39
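The dual-path token lookup in `build_integration` might look roughly like this. The helper name is hypothetical; only the two state paths (`state["identity"]["data"]` and `state["oauth_data"]`) come from the description above.

```python
def read_token_data(state):
    """Sketch of the dual-path lookup: prefer the legacy identity path,
    fall back to the new API-mode path. Helper name is an assumption."""
    identity = state.get("identity") or {}
    return identity.get("data") or state.get("oauth_data") or {}


legacy_state = {"identity": {"data": {"access_token": "t-legacy"}}}
api_state = {"oauth_data": {"access_token": "t-api"}}
```

Checking the legacy path first keeps existing installs working unchanged while the API-driven flow is rolled out.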
…line Add the GitLab pipeline step components and register them in the pipeline registry. The config step uses GuidedSteps to walk users through creating a GitLab OAuth application, then collects instance URL, group path, and OAuth credentials. The OAuth step reuses the shared OAuthLoginStep component. Refs VDY-39
Force-pushed ca8c19e to d7941c2 (Compare)
🚨 Warning: This pull request contains Frontend and Backend changes! Making changes to Sentry's Frontend and Backend in a single pull request is discouraged: the Frontend and Backend are not atomically deployed. If the changes are interdependent, they must be separated into two pull requests and made forward- or backward-compatible, so that the Backend or Frontend can be safely deployed independently. Have questions? Please ask in the
```python
except NotRegistered:
    return Response({"detail": f"Unknown provider: {provider_id}"}, status=404)
except IntegrationPipelineError as e:
    return Response({"detail": str(e)}, status=404 if e.not_found else 400)
```
Check warning (Code scanning / CodeQL): Information exposure through an exception (Medium)
Copilot Autofix (AI, 20 days ago)
In general, the fix is to avoid sending the raw exception message back to the client. Instead, log the exception (including stack trace / message) on the server, and return a generic, non‑sensitive error message to the user. This preserves debuggability while preventing information exposure.
For this specific code:
- In `_initialize_pipeline`, the `except IntegrationPipelineError as e:` block currently returns `{"detail": str(e)}` with different status codes based on `e.not_found`. We should:
  - Log the exception using the existing `logger` defined at the top of the file, ideally with `logger.exception` so the stack trace is captured.
  - Return a generic error message string that does not include `e`'s content.
  - Preserve the existing behavior of using `404` vs `400` depending on `e.not_found`, since that is part of the API contract and does not itself leak sensitive information.

Concretely, in `src/sentry/api/endpoints/organization_pipeline.py`, around lines 112–118:

- Replace `return Response({"detail": str(e)}, status=404 if e.not_found else 400)` with:
  - A `logger.exception` call that logs that initialization failed and includes `provider_id` for context.
  - A `Response` that has either:
    - a 404 status with a generic "Resource not found."-style message, or
    - a 400 status with a generic "Failed to initialize integration pipeline."-style message.
- No new imports are needed; `logger` is already defined and uses `logging.getLogger(__name__)`.
This preserves functionality (status codes and control flow) while ensuring the client never sees the raw exception message.
```diff
@@ -114,7 +114,16 @@
 except NotRegistered:
     return Response({"detail": f"Unknown provider: {provider_id}"}, status=404)
 except IntegrationPipelineError as e:
-    return Response({"detail": str(e)}, status=404 if e.not_found else 400)
+    logger.exception(
+        "Failed to initialize integration pipeline for provider %s",
+        provider_id,
+    )
+    if getattr(e, "not_found", False):
+        return Response({"detail": "Requested resource was not found."}, status=404)
+    return Response(
+        {"detail": "Failed to initialize integration pipeline."},
+        status=400,
+    )
```
```python
if not pipeline.is_api_ready():
    return Response({"detail": "Pipeline does not support API mode."}, status=400)
```
```python
    query_params=None,
)
except AtlassianConnectValidationError as e:
    raise ValidationError(f"Unable to verify installation: {e}") from e
```
Check warning (Code scanning / CodeQL): Information exposure through an exception (Medium)
Copilot Autofix (AI, 20 days ago)
In general, to fix this type of problem you should avoid including raw exception messages in responses that can be seen by external users. Instead, log the exception details on the server side (so developers can debug), and raise or return a generic, non-sensitive error message to the client.
For this specific code, the best fix without changing existing functionality is to keep raising a `ValidationError` from `validate_jwt`, but change its message to a generic one that does not interpolate `e`. If the project has a central logging mechanism (Django's logger or Sentry itself), we could additionally log `e` for debugging, but the user-visible string should remain generic. Since the snippet doesn't show any logger imports, and we are asked to avoid unwarranted assumptions, we'll implement the minimal fix: replace `raise ValidationError(f"Unable to verify installation: {e}") from e` with `raise ValidationError("Unable to verify installation.") from e`. This preserves the behavior of signaling a validation failure to the client but removes exposure of internal error information.

The change is confined to `src/sentry/integrations/bitbucket/integration.py`, in the `BitbucketVerifySerializer.validate_jwt` method, lines 214–215 in the provided snippet. No new imports or additional methods are strictly necessary for this minimal fix.
```diff
@@ -212,7 +212,7 @@
     query_params=None,
 )
 except AtlassianConnectValidationError as e:
-    raise ValidationError(f"Unable to verify installation: {e}") from e
+    raise ValidationError("Unable to verify installation.") from e
 return value
```
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
```python
resp = self._advance_step({})
assert resp.status_code == 200
assert resp.data["status"] == "stay"
assert "errors" in resp.data["data"]
```
**Test expects "stay" response but serializer raises ValidationError** (Medium Severity)

The `test_oauth_step_missing_fields` test expects `status_code == 200` with `resp.data["status"] == "stay"` when submitting empty data to the OAuth step. However, `api_advance` calls `serializer.is_valid(raise_exception=True)`, so the `OAuthLoginSerializer` (which has `code` and `state` as `required=True`) will raise a DRF `ValidationError` that gets caught by DRF's exception handler, returning a 400 with raw serializer errors — never reaching `handle_post` to produce a "stay" `PipelineStepResult`. The equivalent GitLab and GHE tests correctly expect `status_code == 400` with raw error fields for the same scenario. This test will fail as written.
Additional Locations (1)
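The failure mode the reviewer describes can be reproduced with a plain-Python stand-in for DRF's behavior. This sketch mimics (does not use) `is_valid(raise_exception=True)`; the function name and response shapes are hypothetical.

```python
def advance_with_serializer(data, required=("code", "state")):
    """Mimics DRF's serializer.is_valid(raise_exception=True): missing
    required fields short-circuit into a 400 with per-field errors, so
    the step's handle_post (which would return "stay") never runs."""
    errors = {f: ["This field is required."] for f in required if f not in data}
    if errors:
        # DRF exception-handler path: raw serializer errors, status 400.
        return {"status_code": 400, "data": errors}
    # handle_post path: only reachable when validation passes.
    return {"status_code": 200, "data": {"status": "stay"}}


resp = advance_with_serializer({})
assert resp["status_code"] == 400        # not 200/"stay" as the test expects
assert "code" in resp["data"] and "state" in resp["data"]
```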
Backend Test Failures: failures on
```python
content_type = req.headers.get("Content-Type", "").lower()
if content_type.startswith("application/x-www-form-urlencoded"):
    return dict(parse_qsl(body))
return dict(parse_qsl(body.decode("utf-8")))
```
UnicodeDecodeError unhandled in OAuth token exchange
The body.decode("utf-8") call can raise UnicodeDecodeError if an external OAuth provider returns form-urlencoded data with invalid UTF-8 bytes. The current exception handler only catches orjson.JSONDecodeError, allowing UnicodeDecodeError to propagate as an unhandled 500 error. This matches the Data Parsing pattern (Check 8) where external data parsing lacks complete error handling.
Verification
Read src/sentry/identity/oauth2.py lines 477-495 to confirm the try/except only catches orjson.JSONDecodeError. Verified safe_urlread returns bytes (src/sentry/http.py line 120). Confirmed UnicodeDecodeError is handled elsewhere in the codebase for similar external data parsing (e.g., src/sentry/integrations/github/platform_detection.py).
Also found at 1 additional location
src/sentry/integrations/github_enterprise/integration.py:571-572
Identified by Warden sentry-backend-bugs · 7ZK-SG2
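One way to address the reported gap is to widen the `except` to cover `UnicodeDecodeError`. The sketch below is an assumption-laden illustration: stdlib `json` stands in for `orjson`, the function name is hypothetical, and falling back to an empty dict is one possible policy, not necessarily what Sentry would ship.

```python
import json
from urllib.parse import parse_qsl


def parse_token_response(body: bytes, content_type: str) -> dict:
    """Defensively parse an external OAuth provider's token response.
    Sketch only: stdlib json stands in for orjson."""
    try:
        if content_type.lower().startswith("application/x-www-form-urlencoded"):
            return dict(parse_qsl(body.decode("utf-8")))
        return json.loads(body)
    except (json.JSONDecodeError, UnicodeDecodeError):
        # Invalid UTF-8 or malformed JSON from the provider: fail soft
        # instead of letting the error surface as an unhandled 500.
        return {}


ok = parse_token_response(b"access_token=abc&token_type=bearer",
                          "application/x-www-form-urlencoded")
bad = parse_token_response(b"\xff\xfe", "application/x-www-form-urlencoded")
```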
damnit |


Part of the React Router 6 cleanup — replaces `useRouter()` with specific hooks (`useNavigate`, `useLocation`, `useParams`, etc.)