@devin-ai-integration devin-ai-integration bot commented Jan 28, 2026

Summary

Adds documentation for the structured outputs feature in LLM Gateway, which lets callers constrain model responses to a specific JSON schema via the response_format parameter.

Documentation page (fern/pages/07-llm-gateway/structured-outputs.mdx):

  • Overview explaining the feature
  • Code examples in Python, JavaScript, and cURL (based on the provided curl example)
  • Example response showing the JSON output format
  • Supported models table (OpenAI GPT-4.1/GPT-5.x and Gemini supported; Claude and gpt-oss not supported)
  • API reference for the response_format and json_schema parameters
  • Best practices and error handling guidance
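To make the parameter shapes above concrete, here is a minimal sketch of a request body using response_format with a json_schema config, plus parsing of a sample structured response. The field names (type, json_schema, name, schema, strict) come from this PR's description; the schema name, model choice, and example content are illustrative, and no endpoint URL or auth is shown.

```python
import json

# Request payload sketch: `response_format` follows the ResponseFormat /
# JsonSchemaConfig shapes described in this PR. The "sentiment_result"
# schema is purely illustrative.
request_body = {
    "model": "gpt-4.1",  # per the supported-models table (Claude/gpt-oss are not supported)
    "messages": [{"role": "user", "content": "Classify: 'The call went great.'"}],
    "response_format": {
        "type": "json_schema",  # the only documented value for `type` in this PR
        "json_schema": {
            "name": "sentiment_result",
            "strict": True,  # request exact schema conformance
            "schema": {
                "type": "object",
                "properties": {
                    "sentiment": {
                        "type": "string",
                        "enum": ["positive", "negative", "neutral"],
                    },
                    "confidence": {"type": "number"},
                },
                "required": ["sentiment", "confidence"],
                "additionalProperties": False,
            },
        },
    },
}

# With strict schema enforcement requested, the returned message content
# should be directly parseable JSON. Sample content stands in for a real call.
sample_content = '{"sentiment": "positive", "confidence": 0.97}'
result = json.loads(sample_content)
print(result["sentiment"])  # → positive
```

This is a sketch of the documented request shape, not a verified client; the actual gateway URL and headers live in the docs page itself.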

OpenAPI spec updates (llm-gateway.yml):

  • Added response_format parameter to LLMGatewayRequest schema
  • Added ResponseFormat and JsonSchemaConfig schema definitions
  • Added structured output example in request examples

Updates since last revision

  • Added response_format parameter to the LLM Gateway OpenAPI specification so it now appears in the API reference
  • Added schema definitions for ResponseFormat (with type and json_schema properties) and JsonSchemaConfig (with name, schema, and strict properties)
  • Added a structured output example to the API request examples
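A plausible shape for the two added schema definitions, based only on the properties listed above (property descriptions, required fields, and the $ref layout are assumptions, not confirmed against llm-gateway.yml):

```yaml
# Sketch of the ResponseFormat / JsonSchemaConfig additions described in this PR.
ResponseFormat:
  type: object
  properties:
    type:
      type: string
      enum: [json_schema]   # only documented value; reviewers should confirm others
    json_schema:
      $ref: "#/components/schemas/JsonSchemaConfig"
JsonSchemaConfig:
  type: object
  properties:
    name:
      type: string
    schema:
      type: object          # the JSON Schema the response must follow
    strict:
      type: boolean
```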

Review & Testing Checklist for Human

  • Verify OpenAPI schema accuracy: Confirm the ResponseFormat and JsonSchemaConfig schemas match the actual API implementation (especially the enum: [json_schema] for type - are there other valid values?)
  • Test the API reference: After deployment, verify response_format appears correctly in the API reference at /docs/api-reference/llm-gateway/create-chat-completion
  • Verify supported models: OpenAI (GPT-4.1, GPT-5.x) = yes, Gemini = yes, gpt-oss = no, Claude = no
  • Confirm hidden status: Page is hidden: true (matches other LLM Gateway pages, but verify this is intentional)

Recommended test plan:

  1. Run the cURL example from the docs against the LLM Gateway to verify structured outputs work as documented
  2. Check the preview deployment to confirm the response_format parameter appears in the API reference with correct schema details
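For step 1, a sketch of the request one would send: the payload below is written to a file and validated locally as JSON (no network call), with the gateway URL and API key left as placeholders since they are not stated in this PR.

```shell
# Build the structured-outputs payload and check it is well-formed JSON.
# Schema name and prompt are illustrative.
cat <<'EOF' > /tmp/structured_request.json
{
  "model": "gpt-4.1",
  "messages": [{"role": "user", "content": "Extract the city: 'I live in Oslo.'"}],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "city_result",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": false
      }
    }
  }
}
EOF
python3 -m json.tool /tmp/structured_request.json

# The real call would then look like (placeholders in angle brackets):
# curl -X POST "<LLM_GATEWAY_URL>" \
#   -H "Authorization: <API_KEY>" \
#   -H "Content-Type: application/json" \
#   -d @/tmp/structured_request.json
```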

Notes

CI failures are pre-existing issues in transcripts.yml (unrelated to this PR). The [docs] validation passed.

Requested by Dan Ince via Slack #ask-api-docs

Link to Devin run: https://app.devin.ai/sessions/4424438ef4014e0e886900b5303e6a6b

Co-Authored-By: Dan Ince <dince@assemblyai.com>
@devin-ai-integration

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@dan-ince-aai dan-ince-aai merged commit c52d728 into main Jan 28, 2026
2 of 4 checks passed
@dan-ince-aai dan-ince-aai deleted the devin/1769565533-structured-outputs-docs branch January 28, 2026 02:27