fix(openai): make object and created fields optional in CompletionResponse#1451
DAMEK86 wants to merge 1 commit into 0xPlaygrounds:main
Conversation
No no, it's fine. Give me a bit to check if we can use it.

Looking over the GHE endpoints, it seems only the OpenAI protocol-based endpoint is available. The idea is to reuse GHE instead of a public GitHub Copilot subscription. I assume Azure uses a non-Copilot/OpenAI protocol.
Force-pushed from 676713c to 0fac6fa
Update: It seems the Copilot API is based on OpenAI and works quite well for their models; however, if we try to use Claude models, additional response handling is required. I'm not sure whether this fits your rig design or whether an additional OpenAI-derived Copilot provider is a better fit.

Hmmmmm. I think at this point an OpenAI-derived Copilot provider may fit the purpose better. It's quite easy to get Claude or another model to simply spin up the necessary provider code anyway, and then we can move as required. The main issue with just adding it onto the OpenAI Chat Completions API is that if we add too many fields onto it, it will start diverging, which would be quite bad in the future if we need to spin up more providers based on the original OpenAI integration.
…rsing

The Copilot API exposes an OpenAI-compatible /chat/completions endpoint but omits several response fields that the standard OpenAI specification requires (`object`, `created`, `finish_reason`). This provider derives from the OpenAI module, reusing its request/message types while relaxing the response contract with `Option<T>` + `serde(default)`.

Key differences from the OpenAI provider:
- `CompletionResponse.object`: `Option<String>` (omitted by Copilot API)
- `CompletionResponse.created`: `Option<u64>` (omitted by Copilot API)
- `Choice.finish_reason`: `Option<String>` (omitted by Claude via Copilot)
- `ApiErrorResponse` handles both `message` and `error` field shapes
- Bearer auth with `COPILOT_API_KEY` / `GITHUB_TOKEN` env vars
- Completions-only (no embeddings, image, audio, transcription)
- Well-known model constants for GPT-4o, Claude Sonnet 4, Gemini Flash, o3-mini (non-exhaustive; users should pass the model name as a string for newer models)

Addresses review feedback on PR 0xPlaygrounds#1451: creates a dedicated provider instead of patching optional fields onto the OpenAI module.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@joshua-mo-143 would you like a new MR for this topic, or is overriding the current one fine for you? It took a bit to create and verify the new implementation ;)
Overriding the current one is OK |
Force-pushed from ec3c17d to 74c928c
Force-pushed from 74c928c to b949ace
Problem
Azure-flavored OpenAI endpoints (including the GitHub Copilot API) return chat completion responses that omit the `object` and `created` fields. The current `CompletionResponse` struct requires both fields, causing deserialization to fail with a missing-field error. This prevents rig-core from being used with Azure OpenAI Service, the GitHub Copilot API, and other Azure-flavored OpenAI-compatible endpoints.
Example: Standard OpenAI response (works today)

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o",
  "usage": { "prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15 }
}
```

Example: Azure/Copilot response (fails today, works after this PR)

```json
{
  "id": "chatcmpl-xyz789",
  "model": "gpt-4o",
  "choices": [{
    "index": 0,
    "message": { "role": "assistant", "content": "Review complete." },
    "finish_reason": "stop",
    "content_filter_results": {}
  }],
  "usage": { "prompt_tokens": 20, "completion_tokens": 10, "total_tokens": 30 },
  "prompt_filter_results": [{ "prompt_index": 0 }]
}
```

Solution
Changed `object` from `String` to `Option<String>` and `created` from `u64` to `Option<u64>`, both with `#[serde(default)]`. These fields are not accessed anywhere in rig-core's logic; they only participate in deserialization, so this is a fully backwards-compatible change.

Tests
Added two deserialization tests:

- `deserialize_standard_openai_response`: confirms standard OpenAI responses still work
- `deserialize_azure_copilot_response_without_object_and_created`: confirms Azure/Copilot responses now deserialize correctly

Verified with
thx for this nice lib <3
CC @olFi95