OpenAI Completions endpoint logprobs should be int, not bool #5253

@mfleader

Description

System Info

llama-stack: 0.6.0 and main (0.6.1.dev289+g47ac8b6e1)
llama-stack-api: 0.6.0
openai (Python SDK): 2.29.0

Information

  • The official example scripts
  • My own modified scripts

🐛 Describe the bug

I think the logprobs field in OpenAICompletionRequestWithExtraBody should have type int instead of bool.

The OpenAI Completions reference page only displays the legacy Completion response schema, but the markdown rendering of the same page includes the legacy Completion request schema, which specifies logprobs as a number.

The spec file at docs/static/llama-stack-spec.yaml:3621 also defines it as type: boolean.
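To illustrate the mismatch, here is a minimal pure-Python sketch of the two validation behaviors (the function names are illustrative, not the actual llama-stack code; the bool branch mimics the strict bool parsing reported in the error log):

```python
from typing import Any, Optional


def parse_logprobs_as_bool(value: Any) -> bool:
    """Sketch of the current behavior: a strict bool-typed field
    rejects a bare integer such as 5."""
    if isinstance(value, bool):
        return value
    raise ValueError(
        "Input should be a valid boolean, unable to interpret input"
    )


def parse_logprobs_as_int(value: Any) -> Optional[int]:
    """Sketch of the proposed typing: accept None or an integer.
    Note that bool is a subclass of int in Python, so it is
    excluded explicitly."""
    if value is None:
        return None
    if isinstance(value, bool) or not isinstance(value, int):
        raise ValueError("Input should be a valid integer")
    return value
```

With the int-typed parser, a value like 3 (a count of top tokens to return) passes through unchanged, while the bool-typed parser raises the same kind of error shown in the logs below.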

The midstream CI caught this because it uses openai/gpt-4o-mini (remote::openai), which is not in the skip list; as a result, test_openai_completion_logprobs() and test_openai_completion_logprobs_streaming() are executed and fail in the midstream build, test, and publish workflow.

Error logs

openai.BadRequestError: Error code: 400 - {'error': {'message': "{'errors': [{'loc': ['body', 'logprobs'], 'msg': 'Input should be a valid boolean, unable to interpret input', 'type': 'bool_parsing'}]}"}}

Expected behavior

The Completions endpoint should accept an integer for logprobs and the logprobs tests should pass when run against the OpenAI provider.
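For reference, a hedged sketch of the kind of request the failing tests presumably send (the model name and payload shape follow the OpenAI legacy Completions API; this is illustrative, not the actual test code):

```python
# On the legacy Completions endpoint, `logprobs` is an integer count of
# top tokens to return per position (the OpenAI API caps it at 5), not a
# boolean flag as the current llama-stack request model assumes.
payload = {
    "model": "gpt-4o-mini",
    "prompt": "Hello",
    "max_tokens": 5,
    "logprobs": 3,  # request logprobs for the top 3 tokens
}

# A bool-typed field rejects this value; an int-typed field accepts it.
# (bool is a subclass of int in Python, so the check excludes it.)
valid_as_int = isinstance(payload["logprobs"], int) and not isinstance(
    payload["logprobs"], bool
)
```

The payload would then be passed to client.completions.create(**payload) with the OpenAI Python SDK, which is where the 400 above is raised today.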

Metadata

Labels: bug (Something isn't working)
