
Conversation

@avigyabb (Owner)

No description provided.

Comment on lines +432 to +435
if model_config is None:
# Defer OpenAI serving setup if model_config requires awaiting in this vLLM version
# (we cannot await inside Ray's running event loop here). Chat endpoint will be disabled.
return engine
@SumanthRH Nov 7, 2025


This is pretty bad. We definitely want the OpenAI chat completions endpoint available regardless of the weight-syncing backend. Is there an open issue in the Ray repo to track this limitation? Why are we changing things regarding the event loop in the main thread?

@avigyabb (Owner, Author)


Sorry, I left the PR in a fairly disorganized state since I didn't get to finish everything before I left. I think this logic may not actually be needed; I hit some version-conflict errors with vLLM and was using this as a workaround. Pinning a set of compatible versions may eliminate the need for this code change.
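
For reference, a rough sketch of one way to resolve the model config without disabling the chat completions endpoint. This assumes the engine exposes a get_model_config() method that is synchronous in some vLLM versions and a coroutine in others; resolve_model_config is a hypothetical helper for illustration, not an existing vLLM or Ray API:

import asyncio
import concurrent.futures
from typing import Any


def resolve_model_config(engine: Any) -> Any:
    """Resolve the engine's model config whether get_model_config() is sync or async."""
    result = engine.get_model_config()
    if not asyncio.iscoroutine(result):
        # Older versions return the config object directly.
        return result

    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No event loop is running in this thread: drive the coroutine here.
        return asyncio.run(result)

    # An event loop is already running (e.g. Ray's), so blocking on the
    # coroutine in this thread would deadlock. Run it on a fresh loop in a
    # worker thread instead. Caveat: this only works if the coroutine does
    # not await objects bound to the already-running loop.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, result).result()

With a helper like this, the early return above could become model_config = resolve_model_config(engine), keeping the OpenAI-compatible endpoint registered regardless of the weight-syncing backend; if the coroutine does depend on the already-running loop, the setup path itself would need to become async instead.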

