
Consolidate to GPT-5 only; upgrade deps; remove non-OpenAI paths#2

Open
shouryamaanjain wants to merge 6 commits into main from capy/upgrade-dependencies-cb917f48
Conversation

@shouryamaanjain
Contributor

Summary (why)

  • Consolidates to a single, stable provider (OpenAI GPT-5) to reduce maintenance overhead and remove ambiguous behavior across multiple providers/paths.
  • Modernizes the codebase to OpenAI SDK v1 with streaming + tool_calls for reliability and forward compatibility.
  • Strips non-core UX and prompts to keep the tool focused on executing code via a single function tool.

Changes

  • Dependencies: upgrade to latest stable
    • openai 1.105.x, rich 14.1.x, tiktoken 0.11.x, python-dotenv 1.1.x, pytest 8.4.x (dev)
    • Remove litellm, huggingface-hub, inquirer, requests, and all local/HF/Azure dependencies
    • poetry.lock updated
  • Model/provider consolidation
    • Fixed model to "gpt-5" end-to-end
    • Removed Azure/custom api_base/LiteLLM/local model paths entirely
  • OpenAI SDK v1 migration
    • Replace legacy/LiteLLM calls with OpenAI().chat.completions.create(...)
    • Implement streaming with tool_calls handling; accumulate streamed function.arguments into a JSON buffer and parse on finish
    • Fallback to non-streaming if the org/model disallows streaming
  • CLI simplification
    • Only -y/--yes is supported; removed --fast, --local, --falcon, --model, --api_base, --use-azure and related env toggles
    • Console script entry updated to emplode.cli:cli_entry; python -m emplode works
  • Token trimming
    • Replace tokentrim usage with tiktoken-based approximate trimming respecting context_window and max_tokens
  • Files & cleanup
    • Delete emplode/get_hf_llm.py
    • Remove all Azure, custom api_base, local, and HF logic and related messaging
    • Minimize system_message.txt and README to core functionality only
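The streamed tool_calls handling described above (accumulating `function.arguments` fragments into a buffer and parsing on finish) can be sketched as follows. This is a minimal illustration, not the PR's exact code: the function name `accumulate_tool_call` is hypothetical, and plain dicts stand in for the SDK's `delta.tool_calls` objects.

```python
import json

def accumulate_tool_call(deltas):
    """Join streamed function-call fragments into a name and parsed args.

    Each item in `deltas` simulates one streamed chunk's tool-call delta:
    an optional "name" plus a partial "arguments" string. The real SDK
    yields typed objects, but the accumulation logic is the same.
    """
    name = None
    buf = []
    for delta in deltas:
        if delta.get("name"):
            name = delta["name"]
        if delta.get("arguments"):
            buf.append(delta["arguments"])
    # Only parse once the stream finishes; partial buffers are not valid JSON.
    return name, json.loads("".join(buf))

# Simulated stream: the JSON arguments arrive split across chunks.
deltas = [
    {"name": "run_code"},
    {"arguments": '{"language": "python", '},
    {"arguments": '"code": "print(1)"}'},
]
name, args = accumulate_tool_call(deltas)
```

The key point is deferring `json.loads` until the final chunk: each intermediate fragment is an invalid JSON prefix, so parsing eagerly would fail.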
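The tiktoken-based trimming above (dropping old turns so the prompt respects `context_window` and `max_tokens`) might look roughly like this sketch. The function name, the 4-token per-message overhead, and the default counter are illustrative assumptions; the actual change would plug in a tiktoken encoder as the counter.

```python
def trim_messages(messages, context_window, max_tokens, count_tokens=None):
    """Approximately trim oldest turns so the prompt fits the model window."""
    if count_tokens is None:
        # Crude whitespace stand-in so this sketch runs without tiktoken;
        # the real code would use len(tiktoken.get_encoding(...).encode(text)).
        count_tokens = lambda text: len(text.split())

    def total(msgs):
        # Rough count: content tokens plus a small per-message overhead.
        return sum(count_tokens(m.get("content") or "") + 4 for m in msgs)

    budget = context_window - max_tokens  # leave room for the reply
    msgs = list(messages)
    # Preserve the leading system message; drop the oldest turn after it.
    while len(msgs) > 1 and total(msgs) > budget:
        del msgs[1]
    return msgs

messages = [
    {"role": "system", "content": "You run code."},
    {"role": "user", "content": "a b c d e"},
    {"role": "user", "content": "f g"},
]
trimmed = trim_messages(messages, context_window=20, max_tokens=5)
```

Subtracting `max_tokens` from the window up front reserves space for the model's response, which is what makes this safe to call right before each completion request.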

Impact

  • Breaking: non-OpenAI providers and local models are no longer supported; only OpenAI GPT-5 is available
  • Requires OPENAI_API_KEY; Azure/custom api_base paths are removed
  • Behavior is now deterministic around a single provider/model; fewer failure modes

Testing

  • Locked dependencies with Poetry; built the wheel and installed it into a clean venv; CLI --help shows only -y/--yes
  • A basic run exercised both the streaming and non-streaming paths (fallback when streaming is not allowed)

Notes

  • No remaining references to litellm, Azure, HF, or local model logic
  • Documentation updated to reflect single-model policy and setup

₍ᐢ•(ܫ)•ᐢ₎ Generated by Capy

…ip non-OpenAI paths and flags; update deps to latest stable; remove prompts/extras to keep core only
@shouryamaanjain added the capy (PR created by Capy) label on Sep 8, 2025
…allback; use rich print to render Markdown properly
… function-call handling; fix Markdown rendering via rich.print
…e SSE; keep manager as fallback; ensure immediate UI updates
…clines; default language fallback; keep true SSE streaming path