Easier OpenAI wraps the official OpenAI Python SDK so you can drive modern assistants, manage tool selection, search files, and work with speech from one helper package -- all with minimal boilerplate.
- High-level `Assistant` wrapper with conversation memory, tool toggles, and streaming helpers.
- Temporary vector store ingestion to ground answers in local notes or documents.
- Text-to-speech and speech-to-text bridges designed for quick experiments or internal tooling.
- Built-in helper for defining and executing OpenAI function tools without leaving Python.
- Lazy module loading so `import easier_openai` stays fast even as optional helpers expand.
- Type hints and comprehensive inline docstrings across the project for easier discovery.
```bash
pip install easier-openai
```

Optional extras:

```bash
pip install "easier-openai[function_tools]"  # decorator helpers
pip install "easier-openai[speech_models]"   # whisper speech recognition models
```

Set the `OPENAI_API_KEY` environment variable or pass `api_key` directly when instantiating `Assistant`.
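For example, in a POSIX shell (the key value below is a placeholder):

```shell
export OPENAI_API_KEY="sk-your-key-here"
```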
```python
from easier_openai import Assistant

assistant = Assistant(model="gpt-4o-mini", system_prompt="You are concise.")
response_text = assistant.chat("Summarize Rayleigh scattering in one sentence.")
print(response_text)
```

Sped-up video (actual time ~1 min):

quickstart.mp4
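Under the hood, a chat wrapper with conversation memory follows a simple pattern: keep a message history, append each prompt and reply, and send the whole history on every call. An illustrative sketch with a stub client (not easier_openai's implementation):

```python
class StubClient:
    """Stands in for the real OpenAI SDK so the sketch runs offline."""
    def complete(self, messages):
        return f"echo:{messages[-1]['content']}"

class MiniAssistant:
    # Illustrative only: shows the memory pattern, not easier_openai's code.
    def __init__(self, client, system_prompt=""):
        self.client = client
        self.history = [{"role": "system", "content": system_prompt}]

    def chat(self, prompt):
        self.history.append({"role": "user", "content": prompt})
        reply = self.client.complete(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

Because the full history travels with every request, follow-up prompts can refer back to earlier turns without any extra bookkeeping by the caller.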
Use `Assistant.openai_function` to convert regular functions into structured tool definitions and hand them to `chat`:

```python
from easier_openai import Assistant

assistant = Assistant()

@assistant.openai_function
def look_up_fact(topic: str) -> dict:
    """Return a knowledge base lookup result for the given topic."""
    return {"topic": topic}

assistant.chat(
    "Tell me about the ozone layer using the fact tool.",
    custom_tools=[look_up_fact],
)
```

Responses can also be streamed token by token:

```python
stream = assistant.chat(
    "Summarise launch blockers for the robotics demo",
    custom_tools=[look_up_fact],
    text_stream=True,
)
for delta in stream:
    if delta == "done":
        break
    print(delta, end="", flush=True)
```

Ground answers in local files with temporary vector store ingestion:

```python
notes = ["notes/overview.md", "notes/data-sheet.pdf"]
reply = assistant.chat(
    "Highlight key risks from the attached docs",
    file_search=notes,
    tools_required="auto",
)
print(reply)
```

Generate audio output directly from assistant responses:

```python
assistant.full_text_to_speech(
    "Ship a status update that sounds upbeat",
    model="gpt-4o-mini-tts",
    voice="alloy",
    play=True,
)
```

`full_text_to_speech` accepts the same keyword arguments as `chat`, so you can pass `custom_tools`, `file_search`, or `web_search` before the reply is spoken.
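A decorator like `openai_function` presumably derives a JSON-schema tool definition from the function's signature and docstring. A rough, stdlib-only sketch of that derivation (an assumption about the approach, not the package's actual logic):

```python
import inspect
from typing import get_type_hints

# Minimal Python -> JSON Schema type mapping for the sketch.
_JSON_TYPES = {str: "string", int: "integer", float: "number",
               bool: "boolean", dict: "object", list: "array"}

def tool_definition(func):
    """Build an OpenAI-style function-tool spec from a Python callable."""
    hints = get_type_hints(func)
    hints.pop("return", None)
    params = inspect.signature(func).parameters
    properties = {
        name: {"type": _JSON_TYPES.get(hints.get(name, str), "string")}
        for name in params
    }
    return {
        "type": "function",
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {
            "type": "object",
            "properties": properties,
            # Parameters without defaults are treated as required.
            "required": [n for n, p in params.items()
                         if p.default is inspect.Parameter.empty],
        },
    }
```

Applied to the `look_up_fact` example above, this would emit a spec with name `look_up_fact`, the docstring as its description, and `topic` as a required string parameter.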
Or capture short dictated prompts without leaving the terminal:

```python
transcript = assistant.speech_to_text(mode="vad", model="base.en")
print(transcript)
```

`Openai_Images` extends the assistant with helpers that accept URLs, file paths, or base64 payloads and normalise them for the Images API:

```python
from easier_openai import Openai_Images

image_client = Openai_Images("samples/promenade.jpg")
# Generated metadata is stored on image_client.image for re-use in calls.
```

Key `Assistant` configuration options:

- `model`: Default model used for chat, tool calls, and reasoning workflows. Choosing a realtime model (for example `gpt-4o-realtime-preview`) automatically routes `Assistant.chat` through the Realtime API for text-only prompts; install `openai[realtime]` to satisfy its websocket dependency. Tool execution and the `stream` flag are ignored while a realtime model is active.
- `system_prompt`: Injected once per conversation to shape assistant behaviour.
- `reasoning_effort` and `summary_length`: Fine-tune reasoning models via the official API semantics.
- `temperature`: Pass-through value mapped to OpenAI responses for deterministic vs. creative answers.
- `function_call_list`: Pre-register decorated tool callables that should accompany every `chat` request.
- `default_conversation`: Set to `False` if you prefer to supply conversation IDs manually.
- `mass_update(**kwargs)`: Bulk-update configuration attributes using keyword arguments validated by type hints.
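The type-hint validation that `mass_update` performs can be sketched with `typing.get_type_hints` over a small stand-in class (illustrative only, not the package's code):

```python
from typing import get_type_hints

class Config:
    # Class-level annotations double as the validation schema.
    model: str = "gpt-4o-mini"
    temperature: float = 1.0

    def mass_update(self, **kwargs):
        """Set each attribute after checking it against the class's type hints."""
        hints = get_type_hints(type(self))
        for name, value in kwargs.items():
            if name not in hints:
                raise AttributeError(f"unknown option {name!r}")
            if not isinstance(value, hints[name]):
                raise TypeError(f"{name} expects {hints[name].__name__}")
            setattr(self, name, value)
```

With this shape, a typo'd option name or a wrongly typed value fails loudly at the call site instead of silently corrupting configuration.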
```python
assistant.mass_update(model="gpt-4o-mini", temperature=0.2)
assistant.mass_update(reasoning_effort="high", summary_length="concise")
```

- Every public function and class ships with contextual docstrings to make the codebase self-documenting.
- The repository includes unit tests under `tests/` that exercise tool-calling flows; run them with `pytest`.
- Generated artifacts in `build/` mirror the source package and inherit the same documentation updates.
- Issues and pull requests are welcome; please run checks locally before submitting changes.
Licensed under the Apache License 2.0.