a better way to track LLM usage #4311

@beastoin

Description

Currently, we track LLM usage solely through the OpenAI dashboard (we use OpenAI's models because we believe they currently deliver the most value for Omi users' use cases).

The problem is that there is no way to monitor usage by feature. If a new feature drives high usage, or an existing feature is abused after a bad code merge, the issue is hard to detect and fix, and our OpenAI bill could explode.

That’s why we need to track LLM usage at the feature level: which features are in the top 3 for LLM usage, and by how much?

To find the relevant code, search the backend folder for the keywords utils llm and processing conversation.
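One possible shape for this (a minimal sketch only; the tracker class, feature names, and token counts below are hypothetical illustrations, not Omi's actual code) is an in-process accumulator that tags each LLM call with the feature that made it, so the top consumers can be reported. In a real integration the token counts would come from the OpenAI response's usage field (prompt_tokens / completion_tokens).

```python
from collections import defaultdict

class LLMUsageTracker:
    """Accumulates token usage per feature so top consumers can be reported."""

    def __init__(self):
        self._usage = defaultdict(
            lambda: {"prompt_tokens": 0, "completion_tokens": 0, "calls": 0}
        )

    def record(self, feature: str, prompt_tokens: int, completion_tokens: int) -> None:
        """Tag one LLM call with the feature that issued it."""
        entry = self._usage[feature]
        entry["prompt_tokens"] += prompt_tokens
        entry["completion_tokens"] += completion_tokens
        entry["calls"] += 1

    def top_features(self, n: int = 3):
        """Return the n features with the highest total token usage."""
        totals = {
            feature: entry["prompt_tokens"] + entry["completion_tokens"]
            for feature, entry in self._usage.items()
        }
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

tracker = LLMUsageTracker()
# Hypothetical feature names and counts; real values would come from
# each OpenAI response's `usage` field at the call site.
tracker.record("process_conversation", 1200, 400)
tracker.record("chat", 300, 150)
tracker.record("memories", 90, 40)
tracker.record("process_conversation", 800, 300)
print(tracker.top_features(3))
# → [('process_conversation', 2700), ('chat', 450), ('memories', 130)]
```

A production version would likely flush these counters to a metrics backend rather than keep them in memory, but the key design point is the same: every LLM call site passes a feature label, so the "top 3 features by usage" question becomes a simple aggregation.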

Metadata

    Labels: backend (Backend Task, python)