
infs Roadmap

This document tracks the development roadmap for infs, a provider-agnostic CLI for running AI models from multiple providers through one consistent interface.

Completed

  • fal.ai execution — Image generation via async queue API
  • Replicate execution — Image generation via prediction polling API
  • WaveSpeed AI execution — Image and video generation
  • OS keychain integration — Credentials stored securely in the OS keychain via the keyring crate (falls back to credentials.toml when a keychain is unavailable)
  • --json output flag — Machine-readable JSON output for scripting and automation (infs --json ...)
  • Shell completion scripts — Generate completions for bash, zsh, fish, PowerShell, and elvish (infs completions <shell>)
  • Retry logic with exponential backoff — Automatically retries transient network errors and HTTP 5xx responses with capped exponential backoff
  • Streaming LLM responses — Stream tokens as they are generated instead of waiting for the full response (--stream flag)
  • Paginated model listing — Handle providers with very large model catalogs via pagination (--page and --per-page flags)
  • File output for image generation — Automatically download and save generated images to a local file (--output flag)
  • File input support — Pass local files (images, audio, etc.) as input to multimodal models (--file flag)
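The retry item above describes capped exponential backoff for transient network errors and HTTP 5xx responses. As an illustration only (not the actual infs implementation), a minimal sketch of such a delay schedule in Rust might look like:

```rust
use std::time::Duration;

/// Delay before retry number `attempt` (0-based): double a base delay
/// each attempt, never exceeding `max`. Saturating arithmetic avoids
/// overflow for large attempt counts.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    base.saturating_mul(2u32.saturating_pow(attempt)).min(max)
}

fn main() {
    let base = Duration::from_millis(500);
    let max = Duration::from_secs(8);
    for attempt in 0..6 {
        // 500ms, 1s, 2s, 4s, 8s, 8s (capped)
        println!("attempt {attempt}: wait {:?}", backoff_delay(attempt, base, max));
    }
}
```

Real implementations often also add random jitter to the delay so that many clients retrying at once do not hit the provider in lockstep.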

Planned

  • More providers — ElevenLabs (audio), Stability AI (image), and others
  • OAuth support — Support providers that use OAuth-based authentication flows

Contributing

Have a feature suggestion or want to work on one of the planned items? See CONTRIBUTING.md for guidelines on how to contribute.