Merged
44 changes: 18 additions & 26 deletions README.md
@@ -17,25 +17,26 @@ LIRA is a **CLI-first, developer-friendly tool**: run and serve ASR models locally

- **Python 3.10** is required.
- We recommend using **conda** for environment management.
- For the RyzenAI NPU flow, follow the [RyzenAI installation instructions](https://ryzenai.docs.amd.com/en/latest/inst.html) and verify drivers/runtime for your device. Ensure that you have a Ryzen AI 300 Series machine to enable NPU use cases.
- Current recommended Ryzen AI Version: RAI 1.5.1 with 32.0.203.280 driver

**Minimal install steps:**

1. **Clone the repo and change directory:**
   ```bash
   git clone https://github.com/aigdat/LIRA.git
   cd LIRA
   ```

2. **Activate your conda environment:**
   ```bash
   conda activate ryzen-ai-1.5.0
   ```

3. **Install LIRA in editable mode:**
   ```bash
   pip install -e .
   ```

Now you can run `lira --help` to see available commands.

@@ -74,13 +75,13 @@ LIRA includes a FastAPI-based HTTP server for rapid integration with your applications
**Start the server:**

- **CPU acceleration:**
  ```bash
  lira serve --backend openai --model whisper-base --device cpu --host 0.0.0.0 --port 5000
  ```
- **NPU acceleration:**
  ```bash
  lira serve --backend openai --model whisper-base --device npu --host 0.0.0.0 --port 5000
  ```
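Once the server is running, clients can POST audio to it for transcription. The exact route below is an assumption, not taken from this repo: an OpenAI-compatible backend typically exposes `/v1/audio/transcriptions`, but check `lira serve --help` or the server docs for the real paths. A minimal, standard-library-only sketch for building the request URL:

```python
from urllib.parse import urlunsplit

def transcription_url(host: str, port: int,
                      route: str = "/v1/audio/transcriptions") -> str:
    """Build the server URL for a transcription request.

    The default route is an assumption based on the OpenAI-compatible
    backend flag; verify the actual endpoint your LIRA server exposes.
    """
    return urlunsplit(("http", f"{host}:{port}", route, "", ""))

# Example: a client would POST multipart audio data to this URL, e.g. with
# urllib.request.Request(transcription_url("localhost", 5000), data=..., method="POST")
print(transcription_url("localhost", 5000))
# → http://localhost:5000/v1/audio/transcriptions
```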

> Interested in more server features?
> Try the **LIRA server demo** with Open WebUI.
@@ -179,15 +180,6 @@ LIRA supports multiple speech-model architectures. Runtime support depends on th

<sub>*NPU support depends on available Vitis AI export artifacts and target hardware.</sub>

---

## 📚 Datasets & Examples

- `datasets/LibriSpeech` contains sample audio and transcripts for quick testing.
Replace or augment with your own data for benchmarking.

---

## 🧪 Early Access & Open Source Intentions

LIRA is released as an open, community-driven project.
122 changes: 0 additions & 122 deletions evaluate/evaluate_models.py

This file was deleted.

155 changes: 0 additions & 155 deletions main.py

This file was deleted.

2 changes: 1 addition & 1 deletion setup.py
@@ -14,7 +14,7 @@
"torch",
"torchaudio",
"sounddevice",
"transformers==4.38.0",
"transformers==4.52.4",
"soundfile",
"gradio",
"jiwer",