πŸš€ A CLI tool that uses AI to help developers organize thoughts and streamline workflows. πŸ”’ Secure and fun, it integrates smoothly into your setup, managing ideas and projects efficiently with Retrieval-Augmented Generation (RAG) and LLMs. πŸ’‘ Boost your productivity with this local AI assistant.


sesh-cli

A secure CLI brainstorming assistant and productivity manager powered by local and cloud LLMs

Explore the docs Β· Report Bug Β· Request Feature
Table of Contents
  1. About The Project
  2. Installation
  3. Usage
  4. Architecture
  5. Structure
  6. Tasks
  7. Contributing
  8. License
  9. Contact
  10. Acknowledgments

About The Project

Sesh is a CLI tool designed to help developers organize thoughts and streamline workflows with the power of AI. It combines security and simplicity, offering a lightweight brainstorming assistant that integrates smoothly into your development setup.

With Retrieval-Augmented Generation (RAG) and LLMs, Sesh lets you manage ideas, projects, and sensitive data while staying productive. It supports both local models via Ollama and cloud models via the OpenAI API, giving you full control over where your data goes.

Key capabilities include:

  • AI Chat with RAG - Conversational AI augmented with your own documents via ChromaDB vector search
  • Document Import - Ingest PDFs, DOCX, CSVs, images, Python files, URLs, and directories
  • Habit System - Define persistent prompt augmentations (e.g., confidence scoring, topic tagging) that shape every response
  • Journal & Notes - Create, search, and manage notes with full-text search powered by Whoosh
  • Conversation Management - Save, load, trim, search, and export conversations
  • Plugin System - Extend functionality with custom command plugins discovered at runtime
  • Speech & TTS - Speech-to-text (Whisper) and text-to-speech (Kokoro) via the Linguist sub-package
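The habit system is the least familiar of these: a habit is simply a persistent instruction appended to every prompt before it reaches the LLM. A minimal sketch of the idea (illustrative only; Sesh's actual Habits class in src/models/habits.py may differ):

```python
# Toy sketch of prompt-augmentation "habits": each active habit is a
# persistent instruction appended to every outgoing prompt.
# (Illustrative only -- not Sesh's real implementation.)

def augment(prompt: str, habits: dict[str, bool]) -> str:
    """Append every active habit's instruction to the user prompt."""
    active = [text for text, enabled in habits.items() if enabled]
    if not active:
        return prompt
    return prompt + "\n\n" + "\n".join(f"- {text}" for text in active)

habits = {
    "Rate your confidence in each answer from 0-10.": True,
    "Tag the response with its main topic.": True,
    "Respond only in French.": False,
}

print(augment("How do I squash git commits?", habits))
```

Toggling a habit off removes its instruction from future prompts without deleting its definition, which is why the feature list calls them "persistent."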

Built With

Python Ollama OpenAI LangChain

(back to top)

Installation

Prerequisites

Confirm prerequisites:

git --version && python --version && pip --version

Clone the repository with submodules:

git clone --recurse-submodules https://github.com/montymi/sesh-cli.git && cd sesh-cli

Setup

Create and activate a virtual environment:

On Unix/macOS:

python -m venv venv
source venv/bin/activate

On Windows:

python -m venv venv
venv\Scripts\activate

Install dependencies:

pip install -r requirements.txt

Configuration

Sesh reads settings from a config.ini file in the project root. The default configuration uses Ollama with llama3.1:latest:

[settings]
debug = false
clerk = ollama          # or "gpt" for OpenAI
librarian = file
llm = llama3.1:latest

[library.file]
data = library

[library.mongo]
url =
username =
password =

[keys]
openai =                # required if clerk = gpt

Set clerk = gpt and provide your OpenAI API key under [keys] to use OpenAI models instead of Ollama.
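Under the hood this is a standard INI file. A sketch of how such settings can be read with Python's configparser (Sesh's own reader lives in src/models/app.py and may differ; note that inline `#` comments like those shown above are only stripped when inline_comment_prefixes is set):

```python
# Sketch of loading the settings above with the standard library.
# (Sesh's actual config reader in src/models/app.py may differ.)
import configparser

# inline_comment_prefixes is needed for trailing "# ..." comments
# like the ones shown in the example config.ini above.
config = configparser.ConfigParser(inline_comment_prefixes=("#",))
config.read_string("""
[settings]
debug = false
clerk = ollama          # or "gpt" for OpenAI
librarian = file
llm = llama3.1:latest

[keys]
openai =
""")

debug = config.getboolean("settings", "debug")
clerk = config.get("settings", "clerk")
use_openai = clerk == "gpt" and bool(config.get("keys", "openai"))
```

In a real run you would call `config.read("config.ini")` instead of `read_string`.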

(back to top)

Usage

Getting Started

Navigate to the source directory and run:

cd src
python main.py

On startup, Sesh will:

  1. Display available Ollama models (or connect to OpenAI)
  2. Prompt you to select a model
  3. Show saved conversations and let you resume one or start fresh

Type your questions at the >>> prompt. Sesh performs a similarity search on your embedded documents to provide relevant context with every response.
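To make that retrieval step concrete: Sesh ranks your embedded documents by similarity to the query and feeds the best matches to the LLM as context. Sesh itself uses ChromaDB vector embeddings; this standard-library toy substitutes bag-of-words cosine similarity so it runs anywhere:

```python
# Toy illustration of the retrieval step behind RAG: rank stored documents
# by similarity to the query and return the best matches as context.
# (Sesh uses ChromaDB vector embeddings; this sketch uses bag-of-words
# cosine similarity so it needs only the standard library.)
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = tokens(query)
    ranked = sorted(docs, key=lambda d: cosine(q, tokens(d)), reverse=True)
    return ranked[:k]

docs = [
    "Habits are prompt augmentations applied to every response.",
    "Conversations can be saved, loaded, and exported.",
    "Documents are imported into the ChromaDB vector store.",
]
print(retrieve("can I export my conversations", docs, k=1))
```

Real vector search replaces the word counts with learned embeddings, which is what lets it match on meaning rather than shared tokens.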

Services

Type any of these commands at the >>> prompt (with autocomplete):

Command        Description
help           Display assistant introduction
habits         Add, toggle, or manage prompt augmentation habits
import         Import documents (PDF, DOCX, CSV, images, URLs, directories) into the vector store
export         Export conversations to a directory
notes          Create, read, update, delete, and search journal notes
conversation   Trim, clear, save, load, delete, or search conversations
exit           Exit the application

Custom plugins placed in resources/plugins/ are automatically discovered and registered as additional commands.

(back to top)

Architecture

Sesh follows an MVC pattern with a plugin system:

User Input (prompt_toolkit)
  --> AppController (boot & orchestration)
    --> ClerkController (conversation loop)
      --> ServiceController (command dispatch + plugin discovery)
      --> Clerk (LLM wrapper: Ollama or GPT)
        --> Librarian (RAG pipeline, vector store, importers, journal)
        --> Habits (prompt augmentation)
    --> CLI View (presentation)

Core flow: User input is first checked against registered service commands. If no match, it becomes a chat message. The Clerk performs a similarity search on ChromaDB for RAG context, appends active habit prompts, and invokes the LLM. The response is displayed along with source documents.
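The first branch of that flow, deciding whether input is a service command or a chat message, can be sketched as follows (names are illustrative, not Sesh's actual classes):

```python
# Sketch of the dispatch step described above: registered service commands
# are tried first; anything that does not match falls through to chat.
# (Illustrative names -- Sesh's ServiceController may differ.)
from typing import Callable

commands: dict[str, Callable[[], str]] = {
    "help": lambda: "Sesh: a CLI brainstorming assistant.",
    "exit": lambda: "bye",
}

def dispatch(entry: str, chat: Callable[[str], str]) -> str:
    stripped = entry.strip()
    word = stripped.split(maxsplit=1)[0].lower() if stripped else ""
    if word in commands:
        return commands[word]()   # matched a registered service command
    return chat(entry)            # no match: treat it as a chat message

print(dispatch("help", chat=lambda msg: f"LLM: {msg}"))
print(dispatch("what is RAG?", chat=lambda msg: f"LLM: {msg}"))
```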

Plugin system: PluginManager and ImporterManager dynamically discover Command and Importer subclasses by scanning directories at runtime using importlib and inspect.
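A self-contained sketch of that discovery pattern (the Command base class and plugin layout here are illustrative assumptions, not Sesh's exact API):

```python
# Sketch of runtime plugin discovery with importlib and inspect, in the
# spirit of Sesh's PluginManager. (The Command base class and plugin
# layout here are illustrative assumptions.)
import importlib.util
import inspect
import pathlib
import tempfile

class Command:
    """Base class every plugin command subclasses."""
    name = ""
    def run(self) -> str: ...

def discover(plugin_dir: pathlib.Path) -> dict[str, Command]:
    found: dict[str, Command] = {}
    for path in plugin_dir.glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        # Make the host's Command class visible to the plugin source.
        module.Command = Command
        spec.loader.exec_module(module)
        for _, cls in inspect.getmembers(module, inspect.isclass):
            if issubclass(cls, Command) and cls is not Command:
                found[cls.name] = cls()
    return found

# Demo: write a plugin to a temp dir, then discover it.
with tempfile.TemporaryDirectory() as tmp:
    plugin = pathlib.Path(tmp) / "hello.py"
    plugin.write_text(
        "class HelloCommand(Command):\n"
        "    name = 'hello'\n"
        "    def run(self):\n"
        "        return 'Hello from a plugin!'\n"
    )
    cmds = discover(pathlib.Path(tmp))
    print(cmds["hello"].run())
```

Dropping a file with a Command subclass into resources/plugins/ is all a user needs to do; the scan at startup takes care of registration.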

(back to top)

Structure

config.ini              # Application settings (clerk type, LLM, paths, API keys)
requirements.txt        # Python dependencies
LICENSE.txt             # GPL-3.0 license
docs/
  designs/              # PlantUML design diagrams
    models.wsd
    tiers.wsd
library/                # Runtime data directory (created automatically)
  habits.json           # Active/inactive habit definitions
  resources/
    conversations/      # Saved conversation .conv files
    journal/            # Notes as JSON + Whoosh search index
    plugins/            # Custom command plugins (auto-discovered)
    vectors/            # ChromaDB vector store
src/
  main.py               # Entry point
  controllers/
    appcontroller.py    # Top-level orchestrator (boot, model init, run loop)
    clerkcontroller.py  # Conversation loop (entry -> response -> context)
    servicecontroller.py # Command dispatch + plugin discovery
    libcontroller.py    # Conversation persistence
    usercontroller.py   # Login/register (WIP)
  models/
    app.py              # Config reader (config.ini)
    clerk.py            # AI chat model (OllamaClerk, GPTClerk)
    librarian.py        # RAG pipeline, storage, embeddings, importers
    commands.py         # Built-in service commands (help, exit, habits, etc.)
    habits.py           # Prompt augmentation system
    journal.py          # Note CRUD with Whoosh full-text search
    managers.py         # Plugin & importer dynamic discovery
    user.py             # User model (MongoDB, WIP)
    DBlibrarian.py      # MongoDB librarian variant (WIP)
    importers/
      importer.py       # Importer ABC
      CSVImporter.py    # CSV document loader
      PDFImporter.py    # PDF document loader
      DocxImporter.py   # DOCX document loader
      ImageImporter.py  # Image document loader
      TextImporter.py   # Plain text loader
      PythonImporter.py # Python source loader
      URLImporter.py    # Web URL loader
      DirectoryImporter.py          # Directory loader
      RecursiveDirectoryImporter.py # Recursive directory loader
  views/
    cli.py              # CLI view (prompt_toolkit)
packages/
  linguist/             # Git submodule: speech-to-text & TTS
    src/
      controller.py     # Linguist controller
      commands.py       # Speech commands (speak, listen, transcribe)
      models/
        linguist.py     # Whisper STT integration
        microphone.py   # Audio recording
      packages/
        tts/            # Kokoro text-to-speech engine
      views/            # Abstract/CLI/GUI/headless views
sandbox/                # Experimental prototypes (not part of main app)

(back to top)

Tasks

  • Remove debug pdb.set_trace() from src/main.py
  • Fix self.user reference error in src/controllers/clerkcontroller.py
  • Deduplicate packages/linguist/ and src/packages/linguist/
  • Fix mutable default argument in Clerk.chat() (history=[])
  • Wire up UserController for login/register flow
  • Add CI/CD testing for deployment to main
  • Package and publish to PyPI

See the open issues for a full list of known problems and proposed features.
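For contributors, the mutable-default task above refers to a classic Python pitfall: a history=[] default is created once at function definition and then shared across every call. A minimal illustration of the bug and the conventional fix (Clerk.chat()'s real signature may differ):

```python
# The mutable-default pitfall noted in the task list: the default list is
# created once at definition time and shared between calls.
def chat_buggy(message, history=[]):
    history.append(message)
    return history

chat_buggy("first")
print(chat_buggy("second"))   # ['first', 'second'] -- leaked state!

# Conventional fix: default to None and create a fresh list per call.
def chat_fixed(message, history=None):
    if history is None:
        history = []
    history.append(message)
    return history

chat_fixed("first")
print(chat_fixed("second"))   # ['second'] -- no shared state
```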

(back to top)

Contributing

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the GPL-3.0 License. See LICENSE.txt for more information.


Contact

Michael Montanaro

LinkedIn GitHub


Acknowledgments

(back to top)
