Semext aims to provide LLM-powered versions of Emacs commands in ways that feel natural to Emacs users. The goal is to be as Emacs-like as possible.
Demo: searching semantically with semext-search-forward.
Demo: using semext-query-replace.
(use-package semext
  :vc (:fetcher github :repo "ahyatt/semext")
  :init
  ;; Replace the provider with whatever you want, see https://github.com/ahyatt/llm
  ;; make-llm-ollama is defined in llm-ollama, so load it first.
  (require 'llm-ollama)
  (setopt semext-provider (make-llm-ollama :chat-model "gemma3:1b")))

This library integrates Emacs with LLMs to provide functionality close to what Emacs already has, but more flexible and powerful because it is driven by LLMs. While it is powerful and in many cases much easier to use than the non-LLM-powered equivalents, the downside is that it is also slower and less predictable.
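Any provider supported by the llm library can be plugged in. As a hedged sketch only (the model name and API key below are placeholders, not recommendations from semext), a hosted provider such as OpenAI could be configured like this:

;; Assumes the llm package's OpenAI support is installed and you have an API key.
(require 'llm-openai)
(setopt semext-provider
        (make-llm-openai :key "YOUR-OPENAI-KEY" :chat-model "gpt-4o"))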
The following commands are supported:
- semext-forward-part, semext-backward-part: Similar in spirit to forward-page and backward-page, these commands ask the LLM to identify significant semantic parts of the buffer (like functions, sections, or important paragraphs) and navigate between them. The parts are computed on demand across the entire buffer each time the command is run.
- semext-query-replace: Similar to query-replace, this finds text matching a semantic description provided by the user and interactively asks whether to replace it based on a replacement description. It processes the entire buffer before starting the interactive replacement.
- semext-search-forward, semext-search-backward: Similar to search-forward, these find the next or previous instance of text matching a semantic description provided by the user. They process the entire buffer to find all occurrences before jumping to the relevant one.
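These are ordinary interactive commands, so they can be bound to keys like their built-in counterparts. The bindings below are only an illustration, not defaults shipped by semext:

;; Example keybindings; pick whatever prefix suits your setup.
(global-set-key (kbd "C-c s s") #'semext-search-forward)
(global-set-key (kbd "C-c s r") #'semext-search-backward)
(global-set-key (kbd "C-c s %") #'semext-query-replace)
(global-set-key (kbd "C-c s n") #'semext-forward-part)
(global-set-key (kbd "C-c s p") #'semext-backward-part)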
- Performance: The first time a command like semext-search-forward or semext-forward-part is run, it queries the LLM over the entire buffer (in chunks), which can be slow. Subsequent calls for the same operation (e.g., searching for the same text again, or moving to the next part) are much faster because they use a cache.
- Caching: Results are cached locally for each buffer. The cache is automatically invalidated if the text at a cached location changes. You can manually clear the cache with M-x semext-clear-cache.
- Consistency: LLMs operate on fixed-size context windows. While semext processes the buffer in overlapping chunks, there may still be inconsistencies in how the LLM interprets boundaries or context across very large distances in the buffer, especially during the initial LLM query.
- Provider Choice: An LLM provider capable of handling JSON responses is recommended (see semext-provider).
There are no mode requirements for any of these commands; they work in any major mode.
- semext-forward-part, semext-backward-part: Navigate between semantic parts identified by the LLM. Uses cache after first run.
- semext-query-replace: Semantic query-replace. Uses cache after first run for a specific query.
- semext-search-forward, semext-search-backward: Semantic search. Uses cache after first run for a specific query.
- semext-clear-cache: Manually clear the entire results cache for the current buffer.

