Ask an LLM questions while debugging — with prompts automatically enriched by your stack, variables, and source context.
It is like ShellGPT but for pdb, ipdb, Jupyter, the VS Code Debug Console, etc.
DO NOT USE THIS LIBRARY
“AI everywhere is rocket engines on a skateboard: a thrill that ends in wreckage. The planet pays in energy and emissions, and we pay in something subtler — the slow atrophy of our own intelligence, left idle while the machines do the heavy lifting.” ChatGPT
- 🐍 Generate Python debug commands from natural-language instructions.
- 🔍 Context-aware: prompt auto-includes call stack, local/global variable previews, current function source, and nearby code.
- 🤖 Supports multiple LLM providers: OpenAI, Anthropic, DeepSeek, Groq, Together AI, OpenRouter, Ollama
NOTE: In VS Code, you enter the function in the Debug Console, but the output appears in the Terminal; so keep both tabs (Debug Console and Terminal) side by side.
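The "context-aware" part can be pictured with the standard-library `inspect` module. The sketch below is illustrative only, not ldbg's actual implementation: `debug_context` is a hypothetical name, and it gathers the same kind of frame snapshot (function name, line, variable previews) that ldbg attaches to prompts.

```python
import inspect

def debug_context(depth=1):
    """Snapshot the calling frame: function name, line number,
    and short previews of its local variables."""
    frame_info = inspect.stack()[depth]
    local_previews = {k: repr(v)[:80] for k, v in frame_info.frame.f_locals.items()}
    return {
        "function": frame_info.function,
        "line": frame_info.lineno,
        "locals": local_previews,
    }

def buggy(x):
    y = x * 2
    return debug_context()  # captures buggy's frame: x and y previews

ctx = buggy(21)
```

`ctx["function"]` is `"buggy"` and `ctx["locals"]` holds previews of `x` and `y`; a real tool would serialize this into the LLM prompt.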
```
uv add ldbg
# or
pixi add --pypi ldbg
# or
pip install ldbg
```
- "Describe my numpy arrays"
- "plot my_data['b'] as a histogram"
- "give me an example pandas dataframe about employees"
- "generate a 3x12x16 example Pillow image from a numpy array"
- "convert this Pillow image to grayscale"
- "open this 'image.ome.tiff' with bioio"
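For the histogram prompt above, the generated command would typically look like the following. This is a plausible sketch, not ldbg's actual output, and the shape of `my_data` (a dict with a numeric list under `'b'`) is assumed; the `Agg` backend is used so it runs headless.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this in an interactive session
import matplotlib.pyplot as plt

my_data = {"b": [1, 2, 2, 3, 3, 3]}  # assumed example input

counts, _, _ = plt.hist(my_data["b"], bins=3)  # one bin per distinct value here
plt.savefig("histogram.png")  # plt.show() in an interactive session
```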
>>> unknown_data = np.arange(9)
>>> example_dict = {"a": 1, "b": [1, 2, 3]}
>>> example_numbers = list(range(10))
>>> import ldbg
>>> ldbg.gc("describe unknown_data")
The model "gpt-5-mini-2025-08-07" says:
unknown_data is a numpy array which can be described with the following pandas code:
```
pandas.DataFrame(unknown_data).describe()
```
Note: you can use numpy.set_printoptions (or a library like numpyprint) to pretty print your array:
```
with np.printoptions(precision=2, suppress=True, threshold=5):
    unknown_data
```
Would you like to execute the following code block:
pandas.DataFrame(unknown_data).describe()
(y/n)
User enters y:
0
count 9.000000
mean 4.000000
std 2.738613
min 0.000000
25% 2.000000
50% 4.000000
75% 6.000000
max 8.000000
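(The summary above is ordinary pandas output; the suggested snippet reproduces it standalone outside the debugger:)

```python
import numpy as np
import pandas as pd

unknown_data = np.arange(9)
summary = pd.DataFrame(unknown_data).describe()
print(summary)
# count 9, mean 4.0, std ~2.74, min 0, max 8
```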
Would you like to execute the following code block:
with np.printoptions(precision=2, suppress=True, threshold=5):
    unknown_data
(y/n)
User enters n and continues:
>>> ldbg.gc("plot example_numbers as a bar chart")
The model "gpt-5-mini-2025-08-07" says:
```
import matplotlib.pyplot as plt
plt.bar(range(len(example_numbers)), example_numbers)
plt.show()
```
Would you like to execute the following code block:
...

By default, llm-debug uses the OpenAI client, so it reads the OPENAI_API_KEY environment variable.
You can use any of the following LLM providers by setting the LDBG_API environment variable:

```
# OpenAI (default)
export OPENAI_API_KEY="your_api_key_here"

# DeepSeek
export LDBG_API="deepseek"
export DEEPSEEK_API_KEY="your_api_key_here"

# Anthropic
export LDBG_API="anthropic"
export ANTHROPIC_API_KEY="your_api_key_here"

# Groq
export LDBG_API="groq"
export GROQ_API_KEY="your_api_key_here"

# OpenRouter
export LDBG_API="openrouter"
export OPENROUTER_API_KEY="your_api_key_here"

# Together AI
export LDBG_API="together"
export TOGETHER_API_KEY="your_api_key_here"

# Ollama
export LDBG_API="ollama"
# No API key required for local Ollama installations
```

By default, each provider uses its recommended model. You can override this by passing the model parameter to ldbg.gc():
```
import ldbg
ldbg.gc("describe my_data", model="gpt-4-turbo")
ldbg.gc("describe my_data", model="deepseek-coder")
ldbg.gc("describe my_data", model="claude-3-opus-20240229")
```

MIT License.