A small, toy HR agent CLI that routes a user question to one of three supported analytics tools and prints a table result.
Supported questions:
- Highest average salary by region
- Top performers in each department
- Most tenured employees in each department
Safeguard: If a question does not match any supported tool, the agent returns NONE and prints a friendly message listing the supported question types (no tool is executed).
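The safeguard amounts to a tool-name allowlist with a NONE sentinel. A minimal sketch of the idea (the tool names, `route`, and `answer` here are illustrative, not the project's actual API):

```python
# Hypothetical tool names for the three supported questions.
SUPPORTED = {
    "avg_salary_by_region",
    "top_performers_by_department",
    "most_tenured_by_department",
}

def route(tool_choice: str) -> str:
    """Return the tool name if supported, otherwise the NONE sentinel."""
    return tool_choice if tool_choice in SUPPORTED else "NONE"

def answer(tool_choice: str) -> str:
    """Execute nothing on NONE; list the supported question types instead."""
    tool = route(tool_choice)
    if tool == "NONE":
        return (
            "Sorry, I can only answer these question types:\n- "
            + "\n- ".join(sorted(SUPPORTED))
        )
    return f"Ran tool: {tool}"
```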
Note on LLMs: Local small transformer models via Hugging Face transformers were unreliable and/or too heavy for my machine, so Ollama is the recommended default for free access to highly optimized local LLMs. OpenAI also works (paid, but nano costs pennies).
First, open the project root folder and create a virtual environment to install packages.
cd /path/to/project/root
python -m venv hr-agent-env
source hr-agent-env/bin/activate
pip install -e .
Go to https://ollama.com/ and download Ollama for highly optimized local LLM usage (100% free).
# Ollama example set up
ollama pull llama3.2:3b  # model of interest
ollama serve
Go to https://platform.openai.com/ and create an account, then go to https://platform.openai.com/api-keys and generate an API key.
Note: They used to provide free credits, but apparently not anymore.
# From /path/to/project/root
hr-agent
# Test with paraphrased and decoy queries via pytest
pip install -e ".[dev]"  # installs pytest
pytest tests/
Update configurations via environment variables as needed; see src/hr_agent/config.py for the default configurations.
# Path to CSV
export CLEAN_CSV="path/to/csv"

# Increase/decrease the number of rows in the table
export MAX_ROWS=20

# Change LLM provider (default="local")
export LLM_PROVIDER="local"

# If using Local
export LOCAL_MODEL="Qwen/Qwen1.5-0.5B-Chat"
export USE_CPU="true"

# If using Ollama
export OLLAMA_URL="http://127.0.0.1:11434"
export OLLAMA_MODEL="llama3.2:3b"

# If using OpenAI
export OPENAI_MODEL="gpt-5-nano"
export OPENAI_API_KEY=YOUROPENAIKEY
The agent only knows how to answer questions that map to a registered tool. Adding a new supported question/tool takes four small steps (not counting test cases):
Add a new method to HRAnalytics that returns a pandas.DataFrame.
Example: count remote workers:
def remote_worker_count(self) -> pd.DataFrame:
    df = self.store.get_df()
    out = (
        df.groupby("Remote_Work")["Employee_ID"]
        .count()
        .reset_index()
        .rename(columns={"Employee_ID": "Count"})
    )
    return out
Keep it simple: load the df → group/filter/sort → return a DataFrame.
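The groupby pattern above can be exercised standalone on a tiny DataFrame (the column names mirror the example; the real CSV schema may differ):

```python
import pandas as pd

# Tiny stand-in for the HR CSV; Employee_ID and Remote_Work follow the example.
df = pd.DataFrame({
    "Employee_ID": [1, 2, 3, 4],
    "Remote_Work": ["Yes", "No", "Yes", "Yes"],
})

# Same chain as remote_worker_count: group, count, flatten, rename.
out = (
    df.groupby("Remote_Work")["Employee_ID"]
    .count()
    .reset_index()
    .rename(columns={"Employee_ID": "Count"})
)
print(out)
```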
Create a small function that calls your analytics method:
def remote_worker_count(analytics: HRAnalytics):
    return analytics.remote_worker_count()
Add a new reg.register(...) entry:
reg.register(
    "remote_worker_count",
    "Counts employees by Remote_Work status.",
    remote_worker_count,
)
This makes the tool discoverable by the agent.
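If you are curious what the registry does under the hood, here is a minimal sketch: a name → (description, function) mapping. The project's actual ToolRegistry may differ in details; only the register/run shape is taken from this doc:

```python
from typing import Any, Callable, Dict, Tuple

class ToolRegistry:
    """Minimal sketch: maps a tool name to its description and callable."""

    def __init__(self) -> None:
        self._tools: Dict[str, Tuple[str, Callable]] = {}

    def register(self, name: str, description: str, fn: Callable) -> None:
        self._tools[name] = (description, fn)

    def run(self, name: str, *args: Any) -> Any:
        _, fn = self._tools[name]
        return fn(*args)

    def describe(self) -> str:
        # Could be used to build the "Allowed tools" list for the prompt.
        return "\n".join(f"- {n}: {d}" for n, (d, _) in self._tools.items())

reg = ToolRegistry()
reg.register(
    "remote_worker_count",
    "Counts employees by Remote_Work status.",
    lambda analytics: f"ran on {analytics}",  # stands in for the real tool
)
```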
Update the system prompt tool list so the model can choose it:
Add remote_worker_count to the “Allowed tools” list in SYSTEM_PROMPT.
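For orientation, the prompt has roughly this shape; the wording below is a sketch, and the tool names other than remote_worker_count are illustrative, not the project's actual identifiers:

```python
# Sketch of SYSTEM_PROMPT's shape; the project's actual wording differs.
SYSTEM_PROMPT = """You are an HR analytics router.
Pick exactly one tool name from the Allowed tools list, or answer NONE.

Allowed tools:
- avg_salary_by_region: Highest average salary by region.
- top_performers_by_department: Top performers in each department.
- most_tenured_by_department: Most tenured employees in each department.
- remote_worker_count: Counts employees by Remote_Work status.

Respond with the tool name only."""
```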
Then add one deterministic execution block, just like the others:
if tool == "remote_worker_count":
    table = self.tools.run(tool, self.analytics)
    return {
        "tool": tool,
        "answer": f"Ran tool: {tool}",
        "table": table,
        "llm_raw": raw,
    }
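Since every execution block has the same shape, a generic dispatcher is a possible alternative to one if-block per tool. A sketch (standalone here, with the registry and analytics passed in; the explicit per-tool blocks are fine too and easier to specialize later):

```python
def execute(tools, analytics, tool: str, raw: str) -> dict:
    """Run any registered tool and wrap the result in the agent's dict."""
    if tool == "NONE":
        return {"tool": tool, "answer": "Unsupported question.", "llm_raw": raw}
    table = tools.run(tool, analytics)
    return {
        "tool": tool,
        "answer": f"Ran tool: {tool}",
        "table": table,
        "llm_raw": raw,
    }
```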