
Agentic HR Chatbot – Intelligent HR Policy Assistant

The Agentic HR Chatbot is a next-generation, GenAI-powered HR assistant designed to deliver precise and context-aware guidance across organizational policies. Leveraging advanced natural language understanding, it analyzes user queries, discerns intent, and classifies questions as either informational (fact-based) or situational (context-driven).

The chatbot intelligently maps inquiries to predefined HR policies:

  1. Benevolence Policy
  2. Leave Policy
  3. Probation Policy
  4. Referral Policy
  5. Relocation Policy
  6. Travel Policy

Beyond policy insights, it integrates seamlessly with the Leave Management System, enabling users to apply for leave and receive actionable responses in real-time. Built as an agentic system, it empowers employees with dynamic, personalized HR guidance while ensuring compliance and operational efficiency.
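The informational/situational split described above can be pictured with a toy classifier. The real system uses an LLM for intent detection; the marker list below is purely illustrative:

```python
# Purely illustrative sketch -- the actual chatbot classifies intent with an
# LLM. This toy version only demonstrates the informational/situational split.
SITUATIONAL_MARKERS = ("my ", "can i", "should i", "what if", "i am", "i have")

def classify_query(question: str) -> str:
    """Label a query 'situational' (context-driven) or 'informational' (fact-based)."""
    q = question.lower().strip()
    if any(marker in q for marker in SITUATIONAL_MARKERS):
        return "situational"
    return "informational"

print(classify_query("How many casual leaves do employees get?"))   # informational
print(classify_query("Can I apply for relocation after probation?"))  # situational
```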


Table of Contents

  1. Technical Architecture
  2. MongoDB Vector Database Setup using Atlas CLI (Dockerized)
  3. Automated Notification Service Setup
  4. Knowledge Source Initial Setup
  5. Observability Layer Setup
  6. HR Chatbot Application Usage
  7. Future Enhancements

1. Technical Architecture

[Architecture diagram]

2. MongoDB Vector Database Setup using Atlas CLI (Dockerized)

This guide provides step-by-step instructions for setting up a MongoDB vector database using the Atlas CLI with Docker.

Note: Vector storage is only available in MongoDB Atlas (the cloud-managed version), not in the local open-source MongoDB server. Therefore, we use the Dockerized version of Atlas and manage deployments locally via the Atlas CLI.

Prerequisites

Before starting, ensure the following:

  • Windows system
  • Docker Desktop installed and running
  • Internet connection to download images
  • MongoDB Compass for GUI-based database management (optional)

Step 1: Install MongoDB Atlas CLI

Download and install the Atlas CLI binary for Windows from the official MongoDB documentation: Download Atlas CLI for Windows

Verify the installation:

atlas --version

Step 2: Set up a Local Atlas Deployment

MongoDB Atlas CLI provides two deployment options:

  • local – Local Docker-based MongoDB deployment
  • atlas – Cloud-hosted MongoDB Atlas deployment

We will use the local option, so follow these steps:

  1. Run the Atlas deployment setup command:
atlas deployments setup
  2. Select local as the deployment type.
  3. Choose custom deployment configuration.
  4. Specify the following parameters:
  • Deployment name: e.g., vector-mongodb-local
  • MongoDB version: 8.0 or above (required for vector storage)
  • Port: e.g., 59719
  5. Ensure Docker Desktop is running; the deployment will start automatically.

Note: Skip connection setup during the initial deployment. You can connect later using MongoDB Compass or the mongo shell (mongosh).

Step 3: Manage Deployment

Atlas CLI simplifies deployment management without requiring direct Docker commands. You can manage, start, stop, or delete deployments easily. Below are some frequently used commands.

  • List all deployments:
atlas deployments list
  • Start a deployment:
atlas deployments start <deployment_name>
  • Stop a deployment:
atlas deployments stop <deployment_name>

Step 4: Connect to MongoDB

Once your deployment is running, you have two options to connect:

4.1 Using MongoDB Compass

  1. Download MongoDB Compass from MongoDB Compass Download.
  2. Retrieve the connection string from the Atlas CLI:
atlas deployments connect <deployment_name> --connectWith connectionString
  3. Enter the connection string in Compass and connect to your running MongoDB instance.

4.2 Using Mongo Shell

Alternatively, you can connect to the MongoDB instance from the Mongo Shell using mongosh commands.


3. Automated Notification Service Setup

This module enables automated email notifications from the HR Chatbot system. Whenever a user applies for leave, an email is automatically sent to the user's gte approver.

Prerequisites

  • Git repository cloned locally
  • Gmail account with App Password generated for the sender email

Step 1: Encode Gmail App Password

  • In the root directory of the repository, locate the file:
encoding_pass.py
  • Open the file and add your Gmail App Password.
  • Run the encoding script to securely encode your password:
python encoding_pass.py

Note: This will generate an encoded version of your password for secure use in the application.
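The exact logic of encoding_pass.py is not reproduced here; as an assumption, a base64 round-trip like the following is a common minimal approach (note that base64 is obfuscation, not encryption):

```python
import base64

def encode_password(plain: str) -> str:
    """Base64-encode an app password.

    Assumption: encoding_pass.py does something similar; base64 only
    obfuscates the value, it does not encrypt it.
    """
    return base64.b64encode(plain.encode("utf-8")).decode("ascii")

def decode_password(encoded: str) -> str:
    """Reverse of encode_password, for use at send time."""
    return base64.b64decode(encoded.encode("ascii")).decode("utf-8")

encoded = encode_password("abcd efgh ijkl mnop")  # 16-char Gmail app password
assert decode_password(encoded) == "abcd efgh ijkl mnop"
print(encoded)
```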

Step 2: Update Preprocessing Script

  • Navigate to the preprocessing script: src/data_pipeline/preprocessing_script.py
  • Locate the send_email() function.
  • Update the email credentials with the encoded password:
sender_email = "<your_email>@gmail.com"
password = "<encoded_password>"

Once the credentials are configured, you are ready to move on to the next setup process.
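The actual send_email() in src/data_pipeline/preprocessing_script.py is not reproduced here; the sketch below shows one plausible shape using Python's standard smtplib. The helper names, message fields, and base64 decoding step are assumptions:

```python
import base64
import smtplib
from email.message import EmailMessage

def build_leave_notification(sender: str, approver: str, employee: str) -> EmailMessage:
    """Build the approver notification (field names are illustrative)."""
    msg = EmailMessage()
    msg["Subject"] = f"Leave request from {employee}"
    msg["From"] = sender
    msg["To"] = approver
    msg.set_content(f"{employee} has applied for leave via the HR Chatbot.")
    return msg

def send_email(msg: EmailMessage, encoded_password: str) -> None:
    """Send via Gmail SMTP, decoding the encoded app password first."""
    password = base64.b64decode(encoded_password).decode("utf-8")
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(msg["From"], password)
        server.send_message(msg)
```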


4. Knowledge Source Initial Setup

This guide walks you through the initial setup of the knowledge source in MongoDB, including data ingestion, Dockerization, and vectorization.

Prerequisites

  • Git
  • Docker Installed
  • MongoDB installed and running (ensure MongoDB service is up).

Step 1: Clone Repository

Clone the whole Git repository:

git clone <repository_url>
cd <repository_folder>

Step 2: Prepare Data

Inside the data folder, you will find two subfolders:

  • upload_pii_data
  • upload_static_policy_data

These folders contain the sample data files.

Note: You can add new policy files using the same filenames. If you modify the content of these files, do not change their filenames, as the agentic tool is built to work with these specific names.

If no changes are needed, you can ingest the same sample data into MongoDB.

Step 3: Build Docker Image

Build the Docker image using the following command:

docker build -t <docker_name> .

Note: Before building the Docker image, check that all the credentials are properly set in the creds.yaml file. It requires Azure LLM, Embedding, and LLAMA Parser credentials as well as MongoDB credentials. Place all the keys in this file before building the Docker image.

The Streamlit login credentials are available in src>steamlit_pipeline>login_creds.yaml. Two usernames are simulated: admin and non-admin. Make sure the user IDs of these usernames match the ingested pii_data present in data>upload_pii_data.

Step 4: Run Docker Container

In the docker-compose.yml file, make sure to reference the same Docker image name with which you built the image. Then run:

docker-compose up -d

This will start the application container.

Step 5: Initial Knowledge Setup

For initial setup, SSH into the running container:

ssh root@localhost -p 2222

Note: Password: rootpassword

Make sure your MongoDB service is up and running. You can check using MongoDB Compass or the CLI.

Step 6: Vectorize data and push it in MongoDB

Run the setup script to ingest and vectorize the data:

cd /app
python3 setup_src/setup_main.py

Once completed, you can verify the data in MongoDB using Compass. Two databases will be created:

  • hr_chatbot_pii_db (containing upload_pii_data)
  • hr_chatbot_db (containing upload_static_policy_data)
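setup_src/setup_main.py handles the actual vectorization; conceptually, retrieval over the vectorized policies reduces to nearest-neighbour search by cosine similarity, as in this toy sketch (the vectors below stand in for real embedding-model output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" standing in for the Azure embedding model's output.
policy_chunks = {
    "leave_policy": [0.9, 0.1, 0.0],
    "travel_policy": [0.1, 0.8, 0.2],
}
query_vec = [0.85, 0.15, 0.05]  # e.g. an embedded leave-related question

best = max(policy_chunks, key=lambda k: cosine_similarity(query_vec, policy_chunks[k]))
print(best)  # leave_policy
```

In the real system this nearest-neighbour step is performed by MongoDB Atlas vector search over the ingested collections rather than in Python.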

5. Observability Layer Setup – Grafana, Loki & Promtail

This setup enables observability for the HR Chatbot system by integrating Grafana, Loki, and Promtail to capture, visualize, and monitor application logs.

Prerequisites

  • Repository cloned locally
  • Docker & Docker Compose installed

Step 1: Navigate to Observability Folder

Inside the cloned repository, go to the observability folder:

cd grafana-loki-docker

Here you will find:

  • docker-compose.yml – service definitions for Grafana, Loki, and Promtail
  • loki-config.yaml – Loki configuration
  • promtail_config.yaml – Promtail configuration

Note: If you need configuration changes, modify these files before proceeding. Otherwise, you can use the default setup.

Step 2: Logs Captured

The following logs will be collected by this observability layer (stored under the logs/ directory in the repository root):

  • fast_api_logs
  • llm_orchestration_logs
  • setup_logs
  • streamlit_ui_logs

Step 3: Start the Observability Stack

Run the following command to start services in detached mode:

docker compose up -d

This will create and start Docker containers for Grafana, Loki, and Promtail.

Step 4: Verify Grafana-Loki-Promtail Containers

Check if Grafana started successfully:

docker ps

If Grafana is not running, start it manually:

docker start <container_name_or_id>

Step 5: Access Grafana

You can access Grafana using the details below:

  • URL: http://localhost:3000
  • Username: admin
  • Password: ChangeMeNow!

Step 6: Configure Loki as Data Source

  • Log in to Grafana
  • Navigate to Configuration > Data Sources
  • Add a new data source
  • Select Loki
  • Enter the URL: http://loki:3100
  • Save & Test the connection

Now you can visualize logs, build dashboards, and create charts based on application logs.
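Besides the Grafana UI, Loki's HTTP API can be queried directly. Below is a sketch of building a query_range request with the Python standard library; the job label value is an assumption, so check promtail_config.yaml for the labels actually applied to your log streams:

```python
from urllib.parse import urlencode

# The `job` label below is an assumption -- check promtail_config.yaml
# for the labels actually attached to your log streams.
logql = '{job="fast_api_logs"} |= "ERROR"'
url = "http://localhost:3100/loki/api/v1/query_range?" + urlencode({
    "query": logql,
    "limit": 100,
})
print(url)
# Fetch with e.g. urllib.request.urlopen(url) once the stack is running.
```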

Step 7: Debugging

You can check logs for each service separately using:

  • Check Grafana logs
docker logs <grafana_container_name_or_id>
  • Check Loki logs
docker logs <loki_container_name_or_id>
  • Check Promtail logs
docker logs <promtail_container_name_or_id>

Note:

  • Make sure the logs/ directory exists in the repository root before starting the observability stack.
  • Customize promtail_config.yaml if you want to add new log paths or change log labels.

6. HR Chatbot Application Usage

Now that the initial setup is complete and the application Docker container is up and running, you can leverage the HR Chatbot in two ways:

  1. Scalable Async API developed in FastAPI
    • You can use this API to build a custom UI and integrate it directly.
  2. Streamlit Application
    • Use the existing Streamlit UI to interact with the bot.

Prerequisites

  • MongoDB must be up and running with the initial data setup as described in the initial setup steps.
  • Application Docker container must be up and running.
  • Grafana-Loki-Promtail docker container must be up and running to capture logs for observability.

Available Ports

  • 8080 – Nginx reverse proxy: FastAPI is configured behind Nginx; access endpoints via http://localhost:8080/api/<endpoint>
  • 8000 – Direct FastAPI access: http://localhost:8000/<endpoint>
  • 2222 – SSH access: ssh root@localhost -p 2222 (password: rootpassword)
  • 9001 – Supervisor Web UI: http://localhost:9001 (username: admin, password: admin)
  • 8501 – Streamlit Web UI: http://localhost:8501

Admin Login:
Username: anshbahuguna
Password: admin

Non-Admin Login:
Username: vaishaliraheja
Password: non-admin

Note:

  • FastAPI is accessible both directly on port 8000 and through Nginx reverse proxy on port 8080.
  • Use SSH (2222) to connect to the container for initial setup or debugging.
  • Supervisor UI (9001) provides process management.
  • Streamlit (8501) offers the HR Chatbot’s interactive web interface with role-based login.

FastAPI Endpoints

The following endpoints are available:

  • POST /chatbot/ask – Chat with the HR bot
  • GET /check-health – Health check
  • GET /list-endpoints – List all endpoints

Example Payload for /chatbot/ask

{
  "user_id": "U001",
  "question": "Your query here"
}
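The payload above can be exercised from Python's standard library as follows (the question text is illustrative; use a user_id present in your ingested PII data):

```python
import json
from urllib import request

# U001 mirrors the sample payload above; use a user_id present in your
# ingested pii_data.
payload = {"user_id": "U001", "question": "How many paid leaves do I have left?"}

req = request.Request(
    "http://localhost:8000/chatbot/ask",  # or :8080/api/chatbot/ask via Nginx
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Uncomment once the application container is up:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```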

NOTE: You can utilize these endpoints in any UI or test them using Postman.


Streamlit Application

  • Access the Streamlit app via port 8501.
  • Log in using the credentials mentioned in login_creds.yaml.
  • Explore and enjoy all available features of the HR Chatbot application.

7. Future Enhancements

  • Integration with enterprise-grade SSO for secure authentication
  • Enhanced analytics dashboards for HR query trends
  • Multi-language policy support using GenAI models
  • Role-based access control with fine-grained permissions
  • And many more
