LLM Output Validator (Mock Mode)

A lightweight Python CLI tool that validates AI-generated responses using configurable rule-based guardrails and produces structured JSON reports.

Why This Project Matters

This project demonstrates:

  • Practical AI workflow validation
  • Rule-based output guardrails
  • Input normalization and scoring
  • Structured JSON reporting
  • Clean separation of configuration and logic
  • Portfolio-ready CLI tooling

The mock mode simulates how AI outputs can be programmatically evaluated in production pipelines.

Example

Input: "This response failed due to an exception."

Output:

  FAILED
  Score: 80
  Issues:
    • flag_words: Found 'exception'
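
A minimal sketch of how such a check might be implemented is shown below. Everything here (the validate function, the rule configuration, the 20-point penalty per issue) is an illustrative assumption consistent with the example above, not the project's actual API.

```python
import json

# Hypothetical rule configuration; the project keeps rules separate from the
# validation logic (e.g. loaded from a config file rather than hard-coded).
RULES = {
    "flag_words": ["exception", "error", "traceback"],
    "penalty_per_issue": 20,  # assumed: one issue drops the score from 100 to 80
}

def validate(text: str) -> dict:
    """Normalize the input, apply rule-based checks, and build a JSON-ready report."""
    normalized = text.lower().strip()
    issues = [
        {"rule": "flag_words", "detail": f"Found '{word}'"}
        for word in RULES["flag_words"]
        if word in normalized
    ]
    score = max(0, 100 - RULES["penalty_per_issue"] * len(issues))
    return {
        "status": "FAILED" if issues else "PASSED",
        "score": score,
        "issues": issues,
    }

if __name__ == "__main__":
    report = validate("This response failed due to an exception.")
    print(json.dumps(report, indent=2))
    # Prints a structured report like:
    # {
    #   "status": "FAILED",
    #   "score": 80,
    #   "issues": [{"rule": "flag_words", "detail": "Found 'exception'"}]
    # }
```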

Real-World Application

This tool demonstrates how rule-based validation can be used in AI-assisted workflows, such as:

  • Automated LLM output quality assurance
  • Guardrail enforcement in production pipelines
  • Prompt response scoring systems
  • Compliance and safety filtering
  • Structured evaluation before database ingestion

While this project runs in mock mode, the architecture supports integration with live LLM APIs.
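
For instance, swapping mock mode for a live model could be as simple as wiring any text-generating callable into the same validation path. The sketch below is an assumption about how that integration might look; `generate_fn` stands in for whatever API client a real deployment would use.

```python
from typing import Callable

def evaluate_response(prompt: str, generate_fn: Callable[[str], str]) -> dict:
    """Generate a response with any LLM backend, then run it through the validator.

    generate_fn is a placeholder for a real API client; in mock mode it can be
    a canned function such as:
        lambda p: "This response failed due to an exception."
    """
    response = generate_fn(prompt)
    return validate(response)  # reuses the rule-based validator sketched above
```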
