Version 3 - A web-based application that orchestrates three Azure AI Foundry agents to answer questions with fact-checking and link validation. Features both individual question processing and Excel import/export functionality, with a browser-based interface, desktop GUI, and command-line interface.
Originally created by Marco Casalaina. This version was authored using GitHub Copilot Agent, the Microsoft Agent Framework, and Spec Kit.
This tool implements a multi-agent system with three roles:
- Question Answerer: Searches the web for evidence and synthesizes a candidate answer
- Answer Checker: Validates factual correctness, completeness, and consistency
- Link Checker: Verifies that every URL cited in the answer is reachable and relevant
If either checker rejects the answer, the Question Answerer reformulates it and the cycle repeats, up to 25 attempts.
- Web Interface: Browser-based UI with real-time progress via Server-Sent Events
- Desktop GUI: Alternative Python tkinter interface for local use
- Command Line Options: Configure settings and auto-start processing from the command line
- Excel Integration: Import questions from Excel files and export results
- Parallel Processing: Process up to 3 questions simultaneously using multiple agent sets (see the sketch after this list)
- Real-time Progress: Live reasoning display showing agent workflow
- Character Limit Control: Configurable answer length with automatic retries
- Web Grounding: All agents use Bing search via Azure AI Foundry
- Multi-agent Validation: Three-stage validation ensures answer quality
- Source Verification: All cited URLs are checked for reachability and relevance
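As a rough illustration of the parallel-processing feature, questions can be fanned out across a small worker pool, one agent set per worker. This is a sketch, not the application's code: `process_question` is a hypothetical placeholder for one full answer/check/validate pass, and the sample questions are only example data.

```python
from concurrent.futures import ThreadPoolExecutor

def process_question(question: str) -> str:
    # Placeholder for one full pass through an agent set
    # (Question Answerer -> Answer Checker -> Link Checker).
    return f"answer to: {question}"

questions = [
    "What types of text-to-speech do you offer?",
    "How many languages does your TTS service support?",
    "Is there an SLA for the speech service?",
]

# At most three agent sets run at the same time.
with ThreadPoolExecutor(max_workers=3) as pool:
    answers = list(pool.map(process_question, questions))

for question, answer in zip(questions, answers):
    print(question, "->", answer)
```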
- Python 3.8 or higher
- Azure subscription with AI Foundry project
- Bing Search resource connected to your AI Foundry project
Authentication: The application will test your Azure authentication immediately on startup. If you have already run az login or azd login, the application will use that existing session. Otherwise, it will automatically open a browser window for interactive login.
If you prefer to authenticate before starting the app, you can:
- Run `az login` in your terminal
- Set up environment variables for service principal authentication
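For reference, the kind of startup check described above can be sketched with the chained credentials in `azure-identity`. This is an illustration under that assumption, not the application's exact code; it prefers an existing `az login` or `azd login` session and falls back to an interactive browser prompt.

```python
from azure.identity import (
    AzureCliCredential,
    AzureDeveloperCliCredential,
    ChainedTokenCredential,
    InteractiveBrowserCredential,
)

# Try an existing az / azd session first, then fall back to a browser prompt.
credential = ChainedTokenCredential(
    AzureCliCredential(),
    AzureDeveloperCliCredential(),
    InteractiveBrowserCredential(),
)

# Requesting a token up front surfaces authentication problems immediately.
token = credential.get_token("https://management.azure.com/.default")
print("Azure authentication OK, token expires at:", token.expires_on)
```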
Create and activate a virtual environment, then install the required dependencies:
# Create virtual environment
python -m venv venv
# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt

# Install the package in editable mode
pip install -e .

The web interface is the primary way to use the application. Start the web server:
# Start web server on default port (8080) and auto-open browser
python run_app.py --web
# Start on a custom port
python run_app.py --web --port 3000
# Start without opening browser (for remote servers or automated testing)
python run_app.py --web --port 8080 --no-browser

See the "Web Interface" section below for detailed usage instructions.
Alternatively, run the standalone desktop application (requires display):
python run_app.py

The application supports command line options to configure settings and auto-start processing:
Configure Settings:
# Set context (default: "Microsoft Azure AI")
python run_app.py --context "Custom Context"
# Set character limit (default: 2000)
python run_app.py --charlimit 3000
# Combine both
python run_app.py --context "Azure Services" --charlimit 1500Auto-start Question Processing:
# Process a question immediately after initialization
python run_app.py --question "What types of text-to-speech do you offer?"
# With custom settings
python run_app.py --context "Microsoft Azure AI" --charlimit 2000 --question "How many languages does your TTS service support?"Auto-start Spreadsheet Processing:
# Process an Excel file immediately after initialization
python run_app.py --spreadsheet ./tests/sample_questionnaire_1_sheet.xlsx
# With custom settings
python run_app.py --context "Azure AI" --charlimit 1500 --spreadsheet ./path/to/questionnaire.xlsxView All Options:
python run_app.py --help

The application also provides a browser-based web interface for processing questions and spreadsheets.
Start the Web Server:
# Start web server on default port (8080) and auto-open browser
python run_app.py --web
# Start on a custom port
python run_app.py --web --port 3000
# Start without opening browser (useful for automated testing or remote servers)
python run_app.py --web --port 8080 --no-browser

Web Interface Features:
- Single Question Mode: Enter a question in the sidebar and click "Ask!" to get an answer
- Spreadsheet Mode: Click "Import From Excel" to upload a spreadsheet with questions
- Auto-mapping: Automatically detects Question and Answer columns in uploaded spreadsheets
- Parallel Processing: Processes up to 3 questions simultaneously using multiple agent sets
- Real-time Updates: Live progress via Server-Sent Events (SSE); see the sketch after this list
- Visual Feedback:
  - Pink rows indicate questions currently being processed
  - Light green rows indicate completed answers
  - Status bar shows current processing progress
- Sheet Tabs: Navigate between sheets in multi-sheet workbooks
- Download Results: Export processed spreadsheet when complete
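To illustrate how the real-time updates work, here is a minimal Flask Server-Sent Events sketch. The `/events` route name and the payload shape are assumptions for illustration, not the application's actual API.

```python
import json
import queue

from flask import Flask, Response

app = Flask(__name__)
events = queue.Queue()  # worker threads push progress dicts here

@app.route("/events")  # hypothetical route name
def stream_events():
    def generate():
        while True:
            try:
                update = events.get(timeout=15)
                yield f"data: {json.dumps(update)}\n\n"  # one SSE frame per update
            except queue.Empty:
                yield ": keep-alive\n\n"  # comment frame keeps proxies from closing the stream
    return Response(generate(), mimetype="text/event-stream")

if __name__ == "__main__":
    events.put({"status": "processing", "row": 1})
    app.run(port=8080)
```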
Web Interface Layout:
- Left Sidebar: Configuration (Context, Character Limit, Retries), Question input, Import/Export buttons
- Main Panel: Answer display (single question) or spreadsheet grid (batch processing)
- Status Bar: Connection status, processing progress, and status messages
- Header: Azure authentication status indicator
Single Question Mode:
- Enter your context (default: "Microsoft Azure AI")
- Set character limit (default: 2000)
- Type your question and click "Ask!"
- Monitor progress in the Reasoning tab
- View results in Answer and Documentation tabs
Excel Import Mode:
- Click "Import From Excel" button
- Select Excel file with questions
- System auto-detects question columns
- Monitor real-time processing progress
- Choose save location when complete
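As an illustration of column auto-detection, a simple header scan is often enough for typical workbooks. The sketch below uses openpyxl; the hints and matching rules are assumptions, and the project's actual `column_identifier.py` logic may differ.

```python
from typing import Dict, Optional

from openpyxl import load_workbook

QUESTION_HINTS = ("question", "query", "prompt")
ANSWER_HINTS = ("answer", "response")

def detect_columns(path: str, sheet_name: Optional[str] = None) -> Dict[str, int]:
    """Guess which header cells hold questions and answers (1-based column indexes)."""
    workbook = load_workbook(path, read_only=True)
    sheet = workbook[sheet_name] if sheet_name else workbook.active
    header = next(sheet.iter_rows(min_row=1, max_row=1, values_only=True))
    mapping = {}
    for index, cell in enumerate(header, start=1):
        text = str(cell or "").strip().lower()
        if "question" not in mapping and any(hint in text for hint in QUESTION_HINTS):
            mapping["question"] = index
        elif "answer" not in mapping and any(hint in text for hint in ANSWER_HINTS):
            mapping["answer"] = index
    return mapping

print(detect_columns("./tests/sample_questionnaire_1_sheet.xlsx"))
```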
Example of a final answer produced by the workflow:
================================================================================
FINAL ANSWER:
================================================================================
Based on web search results, here's what I found:
The sky appears blue due to a phenomenon called Rayleigh scattering. When sunlight enters Earth's atmosphere, it collides with tiny gas molecules. Blue light has a shorter wavelength than other colors, so it gets scattered more in all directions by these molecules. This scattered blue light is what we see when we look at the sky.
Sources:
- [NASA Science](https://science.nasa.gov/earth/earth-atmosphere/why-is-the-sky-blue/)
- [National Weather Service](https://www.weather.gov/jetstream/color)
================================================================================
| Component | Responsibility | Grounding Source |
|---|---|---|
| Question Answerer | Searches the web for evidence, synthesizes a candidate answer | Web search API |
| Answer Checker | Validates factual correctness, completeness, and consistency | Web search API |
| Link Checker | Verifies that every URL cited in the answer is reachable and relevant | Browser Automation tool |
- Read Input: Accept a question from the command line
- Answer Generation: Question Answerer retrieves evidence and produces a draft answer
- Validation:
  - Answer Checker reviews the draft for accuracy and completeness
  - Link Checker tests all cited URLs for reachability and relevance
- Decision:
  - If both checkers approve: Output the final answer and terminate successfully
  - If either checker rejects: Log rejection reasons, increment attempt counter, and retry (up to 25 attempts)
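The workflow above boils down to a bounded retry loop. The sketch below shows that loop with stub agents purely for illustration; the `question_answerer`, `answer_checker`, and `link_checker` functions are placeholders, not the project's real interfaces.

```python
from dataclasses import dataclass, field
from typing import List

MAX_ATTEMPTS = 25

@dataclass
class Verdict:
    approved: bool
    reasons: List[str] = field(default_factory=list)

# Stub agents standing in for the Azure AI Foundry agents.
def question_answerer(question, feedback=None):
    return f"Draft answer to: {question}"

def answer_checker(question, draft):
    return Verdict(approved=True)

def link_checker(draft):
    return Verdict(approved=True)

def answer_with_validation(question):
    feedback = None
    for attempt in range(1, MAX_ATTEMPTS + 1):
        draft = question_answerer(question, feedback)    # answer generation
        facts = answer_checker(question, draft)          # validation: factual review
        links = link_checker(draft)                      # validation: cited URLs
        if facts.approved and links.approved:            # decision: both approve
            return draft
        feedback = "; ".join(facts.reasons + links.reasons)
        print(f"Attempt {attempt} rejected: {feedback}")  # log reasons, then retry
    raise RuntimeError(f"No approved answer after {MAX_ATTEMPTS} attempts")

print(answer_with_validation("Why is the sky blue?"))
```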
Copy the template file and configure your values:
cp .env.template .env

Then edit `.env` with your actual Azure AI Foundry configuration values.
The application requires the following environment variables to be set in your .env file:
| Variable | Description | Where to Find |
|---|---|---|
| `AZURE_OPENAI_ENDPOINT` | Azure AI Foundry project endpoint | Azure AI Foundry Portal > Project Overview > Project Details |
| `AZURE_OPENAI_MODEL_DEPLOYMENT` | Your deployed model name | Azure AI Foundry Portal > Models + Endpoints |
| `BING_CONNECTION_ID` | Bing Search connection name | Azure AI Foundry Portal > Connected Resources |
| `BROWSER_AUTOMATION_CONNECTION_ID` | Browser Automation connection name | Azure AI Foundry Portal > Connected Resources |
| `APPLICATIONINSIGHTS_CONNECTION_STRING` | Application Insights connection string | Azure Portal > Application Insights > Overview |
| `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED` | Enable AI content tracing (optional) | Set to `true` or `false` |
Example .env file:
AZURE_OPENAI_ENDPOINT=https://your-project.services.ai.azure.com/api/projects/your-project
AZURE_OPENAI_MODEL_DEPLOYMENT=gpt-4.1
BING_CONNECTION_ID=your-bing-connection-name
BROWSER_AUTOMATION_CONNECTION_ID=your-browser-automation-connection-name
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=your-key;IngestionEndpoint=https://your-region.in.applicationinsights.azure.com/;LiveEndpoint=https://your-region.livediagnostics.monitor.azure.com/;ApplicationId=your-app-id
AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED=true

Important Security Notes:
- Never commit your `.env` file to version control (it's already in `.gitignore`)
- The `.env.template` file shows the required structure without sensitive values
- The Application Insights connection string enables Azure AI Foundry tracing for monitoring and debugging
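Before starting the agents, it can help to confirm that the configuration actually loads. The sketch below uses `python-dotenv`; whether the project itself loads `.env` this way is an assumption.

```python
import os

from dotenv import load_dotenv

REQUIRED = (
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_MODEL_DEPLOYMENT",
    "BING_CONNECTION_ID",
    "BROWSER_AUTOMATION_CONNECTION_ID",
)

load_dotenv()  # reads .env from the current working directory

missing = [name for name in REQUIRED if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing required environment variables: {', '.join(missing)}")
print("Configuration looks complete.")
```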
The application includes built-in Azure AI Foundry tracing integration that provides:
- Distributed Tracing: Full visibility into multi-agent workflows
- Performance Monitoring: Track execution times and bottlenecks
- Gen AI Content Capture: Record prompts and responses (when enabled)
- Error Tracking: Detailed error context and stack traces
- Resource Usage: Monitor token consumption and API calls
Traces appear in:
- Azure AI Foundry Portal → Tracing tab
- Azure Portal → Application Insights → Transaction search
Set AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED=false in production if you want to exclude AI content from traces for privacy reasons.
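For reference, exporting traces to Application Insights with the Azure Monitor OpenTelemetry distro looks roughly like this. It is a sketch; the application's actual tracing setup may differ.

```python
import os

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Send OpenTelemetry traces to the Application Insights resource from .env.
configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("questionnaire.workflow"):
    # Agent calls made inside this span appear under the same trace in
    # Application Insights and the Azure AI Foundry Tracing tab.
    pass
```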
The FoundryAgentSession class in utils/resource_manager.py provides a context manager for safely managing Azure AI Foundry agent and thread resources. This helper is required because:
- Resource Cleanup: Azure AI Foundry agents and threads are persistent resources that must be explicitly deleted to avoid resource leaks
- Exception Safety: Ensures cleanup occurs even if exceptions are raised during agent operations
- Cost Management: Prevents accumulation of unused resources that could incur costs
Usage example:
with FoundryAgentSession(client, model="gpt-4o-mini",
                         name="my-agent",
                         instructions="You are a helpful assistant") as (agent, thread):
    # Use agent and thread for operations here.
    # Resources are automatically cleaned up when exiting the context.
    ...

The context manager handles:
- Creating agent and thread resources on entry
- Automatic cleanup on exit (even if exceptions occur)
- Robust error handling during cleanup to prevent masking original exceptions
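A functionally equivalent helper can be sketched with `contextlib.contextmanager`. This is illustrative only, not the contents of the real `FoundryAgentSession`; the `create_*` and `delete_*` calls stand in for whichever Azure AI Foundry SDK methods your SDK version exposes.

```python
from contextlib import contextmanager

@contextmanager
def foundry_agent_session(client, model, name, instructions):
    # Sketch only: create_agent/create_thread/delete_* are stand-ins for the
    # Azure AI Foundry SDK methods, whose exact names depend on the SDK version.
    agent = client.agents.create_agent(model=model, name=name, instructions=instructions)
    thread = client.agents.create_thread()
    try:
        yield agent, thread
    finally:
        # Delete both resources even if the body raised, and swallow cleanup
        # errors so they never mask the original exception.
        for cleanup in (
            lambda: client.agents.delete_thread(thread.id),
            lambda: client.agents.delete_agent(agent.id),
        ):
            try:
                cleanup()
            except Exception:
                pass
```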
The tool uses Azure AI Foundry with integrated Bing search grounding. For alternative search APIs, consider:
- Google Custom Search API
- Bing Search API
- SerpAPI
- Demo Implementation: Uses basic web search and text processing
- Rate Limiting: May encounter rate limits with free APIs
- Language Support: Optimized for English questions
- Fact Checking: Uses heuristic-based validation rather than advanced fact-checking
QuestionnaireAgent_v3/
├── run_app.py # Main application entry point
├── src/
│ ├── agents/ # Agent implementations
│ │ ├── __init__.py
│ │ ├── workflow_manager.py
│ │ └── ...
│ ├── ui/ # Desktop GUI components (tkinter)
│ │ ├── main_window.py
│ │ └── ...
│ ├── web/ # Web interface (Flask)
│ │ ├── app.py # Flask application & API endpoints
│ │ ├── models.py # Pydantic models for API
│ │ ├── sse_manager.py # Server-Sent Events manager
│ │ └── static/ # Frontend assets
│ │ ├── index.html # Main HTML template
│ │ ├── styles.css # CSS styles
│ │ ├── app.js # Main JavaScript application
│ │ └── spreadsheet.js # ag-Grid spreadsheet component
│ ├── excel/ # Excel processing
│ │ ├── loader.py
│ │ ├── processor.py
│ │ └── column_identifier.py
│ ├── utils/ # Shared utilities
│ │ ├── __init__.py
│ │ ├── logger.py
│ │ └── azure_auth.py # Azure authentication
│ ├── resource_manager.py # Azure AI Foundry resource management
│ └── web_search.py
├── tests/ # Test suite
├── specs/ # Feature specifications
├── requirements.txt # Python dependencies
├── setup.py # Installation script
├── README.md # This documentation
└── README_Questionnaire_UI.md # Detailed UI documentation
To extend the system:
- New Validation: Add checks to `AnswerChecker`
- Better Search: Upgrade `WebSearcher` with more sophisticated APIs
- Advanced NLP: Integrate language models for better synthesis and validation
- Caching: Add response caching to reduce API calls (see the sketch below)
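For the caching idea, even a small in-memory cache keyed on the question and context avoids re-running the full agent workflow for repeated questions. This is a sketch; `run_agent_workflow` is a placeholder, not a function in this repository.

```python
from functools import lru_cache

def run_agent_workflow(question: str, context: str) -> str:
    # Placeholder for the real multi-agent workflow (slow, calls Azure).
    return f"[{context}] answer to: {question}"

@lru_cache(maxsize=256)
def cached_answer(question: str, context: str = "Microsoft Azure AI") -> str:
    # Identical (question, context) pairs are answered from the cache,
    # skipping the agent workflow and its API calls entirely.
    return run_agent_workflow(question, context)

print(cached_answer("Why is the sky blue?"))
print(cached_answer("Why is the sky blue?"))  # served from cache
```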
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions welcome! Please read the contributing guidelines and submit pull requests for any improvements.