This repository demonstrates an agentic loan underwriting system built with Temporal workflows, Strands agent orchestration, Ollama LLM integration, and a Streamlit UI. It simulates a loan underwriter copilot in which specialist agents, coordinated by a supervisor, automate loan processing while keeping a human underwriter in the review loop.
- FastAPI Backend: REST API with endpoints for loan submission, workflow status checking, and human review
- Temporal Workflows: SupervisorWorkflow orchestrates the entire loan processing pipeline with durable execution
- Supports both local Temporal (default) and Temporal Cloud with API key authentication
- Automatic connection detection based on environment variables
- Fan-Out Parallel Processing Pattern: All documents processed simultaneously using `asyncio.gather()`
- Strands HTTP Agents: Reusable agent classes for intelligent data fetching with error handling and validation
  - DataFetchAgent: Generic HTTP data fetching with validation
  - CreditReportAgent: Specialized credit report validation with multi-provider support
- Specialist Activities: Mock data fetching (bank, documents, credit) and AI-powered assessments (income, expense, credit analysis)
  - `trigger_document_processing`: Processes documents using the AWS Bedrock Nova Pro vision model for OCR
  - Synchronous document processing with structured JSON extraction
- AWS Bedrock Nova Pro: Cloud-based vision model for document OCR (bank statements, IDs, payslips)
- Parallel fan-out processing: All documents processed simultaneously
- Uses `asyncio.gather()` for concurrent execution (Temporal-safe)
- Document-specific prompts for accurate extraction
- Independent retry policies and fault isolation per document
- Structured JSON output saved locally
- Strands BedrockModel integration for simplified AWS Bedrock API calls
- AWS Bedrock AgentCore: Code Interpreter for sophisticated financial analysis (DTI, risk scoring, trend analysis)
- See AGENTCORE_INTEGRATION.md for implementation details
- Strands Integration: Agent orchestration with structured output validation using Ollama or AWS Bedrock models
- Provider Fallback: Temporal-orchestrated fallback from CIBIL to Experian for credit reports
- Streamlit UI: User interface for loan submission and underwriter review workflow
- Environment Configuration: Configurable LLM settings (Ollama/AWS Bedrock) and Temporal settings (Local/Cloud) via a `.env` file
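The fan-out pattern above can be sketched in plain `asyncio`; this is a minimal stand-in, not the repository's code — in the real workflow each call would be `workflow.execute_activity(...)` with its own retry policy:

```python
import asyncio

# Hypothetical stand-in for the per-document activity; in the real
# workflow this would be workflow.execute_activity(trigger_document_processing, ...)
async def process_single_document(doc: str) -> dict:
    await asyncio.sleep(0)  # simulate asynchronous OCR work
    return {"document": doc, "status": "processed"}

async def process_all_documents(docs: list) -> list:
    # Fan-out: start one task per document, then wait for all of them.
    # return_exceptions=True isolates failures so one failing document
    # doesn't cancel or poison the others.
    results = await asyncio.gather(
        *(process_single_document(d) for d in docs),
        return_exceptions=True,
    )
    return [r for r in results if not isinstance(r, Exception)]

if __name__ == "__main__":
    print(asyncio.run(process_all_documents(["bank_statement", "id_card", "salary_slip"])))
```

The same shape applies inside a Temporal workflow because `asyncio.gather()` is deterministic and therefore replay-safe.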
The diagram below shows the end-to-end flow: user submits via Streamlit, Streamlit calls FastAPI which starts a Temporal workflow. A worker executes activities (mock APIs and specialist agents). Documents are processed in parallel using Temporal's fan-out pattern with asyncio.gather() and AWS Bedrock Nova Pro for OCR. The workflow calls Ollama/Bedrock for a summary/decision, then awaits a human-review signal. The underwriter approves/rejects via the UI which signals the running workflow.
sequenceDiagram
participant User as Applicant / Underwriter
participant Streamlit as Streamlit UI
participant FastAPI as FastAPI Backend
participant Temporal as Temporal Server
participant Worker as Temporal Worker
participant Supervisor as SupervisorWorkflow
participant DataAgent as DataFetchAgent
participant CreditAgent as CreditReportAgent
participant Mockoon as Mock APIs
participant NovaPro as AWS Bedrock Nova Pro
participant LLM as LLM Provider (Ollama/Bedrock)
User->>Streamlit: Submit loan application
Streamlit->>FastAPI: POST /submit (application data)
FastAPI->>Temporal: start_workflow("SupervisorWorkflow", data, workflow_id)
Temporal->>Worker: schedule activities on task queue
Note over Worker, Supervisor: PHASE 1: Data Acquisition (Sequential)
Worker->>DataAgent: fetch_bank_account(applicant_id)
DataAgent->>Mockoon: HTTP GET /bank?applicant_id=X
Mockoon-->>DataAgent: bank account data
DataAgent-->>Worker: validated bank data
Note over Supervisor, Worker: Document Processing - Fan-Out Parallel Pattern
Note over Supervisor: asyncio.gather() processes all docs in parallel
par Document 1 (Bank Statement)
Supervisor->>Worker: _process_single_document(bank_stmt)
Worker->>NovaPro: trigger_document_processing (image + prompt via Strands)
NovaPro-->>Worker: Extracted JSON data
Worker-->>Supervisor: Success + structured data
and Document 2 (ID Card)
Supervisor->>Worker: _process_single_document(id_card)
Worker->>NovaPro: Nova Pro vision OCR (via Strands BedrockModel)
NovaPro-->>Worker: License data (name, DOB, address)
Worker-->>Supervisor: Success + data
and Document 3 (Salary Slip)
Supervisor->>Worker: _process_single_document(salary_slip)
Worker->>NovaPro: Nova Pro vision OCR (via Strands BedrockModel)
NovaPro-->>Worker: Salary, deductions, YTD
Worker-->>Supervisor: Success + data
Note over Supervisor: Each doc isolated - failures don't affect others
end
Note over Supervisor: asyncio.gather() waits for all docs
Note over Supervisor: All OCR processed by AWS Bedrock Nova Pro
Worker->>CreditAgent: fetch_credit_report_cibil(applicant_id)
CreditAgent->>Mockoon: HTTP GET /cibil?applicant_id=X
alt CIBIL Success
Mockoon-->>CreditAgent: credit report data
CreditAgent-->>Worker: validated credit (provider: CIBIL)
else CIBIL Failure (after 2 retries)
Note over Supervisor: Temporal orchestrates fallback
Worker->>CreditAgent: fetch_credit_report_experian(applicant_id)
CreditAgent->>Mockoon: HTTP GET /experian?applicant_id=X
Mockoon-->>CreditAgent: credit report data
CreditAgent-->>Worker: validated credit (provider: Experian)
end
Note over Worker, Supervisor: PHASE 2: Parallel Specialist Assessments
par Income Assessment
Worker->>Worker: income_assessment(app, bank, credit)
Note over Worker: Heuristic: income/amount ratio
Worker->>Worker: income_ok + income value
and Expense Assessment
Worker->>Worker: expense_assessment(app, bank)
Note over Worker: Heuristic: disposable income check
Worker->>Worker: affordability_ok + expenses
and Credit Assessment
Worker->>Worker: credit_assessment(app, credit)
Note over Worker: Heuristic: score > 620
Worker->>Worker: credit_ok + score
end
Note over Worker, Supervisor: PHASE 3: Decision Aggregation with LLM
alt Ollama Configuration
Worker->>LLM: aggregate_and_decide(all data)
Note over LLM: Strands Agent + Ollama Model
LLM-->>Worker: AI summary + recommendation
else AWS Bedrock Configuration
Worker->>LLM: aggregate_and_decide(all data)
Note over LLM: Strands Agent + Claude Model
LLM-->>Worker: AI summary + recommendation
end
Worker-->>Supervisor: Store summary with suggested decision
Note over Supervisor: PHASE 4: Human-in-the-Loop Review
Note over Supervisor: Workflow pauses, exposes queries
User->>Streamlit: Navigate to Review tab
Streamlit->>FastAPI: GET /workflow/{id}/summary
FastAPI->>Temporal: query("get_summary")
Temporal->>Supervisor: get_summary query
Supervisor-->>Temporal: summary with assessments + AI decision
Temporal-->>FastAPI: complete summary
FastAPI-->>Streamlit: display summary JSON
Streamlit-->>User: Show AI analysis + recommendation
User->>Streamlit: Click Approve/Reject
Streamlit->>FastAPI: POST /workflow/{id}/review {"action": "approve/reject"}
FastAPI->>Temporal: signal("human_review", decision)
Temporal->>Supervisor: receive human_review signal
Note over Supervisor: Workflow continues and completes
Supervisor-->>Temporal: finalize with human decision
opt Final Status Check
Streamlit->>FastAPI: GET /workflow/{id}/final
FastAPI->>Temporal: query("get_final_result")
Temporal->>Supervisor: get_final_result query
Supervisor-->>Temporal: complete result
Temporal-->>FastAPI: summary + human decision
FastAPI-->>Streamlit: display final outcome
end
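The CIBIL failure branch in the diagram reduces to a simple try/fallback shape. A sketch with stub fetchers standing in for the real Temporal activities (which would be invoked via `workflow.execute_activity` with their own `RetryPolicy`):

```python
# Stub fetchers: illustrative only, not the repository's implementations.
def fetch_credit_report_cibil(applicant_id: str) -> dict:
    raise ConnectionError("CIBIL unavailable")  # simulate a provider outage

def fetch_credit_report_experian(applicant_id: str) -> dict:
    return {"provider": "Experian", "score": 700, "applicant_id": applicant_id}

def fetch_credit_report(applicant_id: str) -> dict:
    # Try the primary provider first; once its retries are exhausted,
    # the workflow falls back to the secondary provider.
    try:
        return fetch_credit_report_cibil(applicant_id)
    except Exception:
        return fetch_credit_report_experian(applicant_id)
```

Because the fallback decision lives in the workflow rather than inside an activity, Temporal records it durably and the choice of provider survives worker restarts.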
- Temporal Server (choose one):
- Local Temporal (default): Running locally at localhost:7233
- Temporal Cloud: Account with API key for cloud-hosted Temporal
- LLM Provider (one of the following):
  - Ollama: Local installation with a model (default: `llama3:latest`, configurable via `.env`)
  - AWS Bedrock: Access to the AWS Bedrock service with an API key and supported models (e.g., `au.anthropic.claude-sonnet-4-5-20250929-v1:0`)
- AWS Bedrock Nova Pro (for document OCR):
- AWS account with Bedrock access
- Access to Nova Pro vision model (inference profile or direct model access)
  - Required IAM permission: `bedrock:InvokeModel`
  - Processes bank statements, salary slips, and ID documents using cloud-based vision AI
  - Default model: `arn:aws:bedrock:us-west-2:1111111111:inference-profile/us.amazon.nova-pro-v1:0`
- AWS Bedrock AgentCore (optional, for advanced financial analysis):
- AWS account with Bedrock access for Code Interpreter
  - Required IAM permissions: `bedrock:InvokeAgent`, `bedrock:InvokeCodeInterpreter`
- Mockoon: Mock API server running on port 3233 (configuration available in the `mockoon` folder)
- Python 3.9+: Required for all dependencies
- Dependencies: Install from `requirements.txt`
- Clone and setup environment:
git clone <repository-url>
cd temporal-agentic--loan-underwriter
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
- Configure environment variables:
cp .env.example .env
# Edit .env to configure Temporal and LLM provider settings:
# Temporal Configuration:
# For Local (default):
TEMPORAL_ADDRESS=localhost:7233
TEMPORAL_NAMESPACE=default
TEMPORAL_TASK_QUEUE=loan-underwriter-queue
# For Temporal Cloud (just add API key and update address/namespace):
# TEMPORAL_API_KEY=your-api-key-here
# TEMPORAL_ADDRESS=your-namespace.account-id.tmprl.cloud:7233
# TEMPORAL_NAMESPACE=your-namespace.account-id
# LLM Configuration - For Ollama:
MODEL_PROVIDER=ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3:latest
# OR for AWS Bedrock:
MODEL_PROVIDER=aws-bedrock
AWS_BEARER_TOKEN_BEDROCK=<your-api-key>
AWS_REGION=<your-region> # e.g., ap-southeast-2
AWS_BEDROCK_MODEL=<model-id> # e.g., au.anthropic.claude-sonnet-4-5-20250929-v1:0
# AWS Bedrock Nova Pro for Document OCR:
AWS_BEDROCK_NOVA_MODEL_ID=arn:aws:bedrock:us-west-2:1111111111:inference-profile/us.amazon.nova-pro-v1:0
# Or use a specific region model ARN
# AWS_BEDROCK_NOVA_MODEL_ID=us.amazon.nova-pro-v1:0
- Start required services:
Temporal Server (choose one):
- Local Temporal (default): Start local Temporal server (see Temporal docs)
  - Temporal Cloud: Configure the API key in `.env` (no local server needed)
LLM Provider (choose one):
  - Ollama: Start Ollama with your preferred model (e.g., `ollama run llama3.2:1b`)
  - AWS Bedrock: Configure the API key in `.env` (no local service needed)
Mock APIs:
  - Start Mockoon with the configuration from the `mockoon` folder (import the JSON config file into Mockoon and start the mock API on port 3233)
- Launch the application:
Terminal 1 - Temporal Worker:
python backend/worker.py
Terminal 2 - FastAPI Backend:
uvicorn backend.main:app --reload --port 8000
Terminal 3 - Streamlit UI:
streamlit run ui/streamlit_app.py
- Access the application:
- Streamlit UI: http://localhost:8501
- FastAPI docs: http://localhost:8000/docs
The application supports both local Temporal and Temporal Cloud deployments. By default, it connects to a local Temporal server.
To use Temporal Cloud instead of a local server:
- Obtain Temporal Cloud credentials:
  - Sign up for Temporal Cloud
  - Create a namespace (e.g., `my-namespace.account-id`)
  - Generate an API key from the Temporal Cloud console
- Update your `.env` file:
  # Update the address and namespace for your Temporal Cloud instance
  TEMPORAL_ADDRESS=my-namespace.account-id.tmprl.cloud:7233
  TEMPORAL_NAMESPACE=my-namespace.account-id
  TEMPORAL_TASK_QUEUE=loan-underwriter-queue
  # Add your API key
  TEMPORAL_API_KEY=your-actual-api-key-here
- Connection Detection:
  - The application uses the same `TEMPORAL_ADDRESS` and `TEMPORAL_NAMESPACE` for both local and cloud
  - If `TEMPORAL_API_KEY` is set, it automatically enables TLS and API key authentication for the cloud connection
  - If `TEMPORAL_API_KEY` is not set, it connects to local Temporal without authentication
- Restart your services:
- Stop any running worker and FastAPI processes
- Start them again - they will automatically connect to Temporal Cloud
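The connection-detection behavior described above can be sketched as a small helper that builds connection keyword arguments. The keyword names mirror the Temporal Python SDK's `Client.connect`, but treat this as an illustrative sketch rather than the repository's exact `temporal_client.py`:

```python
import os
from typing import Optional

def temporal_connect_kwargs(env: Optional[dict] = None) -> dict:
    """Same address/namespace variables serve both modes; the presence
    of an API key flips on TLS and API key authentication."""
    env = env if env is not None else dict(os.environ)
    kwargs = {
        "target_host": env.get("TEMPORAL_ADDRESS", "localhost:7233"),
        "namespace": env.get("TEMPORAL_NAMESPACE", "default"),
    }
    api_key = env.get("TEMPORAL_API_KEY")
    if api_key:
        # Temporal Cloud: enable TLS and API key authentication
        kwargs["tls"] = True
        kwargs["api_key"] = api_key
    return kwargs
```

Centralizing this in one utility means the worker and the FastAPI backend share identical connection behavior.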
- Zero infrastructure management: No need to run local Temporal server
- High availability: Built-in redundancy and failover
- Scalability: Auto-scaling workers and workflow capacity
- Security: TLS encryption and API key authentication
- Monitoring: Built-in observability and metrics
- Structured Data Validation: Strands integration provides automatic validation of loan applications
- Parallel Document Processing: AWS Bedrock Nova Pro vision model extracts structured data from documents with:
  - Fan-out parallelism: All documents processed simultaneously using `asyncio.gather()`
  - Synchronous processing: Direct OCR without polling overhead
- Fault isolation: Each document's failure won't affect others
- Cloud-based vision AI: AWS Bedrock Nova Pro for high-accuracy OCR
- Independent retry policies per document
- State persistence ensures completed documents won't reprocess on failure
- Support for bank statements, IDs, and salary slips
- Document-specific prompts for accurate data extraction
- AgentCore Code Interpreter: Sophisticated financial analysis with Python code execution
- DTI calculations, trend analysis, risk scoring
  - Implementation guide: AGENTCORE_INTEGRATION.md
- Mock Data Services: Simulated bank account and credit report fetching
- AI-Powered Analysis: Specialist agents for income, expense, and credit assessment using Ollama or AWS Bedrock
- Human-in-the-Loop: Workflow pauses for human underwriter review and decision
- Durable Execution: Temporal ensures reliable workflow execution with automatic retries
- Real-time UI: Streamlit interface for application submission and review workflow
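The three specialist assessments can be sketched as simple heuristics. The `score > 620` threshold comes from the sequence diagram; the income multiple and field names here are illustrative assumptions, not the repository's exact rules:

```python
def income_assessment(monthly_income: float, loan_amount: float) -> bool:
    # Heuristic: requested amount should not exceed a multiple of income
    # (12x annualized income is an assumed threshold for illustration)
    return loan_amount <= monthly_income * 12

def expense_assessment(monthly_income: float, monthly_expenses: float) -> bool:
    # Heuristic: applicant must have positive disposable income
    return monthly_income - monthly_expenses > 0

def credit_assessment(credit_score: int) -> bool:
    # Heuristic from the diagram: score must exceed 620
    return credit_score > 620
```

In the workflow these run as parallel activities, and their boolean outcomes feed the LLM-based aggregation step that produces the suggested decision.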
├── backend/
│   ├── main.py                        # FastAPI application and API endpoints
│   ├── workflows.py                   # Temporal SupervisorWorkflow definition
│   ├── activities.py                  # Temporal activities (data fetching & AI assessments)
│   ├── worker.py                      # Temporal worker process
│   ├── classes/
│   │   └── agents/
│   │       ├── data_fetch_agent.py    # Strands agent for HTTP data fetching
│   │       └── credit_report_agent.py # Strands agent for credit report validation
│   └── utilities/
│       └── temporal_client.py         # Temporal connection utility (local/cloud)
├── ui/
│   └── streamlit_app.py               # Streamlit user interface
├── mockoon/                           # Mock API configuration for local development
├── .env.example                       # Environment configuration template
└── requirements.txt                   # Python dependencies
- Mock Data Services: Uses Mockoon for simulating bank and credit bureau APIs
- Agent Architecture: Strands agents are organized in reusable classes under `backend/classes/agents/`
- Separation of Concerns:
- Temporal activities handle durable execution and retry logic (outer loop)
- Strands agents handle intelligent data fetching and validation (inner loop)
- Fan-Out Parallel Processing Pattern: Document processing demonstrates Temporal best practices:
  - Parallel execution: `asyncio.gather()` processes all documents simultaneously
  - Helper method: `_process_single_document()` encapsulates the document lifecycle
  - Synchronous OCR: AWS Bedrock Nova Pro processes documents directly via Strands
  - Fault isolation: Each document has an independent retry policy; failures don't affect others
  - State persistence: Completed documents won't reprocess after a workflow restart
  - Cloud vision AI: AWS Bedrock Nova Pro provides enterprise-grade OCR
- Provider Fallback Pattern: Temporal workflow orchestrates CIBIL → Experian fallback for credit reports
- Configurable LLM: Support for both Ollama (local) and AWS Bedrock (cloud) models via environment variables
- Production Considerations: Would require secure API integrations, authentication, and real data providers
- Human-in-the-Loop: Workflow supports binary approve/reject decisions with AI-generated explanations
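The human-in-the-loop pause can be modeled with an `asyncio.Event` standing in for Temporal's signal mechanism. This is a toy simulation: in the real `SupervisorWorkflow` the equivalent pieces are a `@workflow.signal` handler and `workflow.wait_condition(...)`:

```python
import asyncio
from typing import Optional

class SupervisorSketch:
    """Toy model: the workflow awaits a condition that a signal handler flips."""

    def __init__(self) -> None:
        self.decision: Optional[str] = None
        self._reviewed = asyncio.Event()

    def human_review(self, action: str) -> None:
        # Stand-in for the @workflow.signal handler
        self.decision = action
        self._reviewed.set()

    async def run(self, summary: dict) -> dict:
        # PHASE 4: pause here until the underwriter signals a decision
        await self._reviewed.wait()
        return {**summary, "human_decision": self.decision}

async def demo() -> dict:
    wf = SupervisorSketch()
    task = asyncio.create_task(wf.run({"recommendation": "approve"}))
    await asyncio.sleep(0)       # workflow is now paused, awaiting review
    wf.human_review("approve")   # UI -> FastAPI -> signal("human_review", ...)
    return await task
```

Unlike this in-memory sketch, Temporal persists the paused workflow durably, so the review can arrive hours or days later and survive process restarts.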
- AGENTCORE_INTEGRATION.md: Comprehensive guide for AWS Bedrock Data Automation and AgentCore Code Interpreter integration
- Detailed setup instructions
- Configuration examples
- Troubleshooting guide
- Testing procedures
We welcome contributions to improve this demo! Here's how you can help:
- Fork the repository
- Create a feature branch: `git checkout -b feature/your-feature-name`
- Make your changes and test locally
- Commit with clear messages: `git commit -m "Add: description of changes"`
- Push to your fork: `git push origin feature/your-feature-name`
- Open a Pull Request with a clear description of your changes
- Enhanced AI Models: Integration with other LLM providers (OpenAI, Azure OpenAI, etc.)
- Real Integrations: Replace mock activities with actual bank/credit APIs
- UI Improvements: Enhanced Streamlit interface or alternative frontend
- Security Features: Authentication, authorization, and data encryption
- Testing: Unit tests, integration tests, and workflow testing
- Documentation: API documentation, deployment guides, or tutorials
- Performance: Optimization and monitoring capabilities
- Follow existing code style and patterns
- Add appropriate error handling and logging
- Update documentation for any new features
- Ensure all tests pass before submitting
This project is licensed under the MIT License - see the LICENSE file for details.
Copyright (c) 2025 Temporal Technologies
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.