Transaction Risk Analysis is an intelligent financial monitoring system that uses a Large Language Model (LLM) to assess the risk level of transactions in real-time. It flags potentially fraudulent activities and notifies administrators when the risk level is high.
This system is designed to:
- Receive transaction data securely via a webhook endpoint.
- Analyze transaction risk using a natural language-based LLM prompt.
- Notify administrators for high-risk transactions.
- Protect endpoints using HTTP Basic Authentication.
This solution is suitable for payment gateways, financial institutions, and risk monitoring platforms.
- RESTful Webhook API with Basic Authentication
- LLM-based financial reasoning for risk analysis
- Environment variable configuration for secure credentials
- Automated notifications to external services
- Comprehensive unit tests using `pytest`
```
transaction-risk-analysis/
├── app/
│   ├── api/v1/endpoints/webhook.py
│   ├── core/auth.py, prompts.py, config.py
│   ├── models/transaction.py, analysis.py
│   ├── services/llm_service.py, notifier_service.py
│   └── main.py
├── tests/test_webhook.py
├── requirements.txt
├── .env
├── .gitignore
└── README.md
```
```bash
git clone https://github.com/shamax1999/transaction-risk-analysis.git
cd transaction-risk-analysis
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
```

Create a `.env` file at the project root:
```env
WEBHOOK_USER=user
WEBHOOK_PASS=123
LLM_API_KEY=your_groq_api_key
NOTIFIER_URL=http://localhost:9000/notify
```
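These variables are read at startup; a minimal standard-library sketch of that lookup (the real project likely centralizes this in `app/core/config.py`, and the defaults below merely mirror the example `.env`):

```python
import os

class Settings:
    """Illustrative settings loader: pulls credentials from the environment."""

    def __init__(self) -> None:
        # Defaults mirror the example .env above; production values differ.
        self.webhook_user = os.getenv("WEBHOOK_USER", "user")
        self.webhook_pass = os.getenv("WEBHOOK_PASS", "123")
        self.llm_api_key = os.getenv("LLM_API_KEY", "")
        self.notifier_url = os.getenv("NOTIFIER_URL", "http://localhost:9000/notify")

settings = Settings()
```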
Start the API server:

```bash
uvicorn app.main:app --reload
```

Run the webhook tests:

```bash
pytest tests/test_webhook.py
```

An endpoint for handling admin notifications is included; run it with:

```bash
uvicorn app.api.v1.endpoints.admin_receiver:app --port 9000
```

Ensure that `NOTIFIER_URL` in your `.env` is set to:

```
http://localhost:9000/notify
```
The API uses HTTP Basic Authentication. To authorize requests, add an Authorization header with Base64-encoded credentials.
For example, with the credentials `user:123`, the header is:

```
Authorization: Basic dXNlcjoxMjM=
```
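The header value can be produced with the Python standard library, for instance:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Encode user:password as an HTTP Basic Authorization header value."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("user", "123"))  # Basic dXNlcjoxMjM=
```

Most HTTP clients (e.g. `requests` via its `auth=` parameter, or `curl -u user:123`) construct this header automatically.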
Receives a transaction payload, analyzes it using an LLM, and returns a risk score with a recommended action.
Headers:

```
Authorization: Basic <base64_encoded_credentials>
Content-Type: application/json
```
Request Body Example:
```json
{
  "transaction_id": "tx_123",
  "timestamp": "2025-07-01T12:00:00Z",
  "amount": 500.00,
  "currency": "USD",
  "customer": {
    "id": "cust_001",
    "country": "US",
    "ip_address": "10.0.0.1"
  },
  "payment_method": {
    "type": "credit_card",
    "last_four": "1234",
    "country_of_issue": "US"
  },
  "merchant": {
    "id": "merch_001",
    "name": "ABC Electronics",
    "category": "electronics"
  }
}
```

Response Example:
```json
{
  "status": "received",
  "risk_score": 0.65,
  "recommended_action": "review"
}
```

If the risk score is above 0.7, the system sends a notification to the URL specified in `NOTIFIER_URL`.
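The notification rule reduces to a small predicate (the 0.7 threshold comes from the description above; the function name is illustrative):

```python
NOTIFY_THRESHOLD = 0.7

def should_notify(risk_score: float, threshold: float = NOTIFY_THRESHOLD) -> bool:
    """Return True when the risk score exceeds the notification threshold."""
    return risk_score > threshold

# The example response above (0.65) stays below the threshold:
print(should_notify(0.65))  # False
print(should_notify(0.85))  # True
```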
- A structured prompt is generated for each transaction.
- The Groq-hosted LLM evaluates the risk and responds with a JSON output.
- The risk score and recommended action are extracted from the response.
- If the transaction is high-risk, a notification is sent to the admin endpoint.
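Extracting the score and action from the model's JSON reply might look like the following sketch (the field names match the response example above; the conservative fallback for malformed replies is an assumption, not necessarily what `llm_service.py` does):

```python
import json

def parse_llm_reply(reply: str) -> tuple[float, str]:
    """Pull risk_score and recommended_action out of the LLM's JSON reply,
    falling back to a conservative default when the reply is not valid JSON."""
    try:
        data = json.loads(reply)
        return float(data["risk_score"]), str(data["recommended_action"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        # LLM output is not guaranteed to be well-formed JSON; treat
        # unparseable replies as worth a manual review.
        return 1.0, "review"

print(parse_llm_reply('{"risk_score": 0.65, "recommended_action": "review"}'))
```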
Included test cases cover:
- Normal transactions
- Cross-border payments
- High-value purchases
- High-risk countries
- Missing required fields
- Invalid credentials
Run all tests using:

```bash
pytest
```