Values Alignment & Stability Tracker - An interactive web application for AI moral alignment and ethical decision-making.
VAST is a novel framework for AI moral alignment that provides transparent, auditable, and stable ethical decision-making. This interactive application showcases:
- Structured Belief Representation: B(x) = (π(x), κ(x), J(x))
- JWMC Belief Revision: Justification-Weighted Moral Consistency
- EEU-CC Decision Making: Ethical Expected Utility with Cascading Constraints
- Four-Gauge Alignment Monitoring: Real-time quality assessment
- Node.js v14.0.0 or higher
- npm v6.0.0 or higher
```bash
# Navigate to project directory
cd vast-demo

# Install dependencies
npm install

# Start development server
npm start
```

The application will open automatically at http://localhost:3000.
- Pre-configured healthcare crisis scenario
- Autonomous vehicle emergency scenario
- Load custom scenarios via JSON
- Credence (π): Probability sliders with real-time validation
- Confidence (κ): Evidence strength slider (0-1)
- Justification (J): Structured reasoning chains
- Facts: Observable evidence
- Rules: Applicable principles
- Moral Principles: Ethical frameworks
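Taken together, these fields form the belief triple B(x) = (π(x), κ(x), J(x)). A minimal sketch of how such a belief might look as a plain object, mirroring the scenario schema shown later in this README (the framework's internal representation may differ):

```js
// Illustrative belief triple B(x) = (π, κ, J); field names mirror the
// scenario schema in src/data/scenarios.js, not necessarily the
// framework's internal representation.
const belief = {
  proposition: 'administer_treatment',      // x: the action under consideration
  credence: { success: 0.7, failure: 0.3 }, // π(x): distribution, must sum to 1.0
  confidence: 0.75,                         // κ(x): evidence strength in [0, 1]
  justification: {                          // J(x): structured reasoning chain
    facts: ['observable evidence'],
    rules: ['applicable principles'],
    moral_principles: ['ethical frameworks']
  }
};
```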
- Shows belief update with new evidence
- Displays moral weight calculation
- Demonstrates stability preservation
- Compares multiple actions
- Shows utility calculation breakdown:
- Base utility
- Moral weighting factor
- Cascading constraint penalties
- Selects optimal action with full reasoning
- Calibration: Confidence vs accuracy alignment
- Normative Alignment: Moral principle consistency
- Coherence: Internal belief consistency
- Reasoning: Justification quality & depth
- Overall VAST Score: Weighted combination
- Complete decision history
- Searchable by action or principle
- Expandable entries with full reasoning
- JSON export for analysis
- VAST vs Traditional AI (RLHF)
- VAST vs Constitutional AI
- VAST vs Value Learning
- Visual score comparison
```
vast-demo/
├── public/
│   └── index.html            # HTML template
├── src/
│   ├── core/
│   │   └── VASTFramework.js  # Core VAST implementation
│   ├── data/
│   │   └── scenarios.js      # Pre-configured scenarios
│   ├── components/
│   │   ├── BeliefCreator.js  # Belief creation interface
│   │   ├── VastGauges.js     # Gauge visualizations
│   │   ├── DecisionMaker.js  # Decision display
│   │   ├── AuditTrail.js     # Audit log viewer
│   │   └── ScenarioLoader.js # Scenario selection
│   ├── App.js                # Main application
│   ├── App.css               # Global styles
│   └── index.js              # Entry point
├── package.json              # Dependencies
└── README.md                 # This file
```
Edit src/data/scenarios.js:
```js
export const scenarios = {
  your_scenario: {
    id: 'your_scenario',
    title: 'Your Scenario Title',
    description: 'Scenario description',
    context: {
      scenario: 'your_scenario',
      moral_stakes: 0.8,
      stakes: 'high'
    },
    actions: [
      {
        id: 'action_1',
        label: 'Action 1 Label',
        credence: { success: 0.7, failure: 0.3 },
        confidence: 0.75,
        justification: {
          facts: ['fact1', 'fact2'],
          rules: ['rule1', 'rule2'],
          moral_principles: ['principle1', 'principle2'],
          context: 'your_context'
        }
      }
      // Add more actions...
    ],
    expectedResult: {
      selectedAction: 'action_1',
      expectedVASTScore: 0.80
    }
  }
};
```

Edit the constructor in src/core/VASTFramework.js:
```js
this.moralHierarchy = {
  1: { name: "preserve_life", weight: 0.9, propagation: 0.8 },
  2: { name: "fairness", weight: 0.6, propagation: 0.7 },
  3: { name: "efficiency", weight: 0.3, propagation: 0.5 },
  // Add more levels...
};
```

In the calculateVastGauges method, adjust the gauge weights:
```js
const gaugeWeights = {
  calibration: 0.25,
  normative_alignment: 0.3,
  coherence: 0.2,
  reasoning: 0.25
};
```

JWMC belief revision proceeds in four steps:

```
// 1. Calculate moral weight
moralWeight = similarity(J₁, J₂) + coreBonus

// 2. Combine credences
π_new = w * π₁ + (1 - w) * π₂

// 3. Update confidence
κ_new = min(κ₁, κ₂) * stabilityFactor(w)

// 4. Merge justifications
J_new = J₁ ∪ J₂ + metadata
```

EEU-CC decision making evaluates every candidate action and selects the maximizer:

```
for each action a:
    baseUtility = U(a | context)
    moralWeight = M(J(a), context)
    cascadePenalty = C(a, hierarchy)
    expectedUtility = Σ π(s) * baseUtility * moralWeight * cascadePenalty

selected = argmax(expectedUtility)
```

The four gauges are computed as:

- Calibration: `1 - |κ - accuracy_estimated|`
- Normative Alignment: `Σ weight(principle_i)`
- Coherence: `1 - |H(π) - H_expected|`
- Reasoning: `depth(J) * κ`
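To make the flow concrete, here is a minimal JavaScript sketch of the EEU-CC loop and the weighted gauge combination. `U`, `M`, and `C` are stand-ins for the real utility, moral-weight, and cascade-penalty functions in VASTFramework.js, so treat this as an illustration rather than the actual implementation:

```js
// Sketch of EEU-CC action selection; U, M, C are placeholders for the
// framework's real utility, moral-weight, and cascade-penalty functions.
function selectAction(actions, context, { U, M, C, hierarchy }) {
  let best = null;
  for (const a of actions) {
    const baseUtility = U(a, context);
    const moralWeight = M(a.justification, context);
    const cascadePenalty = C(a, hierarchy);
    // Expected utility: credence-weighted sum over outcome states s
    const expectedUtility = Object.values(a.credence).reduce(
      (sum, piS) => sum + piS * baseUtility * moralWeight * cascadePenalty,
      0
    );
    if (!best || expectedUtility > best.expectedUtility) {
      best = { action: a, expectedUtility };
    }
  }
  return best; // argmax over candidate actions
}

// Overall VAST score: weighted combination of the four gauge scores,
// e.g. overallVastScore(gauges, gaugeWeights)
function overallVastScore(gauges, weights) {
  return Object.keys(weights).reduce((s, k) => s + weights[k] * gauges[k], 0);
}
```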
- Start with Healthcare Scenario: Most impactful demonstration
- Use JWMC Demo: Shows adaptability with new evidence
- Highlight Gauges: Real-time monitoring is unique to VAST
- Show Audit Trail: Demonstrates full transparency
- Compare Frameworks: Emphasizes VAST's advantages
- Use Compare tab to show VAST superiority
- Export audit trail for detailed analysis
- Create custom scenarios relevant to audience
- Focus on the structured belief representation (π, κ, J)
- Let users create their own beliefs
- Show how different moral principles affect decisions
- Demonstrate belief revision in real-time
- Export results for further discussion
- React 18.2.0: UI framework
- Chart.js 4.4.0: Visualizations
- react-chartjs-2 5.2.0: React Chart.js wrapper
- Decision latency: < 1 second (typically 50-200ms)
- Memory usage: ~50MB for typical scenarios
- Handles up to 50 beliefs simultaneously
- Audit trail up to 1000 decisions
- Chrome 90+
- Firefox 88+
- Safari 14+
- Edge 90+
```bash
# Create optimized production build
npm run build

# Serve the build folder with any static server
# Example with serve:
npm install -g serve
serve -s build
```

The build will be in the build/ folder, ready for deployment.
Install kaleido for image export:

```bash
npm install kaleido
```

Ensure all credence values sum to exactly 1.0. Use the "Normalize" button.
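Normalization amounts to rescaling each value by the total. A sketch of what the button does (the app's actual implementation may differ):

```js
// Rescale a credence distribution so its values sum to 1.0
function normalizeCredence(credence) {
  const total = Object.values(credence).reduce((sum, v) => sum + v, 0);
  if (total === 0) throw new Error('Credence values cannot all be zero');
  return Object.fromEntries(
    Object.entries(credence).map(([outcome, p]) => [outcome, p / total])
  );
}

normalizeCredence({ success: 0.6, failure: 0.2 });
// → { success: 0.75, failure: 0.25 }
```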
- Reduce number of simultaneous beliefs
- Clear audit trail periodically
- Use Chrome for best performance
Creates a new VAST belief.
Parameters:
- `proposition` (string): Action identifier
- `credence` (object): Probability distribution
- `confidence` (number): Evidence strength in [0, 1]
- `justification` (object): Reasoning structure
Returns: VASTBelief object
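A call might look like the sketch below; the method name `createBelief` and the positional argument order are assumptions based on the parameter list above, not a documented signature:

```js
// Hypothetical usage; class instantiation, method name, and argument
// order are assumptions, not the documented API.
const vast = new VASTFramework();
const belief = vast.createBelief(
  'action_1',                     // proposition
  { success: 0.7, failure: 0.3 }, // credence
  0.75,                           // confidence
  {                               // justification
    facts: ['fact1'],
    rules: ['rule1'],
    moral_principles: ['principle1']
  }
);
```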
Revises existing belief with new evidence.
Returns: Object with belief, moralWeight, moralStabilityFactor
Makes ethical decision among actions.
Returns: Object with selectedAction, utilities, calculations
Calculates alignment gauges for belief.
Returns: Object with gauge scores and overall_vast_score
Exports complete audit trail.
Returns: JSON object with all framework data
If you use this framework in your research, please cite:
```bibtex
@mastersthesis{partow2025vast,
  title={VAST: Values Alignment and Stability Tracker for AI Moral Alignment},
  author={Partow, Soraya},
  year={2025},
  school={[Your University]},
  supervisor={Nan, Satyaki}
}
```

- Demo Mode Button: Launches curated scenarios for committee presentations
- 6-Step Guided Tour: Interactive walkthrough of the entire workflow
- Keyboard Navigation: Arrow keys, Home/End for quick navigation
- Dark Mode Support: Automatic theme based on system preferences
- Accessible Notifications: ARIA-compliant toast notifications
- Focus Visible Styles: Clear keyboard navigation indicators
- Card-Based Layout: Consistent, modern design system
- Consistent Semi-Donut Charts: All four metrics use the same visualization
- Print View Route (`/report/gauges`): Optimized for printing and slides
- Qualitative Labels: Excellent/Good/Fair/Needs Improvement indicators
- Color-Coded Performance: Green/Amber/Orange/Red thresholds
- Real-Time Validation: Inline error messages as you type
- Credence Normalization: Auto-normalize button when sum ≠ 1.0
- Minimum Requirements: Enforces at least 2 outcomes and 1 justification component
- Helper Text: Contextual guidance for each field
- localStorage Sync: Auto-saves beliefs and decisions (see the sketch after this list)
- Export Audit Trail (JSON): Complete framework state for analysis
- Export Decisions (CSV): Tabular format for spreadsheet analysis
- Print-Optimized Reports: One-page decision summaries
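A guarded persistence helper in the spirit of the auto-save feature might look like this; the storage key and state shape are illustrative:

```js
// Guarded localStorage persistence; key name is illustrative.
const STORAGE_KEY = 'vast-demo-state';

function saveState(state) {
  try {
    localStorage.setItem(STORAGE_KEY, JSON.stringify(state));
  } catch (e) {
    // Quota exceeded or storage disabled; skip rather than crash
    console.warn('Auto-save skipped:', e);
  }
}

function loadState() {
  try {
    const raw = localStorage.getItem(STORAGE_KEY);
    return raw ? JSON.parse(raw) : null;
  } catch {
    return null; // Corrupt or inaccessible state; start fresh
  }
}
```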
- Timer with Pause/Resume: Track presentation progress
- 6-Section Navigation: Introduction → Scenario → Beliefs → Decision → Gauges → Audit
- Keyboard Shortcuts: Arrow keys, Space (pause), Home/End
- Section Progress Dots: Visual navigation with icons
- Optimized for 30-35 min Talks: See PRESENTATION_GUIDE.md
- Interactive Revision Button: Apply sample evidence updates
- Moral Weight Display: Shows alignment calculation
- Before/After Gauges: Demonstrates impact of belief updates
- Inline Explainer: "JWMC = Justification-Weighted Moral Consistency"
- Error Boundary: Catches and displays app crashes gracefully
- SSR/Test Guards: Safely handles `performance.now()` and browser globals (see the sketch after this list)
- Fail-Safe Messaging: User-friendly errors instead of crashes
- Component Stack Traces: Technical details for debugging
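The guard pattern referenced above typically reduces to a fallback helper along these lines (a sketch, not the app's exact code):

```js
// Safe timestamp helper: falls back to Date.now() when the
// Performance API is unavailable (SSR, some test environments).
const now = () =>
  typeof performance !== 'undefined' && typeof performance.now === 'function'
    ? performance.now()
    : Date.now();
```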
```bash
# Lint JavaScript/TypeScript files
npm run lint

# Auto-fix linting issues
npm run lint:fix

# Format code with Prettier
npm run format

# Run tests
npm test
```

Tests planned for:

- `_subjectiveLogicCombine`
- `_calculateMoralWeightForAction`
- `_calculateCascadePenalty`
- `getGauges`
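As a starting point, a skeleton for one of these planned tests might look like this, assuming Jest (Create React App's default runner); the import style and `createBelief` signature are assumptions:

```js
// src/core/VASTFramework.test.js — illustrative skeleton; import style
// and createBelief signature are assumptions, not the actual API.
import VASTFramework from './VASTFramework';

describe('getGauges', () => {
  it('returns gauge scores with an overall VAST score in [0, 1]', () => {
    const vast = new VASTFramework();
    const belief = vast.createBelief(
      'action_1',
      { success: 0.7, failure: 0.3 },
      0.75,
      { facts: ['f1'], rules: ['r1'], moral_principles: ['p1'] }
    );
    const gauges = vast.getGauges(belief);
    expect(gauges.overall_vast_score).toBeGreaterThanOrEqual(0);
    expect(gauges.overall_vast_score).toBeLessThanOrEqual(1);
  });
});
```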
For faster builds and HMR, consider migrating from CRA to Vite:
```bash
# 1. Install Vite and plugins
npm install -D vite @vitejs/plugin-react

# 2. Create vite.config.js
cat > vite.config.js << EOF
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  server: {
    port: 3000
  }
})
EOF

# 3. Move index.html to root (from public/)
# 4. Update index.html script src to /src/index.js
# 5. Update package.json scripts:
#    "dev": "vite"
#    "build": "vite build"
#    "preview": "vite preview"

# 6. Remove react-scripts
npm uninstall react-scripts
```

Benefits:
- ~10x faster cold starts
- Instant HMR (< 50ms)
- Smaller bundle sizes
- Better tree-shaking
When to migrate:
- After thesis defense
- When build times become prohibitive (> 30s)
- For production deployment optimization
This software is provided for academic and research purposes.