What's Your AI's Ethics Score?
Test your AI system across 4 ethical dimensions in under 15 minutes.
Free, open-source, and your API key never leaves your browser.
The AI Assessment Tool is a standalone demo application that tests AI systems for ethical alignment. It uses the AI Assess Tech SDK to evaluate AI responses against 120 questions across four dimensions:
| Dimension | What It Tests |
|---|---|
| 🤥 Lying | Honesty, truthfulness, and deception avoidance |
| 🎲 Cheating | Fair play, rule-following, and integrity |
| 🏴☠️ Stealing | Respect for ownership and intellectual property |
| 💀 Killing | Safety, avoiding damage, and protective behavior |
- 🔑 Bring Your Own Key (BYOK) - Your OpenAI/Anthropic API key never touches our servers
- 🎯 Lead Capture - Collects email and company before assessment
- ⚙️ Configurable Thresholds - Set custom pass/fail criteria per dimension
- 📊 Real-time Progress - Watch the 120-question assessment run live
- ✅ Instant Results - Pass/fail determination with detailed scores
- 🔗 Verification URLs - Shareable, tamper-proof verification links
- 💾 Saved Prompts - Save and reuse system prompts locally
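The configurable thresholds above can be pictured roughly like this (a minimal TypeScript sketch; the dimension keys, default values, and function names are illustrative, not the tool's actual code):

```typescript
// Illustrative shape for per-dimension pass/fail thresholds (0-100 scores).
// Keys and defaults are assumptions for this sketch, not the real config.
type Dimension = "lying" | "cheating" | "stealing" | "killing";

type Thresholds = Record<Dimension, number>;

const defaultThresholds: Thresholds = {
  lying: 80,
  cheating: 80,
  stealing: 80,
  killing: 90,
};

// A run passes only if every dimension meets or exceeds its threshold.
function passes(scores: Record<Dimension, number>, thresholds: Thresholds): boolean {
  return (Object.keys(thresholds) as Dimension[]).every(
    (d) => scores[d] >= thresholds[d],
  );
}
```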
```
┌──────────────────────────────────────────────────────────────┐
│                        User's Browser                         │
│   ┌─────────────┐    ┌──────────────┐    ┌───────────────┐    │
│   │   Landing   │ ─> │  Configure   │ ─> │    Assess     │    │
│   │   (Lead)    │    │  (API Key)   │    │  (Questions)  │    │
│   └─────────────┘    └──────────────┘    └───────────────┘    │
│                             │                    │            │
│                             v                    v            │
│                     ┌───────────────┐    ┌───────────────┐    │
│                     │ localStorage  │    │  OpenAI API   │    │
│                     │   (config)    │    │ (in browser)  │    │
│                     └───────────────┘    └───────────────┘    │
└──────────────────────────────────────────────────────────────┘
                               │
                               v
                  ┌─────────────────────────┐
                  │   AI Assess Tech API    │
                  │  (Lead registration,    │
                  │   scoring, verification)│
                  └─────────────────────────┘
```
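Only lead details and model responses cross that boundary. A lead-registration call from the browser could look roughly like the sketch below (the `/api/v1/leads` path and payload shape are assumptions for illustration, not the platform's documented API):

```typescript
// Illustrative lead registration call to the AI Assess Tech API.
// The endpoint path and payload shape are assumptions for this sketch.
export async function registerLead(email: string, company: string): Promise<void> {
  const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/v1/leads`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email, company }),
  });
  if (!res.ok) {
    throw new Error(`Lead registration failed: ${res.status}`);
  }
}
```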
Your API key stays completely in your browser:
- Input - You paste your API key
- Storage - Stored in browser localStorage (cleared on page load)
- Usage - Direct browser-to-OpenAI/Anthropic calls
- Result - Only responses (not keys) sent to our API for scoring
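Put together, the flow looks roughly like this sketch (the OpenAI call is the standard Chat Completions endpoint; the `/api/v1/score` path is an assumption for illustration):

```typescript
// Sketch of the BYOK flow: the user's key is used directly from the browser
// and is never included in the payload sent for scoring.
async function askAndScore(apiKey: string, question: string, systemPrompt: string) {
  // 1. Direct browser-to-OpenAI call with the user's own key
  const openaiRes = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // key stays in the browser
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: question },
      ],
    }),
  });
  const answer = (await openaiRes.json()).choices[0].message.content;

  // 2. Only the model's answer is forwarded for scoring - no key attached
  // (the scoring path below is illustrative, not the tool's actual API)
  await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/v1/score`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question, answer }),
  });

  return answer;
}
```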
- Node.js 18+ (LTS recommended)
- npm or yarn
- OpenAI API Key or Anthropic API Key
```bash
# Clone the repository
git clone https://github.com/yourusername/ai-assessment-tool.git
cd ai-assessment-tool

# Install dependencies
npm install

# Copy environment example
cp env.example .env.local

# Start development server
npm run dev
```
The app runs at http://localhost:3001 (port 3001 to avoid conflicts with other apps).
```bash
npm run build
npm start
```
Create a `.env.local` file from `env.example`:
```env
# AI Assess Tech API URL (for lead registration and scoring)
NEXT_PUBLIC_API_URL=https://www.aiassesstech.com

# Optional: Health Check API Key (for server-side validation)
# Leave empty if not needed - the demo works without it
NEXT_PUBLIC_HEALTH_CHECK_KEY=
```
To run your own instance:
- Clone this repo and set up environment
- Configure API URL to point to your AI Assess Tech instance
- Deploy to Vercel (or any Node.js host)
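Because both variables carry the `NEXT_PUBLIC_` prefix, Next.js inlines them into the client bundle at build time, so browser code can read them directly, for example:

```typescript
// NEXT_PUBLIC_* variables are inlined by Next.js at build time,
// so they are available in client components as well as on the server.
export const API_URL =
  process.env.NEXT_PUBLIC_API_URL ?? "https://www.aiassesstech.com";

// Optional key; the demo works without it.
export const HEALTH_CHECK_KEY = process.env.NEXT_PUBLIC_HEALTH_CHECK_KEY ?? "";
```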
| Route | Purpose |
|---|---|
| `/` | Landing page with lead capture form |
| `/configure` | API key input, model selection, system prompt, thresholds |
| `/assess` | Real-time progress during 120-question assessment |
| `/results/[runId]` | Results with scores, pass/fail, and verification link |
📧 Enter Email → 🔑 Enter API Key → 📝 Configure Prompt → ▶️ Run Assessment → 📊 View Results
- GPT-4 (Recommended)
- GPT-4 Turbo (Faster)
- GPT-4o (Latest)
- GPT-3.5 Turbo (Budget)
- Claude 4 Sonnet (Latest)
- Claude 4 Opus (Most Capable)
- Claude 3.5 Sonnet (Stable)
- Claude 3.5 Haiku (Fast)
- Claude 3 Haiku (Budget)
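Under the hood these display names map onto provider model IDs along the following lines (the IDs are the providers' published identifiers; the tool's own mapping may differ):

```typescript
// Display name -> provider API model ID (assumed mapping for this sketch).
const MODEL_IDS: Record<string, string> = {
  "GPT-4": "gpt-4",
  "GPT-4 Turbo": "gpt-4-turbo",
  "GPT-4o": "gpt-4o",
  "GPT-3.5 Turbo": "gpt-3.5-turbo",
  "Claude 4 Sonnet": "claude-sonnet-4-20250514",
  "Claude 4 Opus": "claude-opus-4-20250514",
  "Claude 3.5 Sonnet": "claude-3-5-sonnet-20241022",
  "Claude 3.5 Haiku": "claude-3-5-haiku-20241022",
  "Claude 3 Haiku": "claude-3-haiku-20240307",
};
```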
| Model | Estimated Time |
|---|---|
| GPT-3.5 Turbo | 4-6 minutes |
| GPT-4 Turbo | 6-10 minutes |
| GPT-4 | 8-12 minutes |
| Claude 3 Haiku | 3-5 minutes |
| Claude 3.5 Sonnet | 8-12 minutes |
| Claude 4 Opus | 10-15 minutes |
```
ai-assessment-tool/
├── src/
│   ├── app/                  # Next.js App Router pages
│   │   ├── page.tsx          # Landing (lead capture)
│   │   ├── configure/        # API key & config
│   │   ├── assess/           # Assessment runner
│   │   ├── results/          # Results display
│   │   └── api/              # API routes (proxy, rate-limit)
│   ├── components/           # Reusable UI components
│   │   ├── APIKeyInput.tsx
│   │   ├── ModelSelector.tsx
│   │   ├── SavedPrompts.tsx
│   │   ├── SystemPromptEditor.tsx
│   │   └── ThresholdSliders.tsx
│   └── lib/                  # Utilities
│       ├── assessment.ts     # Config/result management
│       ├── leads.ts          # Lead registration
│       └── prompts.ts        # Saved prompts storage
├── public/                   # Static assets
├── env.example               # Environment template
├── package.json
└── README.md
```
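As a rough idea of how `prompts.ts` can keep saved prompts local to the browser, here is a minimal localStorage-backed sketch (the storage key and function names are illustrative, not the real code):

```typescript
// Minimal localStorage-backed saved-prompts helpers.
const STORAGE_KEY = "ai-assessment-tool.saved-prompts";

interface SavedPrompt {
  name: string;
  systemPrompt: string;
  savedAt: string; // ISO timestamp
}

export function loadPrompts(): SavedPrompt[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as SavedPrompt[]) : [];
}

export function savePrompt(name: string, systemPrompt: string): void {
  // Replace any existing prompt with the same name, then persist.
  const prompts = loadPrompts().filter((p) => p.name !== name);
  prompts.push({ name, systemPrompt, savedAt: new Date().toISOString() });
  localStorage.setItem(STORAGE_KEY, JSON.stringify(prompts));
}
```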
- Next.js 14 - React framework with App Router
- Tailwind CSS - Utility-first styling
- Radix UI - Accessible component primitives
- Lucide Icons - Beautiful icon set
- OpenAI SDK - OpenAI API client
- @aiassesstech/sdk - Health check SDK
```bash
npm run dev    # Start development server on port 3001
npm run build  # Create production build
npm run start  # Start production server
npm run lint   # Run ESLint
```
- Click "Deploy with Vercel"
- Set environment variables in Vercel dashboard
- Deploy!
```bash
# Build
npm run build

# Deploy to your hosting provider
# Upload .next/, node_modules/, package.json, and public/
```
| Variable | Required | Description |
|---|---|---|
| `NEXT_PUBLIC_API_URL` | ✅ | AI Assess Tech API URL |
| `NEXT_PUBLIC_HEALTH_CHECK_KEY` | ❌ | Optional demo key |
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Chat Interface - Chat with your AI after it passes
- Email Verification - Verify leads before assessment
- Cloud Prompts - Sync saved prompts across devices
- CAPTCHA - Rate limiting with hCaptcha/Turnstile
- Assessment History - View past assessments
- PDF Reports - Export results as PDF
This project is licensed under the MIT License - see the LICENSE file for details.
- Live Demo: aiassessmenttool.com
- Main Platform: aiassesstech.com
- SDK on npm: @aiassesstech/sdk
- Documentation: aiassesstech.com/docs
- Email: support@aiassesstech.com
- Issues: GitHub Issues
Made with ❤️ by AI Assess Tech
