An interactive web application for exploring OpenAI language model token probabilities and generation alternatives. This tool provides real-time visualization of how AI models make token choices during text generation.
- Token Probability Visualization: View actual token probabilities returned by OpenAI's API using the `logprobs` parameter
- Interactive Token Exploration: Click on any generated token to see alternative choices the model considered
- Color-coded Probability Display: Visual representation of token confidence levels from high (green) to low (red)
- Real-time Generation: Watch as the model generates text with visible probability distributions
- Temperature Adjustment: Control randomness in token selection (0.0 to 2.0)
- Max Tokens Setting: Limit response length
- Model Selection: Choose between GPT-3.5 Turbo and GPT-4
- Token Counter: Estimate API usage before generation
- Passkey Authentication: Secure access control for shared environments
- Environment-based Configuration: API keys and secrets managed through environment variables
- Session Persistence: Login state remembered across browser sessions
- Responsive Design: Optimized for desktop, tablet, and mobile devices
- Compact Interface: Side-by-side prompt and response layout
- Copy Functionality: Easy copying of generated responses
- Performance Metrics: Response time and token usage tracking
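The token counter above can only be an estimate before the API is called. As a minimal sketch, a common rule of thumb for English text is roughly four characters per token; the function name and heuristic here are illustrative, not the exact tokenizer the app or OpenAI uses:

```typescript
// Rough pre-generation token estimate. The ~4 characters-per-token
// heuristic is a common approximation for English text; a real
// implementation would use the model's tokenizer for exact counts.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}
```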
- Frontend: React 18, TypeScript, Vite
- UI Components: shadcn/ui, Radix UI primitives
- Styling: Tailwind CSS with custom theming
- Backend: Node.js, Express, TypeScript
- API Integration: OpenAI API with logprobs support
- State Management: TanStack Query for data fetching
- Build & Dev: Vite with HMR support
- Node.js 18+ installed
- OpenAI API account and key
- Git (for cloning)
1. Clone the repository

   ```bash
   git clone <your-repo-url>
   cd llm-token-explorer
   ```

2. Install dependencies

   ```bash
   npm install
   ```

3. Set up environment variables

   Create a `.env` file in the root directory:

   ```
   OPENAI_API_KEY=your_openai_api_key_here
   EXPLORER_PASSKEY=your_custom_passkey_here
   ```

4. Start the development server

   ```bash
   npm run dev
   ```

5. Access the application

   Open your browser to `http://localhost:5000`
- Enter your custom passkey on the login screen
- The passkey is set via the `EXPLORER_PASSKEY` environment variable
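The passkey check amounts to comparing the submitted value against the environment variable. A minimal sketch, assuming a helper function (the name `isValidPasskey` is illustrative, not taken from the codebase):

```typescript
// Sketch: validate a submitted passkey against the EXPLORER_PASSKEY
// environment variable. Rejects everything when the variable is unset,
// so the app never silently runs without access control.
function isValidPasskey(submitted: string): boolean {
  const expected = process.env.EXPLORER_PASSKEY;
  return Boolean(expected) && submitted === expected;
}
```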
- Enter a prompt in the left panel
- Adjust model parameters (temperature, max tokens, model)
- Click "Generate" to get the response
- View color-coded tokens in the response panel
- Click any token to see alternative choices and their probabilities
- Green tokens: High probability (>70%)
- Yellow tokens: Medium probability (30-70%)
- Red tokens: Low probability (<30%)
- Token alternatives panel: Shows top 5 alternative tokens with probability bars
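The color buckets above can be sketched as a simple threshold function; the function name and return values are illustrative, not the app's actual CSS classes:

```typescript
// Map a token probability (0..1) to the display color described above:
// green above 70%, yellow from 30% to 70%, red below 30%.
type TokenColor = "green" | "yellow" | "red";

function tokenColor(probability: number): TokenColor {
  if (probability > 0.7) return "green";   // high confidence
  if (probability >= 0.3) return "yellow"; // medium confidence
  return "red";                            // low confidence
}
```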
The application leverages OpenAI's chat completions API with specific parameters:
```javascript
{
  model: "gpt-4", // or "gpt-3.5-turbo"
  messages: [...],
  temperature: 0.7,
  max_tokens: 150,
  logprobs: true,
  top_logprobs: 5
}
```

The `logprobs` parameter returns probability information for each token, which is then processed and visualized in the interface.
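The API returns log probabilities, which are converted to regular probabilities (via `Math.exp`) for display. A minimal processing sketch, assuming the field names from OpenAI's chat completions response shape (`token`, `logprob`, `top_logprobs`); the `TokenView` output type is this sketch's own invention:

```typescript
// Sketch: turn the logprobs payload of a chat completion into
// per-token probabilities plus alternatives for the UI.
interface TokenLogprob {
  token: string;
  logprob: number;
  top_logprobs: { token: string; logprob: number }[];
}

interface TokenView {
  token: string;
  probability: number; // 0..1
  alternatives: { token: string; probability: number }[];
}

function toTokenViews(content: TokenLogprob[]): TokenView[] {
  return content.map((t) => ({
    token: t.token,
    probability: Math.exp(t.logprob), // logprob -> probability
    alternatives: t.top_logprobs.map((a) => ({
      token: a.token,
      probability: Math.exp(a.logprob),
    })),
  }));
}
```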
```
├── client/              # Frontend React application
│   ├── src/
│   │   ├── components/  # Reusable UI components
│   │   ├── pages/       # Page components
│   │   ├── lib/         # Utility functions and API calls
│   │   └── hooks/       # Custom React hooks
├── server/              # Backend Express application
│   ├── index.ts         # Server entry point
│   ├── routes.ts        # API route definitions
│   ├── openai.ts        # OpenAI integration
│   └── storage.ts       # Data persistence layer
├── shared/              # Shared TypeScript types and schemas
└── package.json         # Dependencies and scripts
```
- `npm run dev`: Start the development server with hot reload
- `npm run build`: Build for production
- `npm run type-check`: Run TypeScript type checking
- InputPanel: Prompt input and model parameter controls
- ResultsPanel: Token visualization and probability display
- Home: Main application logic and state management
Set these in your deployment environment:
- `OPENAI_API_KEY`: Your OpenAI API key
- `EXPLORER_PASSKEY`: Custom passkey for access control
```bash
npm run build
```

The application serves both the frontend and backend from a single Express server, making deployment straightforward.
- AI/ML Courses: Demonstrate how language models make token choices
- Research Projects: Analyze model behavior and confidence patterns
- Interactive Learning: Hands-on exploration of AI text generation
- Model Evaluation: Compare token probability distributions across models
- Prompt Engineering: Understand how different prompts affect model confidence
- AI Transparency: Show clients/stakeholders how AI systems make decisions
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenAI for providing the API with logprobs support
- shadcn/ui for the excellent component library
- The React and TypeScript communities for robust tooling
If you encounter issues or have questions:
- Check the existing issues in the repository
- Create a new issue with detailed information
- Include steps to reproduce any bugs
Built with dedication for exploring the inner workings of AI language models.