Tinker is an API from Thinking Machines Lab that provides direct access to LLM training primitives. Unlike inference-only APIs, Tinker lets you fine-tune and train language models programmatically.
Tinker enables you to:
- Access low-level training operations (`forward_backward()`, `optim_step()`, `sample()`, `save_state()`)
- Fine-tune models using LoRA without managing infrastructure
- Experiment with custom training loops and reward functions
- Build and iterate on ML research ideas quickly
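To show how those four primitives fit together, here is a structural sketch of a training loop. The `StubTrainingClient` class is a runnable stand-in, not Tinker's actual client: the real method signatures, model setup, and return types come from the `tinker` package and may differ.

```python
# Structural sketch of a Tinker-style training loop.
# StubTrainingClient is a placeholder so the loop shape is runnable;
# it is NOT the real Tinker API.

class StubTrainingClient:
    """Stand-in mimicking the four primitives named above."""

    def __init__(self):
        self.steps = 0

    def sample(self, prompt: str) -> str:
        # Real API: generates tokens from the current model weights.
        return f"completion for: {prompt}"

    def forward_backward(self, batch: list[str]) -> float:
        # Real API: accumulates gradients and reports a loss.
        return 1.0 / (1 + self.steps)

    def optim_step(self) -> None:
        # Real API: applies the accumulated gradients via the optimizer.
        self.steps += 1

    def save_state(self) -> str:
        # Real API: persists model/optimizer state for later resumption.
        return f"checkpoint-{self.steps}"


client = StubTrainingClient()
for _ in range(3):
    output = client.sample("Guess a number between 1 and 10")
    loss = client.forward_backward([output])
    client.optim_step()
checkpoint = client.save_state()
print(checkpoint)  # checkpoint-3
```

The loop above is the pattern the demo notebook builds on: sample, score, update, repeat, then checkpoint.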
This hands-on workshop introduces the Tinker API through a practical demo. You'll learn how to use Tinker's training primitives to improve a language model's capabilities in real time.
Through practical examples in `guess_number_demo.ipynb`, you'll explore:
- Tinker's Core API - Understanding the training primitives
- Sampling - Generating outputs from your model
- Training Steps - Running forward/backward passes and optimizer steps
- Custom Training Loops - Building your own training logic
Tinker democratizes access to LLM training by providing:
- Accessibility - Train models without managing GPU clusters
- Flexibility - Full control over the training process
- Experimentation - Rapid iteration on training ideas
- Education - Learn how LLM training actually works
Whether you're a researcher, hobbyist, or developer, Tinker opens up LLM training in a way that wasn't previously accessible.
```
uv sync
```
- Sign up at auth.thinkingmachines.ai/sign-up
- Verify your email - Check your inbox for a verification link
- Log in to the Tinker Console
- Create an API key - Navigate to the API Keys section and generate a new key
- Save your key - Copy the key immediately (you won't be able to see it again)
Create a `.env` file in the project root:

```
echo "TINKER_API_KEY=your_key_here" > .env
```

Or manually create `.env` with:

```
TINKER_API_KEY=your_api_key_here
```
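For the key to be visible to the notebook, it has to end up in the process environment. Projects commonly use a package like `python-dotenv` for this; as a dependency-free illustration (an assumption, not necessarily how this project loads it), a minimal `.env` loader looks like:

```python
import os
from pathlib import Path


def load_env(path: str = ".env") -> None:
    """Minimal .env loader using only the standard library
    (a stand-in for packages like python-dotenv)."""
    for raw in Path(path).read_text().splitlines():
        line = raw.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            # setdefault: don't clobber a key already set in the shell
            os.environ.setdefault(key.strip(), value.strip())


# Demo with a placeholder key (never commit a real key):
Path(".env").write_text("TINKER_API_KEY=your_api_key_here\n")
load_env()
print("TINKER_API_KEY" in os.environ)
```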
```
uv run jupyter lab guess_number_demo.ipynb
```

The notebook is organized into focused sections:
- Setup & Configuration - API keys and environment preparation
- Understanding Tinker - Overview of the API and its primitives
- Sampling - Generating model outputs
- Training - Running training steps with custom rewards
- Evaluation - Measuring model improvement
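The custom-reward step is where the demo's training signal comes from. As an illustration of what a reward function for a number-guessing task might look like (this exact function is hypothetical, not necessarily the one the notebook uses):

```python
def guess_reward(completion: str, target: int) -> float:
    """Toy reward for a number-guessing task: 1.0 for an exact guess,
    partial credit for near misses, 0.0 for unparseable output.
    (Illustrative only; the notebook's actual reward may differ.)"""
    try:
        guess = int(completion.strip())
    except ValueError:
        # Model produced something that isn't a number at all.
        return 0.0
    # Linearly decay the reward with distance from the target.
    return max(0.0, 1.0 - abs(guess - target) / 10.0)


print(guess_reward("7", 7))      # exact guess -> full reward
print(guess_reward("5", 7))      # near miss -> partial reward
print(guess_reward("hello", 7))  # not a number -> zero reward
```

A scalar reward like this is what gets fed back into the training step to push the model toward better guesses.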
By completing this workshop, you'll understand how to:
- Connect to and authenticate with Tinker's API
- Use `sample()` to generate model outputs
- Run `forward_backward()` and `optim_step()` for training
- Build custom training loops for your own use cases
- Apply Tinker to your own ML projects
- Tinker Documentation
- Tinker Cookbook - More recipes and examples
- Thinking Machines Lab
- `guess_number_demo.ipynb` - Main demo notebook
- `main.py` - Supporting Python code
- `.env` - Your API key (not tracked in git)
Start exploring Tinker by opening the notebook and running through the examples!
