# Contributing to NeuraScreen

Thanks for your interest in contributing to NeuraScreen!
## Reporting Bugs

Open an issue with:

- What you expected to happen
- What actually happened
- Steps to reproduce
- Your OS and Python version
## Suggesting Enhancements

Open an issue with the `enhancement` label. Describe the use case and why it would be useful.
## Submitting Changes

- Fork the repository
- Create a feature branch: `git checkout -b my-feature`
- Make your changes
- Test locally: `neurascreen validate examples/01-simple-navigation.json`
- Commit: `git commit -m "Add my feature"`
- Push: `git push origin my-feature`
- Open a pull request
## Adding a TTS Provider

TTS providers are defined in `neurascreen/tts.py`.

- Create a new class extending `BaseTTSClient`
- Implement the `_synthesize(text: str) -> bytes` method (must return WAV audio bytes)
- Add your provider to the `create_tts_client()` factory function
- Document the required `.env` variables in the README
- Add the provider name to `.env.example`
Example:

```python
class MyTTSClient(BaseTTSClient):
    def _synthesize(self, text: str) -> bytes:
        # Call your TTS API and return WAV bytes
        ...
```

## Adding an Action

Actions are defined in `neurascreen/browser.py`, in the `_do_step()` method.
- Add your action name to `VALID_ACTIONS` in `neurascreen/scenario.py`
- Add validation rules in `validate_scenario()` if needed
- Implement the action in `_do_step()` using a `case` block
- Document the action in the README actions table
## Code Style

- Python 3.12+ with type hints
- No external linter enforced — keep it readable
- Prefer simple, clear code over abstractions
- Log important steps with `logger.info()` and details with `logger.debug()`
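A minimal sketch of that logging convention; the logger name and the `run_scenario` function are illustrative, not NeuraScreen's actual code:

```python
import logging

logger = logging.getLogger("neurascreen")  # illustrative logger name

def run_scenario(path: str) -> None:
    # Important milestone: visible at the default INFO level
    logger.info("Running scenario %s", path)
    # Detail: only emitted when DEBUG logging is enabled
    logger.debug("resolved scenario path to %s", path)
```

Using `%s` placeholders (rather than f-strings) defers string formatting until a handler actually emits the record.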
## Design Philosophy

NeuraScreen should remain:

- Simple — easy to understand and modify
- Scriptable — CLI-first, optional GUI (`pip install neurascreen[gui]`)
- Focused — generate demo videos, nothing else