Multimodal AI Research & Quantitative Engineering
Iridyne is a technical collective applying high-performance computing and state-of-the-art machine learning to two domains: clinical medical imaging and systematic financial markets.
We develop specialized diagnostic architectures that leverage multimodal data fusion. Our primary focus is the integration of structured clinical data with high-fidelity imaging to improve predictive accuracy in medical environments.
- Multimodal Fusion: Combining MLP-based tabular feature extraction with CNN-based (MobileNetV2) image processing.
- Statistical Validation: Implementing rigorous performance metrics including DeLong’s Test for AUC comparison and ROC curve analysis.
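The fusion design described above can be sketched in PyTorch. This is a minimal illustration rather than our production model: the class name and layer sizes are invented, and a small two-layer conv stack stands in for the MobileNetV2 backbone (in practice one would use `torchvision.models.mobilenet_v2` as the image branch).

```python
import torch
import torch.nn as nn

class TabularImageFusion(nn.Module):
    """Sketch: fuse MLP tabular features with CNN image features by concatenation."""

    def __init__(self, n_tabular: int, n_classes: int = 2):
        super().__init__()
        # MLP branch for structured clinical data
        self.tabular = nn.Sequential(
            nn.Linear(n_tabular, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Small conv stack standing in for a MobileNetV2 feature extractor
        self.image = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Classification head over the concatenated feature vector
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x_tab: torch.Tensor, x_img: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.tabular(x_tab), self.image(x_img)], dim=1)
        return self.head(z)
```

The key design point is late fusion: each modality is embedded independently, and only the fixed-size feature vectors are concatenated before the classifier.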
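DeLong's test is compact enough to sketch directly. Below is a minimal NumPy implementation of the paired test (the fast midrank formulation); the function names are our own illustration, not a library API.

```python
import numpy as np
from math import erfc, sqrt

def midrank(x):
    """Midranks of x (ties get the average rank)."""
    order = np.argsort(x)
    xs = x[order]
    n = len(x)
    ranks = np.zeros(n)
    i = 0
    while i < n:
        j = i
        while j < n and xs[j] == xs[i]:
            j += 1
        ranks[i:j] = 0.5 * (i + j - 1) + 1
        i = j
    out = np.empty(n)
    out[order] = ranks
    return out

def delong_test(y_true, scores_a, scores_b):
    """Paired DeLong test comparing two models' AUCs on the same labels.

    Returns (auc_a, auc_b, z_statistic, two_sided_p).
    """
    y_true = np.asarray(y_true)
    pos = y_true == 1
    m, n = int(pos.sum()), int((~pos).sum())
    aucs, v01, v10 = [], [], []
    for s in (np.asarray(scores_a, float), np.asarray(scores_b, float)):
        x, y = s[pos], s[~pos]
        tx, ty = midrank(x), midrank(y)
        tz = midrank(np.concatenate([x, y]))
        aucs.append((tz[:m].sum() - m * (m + 1) / 2) / (m * n))
        v01.append((tz[:m] - tx) / n)        # structural components, positives
        v10.append(1.0 - (tz[m:] - ty) / m)  # structural components, negatives
    s01, s10 = np.cov(np.vstack(v01)), np.cov(np.vstack(v10))
    var = s01 / m + s10 / n
    se = sqrt(var[0, 0] + var[1, 1] - 2 * var[0, 1])
    z = (aucs[0] - aucs[1]) / se
    p = erfc(abs(z) / sqrt(2.0))  # two-sided normal p-value
    return aucs[0], aucs[1], z, p
```

Because the test operates on the paired structural components, it accounts for the correlation between the two models' scores on the same patients, which a naive comparison of independent AUC confidence intervals would not.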
Systematic strategy development for the Web3 and Digital Asset ecosystem, focusing on robust backtesting and high-frequency execution logic.
- Strategy Engineering: Development of volatility-based models such as the KeltnerRSBreakoutStrategy.
- Backtesting Infrastructure: Multi-year, high-granularity analysis across major assets (BTC, ETH, SOL) with a focus on risk-adjusted returns and market regime adaptation.
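We do not reproduce the actual KeltnerRSBreakoutStrategy here; the sketch below is one plausible reading of the name — a Keltner Channel breakout gated by an RSI filter — with illustrative default parameters.

```python
import pandas as pd

def keltner_rsi_signals(df, ema_n=20, atr_n=10, mult=2.0, rsi_n=14, rsi_buy=55.0):
    """Hypothetical sketch: long entries when price breaks the upper Keltner
    band while RSI confirms momentum. Expects columns close/high/low."""
    c, h, l = df["close"], df["high"], df["low"]
    # Keltner Channel: EMA midline +/- multiple of ATR
    ema = c.ewm(span=ema_n, adjust=False).mean()
    prev_c = c.shift(1)
    tr = pd.concat([h - l, (h - prev_c).abs(), (l - prev_c).abs()], axis=1).max(axis=1)
    atr = tr.ewm(span=atr_n, adjust=False).mean()
    upper = ema + mult * atr
    # Wilder-style RSI via exponential smoothing of gains and losses
    delta = c.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / rsi_n, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / rsi_n, adjust=False).mean()
    rsi = 100 - 100 / (1 + gain / loss)
    long_entry = (c > upper) & (rsi > rsi_buy)
    return pd.DataFrame({"upper": upper, "rsi": rsi, "long_entry": long_entry})
```

In a real backtest these signals would feed a position-sizing and execution layer; the point of the sketch is only the breakout-plus-momentum gating logic.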
- Languages: Python (Primary), Rust, Typst, Shell (Fish).
- Deep Learning: PyTorch, MobileNetV2, Multimodal Fusion Architectures.
- Development Tools: uv (Package Management), Docker, AI-Assisted Coding (Claude Code/OpenCode).
- Operating Environments: CachyOS, Debian.
Our engineering philosophy is built on three pillars:
- Mathematical Rigor: Every AI model and trading strategy must pass exhaustive statistical validation before deployment.
- Performance Optimization: Utilizing rolling-release kernels and optimized toolchains to ensure maximum computational throughput.
- Minimalist Implementation: Reducing complexity in the codebase to ensure maintainability and rapid iteration.
Iridyne is actively exploring the intersection of the Web3/Solana ecosystem and advanced AI testing frameworks.
"Precision in Data. Logic in Execution."
Using 💕 with the power of Markdown