A Simple and Universal Swarm Intelligence Engine, Predicting Anything
MiroFish is a next-generation AI prediction engine powered by multi-agent technology. From real-world seed information such as breaking news, policy drafts, or financial signals, it automatically constructs a high-fidelity parallel digital world. Within this space, thousands of intelligent agents with independent personalities, long-term memory, and behavioral logic interact freely and undergo social evolution. You can inject variables dynamically from a God's-eye view to infer future trajectories with far richer context.
You only need to upload seed materials such as reports or stories and describe your prediction request in natural language.
MiroFish returns a detailed prediction report and a deeply interactive high-fidelity digital world.
MiroFish aims to build a swarm-intelligence mirror of reality by modeling emergent collective behavior from individual interactions:
- Macro use cases: a rehearsal lab for decision-makers to test policy or communications strategies with near-zero real-world risk.
- Micro use cases: a creative sandbox for individuals to explore novel endings, alternate histories, or speculative scenarios.
From serious forecasting to playful simulation, MiroFish is designed to make "what if" questions explorable.
Try the online demo here: mirofish-live-demo
Click the image to watch the full demo video based on a BettaFish-generated Wuhan University public-opinion report.
Click the image to watch MiroFish infer a lost ending based on the first 80 chapters of Dream of the Red Chamber.
More examples for finance, politics, and current-events forecasting are planned.
- Graph Building: extract seeds from source material, inject individual and collective memory, and build a GraphRAG-ready graph.
- Environment Setup: extract entities and relationships, generate personas, and inject simulation parameters.
- Simulation: run dual-platform simulations, parse the prediction request, and update temporal memory dynamically.
- Report Generation: use ReportAgent and its tools to analyze the post-simulation environment.
- Deep Interaction: talk to agents inside the simulated world or continue through ReportAgent.
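The four pipeline stages above can be sketched as a minimal Python class. This is an illustrative toy, not the actual MiroFish API: the class, method names, and placeholder logic are all assumptions standing in for the real graph-building, persona-generation, and reporting code.

```python
# Hypothetical sketch of the MiroFish pipeline stages (names are illustrative).
from dataclasses import dataclass, field

@dataclass
class Simulation:
    seeds: list[str]                                  # seed material (stage 1)
    personas: list[str] = field(default_factory=list)
    events: list[str] = field(default_factory=list)   # temporal memory

    def setup_environment(self) -> None:
        # Stage 2: derive one placeholder persona per seed.
        self.personas = [f"agent_for:{s}" for s in self.seeds]

    def run(self, steps: int) -> None:
        # Stage 3: each step, every persona appends an event to temporal memory.
        for t in range(steps):
            for p in self.personas:
                self.events.append(f"t={t} {p} acted")

    def report(self) -> str:
        # Stage 4: summarize the post-simulation environment.
        return f"{len(self.personas)} agents produced {len(self.events)} events"

sim = Simulation(seeds=["breaking news", "policy draft"])
sim.setup_environment()
sim.run(steps=3)
print(sim.report())  # 2 agents produced 6 events
```

The real engine replaces each placeholder with its corresponding subsystem (GraphRAG graph building, persona generation, dual-platform simulation, ReportAgent), but the data flow between stages is the same.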
| Tool | Version | Description | Check |
|---|---|---|---|
| Node.js | 18+ | Frontend runtime, includes npm | node -v |
| Python | >=3.11, <=3.12 | Backend runtime | python --version |
| uv | Latest | Python package manager | uv --version |
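If you want to gate a setup script on the version constraints in the table, the checks can be expressed in a few lines of Python. The parsing helper below is an assumption for illustration; the table's `Check` commands are the authoritative way to verify your tools.

```python
# Version gates matching the table above: Node.js 18+, Python >=3.11 and <=3.12.
import re

def parse_version(s: str) -> tuple[int, ...]:
    # "v18.17.0" or "Python 3.11.9" -> (18, 17, 0) / (3, 11, 9)
    return tuple(int(x) for x in re.findall(r"\d+", s)[:3])

def node_ok(v: str) -> bool:
    return parse_version(v) >= (18,)

def python_ok(v: str) -> bool:
    return (3, 11) <= parse_version(v)[:2] <= (3, 12)

print(node_ok("v18.17.0"), python_ok("Python 3.11.9"))  # True True
```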
```bash
cp .env.example .env
```

Fill in the required API keys in `.env`:
```env
# LLM API configuration (any OpenAI-compatible API)
LLM_API_KEY=your_api_key
LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
LLM_MODEL_NAME=qwen-plus

# Zep Cloud configuration
ZEP_API_KEY=your_zep_api_key
```

```bash
npm run setup:all
```

Or install by layer:
```bash
npm run setup
npm run setup:backend
```

Start the development servers:

```bash
npm run dev
```

Service URLs:
- Frontend: http://localhost:3000
- Backend API: http://localhost:5001
Run individually:
```bash
npm run backend
npm run frontend
```

Or run with Docker:

```bash
cp .env.example .env
docker compose up -d
```

This reads the root .env file and maps ports 3000 and 5001.
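Once `.env` is filled in, the values can be loaded without extra dependencies. This is a minimal sketch, assuming only the simple `KEY=value` lines shown in `.env.example`; the actual backend may use a library such as python-dotenv instead.

```python
# Minimal .env parser: handles comments, blank lines, and KEY=value pairs.
import tempfile

def load_env(path: str) -> dict[str, str]:
    env = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            # Skip blanks, comments, and anything that is not KEY=value.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# Demo on a temporary file mimicking .env.example:
sample = "# LLM API configuration\nLLM_API_KEY=your_api_key\nLLM_MODEL_NAME=qwen-plus\n"
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write(sample)
    path = f.name

print(load_env(path)["LLM_MODEL_NAME"])  # qwen-plus
```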
The MiroFish team is hiring for full-time and internship roles. Contact: mirofish@shanda.com
MiroFish has received strategic support and incubation from Shanda Group.
MiroFish's simulation engine is powered by OASIS. Thanks to the CAMEL-AI team for the open-source foundation.