This document explains the real user flows that are implemented today, the expected result for each flow, and where the flow is still partial.
- Open `/imports`
- Select `Excel` or `CSV`
- Enter a local file path
- Set symbol, timeframe, and timezone
- Review or edit the inferred column mapping
- Click `Preview source`
- Click `Save mapping`
- Frontend posts `DataSourceConfig` to `POST /data-sources/preview`
- `DatasetService.preview` asks `DataManager.preview_import` for sample rows and inferred mapping
- Frontend shows preview information
- Frontend posts the same `DataSourceConfig` to `POST /data-sources/save`
- `DatasetService.save` loads the full file, normalizes timestamps and OHLCV columns, writes a dataset CSV, and stores dataset metadata
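The normalization step in `DatasetService.save` can be sketched as follows. This is a minimal stdlib-only sketch, not the actual implementation: the alias table, helper names, and the assumption that timestamps arrive as epoch seconds are all illustrative.

```python
from datetime import datetime, timezone

# Hypothetical alias table: maps common source column names to canonical OHLCV names.
COLUMN_ALIASES = {
    "date": "timestamp", "time": "timestamp", "datetime": "timestamp",
    "o": "open", "h": "high", "l": "low", "c": "close", "v": "volume",
}

def infer_mapping(columns):
    """Map each source column to its canonical name (identity if already canonical)."""
    canonical = {"timestamp", "open", "high", "low", "close", "volume"}
    mapping = {}
    for col in columns:
        key = col.strip().lower()
        if key in canonical:
            mapping[col] = key
        elif key in COLUMN_ALIASES:
            mapping[col] = COLUMN_ALIASES[key]
    return mapping

def normalize_row(row, mapping):
    """Rename columns and coerce the timestamp (assumed epoch seconds) to UTC ISO-8601."""
    out = {mapping[k]: v for k, v in row.items() if k in mapping}
    ts = datetime.fromtimestamp(int(out["timestamp"]), tz=timezone.utc)
    out["timestamp"] = ts.isoformat()
    for field in ("open", "high", "low", "close", "volume"):
        out[field] = float(out[field])
    return out
```

For example, `normalize_row({"Date": "1700000000", "O": "1", ...}, infer_mapping(["Date", "O", ...]))` yields a row with canonical keys and typed values.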
- preview shows row count, columns, active sheet, inferred mapping, and sample rows
- save returns a `dataset_id`
- saved dataset appears in `/imports` and becomes available for run creation
- no native file picker yet; the file path is typed or pasted manually
- `polygon` exists in the UI contract, but the backend loader currently supports only file-based Excel and CSV sources
- Save a dataset in `/imports`
- Open `/workspace`
- Paste or edit the Python strategy in the Python editor
- Optionally paste Pine code in the Pine editor
- Choose run mode and timeframe
- Click `Run replay`
- Frontend posts `ReplayRunRequest` to `POST /runs/replay`
- `RunService.create_replay_run` loads the saved dataset
- `PythonStrategyEngine.execute` runs the Python strategy against the normalized frame
- Candle data is normalized into `CandlePoint[]`
- If a bridge artifact is attached, Pine series and Pine trades are loaded from it
- `ComparisonEngine` compares Pine and Python outputs
- The run is persisted and returned
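In outline, this replay pipeline can be sketched as one orchestration function. Everything here beyond the steps themselves (argument shapes, field names) is a placeholder, not the real service API.

```python
def create_replay_run(dataset, strategy, bridge_artifact=None):
    """Sketch of the replay pipeline: execute, optionally compare, persist.

    `dataset` is a list of candle dicts, `strategy` a callable returning a
    dict of indicator series, and `bridge_artifact` an optional dict of
    Pine outputs. All names are illustrative.
    """
    python_outputs = strategy(dataset)          # stands in for PythonStrategyEngine.execute
    run = {
        "candles": dataset,                     # normalized CandlePoint[] equivalent
        "python_outputs": python_outputs,
        "comparison": None,
    }
    if bridge_artifact is not None:             # Pine truth only if an artifact is attached
        pine_series = bridge_artifact.get("series", {})
        run["comparison"] = {
            name: python_outputs.get(name) == values
            for name, values in pine_series.items()
        }
    return run                                  # the real service persists this before returning
```

The `comparison` field here is a crude equality check; the actual `ComparisonEngine` presumably does per-bar analysis.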
- Python screen shows candles
- Python price-like indicator series overlay on the candle chart
- mismatch analysis appears in the diff panel
- run appears in `/runs`
- only price-like indicator series are overlaid directly on the candle chart
- oscillator-like series are not rendered in a second pane yet
- Pine results only appear if a bridge artifact is selected
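A minimal sketch of the kind of per-bar series comparison `ComparisonEngine` performs, assuming an absolute tolerance per bar; the tolerance value, NaN handling, and report shape are assumptions, not the documented behavior.

```python
import math

def compare_series(pine, python, tol=1e-6):
    """Compare two equal-length float series bar by bar.

    Returns a mismatch report listing index and both values for every bar
    whose absolute difference exceeds `tol`. NaN on both sides counts as a
    match (warmup bars are commonly NaN in both engines).
    """
    mismatches = []
    for i, (a, b) in enumerate(zip(pine, python)):
        if math.isnan(a) and math.isnan(b):
            continue
        if math.isnan(a) or math.isnan(b) or abs(a - b) > tol:
            mismatches.append({"bar": i, "pine": a, "python": b})
    return {"total": len(pine), "mismatches": mismatches}
```

An absolute tolerance is the simplest choice; a real engine might prefer a relative tolerance so large price levels do not trip false mismatches.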
- Save a dataset
- Open `/workspace`
- Click `Start live run`
- Frontend posts the same run payload to `POST /runs/live`
- `RunService.create_live_run` uses warmup bars first
- A background thread increments the active dataset by one bar every second
- Python outputs are recalculated on the growing frame
- Comparison results are refreshed
- Frontend subscribes to `/runs/{run_id}/stream`
- On every event, the frontend refreshes the run from `GET /runs/{run_id}`
- the selected run enters `live`
- candle count increases over time
- comparison state changes as more bars arrive
- `/runs` shows progress like `current/total`
- the source is a saved historical dataset, not a provider-backed live feed
- there is no pause/resume control yet
- live threading is in-process, not a separate worker service
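The in-process live loop described above (one new bar per tick, recompute on the growing frame) can be sketched with `threading`. The one-second interval matches the description, but the class and field names are invented for illustration.

```python
import threading
import time

class LiveRunSketch:
    """Illustrative in-process live loop: appends one bar per tick and
    recomputes strategy outputs on the growing frame."""

    def __init__(self, dataset, strategy, interval=1.0):
        self.dataset = dataset          # the saved historical dataset
        self.strategy = strategy        # callable over the growing frame
        self.interval = interval        # seconds between bars (1.0 in the real app)
        self.active = []                # the frame visible to the strategy so far
        self.outputs = None
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        for bar in self.dataset:
            if self._stop.is_set():
                break
            self.active.append(bar)                    # one new bar per tick
            self.outputs = self.strategy(self.active)  # recompute on growing frame
            time.sleep(self.interval)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

Note there is deliberately no pause/resume here, mirroring the current limitation; adding one would mean a second `Event` the loop waits on each tick.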
- Open `/settings`
- Paste a JSON artifact into `Bridge JSON payload`
- Click `Save bridge artifact`
- Return to `/workspace`
- Select the artifact from the `Pine bridge artifact` dropdown
- Run replay or live mode again
- Frontend posts to `POST /pine-bridge/artifacts`
- `BridgeService.create` persists the artifact as JSON
- `RunService` resolves the selected artifact during run creation
- Pine indicator series and Pine trade events are used as comparison truth
- Pine screen receives candle data from the run
- Pine indicator series can be overlaid when the uploaded series is price-like
- mismatch analysis compares Python outputs against the uploaded Pine values
- TradingView automation is not implemented
- the user must provide the exported JSON payload manually
- non-price-like Pine series are listed as hidden rather than rendered in a sub-pane
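A hedged sketch of how a pasted bridge payload might be validated on upload. The JSON shape shown (top-level `series` and `trades`) is an assumption inferred from the flows above, not a documented schema.

```python
import json

def load_bridge_artifact(payload: str) -> dict:
    """Parse and minimally validate a pasted bridge JSON payload.

    Assumed shape: {"series": {name: [numbers or null]}, "trades": [...]}.
    Raises ValueError with a readable message instead of surfacing raw
    JSON decoder errors to the UI.
    """
    try:
        artifact = json.loads(payload)
    except json.JSONDecodeError as exc:
        raise ValueError(f"invalid JSON: {exc}") from None
    if not isinstance(artifact.get("series"), dict):
        raise ValueError("artifact must contain a 'series' object")
    for name, values in artifact["series"].items():
        if not all(v is None or isinstance(v, (int, float)) for v in values):
            raise ValueError(f"series {name!r} contains non-numeric values")
    artifact.setdefault("trades", [])   # trades are optional in this sketch
    return artifact
```

Validating at save time means a bad export fails in `/settings` rather than mid-run.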
- Open `/workspace`
- Choose a local chat model from the LLM panel
- Click a starter prompt or type a custom prompt
- Click `Ask LLM`
- Frontend calls `GET /chat/models` during startup and refreshes on demand
- The selected model is sent to `POST /chat`
- `ChatService` calls `OllamaClient.chat`
- `OllamaClient` sends the latest user message to Ollama with a plain-text-only system instruction
- Returned text is sanitized
- Frontend displays the cleaned response or a fallback status
- user sees which local Ollama models are available
- embedding-only models are not auto-selected for chat
- the assistant returns a plain-text answer without transcript noise
- the current workspace sends `intent=analysis` only
- there is no token streaming in the frontend yet
- patch application is not wired from the UI into files
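The sanitization step can be sketched as stripping markdown fences and transcript role prefixes from the model reply. The exact rules `ChatService` applies are not documented here, so treat these as assumptions.

```python
import re

def sanitize_reply(text: str) -> str:
    """Reduce a model reply to plain text. Rules are illustrative:
    drop code-fence markers, leading role labels, and excess blank lines."""
    # Remove fenced-code markers but keep the fenced content as plain text.
    text = re.sub(r"```[a-zA-Z]*\n?", "", text)
    # Drop a leading chat-transcript role label such as "Assistant:".
    text = re.sub(r"^(assistant|system|user):\s*", "", text.strip(),
                  flags=re.IGNORECASE)
    # Collapse runs of blank lines to a single blank line.
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()
```

Sanitizing server-side keeps the frontend a dumb renderer of plain text, which matches the plain-text-only system instruction sent to Ollama.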
- Use the approval queue in `/workspace`
- Toggle a permission row to approve or revoke
- Frontend calls `POST /permissions/grant`
- `PermissionManager` appends the grant to permission history
- Chat requests can inspect active grants
- `apply_fix` intent is rejected if Python write access is not active
- permission history appears in the approval queue
- write-sensitive chat actions can be guarded
- approval entries are append-only history, not a full RBAC system
- the current UI does not yet send `apply_fix` requests
- permission toggles change app behavior, but no file patch is applied automatically
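The append-only grant history and the `apply_fix` gate can be sketched like this. Method names mirror the ones mentioned above, but the scope name `python_write` and the data shapes are assumptions.

```python
from datetime import datetime, timezone

class PermissionManagerSketch:
    """Append-only permission history: a revocation is a new entry, not a deletion."""

    def __init__(self):
        self.history = []  # every grant/revoke event, in order

    def grant(self, scope: str, active: bool = True):
        """Record a grant (or a revocation when active=False). Nothing is removed."""
        self.history.append({
            "scope": scope,
            "active": active,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def is_active(self, scope: str) -> bool:
        """A scope is active iff its most recent history entry granted it."""
        for entry in reversed(self.history):
            if entry["scope"] == scope:
                return entry["active"]
        return False

    def check_intent(self, intent: str) -> bool:
        """Gate used by chat: apply_fix requires active Python write access."""
        if intent == "apply_fix":
            return self.is_active("python_write")
        return True  # analysis-style intents are always allowed
```

Because the history is append-only, the approval queue can render the full audit trail while `is_active` only ever consults the latest entry per scope.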