Complete reference for configuring Phlo.
Phlo uses multiple configuration sources:
```mermaid
flowchart TB
    defaults["Infrastructure defaults<br/>phlo.yaml"]
    local["Local secrets and overrides<br/>.phlo/.env.local"]
    runtime[Runtime environment variables]
    python[Python package settings]
    runconfig[Dagster run config]
    defaults --> local --> runtime --> python --> runconfig
```
- Infrastructure defaults (`phlo.yaml`, `env:`)
- Local secrets/overrides (`.phlo/.env.local`)
- Runtime environment (process environment variables)
- Python settings (package settings modules like `phlo_postgres.settings`)
- Runtime configuration (Dagster run config)
Environment variables are materialized into `.phlo/.env` (generated, non-secret defaults) and `.phlo/.env.local` (local secrets). Edit `phlo.yaml` for committed defaults and `.phlo/.env.local` for secrets.
```bash
# Active orchestrator adapter (default: dagster)
PHLO_ORCHESTRATOR=dagster
# Alias
PHLO_ORCHESTRATOR_NAME=dagster

# Log level (default: INFO)
PHLO_LOG_LEVEL=INFO
# Log output format: auto (tty=console, else JSON), json, console
PHLO_LOG_FORMAT=auto
# Emit structured log events to the hook bus (default: true)
PHLO_LOG_ROUTER_ENABLED=true
# Default service name attached to log records (default: phlo)
PHLO_LOG_SERVICE_NAME=phlo
# Log file path template with date placeholders (default: .phlo/logs/{YMD}.log)
# Available placeholders: {YMD}, {YM}, {Y}, {YYYY}, {M}, {MM}, {D}, {DD}, {H}, {HM}, {HMS}, {DATE}, {TIMESTAMP}
# Set empty to disable file logging
PHLO_LOG_FILE_TEMPLATE=.phlo/logs/{YMD}.log

# Default service namespace attached to observability resources (default: phlo)
PHLO_SERVICE_NAMESPACE=phlo
# Optional default service version attached to observability resources
PHLO_SERVICE_VERSION=
# Optional default service instance ID attached to observability resources
PHLO_SERVICE_INSTANCE_ID=
# Optional project identifier attached to observability resources
PHLO_PROJECT=
# Runtime environment attached to logs and observability resources (default: dev)
PHLO_ENVIRONMENT=dev
```

Notes:

- `PHLO_LOG_SERVICE_NAME` is the default `service.name` for `phlo-otel` when `OTEL_SERVICE_NAME` is unset.
- `PHLO_SERVICE_NAMESPACE`, `PHLO_SERVICE_VERSION`, `PHLO_SERVICE_INSTANCE_ID`, and `PHLO_PROJECT` provide Phlo-native defaults for OTel resource metadata.
- Standard `OTEL_*` variables still take precedence when set.
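To illustrate how a date-placeholder template like `{YMD}` could expand into a concrete log path, here is a sketch using `strftime`. The exact format each placeholder produces is an assumption (e.g. `{YMD}` as `YYYYMMDD`); Phlo's implementation may differ:

```python
from datetime import datetime

# Hypothetical placeholder expansion; formats are assumed, not Phlo's spec.
def expand_log_template(template: str, now: datetime) -> str:
    values = {
        "YMD": now.strftime("%Y%m%d"),
        "YM": now.strftime("%Y%m"),
        "Y": now.strftime("%Y"),
        "YYYY": now.strftime("%Y"),
        "M": now.strftime("%m"),
        "MM": now.strftime("%m"),
        "D": now.strftime("%d"),
        "DD": now.strftime("%d"),
        "H": now.strftime("%H"),
        "HM": now.strftime("%H%M"),
        "HMS": now.strftime("%H%M%S"),
        "DATE": now.strftime("%Y-%m-%d"),
        "TIMESTAMP": now.strftime("%Y%m%dT%H%M%S"),
    }
    for key, value in values.items():
        template = template.replace("{" + key + "}", value)
    return template

print(expand_log_template(".phlo/logs/{YMD}.log", datetime(2025, 1, 15)))
# → .phlo/logs/20250115.log
```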
```yaml
# phlo.yaml
capabilities:
  defaults:
    table_store: iceberg
    query_engine: trino
```

```bash
# Optional environment override for the same mapping.
PHLO_DEFAULT_CAPABILITIES='{"table_store":"iceberg","query_engine":"trino"}'
```

Capability resolution order:
```mermaid
flowchart TB
    explicit[Explicit provider passed by caller]
    tag["Runtime tag<br/>phlo/capability/<capability_type>=<provider>"]
    asset[Asset capability_overrides]
    env[PHLO_DEFAULT_CAPABILITIES]
    yaml["phlo.yaml capabilities.defaults"]
    implicit[Implicit single installed provider]
    fail[Fail with installed provider names]
    explicit --> tag --> asset --> env --> yaml --> implicit --> fail
```
- Explicit provider name passed by the caller
- Runtime/workflow tag: `phlo/capability/<capability_type>=<provider>`
- Asset-level `capability_overrides`
- `PHLO_DEFAULT_CAPABILITIES`
- `phlo.yaml` `capabilities.defaults`
- Implicit selection only when exactly one provider of that capability type is installed

If multiple providers are installed and none of the rules above selects one, Phlo fails with the installed provider names instead of picking one implicitly.
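The first-match chain above can be sketched as follows. The function and parameter names are hypothetical, not Phlo's actual resolver API:

```python
# First non-None candidate wins, in the documented precedence order.
def resolve_capability(capability_type, explicit=None, run_tags=None,
                       asset_overrides=None, env_defaults=None,
                       yaml_defaults=None, installed=()):
    tag_key = f"phlo/capability/{capability_type}"
    candidates = [
        explicit,
        (run_tags or {}).get(tag_key),
        (asset_overrides or {}).get(capability_type),
        (env_defaults or {}).get(capability_type),
        (yaml_defaults or {}).get(capability_type),
    ]
    for provider in candidates:
        if provider is not None:
            return provider
    if len(installed) == 1:  # implicit selection
        return installed[0]
    # Fail with the installed provider names rather than guessing.
    raise LookupError(
        f"ambiguous {capability_type}; installed: {sorted(installed)}"
    )

print(resolve_capability("table_store",
                         yaml_defaults={"table_store": "iceberg"},
                         installed=("iceberg", "delta")))
# → iceberg
```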
PostgreSQL database settings:

```bash
# Host and port
POSTGRES_HOST=postgres
POSTGRES_PORT=10000
# Credentials
POSTGRES_USER=lake
POSTGRES_PASSWORD=phlo
# Database
POSTGRES_DB=lakehouse
POSTGRES_MART_SCHEMA=marts
# Lineage tracking database (optional, defaults to Dagster Postgres connection)
PHLO_LINEAGE_DB_URL=postgresql://lake:phlo@postgres:10000/lakehouse
# Alternative: DAGSTER_PG_DB_CONNECTION_STRING (alias for lineage_db_url)
```

Connection string format:

```
postgresql://lake:phlo@postgres:10000/lakehouse
```
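Assuming the values above, the connection string can be assembled like this. This is a sketch; in Phlo the real helper is `get_postgres_connection_string()` on the `phlo_postgres` settings object:

```python
from urllib.parse import quote

# Percent-encode credentials so special characters survive the URL.
def postgres_connection_string(host, port, user, password, database):
    return f"postgresql://{quote(user)}:{quote(password)}@{host}:{port}/{database}"

print(postgres_connection_string("postgres", 10000, "lake", "phlo", "lakehouse"))
# → postgresql://lake:phlo@postgres:10000/lakehouse
```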
MinIO S3-compatible object storage:

```bash
# Host and ports
MINIO_HOST=minio
MINIO_API_PORT=10001
MINIO_CONSOLE_PORT=10002
# Credentials
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minioadmin
```

MinIO endpoint: `http://minio:10001`
Console UI: `http://localhost:10002`

When multiple `object_store` capability providers are installed, set `PHLO_OBJECT_STORE=minio` to select MinIO for integrations that resolve the active object store via capabilities, such as `phlo-sling` auto-connections.
RustFS S3-compatible object storage:

```bash
# Host and ports
RUSTFS_HOST=rustfs
RUSTFS_API_PORT=9000
RUSTFS_CONSOLE_PORT=9001
# Credentials
RUSTFS_ACCESS_KEY=rustfsadmin
RUSTFS_SECRET_KEY=rustfsadmin
```

RustFS endpoint: `http://rustfs:9000`
Console UI: `http://localhost:9001`

When multiple `object_store` capability providers are installed, set `PHLO_OBJECT_STORE=rustfs` to select RustFS for integrations that resolve the active object store via capabilities.
Nessie Git-like catalog:

```bash
# Version and connectivity
NESSIE_VERSION=0.107.2
NESSIE_PORT=10003
NESSIE_HOST=nessie
NESSIE_API_VERSION=v1
```

API endpoints:

- v1 API: `http://nessie:10003/api/v1`
- v2 API: `http://nessie:10003/api/v2`
- Iceberg REST: `http://nessie:10003/iceberg`
Trino distributed SQL engine:

```bash
# Version and connectivity
TRINO_VERSION=477
TRINO_PORT=10005
TRINO_HOST=trino
# Catalog
TRINO_CATALOG=iceberg
```

Connection string:

```
trino://trino:10005/iceberg_dev
```
ClickHouse analytical database for the data plane:

```bash
# Version and connectivity
CLICKHOUSE_VERSION=latest
CLICKHOUSE_HTTP_PORT=8123
CLICKHOUSE_NATIVE_PORT=19000
CLICKHOUSE_HOST=clickhouse
# Credentials
CLICKHOUSE_USER=default
CLICKHOUSE_PASSWORD=
# Database
CLICKHOUSE_DB=default
# TLS
CLICKHOUSE_SECURE=false
```

HTTP endpoint: `http://clickhouse:8123`
Native endpoint: `clickhouse:19000`

Default databases (created by clickhouse-setup):

- `raw` - Raw ingestion tables
- `staging` - Intermediate tables
- `curated` - Cleaned/validated tables
- `marts` - Published analytical marts
Observatory API runtime routing:

```bash
PHLO_API_PORT=4000
HOST=0.0.0.0
PHLO_AUTHORIZATION_BACKEND=
PHLO_AUTHORIZATION_MODE=optional
PHLO_QUERY_ENGINE_URL=
PHLO_QUERY_CATALOG=
PHLO_DEFAULT_REF=
PHLO_API_DISCOVERY_SCHEMAS=
```

Notes:

- `PHLO_AUTHORIZATION_MODE=optional` keeps guarded `phlo-api` routes reachable when no authz backend is configured.
- `PHLO_AUTHORIZATION_MODE=required` makes guarded `phlo-api` routes fail closed with HTTP 503 until `PHLO_AUTHORIZATION_BACKEND` resolves.
- The same settings can be declared in `phlo.yaml` under `api.authorization` or `services.phlo-api.authorization`.
- `PHLO_QUERY_ENGINE_URL` is required unless the resolved `query_engine` capability exposes `url`, `http_url`, or `host`/`port` metadata.
- `PHLO_QUERY_CATALOG` is required unless the resolved `query_engine` capability exposes `default_catalog`.
- `PHLO_DEFAULT_REF` is required for ref-dependent endpoints unless the resolved `query_engine` capability exposes `default_ref`.
- `PHLO_API_DISCOVERY_SCHEMAS` is optional only when table discovery can use request `branch`/`preferred_schema` values or `query_engine` capability metadata such as `discovery_schemas`.
Generated dbt profile settings:

```bash
DBT_PROJECT_DIR=workflows/transforms/dbt
DBT_PROFILES_DIR=workflows/transforms/dbt/profiles
DBT_QUERY_ENGINE_TYPE=trino
DBT_QUERY_HOST=trino
DBT_QUERY_PORT=8080
DBT_QUERY_CATALOG=iceberg
DBT_QUERY_SCHEMA=raw
DBT_QUERY_USER=dagster
DBT_QUERY_HTTP_SCHEME=http
DBT_QUERY_AUTH_METHOD=none
DBT_QUERY_THREADS=2
```

These values are used to generate `profiles.yml` for dbt runtime execution. Target and ref selection are derived from canonical runtime routing.
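A sketch of how the `DBT_QUERY_*` variables could map onto a generated profile structure. The output keys follow the dbt-trino adapter's profile format; the profile name, target name, and the generator itself are illustrative, not Phlo's actual implementation:

```python
# Hypothetical mapping from DBT_QUERY_* env vars to a profiles.yml-shaped dict.
def render_profile(env: dict) -> dict:
    return {
        "phlo": {
            "target": "dev",
            "outputs": {
                "dev": {
                    "type": env.get("DBT_QUERY_ENGINE_TYPE", "trino"),
                    "host": env.get("DBT_QUERY_HOST", "trino"),
                    "port": int(env.get("DBT_QUERY_PORT", "8080")),
                    "database": env.get("DBT_QUERY_CATALOG", "iceberg"),
                    "schema": env.get("DBT_QUERY_SCHEMA", "raw"),
                    "user": env.get("DBT_QUERY_USER", "dagster"),
                    "http_scheme": env.get("DBT_QUERY_HTTP_SCHEME", "http"),
                    "threads": int(env.get("DBT_QUERY_THREADS", "2")),
                }
            },
        }
    }

profile = render_profile({"DBT_QUERY_HOST": "trino", "DBT_QUERY_PORT": "8080"})
print(profile["phlo"]["outputs"]["dev"]["port"])  # → 8080
```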
Apache Iceberg table format:

```bash
# Storage paths
ICEBERG_WAREHOUSE_PATH=s3://lake/warehouse
ICEBERG_STAGING_PATH=s3://lake/stage
# Default namespace
ICEBERG_DEFAULT_NAMESPACE=raw
# Default catalog reference
ICEBERG_DEFAULT_REF=main
# Iceberg REST catalog endpoint
ICEBERG_CATALOG_URI=http://nessie:19120/iceberg
```

Warehouse paths by branch:

```
# main branch
s3://lake/warehouse
# Custom branch
s3://lake/warehouse@feature-branch
```

Delta Lake table format (alternative to Iceberg):

```bash
# Storage paths
DELTA_WAREHOUSE_PATH=s3://lake/warehouse/delta
DELTA_STAGING_PATH=s3://lake/stage
# Default namespace
DELTA_DEFAULT_NAMESPACE=raw
# S3 endpoint
DELTA_S3_ENDPOINT=http://minio:10001
# Allow unsafe rename for S3
DELTA_S3_ALLOW_UNSAFE_RENAME=true
```

Nessie branch lifecycle configuration:
```bash
# Retention periods (days)
BRANCH_RETENTION_DAYS=7
BRANCH_RETENTION_DAYS_FAILED=2
# Automation
AUTO_PROMOTE_ENABLED=true
BRANCH_CLEANUP_ENABLED=false
```

Behavior:

- `BRANCH_RETENTION_DAYS`: Days to keep successful pipeline branches
- `BRANCH_RETENTION_DAYS_FAILED`: Days to keep failed pipeline branches
- `AUTO_PROMOTE_ENABLED`: Auto-merge to main when quality checks pass
- `BRANCH_CLEANUP_ENABLED`: Automatically delete old branches
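The retention rules above amount to a simple age check with a different threshold for failed branches. A minimal sketch (the helper name and signature are hypothetical; the real logic lives in Phlo's cleanup sensor):

```python
from datetime import datetime, timedelta

# Returns True when a branch is older than its retention window.
def should_delete_branch(created_at, succeeded, now,
                         retention_days=7, retention_days_failed=2,
                         cleanup_enabled=True):
    if not cleanup_enabled:
        return False
    keep_days = retention_days if succeeded else retention_days_failed
    return now - created_at > timedelta(days=keep_days)

now = datetime(2025, 1, 15)
print(should_delete_branch(datetime(2025, 1, 10), succeeded=False, now=now))
# → True (failed branch older than 2 days)
```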
Dagster WAP sensor intervals:

```bash
PHLO_WAP_BRANCH_CREATION_INTERVAL_SECONDS=30
PHLO_WAP_PROMOTION_INTERVAL_SECONDS=60
PHLO_WAP_CLEANUP_INTERVAL_SECONDS=3600
```

These settings only matter when the active profile includes a versioned catalog capability.
Data quality validation settings:

```bash
# Freshness blocking
FRESHNESS_BLOCKS_PROMOTION=false
# Pandera validation level: error, warning, or skip
PANDERA_CRITICAL_LEVEL=error
# Validation retry
VALIDATION_RETRY_ENABLED=true
VALIDATION_RETRY_MAX_ATTEMPTS=3
# Delay between attempts, in seconds
VALIDATION_RETRY_DELAY_SECONDS=300
```

Pandera levels:

- `error`: Validation failures block the pipeline
- `warning`: Log warnings but continue
- `skip`: Skip validation entirely (not recommended)
Business intelligence and visualization:

```bash
SUPERSET_PORT=10007
SUPERSET_ADMIN_USER=admin
SUPERSET_ADMIN_PASSWORD=admin
SUPERSET_ADMIN_EMAIL=admin@superset.com
SUPERSET_DATABASE_NAME=
```

Access: `http://localhost:10007`

`SUPERSET_DATABASE_NAME` is required unless a resolved `query_engine` capability declares catalog metadata.
Orchestration platform:

```bash
DAGSTER_PORT=10006
# Executor configuration (set only one)
PHLO_FORCE_IN_PROCESS_EXECUTOR=false    # Force in-process executor
PHLO_FORCE_MULTIPROCESS_EXECUTOR=false  # Force multiprocess executor
# Host platform (auto-detected, but can be set explicitly for daemon/webserver on macOS)
PHLO_HOST_PLATFORM=  # Darwin, Linux, or Windows
```

Access: `http://localhost:10006`
Internal API server:

```bash
HUB_APP_PORT=10009
HUB_DEBUG=false
```

JWT authentication:

```bash
JWT_SECRET_KEY=your-secret-key-change-this-in-production
JWT_ALGORITHM=HS256
JWT_EXPIRATION_HOURS=24
```

```bash
HASURA_GRAPHQL_PORT=10012
HASURA_GRAPHQL_ADMIN_SECRET=hasura-admin-secret
HASURA_GRAPHQL_ENABLE_CONSOLE=true
```

Access: `http://localhost:10012`

```bash
POSTGREST_PORT=10011
POSTGREST_DB_SCHEMA=marts
POSTGREST_DB_ANON_ROLE=web_anon
DBT_API_SOURCE_SCHEMA=
```

Access: `http://localhost:10011`

`DBT_API_SOURCE_SCHEMA` is optional only when the dbt manifest contains exactly one model schema; otherwise it must be set explicitly.
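The "exactly one model schema" rule can be sketched as follows. The manifest structure shown is a simplified slice of a dbt `manifest.json` (model nodes carry a `schema` field); the helper name is hypothetical:

```python
# Infer the source schema when the manifest is unambiguous; otherwise require
# an explicit DBT_API_SOURCE_SCHEMA value.
def infer_source_schema(manifest, explicit=None):
    if explicit:
        return explicit
    schemas = {
        node["schema"]
        for node in manifest.get("nodes", {}).values()
        if node.get("resource_type") == "model"
    }
    if len(schemas) == 1:
        return schemas.pop()
    raise ValueError(
        f"DBT_API_SOURCE_SCHEMA required; found schemas: {sorted(schemas)}"
    )

manifest = {"nodes": {
    "model.proj.a": {"resource_type": "model", "schema": "marts"},
    "model.proj.b": {"resource_type": "model", "schema": "marts"},
}}
print(infer_source_schema(manifest))  # → marts
```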
Data catalog and governance:

```bash
OPENMETADATA_HOST=openmetadata-server
OPENMETADATA_PORT=8585
OPENMETADATA_HEAP_OPTS="-Xmx512m -Xms512m"
OPENMETADATA_ES_JAVA_OPTS="-Xms512m -Xmx512m"
OPENMETADATA_USERNAME=admin
OPENMETADATA_PASSWORD=admin
OPENMETADATA_VERIFY_SSL=false
OPENMETADATA_SERVICE_TYPE=
OPENMETADATA_CATALOG_SCANNER=
OPENMETADATA_QUERY_ENGINE=
OPENMETADATA_DATABASE_NAME=
OPENMETADATA_DBT_MANIFEST_PATH=workflows/transforms/dbt/target/manifest.json
OPENMETADATA_DBT_CATALOG_PATH=workflows/transforms/dbt/target/catalog.json
OPENMETADATA_SYNC_ENABLED=true
# Minimum interval between syncs
OPENMETADATA_SYNC_INTERVAL_SECONDS=300
```

`OPENMETADATA_DATABASE_NAME` and `OPENMETADATA_SERVICE_TYPE` are required unless a resolved `query_engine` capability declares both catalog and `service_type` metadata.

Access: `http://localhost:8585`
Metrics collection:

```bash
PROMETHEUS_PORT=9090
PROMETHEUS_PUBLIC_URL=
PROMETHEUS_QUERY_PATH=/graph
```

Access: `http://localhost:9090`
Log aggregation:

```bash
LOKI_PORT=3100
LOKI_PUBLIC_URL=
LOKI_LOGS_PATH=/logs
```

Dashboards and visualization:

```bash
GRAFANA_PORT=3000
GRAFANA_ADMIN_USER=admin
GRAFANA_ADMIN_PASSWORD=admin
GRAFANA_PUBLIC_URL=
GRAFANA_DASHBOARD_PATH_TEMPLATE=/d/{uid}
```

Access: `http://localhost:3000`
```bash
PHLO_OBSERVABILITY_PUBLIC_HOST=localhost
PHLO_OBSERVABILITY_PUBLIC_SCHEME=http
```

- Set `*_PUBLIC_URL` when Grafana, Loki, or Prometheus are exposed behind a proxy or custom domain.
- If `*_PUBLIC_URL` is unset, Phlo builds links from `PHLO_OBSERVABILITY_PUBLIC_SCHEME`, `PHLO_OBSERVABILITY_PUBLIC_HOST`, and the configured service port.
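The fallback described above can be sketched as a small helper (hypothetical name and signature; not Phlo's actual link builder):

```python
# Prefer the explicit public URL; otherwise compose scheme://host:port.
def service_base_url(public_url, scheme="http", host="localhost", port=3000):
    if public_url:
        return public_url.rstrip("/")
    return f"{scheme}://{host}:{port}"

print(service_base_url(None, host="localhost", port=3000))
# → http://localhost:3000
print(service_base_url("https://grafana.example.com/"))
# → https://grafana.example.com
```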
```bash
PHLO_ALERT_SLACK_WEBHOOK=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
PHLO_ALERT_SLACK_CHANNEL=#data-alerts

PHLO_ALERT_PAGERDUTY_KEY=your-integration-key

PHLO_ALERT_EMAIL_SMTP_HOST=smtp.gmail.com
PHLO_ALERT_EMAIL_SMTP_PORT=587
PHLO_ALERT_EMAIL_SMTP_USER=your-email@gmail.com
PHLO_ALERT_EMAIL_SMTP_PASSWORD=your-app-password
# Comma-separated list
PHLO_ALERT_EMAIL_RECIPIENTS=team@yourdomain.com,alerts@yourdomain.com
```

See the Security Setup Guide for detailed setup instructions.
```bash
# Authentication type (PASSWORD, OAUTH2, JWT, CERTIFICATE, KERBEROS, or empty)
TRINO_AUTH_TYPE=
# LDAP Authentication (when TRINO_AUTH_TYPE=PASSWORD)
TRINO_LDAP_URL=ldaps://ldap.example.com:636
TRINO_LDAP_USER_BIND_PATTERN=${USER}@example.com
# OAuth2/OIDC Authentication (when TRINO_AUTH_TYPE=OAUTH2)
TRINO_OAUTH2_ISSUER=https://auth.example.com
TRINO_OAUTH2_CLIENT_ID=trino
TRINO_OAUTH2_CLIENT_SECRET=your-client-secret
# HTTPS/TLS
TRINO_HTTPS_ENABLED=false
TRINO_HTTPS_KEYSTORE_PATH=/etc/trino/keystore.jks
TRINO_HTTPS_KEYSTORE_PASSWORD=keystore-password
# Access Control
TRINO_ACCESS_CONTROL_TYPE=file
TRINO_ACCESS_CONTROL_CONFIG_FILE=/etc/trino/access-control.json
```

```bash
# OIDC/OAuth2 Authentication
NESSIE_OIDC_ENABLED=false
NESSIE_OIDC_SERVER_URL=https://auth.example.com/realms/phlo
NESSIE_OIDC_CLIENT_ID=nessie
NESSIE_OIDC_CLIENT_SECRET=your-client-secret
NESSIE_OIDC_ISSUER=https://auth.example.com
# Authorization
NESSIE_AUTHZ_ENABLED=false
```

```bash
# TLS (set server URL to enable HTTPS)
MINIO_SERVER_URL=https://minio.example.com
# OIDC Authentication
MINIO_OIDC_CONFIG_URL=https://auth.example.com/.well-known/openid-configuration
MINIO_OIDC_CLIENT_ID=minio
MINIO_OIDC_CLIENT_SECRET=your-client-secret
MINIO_OIDC_CLAIM_NAME=policy
MINIO_OIDC_SCOPES=openid
# LDAP Authentication
MINIO_LDAP_SERVER=ldap.example.com:636
MINIO_LDAP_BIND_DN=cn=admin,dc=example,dc=com
MINIO_LDAP_BIND_PASSWORD=ldap-password
MINIO_LDAP_USER_BASE_DN=ou=users,dc=example,dc=com
MINIO_LDAP_USER_FILTER=(uid=%s)
# Encryption at Rest
MINIO_AUTO_ENCRYPTION=off
# Audit Logging
MINIO_AUDIT_ENABLED=off
MINIO_AUDIT_ENDPOINT=http://audit-service:8080/logs
```

`MINIO_AUDIT_ENABLED` and `MINIO_AUDIT_ENDPOINT` are Phlo's bundled audit-log automation surface for object storage events. In production, point the endpoint at a durable backend and pair it with centralized application logs. See Audit Logging.
```bash
# SSL Mode (disable, allow, prefer, require, verify-ca, verify-full)
POSTGRES_SSL_MODE=prefer
POSTGRES_SSL_CERT_FILE=/path/to/cert.pem
POSTGRES_SSL_KEY_FILE=/path/to/key.pem
POSTGRES_SSL_CA_FILE=/path/to/ca.pem
```

```bash
# dbt artifact paths (defaults to <DBT_PROJECT_DIR>/target when unset)
DBT_MANIFEST_PATH=workflows/transforms/dbt/target/manifest.json
DBT_CATALOG_PATH=workflows/transforms/dbt/target/catalog.json
# dbt project directory
DBT_PROJECT_DIR=workflows/transforms/dbt
# Workflows path (for external projects)
WORKFLOWS_PATH=workflows
```

```bash
# Plugin system
PLUGINS_ENABLED=true
PLUGINS_AUTO_DISCOVER=true
# Whitelist/blacklist (comma-separated)
PLUGINS_WHITELIST=plugin1,plugin2
PLUGINS_BLACKLIST=deprecated_plugin
# Plugin registry
PLUGIN_REGISTRY_URL=https://registry.phlohouse.com/plugins.json
PLUGIN_REGISTRY_CACHE_TTL_SECONDS=3600
PLUGIN_REGISTRY_TIMEOUT_SECONDS=10
```

`PLUGINS_AUTO_DISCOVER` is the default switch. At runtime, `PHLO_NO_AUTO_DISCOVER` can override it, but only to disable:

- Truthy values disable auto-discovery (`1`, `true`, `yes`, `on`).
- Falsy values do not disable (`0`, `false`, `no`, `off`).
- Any other non-empty value is treated as disable and logged as invalid.
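The precedence rules above can be sketched as a small decision function (hypothetical helper; not Phlo's actual parser):

```python
FALSY = {"0", "false", "no", "off"}

# The env var can only disable auto-discovery, never force-enable it.
def auto_discover_enabled(settings_value, env_value):
    if not settings_value:
        return False  # settings already false; env cannot force enable
    if env_value is None or env_value == "":
        return True
    # Falsy values leave auto-discovery on; anything else (truthy or
    # invalid) disables it.
    return env_value.strip().lower() in FALSY

print(auto_discover_enabled(True, "1"))   # → False
print(auto_discover_enabled(True, "0"))   # → True
print(auto_discover_enabled(False, "0"))  # → False
```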
```bash
# Disabled (env override wins)
PLUGINS_AUTO_DISCOVER=true
PHLO_NO_AUTO_DISCOVER=1

# Enabled (falsy env does not disable)
PLUGINS_AUTO_DISCOVER=true
PHLO_NO_AUTO_DISCOVER=0

# Disabled (settings already false; env cannot force enable)
PLUGINS_AUTO_DISCOVER=false
PHLO_NO_AUTO_DISCOVER=0
```

Project-level configuration in phlo.yaml:
```yaml
name: my-project
description: My data lakehouse project

infrastructure:
  # Container naming pattern
  container_naming_pattern: "{{project}}-{{service}}-1"

  # Service-specific configuration
  services:
    dagster_webserver:
      container_name: null  # Use pattern
      service_name: dagster-webserver
      host: localhost
      internal_host: dagster-webserver
      port: 10006
    postgres:
      container_name: null
      service_name: postgres
      host: localhost
      internal_host: postgres
      port: 10000
      credentials:
        user: postgres
        password: postgres
        database: cascade
    minio:
      container_name: null
      service_name: minio
      host: localhost
      internal_host: minio
      api_port: 10001
      console_port: 10002
    nessie:
      container_name: null
      service_name: nessie
      host: localhost
      internal_host: nessie
      port: 10003
    trino:
      container_name: null
      service_name: trino
      host: localhost
      internal_host: trino
      port: 10005
```

```python
from phlo.infrastructure.config import (
    load_infrastructure_config,
    get_container_name,
    get_service_config,
)

# Load config
config = load_infrastructure_config()

# Get container name
container = get_container_name("dagster-webserver")
# Returns: "my-project-dagster-webserver-1"

# Get service config
service = get_service_config("postgres")
# Returns: dict with host, port, credentials, etc.
```

Programmatic access to configuration lives in each capability package:
```python
from phlo_postgres.settings import get_settings as get_postgres_settings
from phlo_minio.settings import get_settings as get_minio_settings
from phlo_nessie.settings import get_settings as get_nessie_settings
from phlo_trino.settings import get_settings as get_trino_settings
from phlo_iceberg.settings import get_settings as get_iceberg_settings

# Database
postgres = get_postgres_settings()
postgres.postgres_host
postgres.postgres_port
postgres.get_postgres_connection_string()

# MinIO
minio = get_minio_settings()
minio.minio_endpoint()

# Nessie
nessie = get_nessie_settings()
nessie.nessie_uri()
nessie.nessie_api_uri()
nessie.nessie_iceberg_rest_uri()

# Trino
trino = get_trino_settings()
trino.trino_connection_string()

# Iceberg
iceberg = get_iceberg_settings()
iceberg.iceberg_warehouse_path
iceberg.get_iceberg_warehouse_for_branch("main")
```

Dagster run configuration for asset execution:
```
# Example run config
{
  "ops": {
    "my_asset": {
      "config": {
        "partition_date": "2025-01-15",
        "full_refresh": false
      }
    }
  },
  "resources": {
    "iceberg": {
      "config": {
        "ref": "pipeline/run-abc123"
      }
    }
  }
}
```

Standard port assignments:
| Port  | Service        |
|-------|----------------|
| 10000 | PostgreSQL     |
| 10001 | MinIO API      |
| 10002 | MinIO Console  |
| 10003 | Nessie         |
| 10005 | Trino          |
| 10006 | Dagster        |
| 10007 | Superset       |
| 10009 | Hub/Flask      |
| 10011 | PostgREST      |
| 10012 | Hasura GraphQL |
| 8585  | OpenMetadata   |
| 3000  | Grafana        |
| 9090  | Prometheus     |
| 3100  | Loki           |
```yaml
# phlo.yaml (development)
env:
  POSTGRES_HOST: localhost
  MINIO_HOST: localhost
  DAGSTER_HOST_PLATFORM: local
  HUB_DEBUG: true
  AUTO_PROMOTE_ENABLED: true
  BRANCH_CLEANUP_ENABLED: false
```

```yaml
# phlo.staging.yaml
env:
  POSTGRES_HOST: postgres-staging
  MINIO_HOST: minio-staging
  DAGSTER_HOST_PLATFORM: docker
  HUB_DEBUG: false
  AUTO_PROMOTE_ENABLED: true
  BRANCH_CLEANUP_ENABLED: true
  BRANCH_RETENTION_DAYS: 3
```

```yaml
# phlo.production.yaml
env:
  POSTGRES_HOST: postgres-prod.internal
  POSTGRES_PORT: 5432
  MINIO_HOST: minio-prod.internal
  NESSIE_HOST: nessie-prod.internal
  TRINO_HOST: trino-prod.internal
  DAGSTER_HOST_PLATFORM: k8s
  DAGSTER_EXECUTOR: multiprocess
  HUB_DEBUG: false
  AUTO_PROMOTE_ENABLED: true
  BRANCH_CLEANUP_ENABLED: true
  BRANCH_RETENTION_DAYS: 7
  BRANCH_RETENTION_DAYS_FAILED: 2
  FRESHNESS_BLOCKS_PROMOTION: true
  PANDERA_CRITICAL_LEVEL: error
  VALIDATION_RETRY_ENABLED: true
  OPENMETADATA_SYNC_ENABLED: true
```

Do not commit secrets to version control:
```
# .gitignore
.phlo/.env.local
```

Use environment-specific files:

```
.env.example      # Secrets template (committed)
.phlo/.env.local  # Local secrets (ignored)
```

Generate secure passwords:

```bash
# Generate random password
openssl rand -base64 32
```

```bash
# Use in .phlo/.env.local
POSTGRES_PASSWORD=<generated-password>
MINIO_ROOT_PASSWORD=<generated-password>
JWT_SECRET_KEY=<generated-password>
```

Use the least-privilege principle:

```bash
# Read-only user for BI tools
POSTGRES_BI_USER=bi_readonly
POSTGRES_BI_PASSWORD=<password>
```

```sql
-- Grant only read access to marts
GRANT USAGE ON SCHEMA marts TO bi_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA marts TO bi_readonly;
```

```bash
# Validate phlo.yaml
phlo config validate phlo.yaml
# Show current config
phlo config show
# Show with secrets (masked by default)
phlo config show --secrets
```

```python
from phlo_postgres.settings import get_settings
from pydantic import ValidationError

try:
    # Access settings (validates on load)
    conn_str = get_settings().get_postgres_connection_string()
except ValidationError as e:
    print(f"Configuration error: {e}")
```

```bash
# Test PostgreSQL
psql postgresql://postgres:password@localhost:10000/cascade
# Test MinIO
mc alias set local http://localhost:10001 minioadmin minioadmin
mc ls local
# Test Nessie
curl http://localhost:10003/api/v2/config
# Test Trino
phlo trino
```

Check if ports are in use:

```bash
# macOS/Linux
lsof -i :10000
lsof -i :10006
```

```
# Windows
netstat -ano | findstr :10000
```

Change ports in phlo.yaml (`env:`):

```bash
POSTGRES_PORT=15432
DAGSTER_PORT=13000
```

Fix Docker volume permissions:

```bash
sudo chown -R $USER:$USER .phlo/
chmod -R 755 .phlo/
```

- Installation Guide - Setup instructions
- CLI Reference - Command-line tools
- Developer Guide - Building workflows
- Troubleshooting - Common issues