
# Contributing

Thanks for helping with the ARCEME Data Cube Pipeline. This guide describes how to set up the environment, run the pipeline, and propose changes.

## Scope

- Main code lives in `src/processor`.
- Keep large outputs (Zarr, logs, data cubes) outside the repo in the configured output directory.
- Avoid committing credentials or generated artifacts.

## Requirements

- [uv](https://docs.astral.sh/uv/) for dependency management and for running all commands below (see Dependencies).

## Setup

```shell
cd /home/eouser/datacubes/data-cubes-arceme
uv sync
```

## Configuration

Create a local `.env` file with S3 credentials (never commit real secrets). See README.md for the full template and endpoint notes.
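A minimal sketch of what the `.env` file might contain. The variable names below are illustrative assumptions only; the template in README.md is authoritative:

```shell
# .env — local only; never commit real values.
# NOTE: variable names are examples — check README.md for the actual names.
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_ENDPOINT_URL=<s3-endpoint-url>
```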

## Running

```shell
uv run python src/processor/pipeline_orchestrator.py
```

To run with a custom config:

```shell
uv run python src/processor/pipeline_orchestrator.py --config src/processor/test_config.yaml
```
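If you need a custom config, start by copying `src/processor/test_config.yaml` rather than writing one from scratch. As a purely illustrative sketch (these keys are hypothetical; the real schema is whatever `pipeline_orchestrator.py` reads):

```yaml
# Hypothetical keys for illustration only — copy test_config.yaml for the real schema.
output_dir: /data/cubes/output   # keep large outputs (Zarr, logs) outside the repo
log_level: INFO
```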

## Tests / checks

A simple cloud-mask smoke script is available:

```shell
uv run python test/senselv_tests.py
```

For an end-to-end smoke run of the pipeline, use `src/processor/test_config.yaml` to keep runtime short.

## Dependencies

Dependencies are managed with [uv](https://docs.astral.sh/uv/).

- Add or update a package: `uv add <package>`
- Sync the environment with the lockfile: `uv sync`
- Commit changes to `pyproject.toml` and `uv.lock` together.

## Submitting changes

- Keep changes focused, and describe how to reproduce or validate them.
- Update README.md when adding new options or workflow steps.
- If you touch configs or outputs, note the config used and the expected output location.