diff --git a/rfc/5/index.md b/rfc/5/index.md index 83986762..644ff879 100644 --- a/rfc/5/index.md +++ b/rfc/5/index.md @@ -1,4 +1,4 @@ -# RFC-5 Coordinate systems and transformations +# RFC-5: Coordinate Systems and Transformations ```{toctree} :hidden: @@ -9,315 +9,437 @@ responses/index versions/index ``` -Add named coordinate systems and expand and clarify coordinate transformations. +Add named coordinate systems and expand and clarify coordinate transformations. This document represents the updated proposal following the [original RFC5 proposal](./versions/1/index.md) and incorporates feedback from reviewers and implementers. ## Status -This RFC is currently in RFC state `R1` (send for review). - -```{list-table} Record -:widths: 8, 20, 20, 20, 15, 10 -:header-rows: 1 -:stub-columns: 1 - -* - Role - - Name - - GitHub Handle - - Institution - - Date - - Status -* - Author - - John Bogovic - - @bogovicj - - HHMI Janelia - - 2024-07-30 - - Implemented -* - Author - - Davis Bennett - - @d-v-b - - - - 2024-07-30 - - Implemented validation -* - Author - - Luca Marconato - - @LucaMarconato - - EMBL - - 2024-07-30 - - Implemented -* - Author - - Matt McCormick - - @thewtex - - ITK - - 2024-07-30 - - Implemented -* - Author - - Stephan Saalfeld - - @axtimwalde - - HHMI Janelia - - 2024-07-30 - - Implemented (with JB) -* - Endorser - - Norman Rzepka - - @normanrz - - Scalable Minds - - 2024-08-22 - - -* - Reviewer - - Dan Toloudis, David Feng, Forrest Collman, Nathalie GAudreault, Gideon Dunster - - toloudis, dyf, fcollman - - Allen Institutes - - 2024-11-28 - - [Review](./reviews/1/index) -* - Reviewer - - Will Moore, Jean-Marie Burel, Jason Swedlow - - will-moore, jburel, jrswedlow - - University of Dundee - - 2025-01-22 - - [Review](./reviews/2/index) -``` +This RFC is currently in RFC state `R4` (authors prepare responses). 
+ +| **Role** | Name | GitHub Handle | Institution | Date | Status | +|----------|------|---------------|-------------|------|--------| +| **Author** | John Bogovic | @bogovicj | HHMI Janelia | 2024-07-30 | Implemented | +| **Author** | Davis Bennett | @d-v-b | | 2024-07-30 | Implemented validation | +| **Author** | Luca Marconato | @LucaMarconato | EMBL | 2024-07-30 | Implemented | +| **Author** | Matt McCormick | @thewtex | ITK | 2024-07-30 | Implemented | +| **Author** | Stephan Saalfeld | @axtimwalde | HHMI Janelia | 2024-07-30 | Implemented (with JB) | +| **Author** | Johannes Soltwedel | @jo-mueller | German Bioimaging e.V. | 2025-10-07 | Implemented | +| **Endorser** | Will Moore | @will-moore | University of Dundee | 2025-10-23 | Implemented | +| **Endorser** | David Stansby | @dstansby | University College London | 2025-10-23 | Implemented | +| **Endorser** | Norman Rzepka | @normanrz | Scalable Minds | 2024-08-22 | | +| **Reviewer** | Dan Toloudis, David Feng, Forrest Collman, Nathalie Gaudreault, Gideon Dunster | toloudis, dyf, fcollman | Allen Institutes | 2024-11-28 | [Review](rfcs:rfc5:review1) | +| **Reviewer** | Will Moore, Jean-Marie Burel, Jason Swedlow | will-moore, jburel, jrswedlow | University of Dundee | 2025-01-22 | [Review](rfcs:rfc5:review2) | ## Overview This RFC provides first-class support for spatial and coordinate transformations in OME-Zarr. +Working version title: **0.6dev2** + ## Background -Coordinate and spatial transformation are vitally important for neuro and bio-imaging and broader scientific imaging practices -to enable: - -1. Reproducibility and Consistency: Supporting spatial transformations explicitly in a file format ensures that transformations - are applied consistently across different platforms and applications. This FAIR capability is a cornerstone of scientific - research, and having standardized formats and tools facilitates verification of results by independent - researchers. -2. 
Integration with Analysis Workflows: Having spatial transformations as a first-class citizen within file formats allows for - seamless integration with various image analysis workflows. Registration transformations can be used in subsequent image - analysis steps without requiring additional conversion. -3. Efficiency and Accuracy: Storing transformations within the file format avoids the need for re-sampling each time the data is - processed. This reduces sampling errors and preserves the accuracy of subsequent analyses. Standardization enables on-demand - transformation, critical for the massive volumes collected by modern microscopy techniques. -4. Flexibility in Analysis: A file format that natively supports spatial transformations allows researchers to apply, modify, or - reverse transformations as needed for different analysis purposes. This flexibility is critical for tasks such as - longitudinal studies, multi-modal imaging, and comparative analysis across different subjects or experimental conditions. - -Toward these goals, this RFC expands the set of transformations in the OME-Zarr spec covering many of the use cases -requested in [this github issue](https://github.com/ome/ngff/issues/84). It also adds "coordinate systems" - named -sets of "axes." Related the relationship of discrete arrays to physical coordinates and the interpretation and motivation for -axis types. +Coordinate and spatial transformations are vitally important +for neuro and bio-imaging and broader scientific imaging practices to enable: + +1. Reproducibility and Consistency: + Supporting spatial transformations explicitly in a file format ensures that + transformations are applied consistently across different platforms and applications. + This FAIR capability is a cornerstone of scientific research, + and having standardized formats and tools facilitates verification of results by independent researchers. +2. 
Integration with Analysis Workflows: + Having spatial transformations as a first-class citizen within file formats + allows for seamless integration with various image analysis workflows. + Registration transformations can be used in subsequent image analysis steps + without requiring additional conversion. +3. Efficiency and Accuracy: + Storing transformations within the file format avoids + the need for re-sampling each time the data is processed. + This reduces sampling errors and preserves the accuracy of subsequent analyses. + Standardization enables on-demand transformation, + critical for the massive volumes collected by modern microscopy techniques. +4. Flexibility in Analysis: + A file format that natively supports spatial transformations allows researchers to apply, modify, + or reverse transformations as needed for different analysis purposes. + This flexibility is critical for tasks such as longitudinal studies, multi-modal imaging, + and comparative analysis across different subjects or experimental conditions. + +Toward these goals, this RFC expands the set of transformations in the OME-Zarr spec +covering many of the use cases requested in [this github issue](https://github.com/ome/ngff/issues/84). +It also adds "coordinate systems" - named sets of "axes." +It also clarifies the relationship of discrete arrays to physical coordinates +and the interpretation and motivation for axis types. ## Proposal -Below is a slightly abridged copy of the proposed changes to the specification (examples are omitted), the full set of changes -including all examples are publicly available on the [github pull request](https://github.com/ome/ngff/pull/138). +Below is a complete copy of the proposed changes including suggestions +from reviewers and contributors of the previously associated [github pull request](https://github.com/ome/ngff/pull/138). +The changes, if approved, shall be translated into bikeshed syntax and added to the ngff repository in a separate PR. 
+This PR will then comprise complete json schemas when the RFC enters the SPEC phase (see RFC1). ### "coordinateSystems" metadata -A "coordinate system" is a collection of "axes" / dimensions with a name. Every coordinate system: -- MUST contain the field "name". The value MUST be a non-empty string that is unique among `coordinateSystem`s. +A `coordinateSystem` is a JSON object with a "name" field and an "axes" field. +Every coordinate system: +- MUST contain the field "name". + The value MUST be a non-empty string that is unique among all entries under `coordinateSystems`. - MUST contain the field "axes", whose value is an array of valid "axes" (see below). +The order of the elements of `"axes"` defines the index of each array dimension and the order of coordinates for points in that coordinate system. +In the example below, the `"x"` dimension is the last dimension. +The "dimensionality" of a coordinate system is indicated by the length of its "axes" array. +The "volume_micrometers" example coordinate system below is three dimensional (3D). -The order of the `"axes"` list matters and defines the index of each array dimension and coordinates for points in that -coordinate system. The "dimensionality" of a coordinate system -is indicated by the length of its "axes" array. The "volume_micrometers" example coordinate system above is three dimensional (3D). - -The axes of a coordinate system (see below) give information about the types, units, and other properties of the coordinate -system's dimensions. Axis `name`s may contain semantically meaningful information, but can be arbitrary. As a result, two -coordinate systems that have identical axes in the same order may not be "the same" in the sense that measurements at the same -point refer to different physical entities and therefore should not be analyzed jointly. 
Tasks that require images, annotations, -regions of interest, etc., SHOULD ensure that they are in the same coordinate system (same name, with identical axes) or can be -transformed to the same coordinate system before doing analysis. See the example below. +````{admonition} Example +Coordinate Systems metadata example -### "axes" metadata - -"axes" describes the dimensions of a coordinate systems. It is a list of dictionaries, where each dictionary describes a dimension (axis) and: -- MUST contain the field "name" that gives the name for this dimension. The values MUST be unique across all "name" fields. -- SHOULD contain the field "type". It SHOULD be one of the strings "array", "space", "time", "channel", "coordinate", or "displacement" but MAY take other string values for custom axis types that are not part of this specification yet. -- MAY contain the field "discrete". The value MUST be a boolean, and is `true` if the axis represents a discrete dimension. -- SHOULD contain the field "unit" to specify the physical unit of this dimension. The value SHOULD be one of the following strings, which are valid units according to UDUNITS-2. +```json +{ + "name" : "volume_micrometers", + "axes" : [ + {"name": "z", "type": "space", "unit": "micrometer"}, + {"name": "y", "type": "space", "unit": "micrometer"}, + {"name": "x", "type": "space", "unit": "micrometer"} + ] +} +``` +```` + +The axes of a coordinate system (see below) give information +about the types, units, and other properties of the coordinate system's dimensions. +Axis names may contain semantically meaningful information, but can be arbitrary. +As a result, two coordinate systems that have identical axes in the same order +may not be "the same" in the sense that measurements at the same point +refer to different physical entities and therefore should not be analyzed jointly. 
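A minimal check of this sameness condition could look like the following sketch; the helper name and its strictness are illustrative assumptions, not part of the proposal:

```python
def same_coordinate_system(cs_a, cs_b):
    # Two coordinate systems are interchangeable only if they have the
    # same name and identical axes in the same order.
    return cs_a["name"] == cs_b["name"] and cs_a["axes"] == cs_b["axes"]

cs1 = {"name": "volume_micrometers",
       "axes": [{"name": "x", "type": "space", "unit": "micrometer"}]}
cs2 = {"name": "volume_micrometers",
       "axes": [{"name": "x", "type": "space", "unit": "micrometer"}]}
same_coordinate_system(cs1, cs2)  # True
```

A stricter implementation might additionally compare the location of each coordinate system within the Zarr hierarchy.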
+Tasks that require images, annotations, regions of interest, etc., +SHOULD ensure that they are in the same coordinate system (same name and location within the Zarr hierarchy, with identical axes) +or can be transformed to the same coordinate system before doing analysis. +See the [example below](example:coordinate_transformation). + +#### "axes" metadata + +"axes" describes the dimensions of a coordinate system +and adds an interpretation to the samples along each dimension. + +It is a list of dictionaries, +where each dictionary describes a dimension (axis) and: +- MUST contain the field "name" that gives the name for this dimension. + The values MUST be unique across all "name" fields in the same coordinate system. +- SHOULD contain the field "type". + It SHOULD be one of the strings "array", "space", "time", "channel", "coordinate", or "displacement" + but MAY take other string values for custom axis types that are not part of this specification yet. +- MAY contain the field "discrete". + The value MUST be a boolean, + and is `true` if the axis represents a discrete dimension (see below for details). +- SHOULD contain the field "unit" to specify the physical unit of this dimension. + The value SHOULD be one of the following strings, + which are valid units according to UDUNITS-2. 
- Units for "space" axes: 'angstrom', 'attometer', 'centimeter', 'decimeter', 'exameter', 'femtometer', 'foot', 'gigameter', 'hectometer', 'inch', 'kilometer', 'megameter', 'meter', 'micrometer', 'mile', 'millimeter', 'nanometer', 'parsec', 'petameter', 'picometer', 'terameter', 'yard', 'yoctometer', 'yottameter', 'zeptometer', 'zettameter' - Units for "time" axes: 'attosecond', 'centisecond', 'day', 'decisecond', 'exasecond', 'femtosecond', 'gigasecond', 'hectosecond', 'hour', 'kilosecond', 'megasecond', 'microsecond', 'millisecond', 'minute', 'nanosecond', 'petasecond', 'picosecond', 'second', 'terasecond', 'yoctosecond', 'yottasecond', 'zeptosecond', 'zettasecond' -- MAY contain the field "longName". The value MUST be a string, and can provide a longer name or description of an axis and its properties. +- MAY contain the field "longName". + The value MUST be a string, + and can provide a longer name or description of an axis and its properties. -If part of multiscales metadata, the length of "axes" MUST be equal to the number of dimensions of the arrays that contain the image data. +The length of "axes" MUST be equal to the number of dimensions of the arrays that contain the image data. -Arrays are inherently discrete (see Array coordinate systems, below) but are often used to store discrete samples of a -continuous variable. The continuous values "in between" discrete samples can be retrieved using an *interpolation* method. If an -axis is continuous (`"discrete" : false`), it indicates that interpolation is well-defined. Axes representing `space` and -`time` are usually continuous. Similarly, joint interpolation across axes is well-defined only for axes of the same `type`. In -contrast, discrete axes (`"discrete" : true`) may be indexed only by integers. Axes of representing a `channel`, `coordinate`, or `displacement` are -usually discrete. 
+Arrays are inherently discrete (see Array coordinate systems, below) +but are often used to store discrete samples of a continuous variable. +The continuous values "in between" discrete samples can be retrieved using an *interpolation* method. +If an axis is continuous (`"discrete" : false`), it indicates that interpolation is well-defined. +Axes representing `space` and `time` are usually continuous. +Similarly, joint interpolation across axes is well-defined only for axes of the same `type`. +In contrast, discrete axes (`"discrete" : true`) may be indexed only by integers. +Axes representing a channel, coordinate, or displacement are usually discrete. -Note: The most common methods for interpolation are "nearest neighbor", "linear", "cubic", and "windowed sinc". Here, we refer -to any method that obtains values at real-valued coordinates using discrete samples as an "interpolator". As such, label images -may be interpolated using "nearest neighbor" to obtain labels at points along the continuum. +```{note} +The most common methods for interpolation are "nearest neighbor", "linear", "cubic", and "windowed sinc". +Here, we refer to any method that obtains values at real-valued coordinates using discrete samples as an "interpolator". +As such, label images may be interpolated using "nearest neighbor" to obtain labels at points along the continuum. +``` +#### Array coordinate systems -### Array coordinate systems +The dimensions of an array do not have an interpretation +until they are associated with a coordinate system via a coordinate transformation. +Nevertheless, it can be useful to refer to the "raw" coordinates of the array. +Some applications might prefer to define points or regions-of-interest in "pixel coordinates" rather than "physical coordinates," for example. +Indicating that choice explicitly will be important for interoperability. +This is possible by using **array coordinate systems**. 
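As an illustration, an implementation might materialize such a default array coordinate system like this (a sketch; the function name is an illustrative assumption, and the `dim_i` default naming follows the convention described next):

```python
def default_array_coordinate_system(array_path, ndim):
    # Default coordinate system of a zarr array: named by the array's
    # path in the container, one unitless "array"-type axis per
    # dimension, with the default axis names dim_0, dim_1, ...
    return {
        "name": array_path,
        "axes": [{"name": f"dim_{i}", "type": "array"} for i in range(ndim)],
    }

cs = default_array_coordinate_system("volume/0/image", 3)
[a["name"] for a in cs["axes"]]  # ['dim_0', 'dim_1', 'dim_2']
```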
-Every array has a default coordinate system whose parameters need not be explicitly defined. Its name is the path to the array -in the container, its axes have `"type":"array"`, are unitless, and have default "name"s. The ith axis has `"name":"dim_i"` -(these are the same default names used by [xarray](https://docs.xarray.dev/en/stable/user-guide/terminology.html)). +Every array has a default coordinate system whose parameters need not be explicitly defined. +The dimensionality of each array coordinate system equals the dimensionality of its corresponding Zarr array. +Its name is the path to the array in the container, +its axes have `"type": "array"`, are unitless, and have default names. +The i-th axis has `"name": "dim_i"` (these are the same default names used by [xarray](https://docs.xarray.dev/en/stable/user-guide/terminology.html)). +As with all coordinate systems, the dimension names must be unique and non-null. +````{admonition} Example +```json +{ + "arrayCoordinateSystem" : { + "name" : "myDataArray", + "axes" : [ + {"name": "dim_0", "type": "array"}, + {"name": "dim_1", "type": "array"}, + {"name": "dim_2", "type": "array"} + ] + } +} -The dimensionality of each array coordinate system equals the dimensionality of its corresponding zarr array. The axis with -name `"dim_i"` is the ith element of the `"axes"` list. The axes and their order align with the `shape` -attribute in the zarr array attributes (in `.zarray`), and whose data depends on the byte order used to store -chunks. As described in the [zarr array metadata](https://zarr.readthedocs.io/en/stable/spec/v2.html#arrays), -the last dimension of an array in "C" order are stored contiguously on disk or in-memory when directly loaded. +``` + +For example, if 0/zarr.json contains: +```jsonc +{ + "zarr_format": 3, + "node_type": "array", + "shape": [4, 3, 5], + //... +} +``` +Then `dim_0` has length 4, `dim_1` has length 3, and `dim_2` has length 5. 
-The name and axes names MAY be customized by including a `arrayCoordinateSystem` field in -the user-defined attributes of the array whose value is a coordinate system object. The length of -`axes` MUST be equal to the dimensionality. The value of `"type"` for each object in the -axes array MUST equal `"array"`. +```` + +The axes and their order align with the shape of the corresponding zarr array, +and whose data depends on the byte order used to store chunks. +As described in the [Zarr array metadata](https://zarr.readthedocs.io/en/stable/spec/v3.html#arrays), +the last dimension of an array in "C" order are stored contiguously on disk or in-memory when directly loaded. +The name and axes names MAY be customized by including a `arrayCoordinateSystem` field +in the user-defined attributes of the array whose value is a coordinate system object. +The length of `axes` MUST be equal to the dimensionality. +The value of `"type"` for each object in the axes array MUST equal `"array"`. -### Coordinate convention +#### Coordinate convention **The pixel/voxel center is the origin of the continuous coordinate system.** -It is vital to consistently define relationship between the discrete/array and continuous/interpolated -coordinate systems. A pixel/voxel is the continuous region (rectangle) that corresponds to a single sample -in the discrete array, i.e., the area corresponding to nearest-neighbor (NN) interpolation of that sample. -The center of a 2d pixel corresponding to the origin `(0,0)` in the discrete array is the origin of the continuous coordinate -system `(0.0, 0.0)` (when the transformation is the identity). The continuous rectangle of the pixel is given by the -half-open interval `[-0.5, 0.5) x [-0.5, 0.5)` (i.e., -0.5 is included, +0.5 is excluded). See chapter 4 and figure 4.1 of the ITK Software Guide. +It is vital to consistently define relationship +between the discrete/array and continuous/interpolated coordinate systems. 
+A pixel/voxel is the continuous region (rectangle) that corresponds to a single sample in the discrete array, i.e., +the area corresponding to nearest-neighbor (NN) interpolation of that sample. +The center of a 2d pixel corresponding to the origin `(0,0)` in the discrete array +is the origin of the continuous coordinate system `(0.0, 0.0)` (when the transformation is the identity). +The continuous rectangle of the pixel is given +by the half-open interval `[-0.5, 0.5) x [-0.5, 0.5)` (i.e., -0.5 is included, +0.5 is excluded). +See chapter 4 and figure 4.1 of the ITK Software Guide. ### "coordinateTransformations" metadata -"coordinateTransformations" describe the mapping between two coordinate systems (defined by "axes"). +"coordinateTransformations" describe the mapping between two coordinate systems (defined by "coordinateSystems"). For example, to map an array's discrete coordinate system to its corresponding physical coordinates. -Coordinate transforms are in the "forward" direction. They represent functions from *points* in the -input space to *points* in the output space. +Coordinate transforms are in the "forward" direction. +This means they represent functions from *points* in the input space to *points* in the output space +(see [example below](example:coordinate_transformation_scale)). +They: -- MUST contain the field "type". +- MUST contain the field "type" (string). - MUST contain any other fields required by the given "type" (see table below). -- MUST contain the field "output", unless part of a `sequence` or `inverseOf` (see details). -- MUST contain the field "input", unless part of a `sequence` or `inverseOf` (see details). -- MAY contain the field "name". Its value MUST be unique across all "name" fields for coordinate transformations. +- MUST contain the field "output" (string), + unless part of a `sequence` or `inverseOf` (see details). +- MUST contain the field "input" (string), + unless part of a `sequence` or `inverseOf` (see details). 
+- MAY contain the field "name" (string). + Its value MUST be unique across all "name" fields for coordinate transformations. - Parameter values MUST be compatible with input and output space dimensionality (see details). - - -
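A minimal validator for these MUST-level fields might look like the following sketch; the function name and error handling are illustrative assumptions, not part of the proposal:

```python
def validate_transformation(t, nested=False):
    # Check MUST-level fields of a coordinate transformation object.
    # nested=True relaxes "input"/"output" for transformations that
    # appear inside a sequence or inverseOf wrapper, where they may
    # be omitted.
    if not isinstance(t.get("type"), str):
        raise ValueError('transformation MUST contain a string "type"')
    if not nested:
        for field in ("input", "output"):
            if not isinstance(t.get(field), str):
                raise ValueError(f'transformation MUST contain a string "{field}"')
    return True

validate_transformation(
    {"type": "scale", "scale": [2, 3.12], "input": "in", "output": "out"}
)  # True
```

A full validator would additionally check the type-specific fields listed in the table below and that parameter shapes match the input and output dimensionality.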
identity - - The identity transformation is the default transformation and is typically not explicitly defined. -
mapAxis - "mapAxis":Dict[String:String] - A maxAxis transformation specifies an axis permutation as a map between axis names. -
translation - one of:
"translation":List[number],
"path":str -
translation vector, stored either as a list of numbers ("translation") or as binary data at a location - in this container (path). -
scale - one of:
"scale":List[number],
"path":str -
scale vector, stored either as a list of numbers (scale) or as binary data at a location in this - container (path). -
affine - one of:
"affine": List[List[number]],
"path":str -
affine transformation matrix stored as a flat array stored either with json uing the affine field - or as binary data at a location in this container (path). If both are present, the binary values at path should be used. -
rotation - one of:
"rotation":List[number],
"path":str -
rotation transformation matrix stored as an array stored either - with json or as binary data at a location in this container (path). - If both are present, the binary parameters at path are used. -
sequence - "transformations":List[Transformation] - A sequence of transformations, Applying the sequence applies the composition of all transforms in the list, in order. -
displacements - "path":str
"interpolation":str -
Displacement field transformation located at (path). -
coordinates - "path":str
"interpolation":str -
Coordinate field transformation located at (path). -
inverseOf - "transform":Transform - The inverse of a transformation. Useful if a transform is not closed-form invertible. See Forward and inverse for details and examples. -
bijection - "forward":Transform
"inverse":Transform -
Explicitly define an invertible transformation by providing a forward transformation and its inverse. -
byDimension - "transformations":List[Transformation] - Define a high dimensional transformation using lower dimensional transformations on subsets of - dimensions. -
typefieldsdescription -
+The following transformations are supported: + +| Type | Fields | Description | +|------|--------|-------------| +| [`identity`](#identity) | | The identity transformation is the do-nothing transformation and is typically not explicitly defined. | +| [`mapAxis`](#mapaxis) | `"mapAxis":List[number]` | An axis permutation, given as a transpose array of integer indices that refer to the ordering of the axes in the respective coordinate system. | +| [`translation`](#translation) | one of:
`"translation":List[number]`,
`"path":str` | Translation vector, stored either as a list of numbers (`"translation"`) or as a zarr array at a location in this container (`path`). | +| [`scale`](#scale) | one of:
`"scale":List[number]`,
`"path":str` | Scale vector, stored either as a list of numbers (`scale`) or as a zarr array at a location in this container (`path`). | +| [`affine`](#affine) | one of:
`"affine":List[List[number]]`,
`"path":str` | 2D affine transformation matrix stored either with JSON (`affine`) or as a zarr array at a location in this container (`path`). | +| [`rotation`](#rotation) | one of:
`"rotation":List[List[number]]`,
`"path":str` | 2D rotation transformation matrix, stored either with JSON (`rotation`) or as a zarr array at a location in this container (`path`). | +| [`sequence`](#sequence) | `"transformations":List[Transformation]` | Sequence of transformations. Applying the sequence applies the composition of all transforms in the list, in order. | +| [`displacements`](#coordinates-and-displacements) | `"path":str`
`"interpolation":str` | Displacement field transformation located at `path`. | +| [`coordinates`](#coordinates-and-displacements) | `"path":str`
`"interpolation":str` | Coordinate field transformation located at `path`. | +| [`inverseOf`](#inverseof) | `"transformation":Transformation` | The inverse of a transformation. Useful if a transform is not closed-form invertible. See forward and inverse of [bijections](#bijection) for details and examples. | +| [`bijection`](#bijection) | `"forward":Transformation`
`"inverse":Transformation` | An invertible transformation providing an explicit forward transformation and its inverse. | +| [`byDimension`](#bydimension) | `"transformations":List[Transformation]`,
`"input_axes": List[str]`,
`"output_axes": List[str]` | A high dimensional transformation using lower dimensional transformations on subsets of dimensions. | + +Implementations SHOULD prefer to store transformations as a sequence of less expressive transformations where possible +(e.g., a `sequence` of a `translation` and a `rotation` instead of an `affine` transformation that combines them). + +````{admonition} Example +(example:coordinate_transformation_scale)= +```json +{ + "coordinateSystems": [ + { "name": "in", "axes": [{"name": "j"}, {"name": "i"}] }, + { "name": "out", "axes": [{"name": "y"}, {"name": "x"}] } + ], + "coordinateTransformations": [ + { + "type": "scale", + "scale": [2, 3.12], + "input": "in", + "output": "out" + } + ] +} +``` + +For example, the scale transformation above defines the function: + +``` +x = 3.12 * i +y = 2 * j +``` + +i.e., the mapping from the first input axis to the first output axis is determined by the first scale parameter. +```` Conforming readers: - MUST parse `identity`, `scale`, `translation` transformations; -- SHOULD parse `mapAxis`, `affine` transformations; +- SHOULD parse `mapAxis`, `affine`, `rotation` transformations; +- SHOULD display an informative warning if encountering transformations that cannot be parsed or displayed by a viewer; - SHOULD be able to apply transformations to points; - SHOULD be able to apply transformations to images; -Coordinate transformations from array to physical coordinates MUST be stored in multiscales, -and MUST be duplicated in the attributes of the zarr array. Transformations between different images MUST be stored in the -attributes of a parent zarr group. For transformations that store data or parameters in a zarr array, those zarr arrays SHOULD -be stored in a zarr group `"coordinateTransformations"`. +Coordinate transformations can be stored in multiple places to reflect different use cases. 
+ +- Transformations in individual multiscale datasets represent a special case of transformations + and are explained [below](#multiscales-metadata). +- Additional transformations for single multiscale images MUST be stored under a field `coordinateTransformations` + in the multiscales dictionaries. + This `coordinateTransformations` field MUST contain a list of valid [transformations](#transformation-types). +- Transformations between two or more images MUST be stored in the attributes of a parent zarr group. + For transformations that store data or parameters in a zarr array, + those zarr arrays SHOULD be stored in a zarr group called "coordinateTransformations".
 store.zarr                      # Root folder of the zarr store
 │
-├── .zattrs                     # coordinate transformations describing the relationship between two image coordinate systems
+├── zarr.json                   # coordinate transformations describing the relationship between two image coordinate systems
 │                               # are stored in the attributes of their parent group.
-│                               # transformations between 'volume' and 'crop' coordinate systems are stored here.
+│                               # transformations between coordinate systems in the 'volume' and 'crop' multiscale images are stored here.
 │
-├── coordinateTransformations   # transformations that use array storage go in a "coordinateTransformations" zarr group.
+├── coordinateTransformations   # transformations that use array storage for their parameters should go in a zarr group named "coordinateTransformations".
 │   └── displacements           # for example, a zarr array containing a displacement field
-│       ├── .zattrs
-│       └── .zarray
+│       └── zarr.json
 │
 ├── volume
-│   ├── .zattrs                 # group level attributes (multiscales)
-│   └── 0                       # a group containing the 0th scale
-│       └── image               # a zarr array
-│           ├── .zattrs         # physical coordinate system and transformations here
-│           └── .zarray         # the array attributes
+│   ├── zarr.json              # group level attributes (multiscales)
+│   └── 0                      # a group containing the 0th scale
+│       └── image              # a zarr array
+│           └── zarr.json      # physical coordinate system and transformations here
 └── crop
-    ├── .zattrs                 # group level attributes (multiscales)
-    └── 0                       # a group containing the 0th scale
-        └── image               # a zarr array
-            ├── .zattrs         # physical coordinate system and transformations here
-            └── .zarray         # the array attributes
+    ├── zarr.json              # group level attributes (multiscales)
+    └── 0                      # a group containing the 0th scale
+        └── image              # a zarr array
+            └── zarr.json      # physical coordinate system and transformations here
 
-### Additional details - -Most coordinate transformations MUST specify their input and output coordinate systems using `input` and `output` with a string value -corresponding to the name of a coordinate system. The coordinate system's name may be the path to an array, and therefore may -not appear in the list of coordinate systems. - -Exceptions are if the the coordinate transformation appears in the `transformations` list of a `sequence` or is the -`transformation` of an `inverseOf` transformation. In these two cases input and output could, in some cases, be omitted (see below for -details). - -Transformations in the `transformations` list of a `byDimensions` transformation MUST provide `input` and `output` as arrays -of strings corresponding to axis names of the parent transformation's input and output coordinate systems (see below for -details). +````{admonition} Example +(example:coordinate_transformation)= +Two instruments simultaneously image the same sample from two different angles, +and the 3D data from both instruments are calibrated to "micrometer" units. +An analysis of sample A requires measurements from images taken from both instruments at certain points in space. +Suppose a region of interest (ROI) is determined from the image obtained from instrument 2, +but quantification from that region is needed for instrument 1. +Since measurements were collected at different angles, +a measurement by instrument 1 at the point with image array coordinates (x,y,z) +may not correspond to the measurement at the same array coordinates in instrument 2 +(i.e., it may not be the same physical location in the sample). +To analyze both images together, they must be transformed to a common coordinate system. + +The set of coordinate transformations encodes relationships between coordinate systems, +specifically, how to convert points from one coordinate system to another. 
+Implementations can apply the coordinate transform to images or points
+in coordinate system "sampleA_instrument2" to bring them into the "sampleA_instrument1" coordinate system.
+In this case, image data within the ROI defined in the instrument 2 image should be transformed to the "sampleA_instrument1" coordinate system,
+then used for quantification with the instrument 1 image.
+
+The `coordinateTransformations` in the parent-level metadata would contain the following data.
+The transformation parameters are stored in a separate zarr-group
+under `coordinateTransformations/sampleA_instrument2-to-instrument1` as shown above.
+```json
+"coordinateTransformations": [
+  {
+    "type": "affine",
+    "path": "coordinateTransformations/sampleA_instrument2-to-instrument1",
+    "input": "sampleA_instrument2",
+    "output": "sampleA_instrument1"
+  }
+]
+```

-Coordinate transformations are functions of *points* in the input space to *points* in the output space. We call this the "forward" direction.
-Points are ordered lists of coordinates, where a coordinate is the location/value of that point along its corresponding axis.
-The indexes of axis dimensions correspond to indexes into transformation parameter arrays. For example, the scale transformation above
-defines the function:

+And the image at the path `sampleA_instrument1` would have the following as the first coordinate system:

-```
-x = 0.5 * i
-y = 1.2 * j
+```json
+"coordinateSystems": [
+  {
+    "name": "sampleA_instrument1",
+    "axes": [
+      {"name": "z", "type": "space", "unit": "micrometer"},
+      {"name": "y", "type": "space", "unit": "micrometer"},
+      {"name": "x", "type": "space", "unit": "micrometer"}
+    ]
+  }
+]
 ```
-i.e., the mapping from the first input axis to the first output axis is determined by the first scale parameter.
+The image at path `sampleA_instrument2` would have this as the first listed coordinate system:

-When rendering transformed images and interpolating, implementations may need the "inverse" transformation - from the output to
-the input coordinate system. Inverse transformations will not be explicitly specified when they can be computed in closed form from the
-forward transformation. Inverse transformations used for image rendering may be specified using the `inverseOf`
-transformation type, for example:

+```json
+[
+  {
+    "name": "sampleA_instrument2",
+    "axes": [
+      {"name": "z", "type": "space", "unit": "micrometer"},
+      {"name": "y", "type": "space", "unit": "micrometer"},
+      {"name": "x", "type": "space", "unit": "micrometer"}
+    ]
+  }
+]
+```
+````
+
+#### Additional details
+
+Most coordinate transformations MUST specify their input and output coordinate systems
+using `input` and `output` with a string value
+that MUST correspond to the name of a coordinate system or the path to a multiscales group.
+Exceptions are if the coordinate transformation is wrapped in another transformation,
+e.g. as part of a `transformations` list of a `sequence` or
+as `transformation` of an `inverseOf` transformation.
+In these two cases, `input` and `output` can sometimes be omitted (see below for details).
+If unused, the `input` and `output` fields MAY be null.
+
+If used in a parent-level zarr-group, the `input` and `output` fields
+can be the name of a `coordinateSystem` in the same parent-level group or the path to a multiscale image group.
+If either `input` or `output` is a path to a multiscale image group,
+the authoritative coordinate system for the respective image is the first `coordinateSystem` defined therein.
+If the names of `input` or `output` correspond to both an existing path to a multiscale image group
+and the name of a `coordinateSystem` defined in the same metadata document,
+the `coordinateSystem` MUST take precedence.
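+As a sketch of the path-based referencing described above
+(the coordinate system name "anatomical" and the path "volumes/image1" are hypothetical),
+a parent-level transformation may give a coordinate system name as `input`
+and a multiscale image path as `output`,
+in which case the first `coordinateSystem` of the image group at that path is the output coordinate system:
+
+```json
+{
+  "type": "scale",
+  "scale": [2.0, 2.0, 2.0],
+  "input": "anatomical",
+  "output": "volumes/image1"
+}
+```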
+
+For usage in multiscales, see [the multiscales section](#multiscales-metadata) for details.
+
+Coordinate transformations are functions of *points* in the input space to *points* in the output space.
+We call this the "forward" direction.
+Points are ordered lists of coordinates,
+where a coordinate is the location/value of that point along its corresponding axis.
+The indexes of axis dimensions correspond to indexes into transformation parameter arrays.
+
+When rendering transformed images and interpolating,
+implementations may need the "inverse" transformation -
+from the output to the input coordinate system.
+Inverse transformations will not be explicitly specified
+when they can be computed in closed form from the forward transformation.
+Inverse transformations used for image rendering may be specified using
+the `inverseOf` transformation type, for example:

```json
{
@@ -325,179 +447,229 @@ transformation type, for example:
     "transformation" : {
         "type": "displacements",
-        "path": "/path/to/displacements",
-    }
+        "path": "/path/to/displacements"
+    },
+    "input": "input_image",
+    "output": "output_image"
 }
 ```

-Implementations SHOULD be able to compute and apply the inverse of some coordinate transformations when they
-are computable in closed-form (as the [Transformation types](#transformation-types) section below indicates). If an
-operation is requested that requires the inverse of a transformation that can not be inverted in closed-form,
-implementations MAY estimate an inverse, or MAY output a warning that the requested operation is unsupported.
+Implementations SHOULD be able to compute and apply
+the inverse of some coordinate transformations when they are computable
+in closed-form (as the [Transformation types](#transformation-types) section below indicates).
+If an operation is requested that requires
+the inverse of a transformation that cannot be inverted in closed-form,
+implementations MAY estimate an inverse,
+or MAY output a warning that the requested operation is unsupported.
#### Matrix transformations

-Two transformation types ([affine](#affine) and [rotation](#rotation)) are parametrized by matrices. Matrices are applied to
-column vectors that represent points in the input coordinate system. The first (last) axis in a coordinate system is the top
-(bottom) entry in the column vector. Matrices are stored as two-dimensional arrays, either as json or in a zarr array. When
-stored as a 2D zarr array, the first dimension indexes rows and the second dimension indexes columns (e.g., an array of
-`"shape":[3,4]` has 3 rows and 4 columns). When stored as a 2D json array, the inner array contains rows (e.g. `[[1,2,3],
-[4,5,6]]` has 2 rows and 3 columns).
+Two transformation types ([affine](#affine) and [rotation](#rotation)) are parametrized by matrices.
+Matrices are applied to column vectors that represent points in the input coordinate system.
+The first and last axes in a coordinate system correspond to the top and bottom entries in the column vector, respectively.
+Matrices are stored as two-dimensional arrays, either as json or in a zarr array.
+When stored as a 2D zarr array, the first dimension indexes rows and the second dimension indexes columns
+(e.g., an array of `"shape":[3,4]` has 3 rows and 4 columns).
+When stored as a 2D json array, the inner array contains rows (e.g. `[[1,2,3], [4,5,6]]` has 2 rows and 3 columns).

+#### Transformation types

-### Transformation types
+Input and output dimensionality may be determined by the coordinate system referred to by the `input` and `output` fields, respectively.
+If the value of `input` is a path to an array, its shape gives the input dimension,
+otherwise it is given by the length of `axes` for the coordinate system with the name of the `input`.
+If the value of `output` is a path to an array, its shape gives the output dimension,
+otherwise it is given by the length of `axes` for the coordinate system with the name of the `output`.
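+For illustration, consider a hypothetical affine whose `input` names a 2-axis coordinate system ("ji")
+and whose `output` names a 2-axis coordinate system ("yx"), so that `N = M = 2`
+and, per the matrix-storage rules above, the matrix has `M = 2` rows and `N + 1 = 3` columns:
+
+```json
+{
+  "type": "affine",
+  "affine": [[0.5, 0.0, 10.0], [0.0, 0.5, 20.0]],
+  "input": "ji",
+  "output": "yx"
+}
+```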
-Input and output dimensionality may be determined by the value of the "input" and "output" fields, respectively. If the value
-of "input" is an array, it's length gives the input dimension, otherwise the length of "axes" for the coordinate
-system with the name of the "input" value gives the input dimension. If the value of "input" is an array, it's
-length gives the input dimension, otherwise it is given by the length of "axes" for the coordinate system with
-the name of the "input". If the value of "output" is an array, its length gives the output dimension,
-otherwise it is given by the length of "axes" for the coordinate system with the name of the "output".

+##### identity

-#### identity
+`identity` transformations map input coordinates to output coordinates without modification.
+The position of the i-th axis of the output coordinate system
+is set to the position of the i-th axis of the input coordinate system.
+`identity` transformations are invertible.

-`identity` transformations map input coordinates to output coordinates without modification. The position of
-the ith axis of the output coordinate system is set to the position of the ith axis of the input coordinate
-system. `identity` transformations are invertible.
+The `input` and `output` fields MAY be omitted if wrapped in another transformation that provides `input`/`output`
+(e.g., [`sequence`](#sequence), [`inverseOf`](#inverseof), [`byDimension`](#bydimension) or [`bijection`](#bijection)).

-#### mapAxis
+##### mapAxis

-`mapAxis` transformations describe axis permutations as a mapping of axis names. Transformations MUST include a `mapAxis` field
-whose value is an object, all of whose values are strings. If the object contains `"x":"i"`, then the transform sets the value
-of the output coordinate for axis "x" to the value of the coordinate of input axis "i" (think `x = i`). For every axis in its output coordinate
-system, the `mapAxis` MUST have a corresponding field.
For every value of the object there MUST be an axis of the input
-coordinate system with that name. Note that the order of the keys could be reversed.
+`mapAxis` transformations describe axis permutations as a transpose vector of integers.
+Transformations MUST include a `mapAxis` field
+whose value is an array of integers that specifies the new ordering in terms of indices of the old order.
+The length of the array MUST equal the number of dimensions in both the input and output coordinate systems.
+Each integer in the array MUST be a valid zero-based index into the input coordinate system's axes
+(i.e., between 0 and N-1 for an N-dimensional input).
+Each index MUST appear exactly once in the array.
+The value at position `i` in the array indicates which input axis becomes the `i`-th output axis.
+`mapAxis` transforms are invertible.
+The `input` and `output` fields MAY be omitted if wrapped in another transformation that provides `input`/`output`
+(e.g., [`sequence`](#sequence), [`inverseOf`](#inverseof), [`byDimension`](#bydimension) or [`bijection`](#bijection)).

-#### translation
+##### translation

-`translation` transformations are special cases of affine transformations. When possible, a
-translation transformation should be preferred to its equivalent affine. Input and output dimensionality MUST be
-identical and MUST equal the the length of the "translation" array (N). `translation` transformations are
-invertible.
+`translation` transformations are special cases of affine transformations.
+When possible, a translation transformation SHOULD be preferred to its equivalent affine.
+Input and output dimensionality MUST be identical
+and MUST equal the length of the "translation" array (N).
+`translation` transformations are invertible.
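+For example, a 3D translation with inline parameters
+(the coordinate system names "array" and "physical" here are hypothetical) has `N = 3`:
+
+```json
+{
+  "type": "translation",
+  "translation": [10.0, 5.0, 2.5],
+  "input": "array",
+  "output": "physical"
+}
+```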
+
+The `input` and `output` fields MAY be omitted if wrapped in another transformation that provides `input`/`output`
+(e.g., [`sequence`](#sequence), [`inverseOf`](#inverseof), [`byDimension`](#bydimension) or [`bijection`](#bijection)).

 path
-: The path to a zarr-array containing the translation parameters. The array at this path MUST be 1D, and its length MUST be `N`.
+: The path to a zarr-array containing the translation parameters.
+The array at this path MUST be 1D, and its length MUST be `N`.

 translation
-: The translation parameters stored as a JSON list of numbers. The list MUST have length `N`.
+: The translation parameters stored as a JSON list of numbers.
+The list MUST have length `N`.

+##### scale

-#### scale
+`scale` transformations are special cases of affine transformations.
+When possible, a scale transformation SHOULD be preferred to its equivalent affine.
+Input and output dimensionality MUST be identical
+and MUST equal the length of the "scale" array (N).
+Values in the `scale` array SHOULD be non-zero;
+in that case, `scale` transformations are invertible.

-`scale` transformations are special cases of affine transformations. When possible, a scale transformation
-SHOULD be preferred to its equivalent affine. Input and output dimensionality MUST be identical and MUST equal
-the the length of the "scale" array (N). Values in the `scale` array SHOULD be non-zero; in that case, `scale`
-transformations are invertible.
+
+The `input` and `output` fields MAY be omitted if wrapped in another transformation that provides `input`/`output`
+(e.g., [`sequence`](#sequence), [`inverseOf`](#inverseof), [`byDimension`](#bydimension) or [`bijection`](#bijection)).

 path
-: The path to a zarr-array containing the scale parameters. The array at this path MUST be 1D, and its length MUST be `N`.
+: The path to a zarr-array containing the scale parameters.
+The array at this path MUST be 1D, and its length MUST be `N`.
 scale
-: The scale parameters stored as a JSON list of numbers. The list MUST have length `N`.
+: The scale parameters stored as a JSON list of numbers.
+The list MUST have length `N`.

+##### affine

-#### affine
+`affine`s are [matrix transformations](#matrix-transformations) from N-dimensional inputs to M-dimensional outputs.
+They are represented as the upper `(M)x(N+1)` sub-matrix of a `(M+1)x(N+1)` matrix in [homogeneous
+coordinates](https://en.wikipedia.org/wiki/Homogeneous_coordinates) (see examples).
+This transformation type may be (but is not necessarily) invertible
+when `N` equals `M`.
+The matrix MUST be stored as a 2D array either as json or as a zarr array.

-`affine`s are [matrix transformations](#matrix-transformations) from N-dimensional inputs to M-dimensional outputs are
-represented as the upper `(M)x(N+1)` sub-matrix of a `(M+1)x(N+1)` matrix in [homogeneous
-coordinates](https://en.wikipedia.org/wiki/Homogeneous_coordinates) (see examples). This transformation type may be (but is not necessarily)
-invertible when `N` equals `M`. The matrix MUST be stored as a 2D array either as json or as a zarr array.
+The `input` and `output` fields MAY be omitted if wrapped in another transformation that provides `input`/`output`
+(e.g., [`sequence`](#sequence), [`inverseOf`](#inverseof), [`byDimension`](#bydimension) or [`bijection`](#bijection)).

 path
-: The path to a zarr-array containing the affine parameters. The array at this path MUST be 2D whose shape MUST be `(M)x(N+1)`.
+: The path to a zarr-array containing the affine parameters.
+The array at this path MUST be 2D, and its shape MUST be `(M)x(N+1)`.

 affine
-: The affine parameters stored in JSON. The matrix MUST be stored as 2D nested array where the outer array MUST be length `M` and the inner arrays MUST be length `N+1`.
+: The affine parameters stored in JSON.
+The matrix MUST be stored as a 2D nested array (an array of arrays of numbers)
+where the outer array MUST be length `M` and the inner arrays MUST be length `N+1`.

+##### rotation

-#### rotation
+`rotation`s are [matrix transformations](#matrix-transformations) that are special cases of affine transformations.
+When possible, a rotation transformation SHOULD be used instead of an equivalent affine.
+Input and output dimensionality (N) MUST be identical.
+Rotations are stored as `NxN` matrices (see below)
+and MUST have determinant equal to one, with orthonormal rows and columns.
+The matrix MUST be stored as a 2D array either as json or in a zarr array.
+`rotation` transformations are invertible.

-`rotation`s are [matrix transformations](#matrix-transformations) that are special cases of affine transformations. When possible, a rotation
-transformation SHOULD be preferred to its equivalent affine. Input and output dimensionality (N) MUST be identical. Rotations
-are stored as `NxN` matrices, see below, and MUST have determinant equal to one, with orthonormal rows and columns. The matrix
-MUST be stored as a 2D array either as json or in a zarr array. `rotation` transformations are invertible.
+
+The `input` and `output` fields MAY be omitted if wrapped in another transformation that provides `input`/`output`
+(e.g., [`sequence`](#sequence), [`inverseOf`](#inverseof), [`byDimension`](#bydimension) or [`bijection`](#bijection)).

 path
-: The path to an array containing the affine parameters. The array at this path MUST be 2D whose shape MUST be `N x N`.
+: The path to an array containing the rotation parameters.
+The array at this path MUST be 2D, and its shape MUST be `NxN`.

 rotation
-: The parameters stored in JSON. The matrix MUST be stored as a 2D nested array where the outer array MUST be length `N` and the inner arrays MUST be length `N`.
+: The parameters stored in JSON.
+The matrix MUST be stored as a 2D nested array (an array of arrays of numbers) where the outer array MUST be length `N` +and the inner arrays MUST be length `N`. + +##### inverseOf -#### inverseOf +An `inverseOf` transformation contains another transformation (often non-linear), +and indicates that transforming points from output to input coordinate systems +is possible using the contained transformation. +Transforming points from the input to the output coordinate systems +requires the inverse of the contained transformation (if it exists). -An `inverseOf` transformation contains another transformation (often non-linear), and indicates that -transforming points from output to input coordinate systems is possible using the contained transformation. -Transforming points from the input to the output coordinate systems requires the inverse of the contained -transformation (if it exists). +The `input` and `output` fields MAY be omitted for `inverseOf` transformations +if those fields may be omitted for the transformation it wraps. ```{note} -Software libraries that perform image registration often return the transformation from fixed image -coordinates to moving image coordinates, because this "inverse" transformation is most often required -when rendering the transformed moving image. Results such as this may be enclosed in an `inverseOf` -transformation. This enables the "outer" coordinate transformation to specify the moving image coordinates -as `input` and fixed image coordinates as `output`, a choice that many users and developers find intuitive. +Software libraries that perform image registration +often return the transformation from fixed image coordinates to moving image coordinates, +because this "inverse" transformation is most often required +when rendering the transformed moving image. +Results such as this may be enclosed in an `inverseOf` transformation. 
+This enables the "outer" coordinate transformation to specify the moving image coordinates
+as `input` and fixed image coordinates as `output`,
+a choice that many users and developers find intuitive.
 ```

+##### sequence

-#### sequence
-
-A `sequence` transformation consists of an ordered array of coordinate transformations, and is invertible if every coordinate
-transform in the array is invertible (though could be invertible in other cases as well). To apply a sequence transformation
-to a point in the input coordinate system, apply the first transformation in the list of transformations. Next, apply the second
-transformation to the result. Repeat until every transformation has been applied. The output of the last transformation is the
-result of the sequence.
-
-The transformations included in the `transformations` array may omit their `input` and `output` fields under the conditions
-outlined below:
-
-- The `input` and `output` fields MAY be omitted for the following transformation types:
-    - `identity`, `scale`, `translation`, `rotation`, `affine`, `displacements`, `coordinates`
-- The `input` and `output` fields MAY be omitted for `inverseOf` transformations if those fields may be omitted for the
-  transformation it wraps
-- The `input` and `output` fields MAY be omitted for `bijection` transformations if the fields may be omitted for
-  both its `forward` and `inverse` transformations
-- The `input` and `output` fields MAY be omitted for `sequence` transformations if the fields may be omitted for
-  all transformations in the sequence after flattening the nested sequence lists.
-- The `input` and `output` fields MUST be included for transformations of type: `mapAxis`, and `byDimension` (see the note
-  below), and under all other conditions.
+A `sequence` transformation consists of an ordered array of coordinate transformations,
+and is invertible if every coordinate transform in the array is invertible
+(though it may be invertible in other cases as well).
+To apply a sequence transformation to a point in the input coordinate system,
+apply the first transformation in the list of transformations.
+Next, apply the second transformation to the result.
+Repeat until every transformation has been applied.
+The output of the last transformation is the result of the sequence.
+A sequence transformation MUST NOT be part of another sequence transformation.
+The `input` and `output` fields MUST be included for sequence transformations.

 transformations
 : A non-empty array of transformations.

-#### coordinates and displacements
+##### coordinates and displacements

-`coordinates` and `displacements` transformations store coordinates or displacements in an array and interpret them as a vector
-field that defines a transformation. The arrays must have a dimension corresponding to every axis of the input coordinate
-system and one additional dimension to hold components of the vector. Applying the transformation amounts to looking up the
-appropriate vector in the array, interpolating if necessary, and treating it either as a position directly (`coordinates`) or a
-displacement of the input point (`displacements`).
+`coordinates` and `displacements` transformations store coordinates or displacements in an array
+and interpret them as a vector field that defines a transformation.
+The arrays MUST have a dimension corresponding to every axis of the input coordinate system
+and one additional dimension to hold components of the vector.
+Applying the transformation amounts to looking up the appropriate vector in the array,
+interpolating if necessary,
+and treating it either as a position directly (`coordinates`)
+or a displacement of the input point (`displacements`).

-These transformation types refer to an array at location specified by the `"path"` parameter.
The input and output coordinate
-systems for these transformations ("input / output coordinate systems") constrain the array size and the coordinate system
-metadata for the array ("field coordinate system").
+These transformation types refer to an array at the location specified by the `"path"` parameter.
+The input and output coordinate systems for these transformations ("input / output coordinate systems")
+constrain the array size and the coordinate system metadata for the array ("field coordinate system").

-* If the input coordinate system has `N` axes, the array at location path MUST have `N+1` dimensions
-* The field coordinate system MUST contain an axis identical to every axis of its input coordinate system in the same order.
-* The field coordinate system MUST contain an axis with type `coordinate` or `displacement` respectively for transformations of type `coordinates` or `displacements`.
+The `input` and `output` fields MAY be omitted if wrapped in another transformation that provides `input`/`output`
+(e.g., [`sequence`](#sequence), [`inverseOf`](#inverseof), [`byDimension`](#bydimension) or [`bijection`](#bijection)).
+
+* If the input coordinate system has `N` axes,
+  the array at location `path` MUST have `N+1` dimensions.
+* The field coordinate system MUST contain an axis identical to every axis
+  of its input coordinate system in the same order.
+* The field coordinate system MUST contain an axis with type `coordinate` or `displacement`, respectively,
+  for transformations of type `coordinates` or `displacements`.
 * This SHOULD be the last axis (contiguous on disk when c-order).
-* If the output coordinate system has `M` axes, the length of the array along the `coordinate`/`displacement` dimension MUST be of length `M`.
+* If the output coordinate system has `M` axes,
+  the length of the array along the `coordinate`/`displacement` dimension MUST be `M`.
The `i`th value of the array along the `coordinate` or `displacement` axis refers to the coordinate or
displacement of the `i`th output axis. See the example below.

-`coordinates` and `displacements` transformations are not invertible in general, but implementations MAY approximate their
-inverses. Metadata for these coordinate transforms have the following field:
+`coordinates` and `displacements` transformations are not invertible in general,
+but implementations MAY approximate their inverses.
+Metadata for these coordinate transforms has the following fields:
path
The location of the coordinate array in this (or another) container.
interpolation
-
The interpolation attributes MAY be provided. It's value indicates - the interpolation to use if transforming points not on the array's discrete grid. +
The interpolation attribute MAY be provided.
+  Its value indicates the interpolation to use
+  if transforming points not on the array's discrete grid.
  Values could be:
-For both `coordinates` and `displacements`, the array data at referred to by `path` MUST define coordinate system and coordinate transform metadata:
+For both `coordinates` and `displacements`,
+the array data referred to by `path` MUST define coordinate system
+and coordinate transform metadata:

-* Every axis name in the `coordinateTransform`'s `input` MUST appear in the coordinate system
-* The array dimension corresponding to the `coordinate` or `displacement` axis MUST have length equal to the number of dimensions of the `coordinateTransform` `output`
-* If the input coordinate system `N` axes, then the array data at `path` MUST have `(N + 1)` dimensions.
+* Every axis name in the `coordinateTransform`'s `input`
+  MUST appear in the coordinate system.
+* The array dimension corresponding to the `coordinate` or `displacement` axis
+  MUST have length equal to the number of dimensions of the `coordinateTransform`'s `output`.
+* If the input coordinate system has `N` axes,
+  then the array data at `path` MUST have `(N + 1)` dimensions.
 * SHOULD have a `name` identical to the `name` of the corresponding `coordinateTransform`.

 For `coordinates`:
@@ -526,91 +703,220 @@ For `displacements`:

 * `input` and `output` MUST have an equal number of dimensions.

-#### byDimension
+##### byDimension

-`byDimension` transformations build a high dimensional transformation using lower dimensional transformations
-on subsets of dimensions.
+`byDimension` transformations build a high dimensional transformation
+using lower dimensional transformations on subsets of dimensions.
+The `input` and `output` fields MUST always be included for this transformation type.
transformations
-
A list of transformations, each of which applies to a (non-strict) subset of input and output dimensions (axes). - The values of input and output fields MUST be an array of strings. - Every axis name in input MUST correspond to a name of some axis in this parent object's input coordinate system. - Every axis name in the parent byDimension's output MUST appear in exactly one of its child transformations' output. +
Each child transformation MUST contain `input_axes` and `output_axes` fields
+  whose values are arrays of strings.
+  Every axis name in a child transformation's `input_axes`
+  MUST correspond to a name of some axis in this parent object's input coordinate system.
+  Every axis name in the parent byDimension's output coordinate system
+  MUST appear in exactly one child transformation's `output_axes` array.
+  Each child transformation's `input_axes` and `output_axes` arrays
+  MUST have the same length as that transformation's parameter arrays.
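+As a sketch (the axis and coordinate-system names here are hypothetical),
+a `byDimension` transformation over a ("t", "y", "x") input coordinate system
+may scale the spatial axes and translate the time axis independently:
+
+```json
+{
+  "type": "byDimension",
+  "input": "tyx_array",
+  "output": "tyx_physical",
+  "transformations": [
+    {"type": "scale", "scale": [0.5, 0.5], "input_axes": ["y", "x"], "output_axes": ["y", "x"]},
+    {"type": "translation", "translation": [10.0], "input_axes": ["t"], "output_axes": ["t"]}
+  ]
+}
+```
+
+Note that every output axis appears in exactly one child's `output_axes`,
+and each child's parameter array length matches the length of its `input_axes`.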
+##### bijection
+
+A bijection transformation is an invertible transformation
+in which both the `forward` and `inverse` transformations are explicitly defined.
+Each direction SHOULD be a transformation type that is not closed-form invertible.
+Its input and output spaces MUST have equal dimension.
+The input and output dimensions for both the forward and inverse transformations
+MUST match the bijection's input and output space dimensions.
+
+`input` and `output` fields MAY be omitted for the `forward` and `inverse` transformations,
+in which case the `forward` transformation's `input` and `output` are understood to match the bijection's,
+and the `inverse` transformation's `input` (`output`) matches the bijection's `output` (`input`),
+see the example below.
+
+The `input` and `output` fields MAY be omitted for `bijection` transformations
+if the fields may be omitted for both its `forward` and `inverse` transformations.
+
+Practically, non-invertible transformations have finite extents,
+so bijection transforms should only be expected to be correct / consistent
+for points that fall within those extents,
+not for arbitrary points of the appropriate dimensionality.
+
+### "multiscales" metadata
+
+Metadata about an image can be found under the `multiscales` key in the group-level OME-Zarr Metadata.
+Here, "image" refers to 2 to 5 dimensional data representing image
+or volumetric data with optional time or channel axes.
+It is stored in a multiple resolution representation.
+
+`multiscales` contains a list of dictionaries where each entry describes a multiscale image.
+
+Each `multiscales` dictionary MUST contain the field "coordinateSystems",
+whose value is an array containing coordinate system metadata
+(see [coordinate systems](#coordinatesystems-metadata)).
+The last entry of this array is the "intrinsic" coordinate system
+and MUST contain axis information pertaining to physical coordinates.
+It should be used for viewing and processing unless a use case dictates otherwise.
+It will generally be a representation of the image in its native physical coordinate system.
+
+The following MUST hold for all coordinate systems inside multiscales metadata.
+The length of "axes" MUST be between 2 and 5
+and MUST be equal to the dimensionality of the Zarr arrays storing the image data (see "datasets:path").
+The "axes" MUST contain 2 or 3 entries of "type:space"
+and MAY contain one additional entry of "type:time"
+and MAY contain one additional entry of "type:channel" or a null / custom type.
+In addition, the entries MUST be ordered by "type" where the "time" axis MUST come first (if present),
+followed by the "channel" or custom axis (if present) and the axes of type "space".
+If there are three spatial axes where two correspond to the image plane ("yx")
+and images are stacked along the other (anisotropic) axis ("z"),
+the spatial axes SHOULD be ordered as "zyx".
+
+Each `multiscales` dictionary MUST contain the field `datasets`,
+which is a list of dictionaries describing the arrays storing the individual resolution levels.
+Each dictionary in `datasets` MUST contain the field `path`,
+whose value is a string containing the path to the Zarr array for this resolution relative to the current Zarr group.
+The `path`s MUST be ordered from largest (i.e. highest resolution) to smallest.
+Every Zarr array referred to by a `path` MUST have the same number of dimensions
+and MUST NOT have more than 5 dimensions.
+The number of dimensions and order MUST correspond to the number and order of `axes`.
+
+Each dictionary in `datasets` MUST contain the field `coordinateTransformations`,
+whose value is a list of dictionaries that define a transformation
+that maps Zarr array coordinates for this resolution level to the "intrinsic" coordinate system
+(the last entry of the `coordinateSystems` array).
+The transformation is defined according to [transformations metadata](#transformation-types).
+The transformation MUST take as input points in the array coordinate system
+corresponding to the Zarr array at location `path`.
+The value of `input` MUST equal the value of `path`;
+implementations should always treat the value of `input` as if it were equal to the value of `path`.
+The value of the transformation’s `output` MUST be the name of the "intrinsic" [coordinate system](#coordinatesystems-metadata).
+
+This transformation MUST be one of the following:
+
+* A single scale or identity transformation
+* A sequence transformation containing one scale and one translation transformation.
+
+In these cases, the scale transformation specifies the pixel size in physical units or the time duration.
+If scaling information is not available or applicable for one of the axes,
+the value MUST express the scaling factor between the current resolution
+and the first resolution for the given axis,
+defaulting to 1.0 if there is no downsampling along the axis.
+This is strongly recommended
+so that the "intrinsic" coordinate system of the image avoids more complex transformations.
+
+If applications require additional transformations,
+each `multiscales` dictionary MAY contain the field `coordinateTransformations`,
+describing transformations that are applied to all resolution levels in the same manner.
+The value of `input` MUST equal the name of the "intrinsic" coordinate system.
+The value of `output` MUST be the name of an output coordinate system
+that is different from the "intrinsic" coordinate system.
+
+Each `multiscales` dictionary SHOULD contain the field `name`.
+
+Each `multiscales` dictionary SHOULD contain the field `type`,
+which gives the type of downscaling method used to generate the multiscale image pyramid.
+It SHOULD contain the field "metadata",
+which contains a dictionary with additional information about the downscaling method.
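+The restricted dataset-level transformations described above admit a very simple evaluation model. The following non-normative sketch (function and variable names are illustrative assumptions, not part of the specification) maps an array index to the "intrinsic" coordinate system by applying a list of `identity`, `scale`, and `translation` transformations in order:

```python
# Non-normative sketch: evaluate a dataset-level "coordinateTransformations"
# list (restricted above to identity, scale, and translation) on an array
# index, yielding a point in the "intrinsic" coordinate system.
# All names here are illustrative, not part of the specification.

def array_to_intrinsic(index, transformations):
    """Map an array index (a sequence of ints) to intrinsic coordinates."""
    point = [float(i) for i in index]
    for t in transformations:
        if t["type"] == "identity":
            continue  # no-op
        elif t["type"] == "scale":
            point = [p * s for p, s in zip(point, t["scale"])]
        elif t["type"] == "translation":
            point = [p + o for p, o in zip(point, t["translation"])]
        else:
            raise NotImplementedError(f"unsupported type: {t['type']}")
    return point

# Voxel (2, 4, 6) at a level with 0.5 micrometer spacing along each axis:
print(array_to_intrinsic((2, 4, 6), [{"type": "scale", "scale": [0.5, 0.5, 0.5]}]))
# prints [1.0, 2.0, 3.0]
```

+Because dataset-level transformations are limited to these types, a reader that only supports this subset can still place every resolution level correctly in the "intrinsic" coordinate system.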
+ +````{admonition} Example + +A complete example of json-file for a 5D (TCZYX) multiscales with 3 resolution levels could look like this: -#### bijection - -A bijection transformation is an invertible transformation in which both the `forward` and `inverse` transformations -are explicitly defined. Each direction SHOULD be a transformation type that is not closed-form invertible. -Its' input and output spaces MUST have equal dimension. The input and output dimensions for the both the forward -and inverse transformations MUST match bijection's input and output space dimensions. - -`input` and `output` fields MAY be omitted for the `forward` and `inverse` transformations, in which case -the `forward` transformation's `input` and `output` are understood to match the bijection's, and the `inverse` -transformation's `input` (`output`) matches the bijection's `output` (`input`), see the example below. - -Practically, non-invertible transformations have finite extents, so bijection transforms should only be expected -to be correct / consistent for points that fall within those extents. It may not be correct for any point of -appropriate dimensionality. - -## Specific feedback requested - -We ask the reviewers for one specific piece of feedback. Specifically about whether parameters for transformations should -be written as they are currently in the draft pull request, with named parameters at the "top level" e.g.: - -``` -{ - "type": "affine", - "affine": [[1, 2, 3], [4, 5, 6]], - "input": "ji", - "output": "yx" -} -``` - -or alternatively in a `parameters` field: - -``` -{ - "type": "affine", - "parameters": { - "matrix": [[1, 2, 3], [4, 5, 6]] - }, - "input": "ji", - "output": "yx" -} -``` - -In discussions, some authors preferred the latter because it will make the "top-level" keys for transformation -objects all identical, which could make serialization / validation simpler. 
One downside is that this change -is breaking for the existing `scale` and `translation` transformations - -``` +```json { - "type": "scale", - "scale": [2, 3], - "input": "ji", - "output": "yx" + "zarr_format": 3, + "node_type": "group", + "attributes": { + "ome": { + "version": "0.5", + "multiscales": [ + { + "name": "example", + "coordinateSystems": [ + { + "name": "intrinsic", + "axes": [ + { "name": "t", "type": "time", "unit": "millisecond" }, + { "name": "c", "type": "channel" }, + { "name": "z", "type": "space", "unit": "micrometer" }, + { "name": "y", "type": "space", "unit": "micrometer" }, + { "name": "x", "type": "space", "unit": "micrometer" } + ] + } + ], + "datasets": [ + { + "path": "0", + "coordinateTransformations": [ + { + // the voxel size for the first scale level (0.5 micrometer) + // and the time unit (0.1 milliseconds), which is the same for each scale level + "type": "scale", + "scale": [0.1, 1.0, 0.5, 0.5, 0.5], + "input": "0", + "output": "intrinsic" + } + ] + }, + { + "path": "1", + "coordinateTransformations": [ + { + // the voxel size for the second scale level (downscaled by a factor of 2 -> 1 micrometer) + // and the time unit (0.1 milliseconds), which is the same for each scale level + "type": "scale", + "scale": [0.1, 1.0, 1.0, 1.0, 1.0], + "input": "1", + "output": "intrinsic" + } + ] + }, + { + "path": "2", + "coordinateTransformations": [ + { + // the voxel size for the third scale level (downscaled by a factor of 4 -> 2 micrometer) + // and the time unit (0.1 milliseconds), which is the same for each scale level + "type": "scale", + "scale": [0.1, 1.0, 2.0, 2.0, 2.0], + "input": "2", + "output": "intrinsic" + } + ] + } + ], + "type": "gaussian", + "metadata": { + "description": "the fields in metadata depend on the downscaling implementation. 
Here, the parameters passed to the skimage function are given", + "method": "skimage.transform.pyramid_gaussian", + "version": "0.16.1", + "args": "[true]", + "kwargs": { "multichannel": true } + } + } + ] + } + } } ``` - -would change to: - -``` -{ - "type": "scale", - "parameters": { - "scale": [2, 3], - }, - "input": "ji", - "output": "yx" -} +```` + +If only one multiscale is provided, use it. +Otherwise, the user can choose by name, +using the first multiscale as a fallback: + +```python +datasets = [] +for named in multiscales: + if named["name"] == "3D": + datasets = [x["path"] for x in named["datasets"]] + break +if not datasets: + # Use the first by default. Or perhaps choose based on chunk size. + datasets = [x["path"] for x in multiscales[0]["datasets"]] ``` -The authors would be interested to hear perspectives from the reviewers on this matter. - ## Requirements @@ -648,16 +954,19 @@ issues or unknown unknowns prior to writing any real code. ## Drawbacks, risks, alternatives, and unknowns -Adopting this proposal will add an implementation burden because it adds more transformation types. Though this drawback is -softened by the fact that implementations will be able to choose which transformations to support (e.g., implementations may choose -not to support non-linear transformations). +Adopting this proposal will add an implementation burden because it adds more transformation types. +Though this drawback is softened by the fact that implementations +will be able to choose which transformations to support +(e.g., implementations may choose not to support non-linear transformations). -An alternative to this proposal would be not to add support transformations directly and instead recommend software use an -existing format (e.g., ITK's). The downside of that is that alternative formats will not integrate well with OME-NGFF as they do -not use JSON or Zarr. 
+An alternative to this proposal would be not to add support for transformations directly
+and instead recommend that software use an existing format (e.g., ITK's).
+The downside of that is that alternative formats will not integrate well with OME-NGFF
+as they do not use JSON or Zarr.
 
-In all, we believe the benefits of this proposal (outlined in the Background section) far outweigh these drawbacks, and will
-better promote software interoperability than alternatives.
+In all, we believe the benefits of this proposal (outlined in the Background section)
+far outweigh these drawbacks,
+and will better promote software interoperability than alternatives.
 
 ## Prior art and references
 
@@ -713,7 +1022,7 @@ Adds coordinate systems, these contain axes which are backward-compatible with t
 
 ## Testing
 
-Public examples of transformations with expected input/output pairs will be provided.
+Public examples of transformations with expected input/output pairs are provided [here](https://github.com/bogovicj/ngff-rfc5-coordinate-transformation-examples/releases/tag/0.6-dev1-rev1).
 
 ## UI/UX
 
@@ -722,8 +1031,8 @@ non-linear transformations), and inform users what action will be taken. The det
 application dependent, but ignoring the unsupported transformation or falling back to a simpler transformation are likely to
 be common choices.
 
-Implementations MAY choose to communicate if and when an image can be displayed in multiple coordinate systems. Users might
-choose between different options, or software could choose a default (e.g. the first listed coordinate system). The
+Implementations SHOULD communicate if and when an image can be displayed in multiple coordinate systems. Users might
+choose between different options, or software could choose a default (e.g. the first or last listed coordinate system). The
 [`multiscales` in version 0.4](https://ngff.openmicroscopy.org/0.4/#multiscale-md) has a similar consideration.
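+As a non-normative illustration of the fallback behavior described above, an implementation might separate supported from unsupported transformations and warn about the latter. The set of supported types and all names below are illustrative assumptions, not part of the specification:

```python
# Non-normative sketch: warn (rather than fail) when an unsupported
# transformation type is encountered, then fall back to ignoring it.
# Which types are supported is application dependent; this set is illustrative.
import warnings

SUPPORTED_TYPES = {"identity", "scale", "translation", "sequence"}

def partition_supported(transformations):
    """Split a transformation list into (supported, ignored), warning on the latter."""
    supported, ignored = [], []
    for t in transformations:
        if t["type"] in SUPPORTED_TYPES:
            supported.append(t)
        else:
            ignored.append(t)
            warnings.warn(f"ignoring unsupported transformation type: {t['type']}")
    return supported, ignored
```

+A viewer could render using only the supported transformations while surfacing the warnings to the user, matching the "inform users what action will be taken" guidance above.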
@@ -731,4 +1040,4 @@ choose between different options, or software could choose a default (e.g. the f
 
 | Date       | Description                | Link                                                                         |
 | ---------- | -------------------------- | ---------------------------------------------------------------------------- |
-| 2024-10-08 | RFC assigned and published | [https://github.com/ome/ngff/pull/255](https://github.com/ome/ngff/pull/255) |
+| 2024-10-08 | RFC assigned and published | [https://github.com/ome/ngff/pull/255](https://github.com/ome/ngff/pull/255) |
\ No newline at end of file
diff --git a/rfc/5/responses/1/index.md b/rfc/5/responses/1/index.md
new file mode 100644
index 00000000..bf966a7e
--- /dev/null
+++ b/rfc/5/responses/1/index.md
@@ -0,0 +1,553 @@
+# RFC-5: Response 1 (2025-10-07 version)
+
+The authors extend their most sincere thanks and appreciation to all the reviewers of this RFC.
+
+## General comments
+
+We have added many motivating examples for common use cases, but also for many edge cases.
+The metadata are mirrored in a versioned [Zenodo repository](https://zenodo.org/records/17313420/latest).
+
+In addition, [we provide instructions](https://github.com/bogovicj/ngff-rfc5-coordinate-transformation-examples/blob/main/bigwarp/README.md)
+for viewing these examples with BigWarp, a reference implementation.
+These examples will not be made a part of the specification repository but can still be accessed later as part of the RFC process.
+
+## Review 1
+
+Daniel Toloudis, David Feng, Forrest Collman, and Nathalie Gaudreault at the Allen Institute provided
+[Review 1](rfcs:rfc5:review1).
+
+### Axes metadata
+
+We agree that the issue raised here with respect to axis alignment and orientation is important, and that additional motivating examples would be helpful.
+However, until RFC-4 has officially been made a part of the spec, we feel it would be out of scope to reference such examples in this RFC.
+We do think, though, that the suggested `coordinateSystems` group can provide the necessary structure to remove the ambiguity of orientation addressed in RFC-4.
+An orientation field could easily be added there in the following manner:
+
+```json
+{
+  "name" : "volume_micrometers",
+  "axes" : [
+    {"name": "z", "type": "space", "unit": "micrometer", "orientation": "superior-to-inferior"},
+    {"name": "y", "type": "space", "unit": "micrometer", "orientation": "anterior-to-posterior"},
+    {"name": "x", "type": "space", "unit": "micrometer", "orientation": "left-to-right"}
+  ]
+}
+```
+
+### CoordinateSystem
+
+> Is the 'name' of the coordinate system where users should try to
+> standardize strings that let multiple datasets that live in the same
+> space be visualized together?
+
+Yes, exactly.
+
+> Can we add to the spec a location or manner where such 'common' spaces are written down.
+
+We agree that this would be valuable, but feel it is out of scope for this
+RFC. We can imagine a future in which the spec itself does not contain the
+location of the common spaces but rather defines a way for that location to
+be written down. We hope that the details of this idea for
+'common' spaces can be discussed and agreed upon in the future.
+
+> there are applications, ... where the
+> values of the array reflect the height of an object, and so are spatial,
+> so it's not clear where such a relationship or 3d coordinate
+
+We agree that a better annotation for the value(s) stored in an image would be valuable, but
+feel that this is also out of scope for this RFC. A discussion has [begun on GitHub](https://github.com/ome/ngff/issues/203)
+that we hope continues.
+
+### “Array” coordinate system
+
+Thank you for the feedback; we have edited this section to add motivation and clarity.
+We hope that the edits, along with motivating examples, will help make the applications for array coordinate systems clearer.
+
+### Units
+
+> Why have these explicit listed units and not just follow SI and specify the exponent on the SI unit that you are going for?
+
+This RFC does change some parts of the Axes metadata, but not the specification of units, which remains unchanged relative to [v0.4](https://github.com/ome/ngff/blob/5067681721cc73ddf8b64692456cdda604cc659a/0.4/index.bs#L227-L229).
+As such, this is out of the scope of this RFC.
+
+This is an interesting and valuable idea though, and revisiting units would be a good topic for a new RFC in my opinion,
+since they seem not to have been reconsidered since they [were first introduced in 2021](https://github.com/ome/ngff/commit/0661115b93026f197d3787d99b74ec4d01614c99).
+
+### coordinateTransformations
+
+> We think that conforming reading MUST be able to parse Affine transformations.
+
+This is an important point to address.
+Some transformations (e.g., scale, translation, rotation) can be expressed in terms of an affine transformation.
+However, displaying affine transformations is implemented to different degrees in the field.
+For some applications (e.g., image registration), affine transformations are a quintessential part of the metadata, whereas other fields rarely encounter them.
+
+We have added statements to the proposal, recommending that writers SHOULD prefer less expressive transformations (e.g., a sequence of scale and translation) over affine transformations if possible.
+If encountering a transformation that cannot be parsed, readers/viewers SHOULD display a warning to inform the user of how the metadata is handled.
+
+> Rotation: there appears to be inconsistency in the doc
+> ...
+> Should it be `List[List[number]]`?
+
+Yes, thank you for catching this. It is now correct.
+
+### Parameters
+
+The feedback is much appreciated. Your point of view that the spec should
+
+> place the named parameters field at the same level as the 'type' field, as is written in the current draft.
+
+is the consensus, so the parameters will remain as they are.
+
+### Process
+
+> Should it be customary to provide a sample implementation?
+
+We believe that sample implementations are outside the scope of RFCs, even if the changes are substantial, as in this case.
+Writing implementations would certainly fail to address the variety of programming languages and tools in the community
+and thus inadvertently prioritize some tools over others.
+However, one could consider the json-schemas a sort of implementation that allows implementers to test their written data against a common baseline to ensure its integrity.
+
+> Is it okay for an RFC to link out to other things, rather than being
+> completely self-contained? If it's not there is a danger of it
+> effectively changing without it being properly versioned.
+
+This is an important point. Exactly how and whether linking to outside artifacts is allowed may belong in a broader community
+discussion. To keep resources that are relevant to, but "outside of," this RFC versioned and stable, they will be posted and archived
+in a permanent repository, and assigned a DOI.
+Since the RFC and revision texts (such as this one) are ultimately historic artifacts and not authoritative, we think
+that example collections can be part of the RFC text, but should be kept outside the core specification document.
+
+## Review 2
+
+[Review 2](rfcs:rfc5:review2) was written by William Moore, Jean-Marie Burel, and Jason
+Swedlow from the University of Dundee.
+
+### Clarifications
+
+>> “Coordinate transformations from array to physical coordinates MUST be stored in multiscales, and MUST be duplicated in the attributes of the zarr array”
+>>
+> Why is this duplication necessary and what does the array zarr.json look like?
+
+The requirement for this duplication originates from the fact that some implementations do not provide a native way of opening and displaying multiscales.
+Currently, such implementations need to choose a specific scale level to open and then "look up" in the parent level to discover the corresponding metadata.
+The suggested duplication would allow easier metadata discovery for such implementations.
+However, we realized that this may be out of scope for this RFC and have removed the respective statement.
+
+We have refined the statements regarding where (and how) `coordinateTransformations` can be stored:
+
+- **Inside `multiscales > datasets`**: `coordinateTransformations` herein MUST be restricted to a single `scale`, an `identity`, or a sequence of `translation` and `scale` transformations.
+  The output of these `coordinateTransformations` MUST be the default coordinate system, which is the last entry in the list of coordinate systems.
+- **Inside `multiscales > coordinateTransformations`**: One MAY store additional transformations here.
+  The `input` to these transformations MUST be the default coordinate system, and the `output` can be another coordinate system defined under `multiscales > coordinateSystems`.
+- **Parent-level `coordinateTransformations`**: Transformations between two or more images MUST be stored in the parent-level `coordinateTransformations` group.
+  The `input` to these transformations MUST be paths to the respective images. The `output` can be a path to an image or the name of a coordinate system.
+  If both a path and a name exist, the name (i.e., the corresponding `coordinateSystem`) takes precedence.
+  The authoritative coordinate system under `path` is the *first* coordinate system in the list.
+
+This separation of transformations (inside `multiscales > datasets`, under `multiscales > coordinateTransformations` and under parent-level `coordinateTransformations`)
+ +````{admonition} Example + +Consider the following example for the use of all possible places to store `coordinateTransformations`. +It comes from the SCAPE microscopy context, where lightsheet stacks are acquired under a skew angle, +and need to be *deskewed* with an affine transformation. The acquired stack also comes with a set of relevant microscope motor coordinates, +which place the object in world coordinates. +One may wish to attach the affine transformation to the multiscales itself, without having to read the parent-level zarr group that defines the world coordinate system. + +The `multiscales` metadata contains this: +```json +{ + "multiscales": [ + { + "version": "0.6dev2", + "name": "example", + "coordinateSystems": [ + { + "name": "deskewed", + "axes": [ + {"name": "z", "type": "space", "unit": "micrometer"}, + {"name": "y", "type": "space", "unit": "micrometer"}, + {"name": "x", "type": "space", "unit": "micrometer"} + ] + }, + { + "name": "physical", + "axes": [ + {"name": "z", "type": "space", "unit": "micrometer"}, + {"name": "y", "type": "space", "unit": "micrometer"}, + {"name": "x", "type": "space", "unit": "micrometer"} + ] + } + ], + "datasets": [ + { + "path": "0", + // the transformation of other arrays are defined relative to this, the highest resolution, array + "coordinateTransformations": [{ + "type": "identity", + "input": "0", + "output": "physical" + }] + }, + { + "path": "1", + "coordinateTransformations": [{ + // the second scale level (downscaled by a factor of 2 relative to "0" in zyx) + "type": "scale", + "scale": [2, 2, 2], + "input": "1", + "output": "physical" + }] + }, + { + "path": "2", + "coordinateTransformations": [{ + // the third scale level (downscaled by a factor of 4 relative to "0" in zyx) + "type": "scale", + "scale": [4, 4, 4], + "input": "2", + "output": "physical" + }] + } + ], + "coordinateTransformations": [ + { + "type": "affine", + "name": "deskew-transformation", + "input": "physical", + "output": 
"deskewed", + "affine": [ + [1, 0, 0, 0], + [0, 1, 0, 0], + [0, 0.785, 1, 0], + [0, 0, 0, 1] + ] + } + ] + } + ] +} +``` + +The metadata on a parent-level zarr group would then look as follows - the `input` to the translation transform that locates the stack at the correct location in world coordinates is the path to the input image. +The output is a defined coordinate system: + +```json +"ome": { + "coordinateSystems": [ + { + "name": "world", + "axes": [ + {"name": "z", "type": "space", "unit": "micrometer"}, + {"name": "y", "type": "space", "unit": "micrometer"}, + {"name": "x", "type": "space", "unit": "micrometer"} + ] + } + ], + "coordinateTransformations": [ + { + "name": "stack0-to-world", + "type": "translation", + "translation": [0, 10234, 41232], + "input": "path/to/stack", + "output": "world" + } + ] +} + +``` +The first coordinate system defined under the `multiscales` group above (`deskewed`) serves as the authoritative coordinate system for the multiscales image. +```` + +> In what format or structure is this data stored? + +In a simplified way, the root level of the store should resemble this structure: + + ``` + root + ├─── imageA + ├─── imageB + └─── zarr.json + ``` + + If the inputs and outputs to the transformations to the top-level `zarr.json` are images, the content MUST provide information about the spatial relationship (`coordinateTransformations`) between them: + + ```json + ... + "ome": { + "version": "0.6", + "coordinateTransformations": [ + { + "type": "affine", + "input": "path/to/image_A", + "output": "path/to/image_B", + "affine": ["actual affine matrix"], + "name": "Transform that aligns imageA with imageB" + } + ] + } + ``` + +> Does the parent zarr group contain the paths to the child images? + +In the present form, the spec states that "[...] *the coordinate system's name may be the path to an array.*" +We agree that this lacks clarity. 
+We therefore added that the `input` of a `coordinateTransformation` entry in the parent-level zarr group MUST be the path to the input image.
+
+> Do the top-level `coordinateTransformations` refer to coordinateSystems that are in child images?
+
+Yes, although they do so implicitly.
+If a `coordinateTransformation` in the parent-level group refers to child images through its `input`/`output` fields, the authoritative coordinate system of the linked (multiscales) image is the *first* `coordinateSystem` therein.
+This formalism also provides enough distinction from an image's "default" (aka "physical") coordinate system,
+which is the *last* `coordinateSystem` inside the image.
+If no additional `coordinateTransformations` are defined under `multiscales > coordinateTransformations`, only one `coordinateSystem` needs to be defined in the multiscales.
+This coordinate system then serves as both the "default" coordinate system and the authoritative coordinate system for a parent-level reference.
+
+> Are these child images referred to via a /path/to/volume/zarr.json?
+
+Yes. We have changed the spec so that `"input": "/path/to/volume/"` is the required way of referencing an input image/coordinateSystem.
+
+> What is the expectation for a conforming viewer when opening the top-level group? Should the viewer also open and display all the child images?
+
+This is an important and valid point to raise. The *in silico* behavior that comes to mind is to open all present images along with their
+correct transformations as separate layers, which can be toggled on or off.
+For more complex transformations, viewers could let users decide the coordinate system in which the data should be displayed,
+which may provide useful views on the data for bijection transforms in the registration research field.
+The spec currently requires conforming readers to read scale and translation transformations, which has remained unchanged.
+However, readers/viewers are now recommended to output an informative warning if a transformation is encountered that cannot be parsed.
+
+> It seems like the top-level zarr group with "coordinate transformations describing the relationship between two image coordinate systems" introduces a
+> “Collection” of images. The discussion on adding support for Collection to the specification has been captured in Collections Specification but it has
+> not been introduced yet.
+>
+> Are you also proposing to introduce support for Collection as part of this RFC? In our opinion, this is probably out of scope at this stage, but an example might clarify the importance in the authors’ view.
+
+It is true that storing several images under the same root store as proposed here resembles the proposed [Collections](https://github.com/ome/ngff/issues/31).
+However, the spatial relationships between images in a root zarr provide a distinctly meaningful kind of image collection.
+Other solutions, like storing images with spatial relationships in separate files along with references to each other, come at the risk of putting the spatial relationship at the mercy of tidy file management.
+With this proposal, we seek a self-contained solution to spatial relationships, which requires storage in a common location.
+Moreover, while images with coordinate transforms in a root zarr provide a kind of collection, they do so no more than a multiscale, a plate, or a well object does.
+
+In the future, we envision the transformations metadata being moved under the `attributes` key of a `Collection` metadata field.
+A collection of images bound to each other by spatial relationship would then become merely a particular type of Collection.
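+The first/last conventions discussed in this response can be summarized in a short non-normative sketch (helper names are illustrative assumptions, not part of the specification): the *last* listed coordinate system is an image's default ("physical") system, while a parent-level reference to the image resolves to the *first* listed (authoritative) system.

```python
# Non-normative sketch of the convention discussed in this response.
# Helper names are illustrative only.

def default_coordinate_system(multiscale):
    """The image's own default ("physical") system: the last entry."""
    return multiscale["coordinateSystems"][-1]

def authoritative_coordinate_system(multiscale):
    """The system a parent-level reference resolves to: the first entry."""
    return multiscale["coordinateSystems"][0]

# Shaped like the deskewing example above:
ms = {
    "coordinateSystems": [
        {"name": "deskewed", "axes": []},
        {"name": "physical", "axes": []},
    ]
}
print(authoritative_coordinate_system(ms)["name"])  # prints deskewed
print(default_coordinate_system(ms)["name"])        # prints physical
```

+When only one coordinate system is defined, both helpers return the same entry, matching the single-`coordinateSystem` case described above.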
+ + +### Implementation section + +## Comment-1 + +> Why not require a value for units and then make “arbitrary” or some sentinel the value that people must specify to say “no coordinates?” + +I agree that having some fixed value to mean "no units" would be a reasonable choice. In my opinion, having no units key reflects that "there are no units" better than a placeholder (and it avoids having to choose the value of the placeholder). + +> I wonder why `Arrays are inherently discrete (see Array coordinate systems, below) but are often used to store discrete samples +> of a continuous variable.` isn’t true of everything? Aren’t the images themselves samplings? In general I wasn’t totally clear +> on how interpolation works - I understand it is a user-applied “transformation” in which case I think that should be clear. + +I agree that digital images always contain samples. The purpose of this distinction is to communicate, to humans and software, a property of *the signal that is being sampled,* not the representation that is stored. That is the reason that "array coordinate systems" have discrete axes - because they have no additional interpretation. + +Some clarifying text was added. + + +## Comment-2 + +> Rebasing to take into account the change to zarr v3 (e.g. remove references to .zarray and replace with zarr.json) / “ome” top-level key would be helpful for clarity. + +Done. + +> Axis type of “array” is a bit confusing. It basically means “unknown”? + +It could mean "unknown" if there are no other coordinate systems that label it. +More importantly, it serves as a placeholder for operations that work in "pixel coordinates," not "physical coordinates." +Some clarifying text under the `Array coordinate systems` section was added. 
+
+> arrayCoordinateSystem specifying dimension names is now redundant with zarr v3 dimension names
+
+Good point, and I agree; but the zarr spec is more permissive than the ngff spec, specifically because it [allows null or duplicate `dimension_names`](https://zarr-specs.readthedocs.io/en/latest/v3/core/v3.0.html#dimension-names).
+As a result, we will need to add additional constraints requiring that dimension names be unique and not null.
+This is currently a requirement for the [names of axes](https://ngff.openmicroscopy.org/0.5/index.html#axes-md).
+
+> It is a bit unfortunate that coordinateTransforms now require that the input and output spaces be named for multiscale datasets,
+> where it was previously implicit, since (a) it is redundant and (b) it means existing multiscale metadata is no longer valid
+> under this new version.
+
+To our knowledge, every version in the past introduced breaking changes to the specification, with the result that OME-Zarr files
+of a newer version could no longer be read by older implementations. As for the redundancy of specifying input and output spaces in multiscales transformations,
+we agree in principle. However, we also see no harm in additional explicitness.
+
+## Comment-3
+
+Thank you for the additional comments and inquiries on this RFC.
+
+> In our opinion, it is clearer to interpret the transformation metadata when it refers explicitly to axes names instead of indices, so we recommend adapting “translation”, “scale”, “affine”, “rotation”, “coordinates”, and “displacements”.
+
+It is true that there is some inconsistency in how transformation parameters refer to the different axes of the coordinate systems.
+The decision on how to express the mentioned transforms (`mapAxis` and `byDimension`) originated from discussions at previous hackathons,
+which we regret aren't reflected in this RFC process.
+We have changed the `mapAxis` transformation to be expressed as a transpose vector of integers that refer to the axis ordering of the input coordinate system.
+
+We decided against referring to axis names for the matrix transformations, as it would require additional sets of constraints on the axis ordering of the `input_axes` and `output_axes` fields.
+For example, the same rotation matrix could be expressed with different axis orderings,
+which would correspond to a reordering of the column/row vectors of the corresponding transformation matrix.
+Simply referring to the axis ordering specified in the `coordinateSystems` seemed like a simple solution for this.
+
+This is not to say that we do not see merit in introducing the proposed axis ordering in 0.6dev3 if there is sufficient consensus in the community about it.
+
+> [...] We recommend that `byDimension` instead has a consistent treatment of the input/output fields to store the input and output coordinate system names, and new fields (input_axes, output_axes) are added to specify the input/output axes.
+
+We agree with the raised recommendation.
+We have added the fields `input_axes` and `output_axes` to the `byDimension` transformation.
+This harmonizes the interface across all transformation types.
+
+However, we specify that the fields `input_axes` and `output_axes` need to be present for all *wrapped* transformations instead of on the parent `byDimension` transformation.
+Otherwise, allowing `byDimension` to wrap a list of transformations would require complicated mappings to subsets of coordinate systems.
The proposed format offers a clear interface, which allows transformations to be written as follows:

```` {admonition} Example
```json
{
  "type": "byDimension",
  "input": "high_dimensional_coordinatesystem_A",
  "output": "high_dimensional_coordinatesystem_B",
  "transformations": [
    {
      "type": "scale",
      "input_axes": ["x", "y"],
      "output_axes": ["x", "y"],
      "scale": [0.5, 0.5]
    },
    {
      "type": "translation",
      "input_axes": ["z"],
      "output_axes": ["z"],
      "translation": [10.0]
    }
  ]
}
```
````

## inverseOf

> It is not clear what inverseOf achieves, that can’t be achieved by defining the same transformation but simply swapping the values of the input and output coordinate system names. [...]

The RFC motivates the `inverseOf` transformation:

> When rendering transformed images and interpolating, implementations may need the "inverse" transformation - from the output
> to the input coordinate system. Inverse transformations will not be explicitly specified when they can be computed in closed
> form from the forward transformation. Inverse transformations used for image rendering may be specified using the inverseOf
> transformation type...

though we appreciate that more clarity could be helpful. The storage of transformations for _images_ (not point coordinates) is
a main motivator of this transformation.

There is no consensus among image registration algorithms on whether their output transformation takes points from the moving
to the fixed image ("forward") or from the fixed to the moving image ("inverse") when the transformation type is closed-form invertible. When
the transformation type is not closed-form invertible, the algorithms are obliged to output the inverse transformation.

We would like to recommend that registration algorithms store the "forward" transformation (where the input is the moving image's
coordinate system, and the output is the fixed image's coordinate system) because this matches the intuition of users and
practitioners. Given an "inverse" transformation that is not closed-form invertible, the `inverseOf` wrapper
enables its storage as if it were a "forward" transformation, while informing implementations how to treat it.

It is true that we could remove `inverseOf` and swap the input / output coordinate systems. In that case we would have to do one of the following:

* not recommend which direction to store transformations in
  * one downside is that implementations would not know what to expect and could not distinguish the moving from the fixed coordinate
    system based on the transformation
* recommend that the "inverse" transformation is stored
  * one downside is that this does not align with intuition

In our opinion, the cost of adding this simple transformation type is worth avoiding one of these downsides.

> In the sequence section constraints on whether input/output must be specified are listed that apply to transformations other than “sequence”.
> For clarity we recommend these constraints are moved to the relevant transformations in the RFC, or to their own distinct section.

Thank you for the suggestion; the text was changed accordingly.
We also realized that the `sequence` section previously permitted nested sequences.
This possibility was removed to avoid complex, nested transformations.


## Other

This section contains other changes to the specification following the reviews, as well as other discussions.

### Multiscales constraints

In conversations at the OME community meeting and hackathon in April 2025,
several attendees expressed confusion about how to specify situations with
many coordinate systems, specifically when more than one
physical coordinate system exists.

The main questions had to do with whether there were any constraints
relating the coordinate transforms inside the multiscales' dataset metadata
to those outside the datasets (which were previously said to apply to all
scale levels). Implementers were concerned that if the transformation
corresponding to a particular coordinate system could be found anywhere,
there would be a large number of valid ways to describe the same set of
coordinate systems and transformations.
This would be an undue burden.
We agreed with and shared this concern.

As a result, a group of hackathon attendees agreed to a set of constraints
that would decrease the burden on implementers without reducing
expressibility (see the `"multiscales" metadata` section).

A series of follow-up discussions further refined these constraints and metadata layouts.

To summarize the constraints *inside* multiscales:

* The last coordinate system in the list is a *default* coordinate system
  * usually an image's "native" physical coordinate system
* There MUST be exactly one coordinate transformation per dataset in the multiscales whose output is the *default* coordinate system.
  This transformation SHOULD be simple (defined precisely in the spec).
* Any other transformations belong outside the `datasets` (i.e., under `multiscales > coordinateTransformations`)
  * the `input`s of these transformations MUST be the *default* coordinate system
  * the `output`s of these transformations are the other coordinate systems

### Preferred less expressive transforms

Transformations are powerful, and as noted above, there are often many different ways to specify the same transform.
For example, a sequence of a rotation and a translation can be combined into a single affine transform.
This was [discussed on github](https://github.com/ome/ngff/issues/331).
As a result of that discussion, we recommend that writers use sequences of less expressive transforms
(i.e.
`sequence[rotation, translation]` instead of a single affine containing these) to ensure a level of simplicity for image readers.


### Parameters in zarr arrays

This RFC allows the parameters of most transformations to be stored either as zarr arrays or as JSON arrays. There has
been a [debate on github](https://github.com/ome/ngff/pull/138) as to whether the parameters of "simple" transformations (scale,
translation, affines) should be restricted to _only_ be in the JSON metadata and not in zarr arrays.
We appreciate the in-depth discussion around the nature, fidelity, and efficiency of using zarr arrays for parameter storage,
and we would like to thank all participants for their valuable insights and for pointing out the technical,
legal, and other aspects of this topic.

In summary, we feel that the benefits of the zarr array representation for those who choose to use it are worth the additional
costs at this time.

We agree there are compelling reasons to prefer / require that the parameters are stored in JSON: implementations could be
simpler because the parameters would be in exactly one place, and it would save IO. There are also
good reasons to allow storage of parameters in zarr arrays: the decoding of floating point numbers
from arrays is more precise and robust than from JSON, and the array ordering for multidimensional arrays is clearer, among
others.

First, the issue of floating point precision is a critical one. In principle, it is possible to decode floating point numbers from
their JSON representation reliably, precisely, and consistently across programming languages. We feel that the mechanism for
this should be specified by Zarr (not by OME-Zarr), and while
[a proposal exists at this time](https://github.com/zarr-developers/zarr-extensions/issues/22) for a relevant zarr extension,
it has not been adopted, nor tested across languages.
We should revisit this proposal in the future if and when it is adopted.

Second, regarding additional code complexity: any complete implementation of this RFC requires that parameters be read from zarr
arrays for some transformations (coordinate and displacement fields). As a result, many implementations will necessarily accept
the implementation burden, while others are free not to.

This is why we feel that, at this time, the implementation burden of storing the parameters in zarr arrays is small enough to be
outweighed by the benefits.
diff --git a/rfc/5/reviews/1/index.md b/rfc/5/reviews/1/index.md
index 5539fc33..bf785f71 100644
--- a/rfc/5/reviews/1/index.md
+++ b/rfc/5/reviews/1/index.md
@@ -1,4 +1,5 @@
 # RFC-5: Review 1
+(rfcs:rfc5:review1)=
 ## Review authors
 This review was written by: Daniel Toloudis1, David Feng2, Forrest Collman3, Nathalie Gaudreault1
diff --git a/rfc/5/reviews/2/index.md b/rfc/5/reviews/2/index.md
index 5c90c15e..89688545 100644
--- a/rfc/5/reviews/2/index.md
+++ b/rfc/5/reviews/2/index.md
@@ -1,4 +1,5 @@
 # RFC-5: Review 2
+(rfcs:rfc5:review2)=
 ## Review authors
 This review was written by: William Moore1, Jean-Marie Burel1, Jason Swedlow1
diff --git a/rfc/5/versions/1/index.md b/rfc/5/versions/1/index.md
new file mode 100644
index 00000000..9200139e
--- /dev/null
+++ b/rfc/5/versions/1/index.md
@@ -0,0 +1,725 @@
# RFC-5 Coordinate systems and transformations (2024-07-30 version)

Add named coordinate systems and expand and clarify coordinate transformations.

## Status

This RFC is currently in RFC state `R1` (send for review).

```{list-table} Record
:widths: 8, 20, 20, 20, 15, 10
:header-rows: 1
:stub-columns: 1

* - Role
  - Name
  - GitHub Handle
  - Institution
  - Date
  - Status
* - Author
  - John Bogovic
  - @bogovicj
  - HHMI Janelia
  - 2024-07-30
  - Implemented
* - Author
  - Davis Bennett
  - @d-v-b
  -
  - 2024-07-30
  - Implemented validation
* - Author
  - Luca Marconato
  - @LucaMarconato
  - EMBL
  - 2024-07-30
  - Implemented
* - Author
  - Matt McCormick
  - @thewtex
  - ITK
  - 2024-07-30
  - Implemented
* - Author
  - Stephan Saalfeld
  - @axtimwalde
  - HHMI Janelia
  - 2024-07-30
  - Implemented (with JB)
* - Endorser
  - Norman Rzepka
  - @normanrz
  - Scalable Minds
  - 2024-08-22
  -
* - Reviewer
  - Dan Toloudis, David Feng, Forrest Collman, Nathalie Gaudreault, Gideon Dunster
  - toloudis, dyf, fcollman
  - Allen Institutes
  - 2024-11-28
  - [Review](rfcs:rfc5:review1)
* - Reviewer
  - Will Moore, Jean-Marie Burel, Jason Swedlow
  - will-moore, jburel, jrswedlow
  - University of Dundee
  - 2025-01-22
  - [Review](rfcs:rfc5:review2)
```

## Overview

This RFC provides first-class support for spatial and coordinate transformations in OME-Zarr.

## Background

Coordinate and spatial transformations are vitally important for neuro- and bio-imaging and broader scientific imaging practices
to enable:

1. Reproducibility and Consistency: Supporting spatial transformations explicitly in a file format ensures that transformations
   are applied consistently across different platforms and applications. This FAIR capability is a cornerstone of scientific
   research, and having standardized formats and tools facilitates verification of results by independent
   researchers.
2. Integration with Analysis Workflows: Having spatial transformations as a first-class citizen within file formats allows for
   seamless integration with various image analysis workflows.
Registration transformations can be used in subsequent image
   analysis steps without requiring additional conversion.
3. Efficiency and Accuracy: Storing transformations within the file format avoids the need for re-sampling each time the data is
   processed. This reduces sampling errors and preserves the accuracy of subsequent analyses. Standardization enables on-demand
   transformation, critical for the massive volumes collected by modern microscopy techniques.
4. Flexibility in Analysis: A file format that natively supports spatial transformations allows researchers to apply, modify, or
   reverse transformations as needed for different analysis purposes. This flexibility is critical for tasks such as
   longitudinal studies, multi-modal imaging, and comparative analysis across different subjects or experimental conditions.

Toward these goals, this RFC expands the set of transformations in the OME-Zarr spec, covering many of the use cases
requested in [this github issue](https://github.com/ome/ngff/issues/84). It also adds "coordinate systems" - named
sets of "axes" - and clarifies the relationship of discrete arrays to physical coordinates and the interpretation and motivation for
axis types.


## Proposal

Below is a slightly abridged copy of the proposed changes to the specification (examples are omitted); the full set of changes,
including all examples, is publicly available on the [github pull request](https://github.com/ome/ngff/pull/138).


### "coordinateSystems" metadata

A "coordinate system" is a collection of "axes" / dimensions with a name. Every coordinate system:
- MUST contain the field "name". The value MUST be a non-empty string that is unique among `coordinateSystem`s.
- MUST contain the field "axes", whose value is an array of valid "axes" (see below).


The order of the `"axes"` list matters and defines the index of each array dimension and coordinates for points in that
coordinate system.
The "dimensionality" of a coordinate system +is indicated by the length of its "axes" array. The "volume_micrometers" example coordinate system above is three dimensional (3D). + +The axes of a coordinate system (see below) give information about the types, units, and other properties of the coordinate +system's dimensions. Axis `name`s may contain semantically meaningful information, but can be arbitrary. As a result, two +coordinate systems that have identical axes in the same order may not be "the same" in the sense that measurements at the same +point refer to different physical entities and therefore should not be analyzed jointly. Tasks that require images, annotations, +regions of interest, etc., SHOULD ensure that they are in the same coordinate system (same name, with identical axes) or can be +transformed to the same coordinate system before doing analysis. See the example below. + + +### "axes" metadata + +"axes" describes the dimensions of a coordinate systems. It is a list of dictionaries, where each dictionary describes a dimension (axis) and: +- MUST contain the field "name" that gives the name for this dimension. The values MUST be unique across all "name" fields. +- SHOULD contain the field "type". It SHOULD be one of the strings "array", "space", "time", "channel", "coordinate", or "displacement" but MAY take other string values for custom axis types that are not part of this specification yet. +- MAY contain the field "discrete". The value MUST be a boolean, and is `true` if the axis represents a discrete dimension. +- SHOULD contain the field "unit" to specify the physical unit of this dimension. The value SHOULD be one of the following strings, which are valid units according to UDUNITS-2. 
  - Units for "space" axes: 'angstrom', 'attometer', 'centimeter', 'decimeter', 'exameter', 'femtometer', 'foot', 'gigameter', 'hectometer', 'inch', 'kilometer', 'megameter', 'meter', 'micrometer', 'mile', 'millimeter', 'nanometer', 'parsec', 'petameter', 'picometer', 'terameter', 'yard', 'yoctometer', 'yottameter', 'zeptometer', 'zettameter'
  - Units for "time" axes: 'attosecond', 'centisecond', 'day', 'decisecond', 'exasecond', 'femtosecond', 'gigasecond', 'hectosecond', 'hour', 'kilosecond', 'megasecond', 'microsecond', 'millisecond', 'minute', 'nanosecond', 'petasecond', 'picosecond', 'second', 'terasecond', 'yoctosecond', 'yottasecond', 'zeptosecond', 'zettasecond'
- MAY contain the field "longName". The value MUST be a string, and can provide a longer name or description of an axis and its properties.

If part of multiscales metadata, the length of "axes" MUST be equal to the number of dimensions of the arrays that contain the image data.

Arrays are inherently discrete (see Array coordinate systems, below) but are often used to store discrete samples of a
continuous variable. The continuous values "in between" discrete samples can be retrieved using an *interpolation* method. If an
axis is continuous (`"discrete" : false`), it indicates that interpolation is well-defined. Axes representing `space` and
`time` are usually continuous. Similarly, joint interpolation across axes is well-defined only for axes of the same `type`. In
contrast, discrete axes (`"discrete" : true`) may be indexed only by integers. Axes representing a `channel`, `coordinate`, or `displacement` are
usually discrete.

Note: The most common methods for interpolation are "nearest neighbor", "linear", "cubic", and "windowed sinc". Here, we refer
to any method that obtains values at real-valued coordinates using discrete samples as an "interpolator". As such, label images
may be interpolated using "nearest neighbor" to obtain labels at points along the continuum.
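Since examples are omitted from this abridged copy, here is an illustrative sketch of the metadata described above (axis names are hypothetical, but every field follows the rules given for "coordinateSystems" and "axes"):

```json
{
  "name": "volume_micrometers",
  "axes": [
    {"name": "x", "type": "space", "unit": "micrometer", "discrete": false},
    {"name": "y", "type": "space", "unit": "micrometer", "discrete": false},
    {"name": "z", "type": "space", "unit": "micrometer", "discrete": false}
  ]
}
```

The length of the `axes` list (three) gives the dimensionality, and the list order fixes the index of each dimension.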


### Array coordinate systems

Every array has a default coordinate system whose parameters need not be explicitly defined. Its name is the path to the array
in the container, its axes have `"type":"array"`, are unitless, and have default "name"s. The ith axis has `"name":"dim_i"`
(these are the same default names used by [xarray](https://docs.xarray.dev/en/stable/user-guide/terminology.html)).


The dimensionality of each array coordinate system equals the dimensionality of its corresponding zarr array. The axis with
name `"dim_i"` is the ith element of the `"axes"` list. The axes and their order align with the `shape`
attribute in the zarr array metadata (in `.zarray`); the layout of the stored data depends on the byte order used to store
chunks. As described in the [zarr array metadata](https://zarr.readthedocs.io/en/stable/spec/v2.html#arrays),
the last dimension of an array in "C" order is stored contiguously on disk, and in memory when loaded directly.


The name and axes names MAY be customized by including an `arrayCoordinateSystem` field in
the user-defined attributes of the array whose value is a coordinate system object. The length of
`axes` MUST be equal to the dimensionality. The value of `"type"` for each object in the
axes array MUST equal `"array"`.


### Coordinate convention

**The pixel/voxel center is the origin of the continuous coordinate system.**

It is vital to consistently define the relationship between the discrete/array and continuous/interpolated
coordinate systems. A pixel/voxel is the continuous region (rectangle) that corresponds to a single sample
in the discrete array, i.e., the area corresponding to nearest-neighbor (NN) interpolation of that sample.
The center of a 2d pixel corresponding to the origin `(0,0)` in the discrete array is the origin of the continuous coordinate
system `(0.0, 0.0)` (when the transformation is the identity).
The continuous rectangle of the pixel is given by the
half-open interval `[-0.5, 0.5) x [-0.5, 0.5)` (i.e., -0.5 is included,
0.5 is excluded). See chapter 4 and figure 4.1 of the ITK Software Guide.


### "coordinateTransformations" metadata

"coordinateTransformations" describe the mapping between two coordinate systems (defined by "axes"),
for example, to map an array's discrete coordinate system to its corresponding physical coordinates.
Coordinate transforms are in the "forward" direction. They represent functions from *points* in the
input space to *points* in the output space.


- MUST contain the field "type".
- MUST contain any other fields required by the given "type" (see table below).
- MUST contain the field "output", unless part of a `sequence` or `inverseOf` (see details).
- MUST contain the field "input", unless part of a `sequence` or `inverseOf` (see details).
- MAY contain the field "name". Its value MUST be unique across all "name" fields for coordinate transformations.
- Parameter values MUST be compatible with input and output space dimensionality (see details).
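As an illustrative sketch of these rules (the coordinate-system names and scale values here are hypothetical), a transformation from an array's coordinate system to a physical one could be written as:

```json
{
  "type": "scale",
  "scale": [0.5, 0.5, 2.0],
  "input": "/volume/0/image",
  "output": "volume_micrometers"
}
```

Here `input` names an array coordinate system by its path in the container, `output` names a declared coordinate system, and the length of `scale` matches the dimensionality of both.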
| type | fields | description |
|------|--------|-------------|
| `identity` | | The identity transformation is the default transformation and is typically not explicitly defined. |
| `mapAxis` | `"mapAxis":Dict[String:String]` | A mapAxis transformation specifies an axis permutation as a map between axis names. |
| `translation` | one of: `"translation":List[number]`, `"path":str` | translation vector, stored either as a list of numbers ("translation") or as binary data at a location in this container (path). |
| `scale` | one of: `"scale":List[number]`, `"path":str` | scale vector, stored either as a list of numbers (scale) or as binary data at a location in this container (path). |
| `affine` | one of: `"affine":List[List[number]]`, `"path":str` | affine transformation matrix, stored either as json using the affine field or as binary data at a location in this container (path). If both are present, the binary values at path should be used. |
| `rotation` | one of: `"rotation":List[number]`, `"path":str` | rotation transformation matrix, stored either as json or as binary data at a location in this container (path). If both are present, the binary parameters at path are used. |
| `sequence` | `"transformations":List[Transformation]` | A sequence of transformations. Applying the sequence applies the composition of all transforms in the list, in order. |
| `displacements` | `"path":str`, `"interpolation":str` | Displacement field transformation located at (path). |
| `coordinates` | `"path":str`, `"interpolation":str` | Coordinate field transformation located at (path). |
| `inverseOf` | `"transform":Transform` | The inverse of a transformation. Useful if a transform is not closed-form invertible. See Forward and inverse for details and examples. |
| `bijection` | `"forward":Transform`, `"inverse":Transform` | Explicitly define an invertible transformation by providing a forward transformation and its inverse. |
| `byDimension` | `"transformations":List[Transformation]` | Define a high dimensional transformation using lower dimensional transformations on subsets of dimensions. |
+ + +Conforming readers: +- MUST parse `identity`, `scale`, `translation` transformations; +- SHOULD parse `mapAxis`, `affine` transformations; +- SHOULD be able to apply transformations to points; +- SHOULD be able to apply transformations to images; + +Coordinate transformations from array to physical coordinates MUST be stored in multiscales, +and MUST be duplicated in the attributes of the zarr array. Transformations between different images MUST be stored in the +attributes of a parent zarr group. For transformations that store data or parameters in a zarr array, those zarr arrays SHOULD +be stored in a zarr group `"coordinateTransformations"`. + +
store.zarr                      # Root folder of the zarr store
│
├── .zattrs                     # coordinate transformations describing the relationship between two image coordinate systems
│                               # are stored in the attributes of their parent group.
│                               # transformations between 'volume' and 'crop' coordinate systems are stored here.
│
├── coordinateTransformations   # transformations that use array storage go in a "coordinateTransformations" zarr group.
│   └── displacements           # for example, a zarr array containing a displacement field
│       ├── .zattrs
│       └── .zarray
│
├── volume
│   ├── .zattrs                 # group level attributes (multiscales)
│   └── 0                       # a group containing the 0th scale
│       └── image               # a zarr array
│           ├── .zattrs         # physical coordinate system and transformations here
│           └── .zarray         # the array attributes
└── crop
    ├── .zattrs                 # group level attributes (multiscales)
    └── 0                       # a group containing the 0th scale
        └── image               # a zarr array
            ├── .zattrs         # physical coordinate system and transformations here
            └── .zarray         # the array attributes
+


### Additional details

Most coordinate transformations MUST specify their input and output coordinate systems using `input` and `output` with a string value
corresponding to the name of a coordinate system. The coordinate system's name may be the path to an array, and therefore may
not appear in the list of coordinate systems.

Exceptions occur when the coordinate transformation appears in the `transformations` list of a `sequence` or is the
`transformation` of an `inverseOf` transformation. In these two cases, input and output may sometimes be omitted (see below for
details).

Transformations in the `transformations` list of a `byDimension` transformation MUST provide `input` and `output` as arrays
of strings corresponding to axis names of the parent transformation's input and output coordinate systems (see below for
details).


Coordinate transformations are functions from *points* in the input space to *points* in the output space. We call this the "forward" direction.
Points are ordered lists of coordinates, where a coordinate is the location/value of that point along its corresponding axis.
The indexes of axis dimensions correspond to indexes into transformation parameter arrays. For example, a scale transformation with
parameters `[0.5, 1.2]` defines the function:

```
x = 0.5 * i
y = 1.2 * j
```

i.e., the mapping from the first input axis to the first output axis is determined by the first scale parameter.

When rendering transformed images and interpolating, implementations may need the "inverse" transformation - from the output to
the input coordinate system. Inverse transformations will not be explicitly specified when they can be computed in closed form from the
forward transformation.
Inverse transformations used for image rendering may be specified using the `inverseOf`
transformation type, for example:

```json
{
  "type": "inverseOf",
  "transformation" : {
    "type": "displacements",
    "path": "/path/to/displacements"
  }
}
```

Implementations SHOULD be able to compute and apply the inverse of some coordinate transformations when they
are computable in closed-form (as the [Transformation types](#transformation-types) section below indicates). If an
operation is requested that requires the inverse of a transformation that can not be inverted in closed-form,
implementations MAY estimate an inverse, or MAY output a warning that the requested operation is unsupported.

#### Matrix transformations

Two transformation types ([affine](#affine) and [rotation](#rotation)) are parametrized by matrices. Matrices are applied to
column vectors that represent points in the input coordinate system. The first (last) axis in a coordinate system is the top
(bottom) entry in the column vector. Matrices are stored as two-dimensional arrays, either as json or in a zarr array. When
stored as a 2D zarr array, the first dimension indexes rows and the second dimension indexes columns (e.g., an array of
`"shape":[3,4]` has 3 rows and 4 columns). When stored as a 2D json array, the inner arrays contain rows (e.g. `[[1,2,3],
[4,5,6]]` has 2 rows and 3 columns).


### Transformation types

Input and output dimensionality may be determined by the value of the "input" and "output" fields, respectively. If the value
of "input" is an array, its length gives the input dimension; otherwise, the input dimension is given by the length of "axes" for
the coordinate system with the name given by the "input" value.
If the value of "output" is an array, its length gives the output dimension,
otherwise it is given by the length of "axes" for the coordinate system with the name of the "output".

#### identity

`identity` transformations map input coordinates to output coordinates without modification. The position of
the ith axis of the output coordinate system is set to the position of the ith axis of the input coordinate
system. `identity` transformations are invertible.


#### mapAxis

`mapAxis` transformations describe axis permutations as a mapping of axis names. Transformations MUST include a `mapAxis` field
whose value is an object, all of whose values are strings. If the object contains `"x":"i"`, then the transform sets the value
of the output coordinate for axis "x" to the value of the coordinate of input axis "i" (think `x = i`). For every axis in its output coordinate
system, the `mapAxis` MUST have a corresponding field. For every value of the object there MUST be an axis of the input
coordinate system with that name. Note that the order of the keys could be reversed.


#### translation

`translation` transformations are special cases of affine transformations. When possible, a
translation transformation should be preferred to its equivalent affine. Input and output dimensionality MUST be
identical and MUST equal the length of the "translation" array (N). `translation` transformations are
invertible.

path
: The path to a zarr-array containing the translation parameters. The array at this path MUST be 1D, and its length MUST be `N`.

translation
: The translation parameters stored as a JSON list of numbers. The list MUST have length `N`.


#### scale

`scale` transformations are special cases of affine transformations. When possible, a scale transformation
SHOULD be preferred to its equivalent affine. Input and output dimensionality MUST be identical and MUST equal
the length of the "scale" array (N).
Values in the `scale` array SHOULD be non-zero; in that case, `scale`
transformations are invertible.

path
: The path to a zarr-array containing the scale parameters. The array at this path MUST be 1D, and its length MUST be `N`.

scale
: The scale parameters stored as a JSON list of numbers. The list MUST have length `N`.


#### affine

`affine`s are [matrix transformations](#matrix-transformations) from N-dimensional inputs to M-dimensional outputs,
represented as the upper `(M)x(N+1)` sub-matrix of a `(M+1)x(N+1)` matrix in [homogeneous
coordinates](https://en.wikipedia.org/wiki/Homogeneous_coordinates) (see examples). This transformation type may be (but is not necessarily)
invertible when `N` equals `M`. The matrix MUST be stored as a 2D array, either as json or as a zarr array.

path
: The path to a zarr-array containing the affine parameters. The array at this path MUST be 2D and its shape MUST be `(M)x(N+1)`.

affine
: The affine parameters stored in JSON. The matrix MUST be stored as a 2D nested array where the outer array MUST be length `M` and the inner arrays MUST be length `N+1`.


#### rotation

`rotation`s are [matrix transformations](#matrix-transformations) that are special cases of affine transformations. When possible, a rotation
transformation SHOULD be preferred to its equivalent affine. Input and output dimensionality (N) MUST be identical. Rotations
are stored as `NxN` matrices (see below) and MUST have determinant equal to one, with orthonormal rows and columns. The matrix
MUST be stored as a 2D array, either as json or in a zarr array. `rotation` transformations are invertible.

path
: The path to an array containing the rotation parameters. The array at this path MUST be 2D and its shape MUST be `N x N`.

rotation
: The parameters stored in JSON. The matrix MUST be stored as a 2D nested array where the outer array MUST be length `N` and the inner arrays MUST be length `N`.
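To make the matrix convention concrete, here is a minimal sketch (illustrative only, not part of the specification; NumPy is assumed) of applying an `M x (N+1)` affine matrix to an N-dimensional point by appending a homogeneous 1:

```python
import numpy as np

def apply_affine(matrix, point):
    """Apply an M x (N+1) affine matrix to an N-dimensional point.

    The point is treated as a column vector with a homogeneous 1 appended,
    matching the row-major (rows x columns) storage described above.
    """
    matrix = np.asarray(matrix, dtype=float)
    point = np.asarray(point, dtype=float)
    assert matrix.shape[1] == point.shape[0] + 1, "matrix/point dimension mismatch"
    return matrix @ np.append(point, 1.0)

# A 2D scale by (0.5, 1.2) followed by a translation of (10, 20),
# combined into a single 2 x 3 affine matrix.
affine = [[0.5, 0.0, 10.0],
          [0.0, 1.2, 20.0]]
print(apply_affine(affine, [2.0, 5.0]))  # -> [11. 26.]
```

The same column-vector convention applies to `rotation` matrices, which are the `N x N` case without a translation column.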


#### inverseOf

An `inverseOf` transformation contains another transformation (often non-linear), and indicates that
transforming points from the output to the input coordinate system is possible using the contained transformation.
Transforming points from the input to the output coordinate system requires the inverse of the contained
transformation (if it exists).

```{note}
Software libraries that perform image registration often return the transformation from fixed image
coordinates to moving image coordinates, because this "inverse" transformation is most often required
when rendering the transformed moving image. Results such as this may be enclosed in an `inverseOf`
transformation. This enables the "outer" coordinate transformation to specify the moving image coordinates
as `input` and fixed image coordinates as `output`, a choice that many users and developers find intuitive.
```

#### sequence

A `sequence` transformation consists of an ordered array of coordinate transformations, and is invertible if every coordinate
transformation in the array is invertible (though it may be invertible in other cases as well). To apply a sequence transformation
to a point in the input coordinate system, apply the first transformation in the list of transformations. Next, apply the second
transformation to the result. Repeat until every transformation has been applied. The output of the last transformation is the
result of the sequence.
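The application rule above amounts to a simple fold over the list. As an illustrative sketch (the callables stand in for parsed transformation objects; this is not a prescribed API):

```python
import numpy as np

def apply_sequence(transformations, point):
    """Apply a list of point-wise transformations in order.

    Each transformation is a callable mapping a point (1D array) to a point;
    the output of one transformation becomes the input of the next.
    """
    for transform in transformations:
        point = transform(np.asarray(point, dtype=float))
    return point

# sequence[scale, translation]: first scale by (0.5, 0.5), then translate by (10, 0).
scale = lambda p: p * np.array([0.5, 0.5])
translation = lambda p: p + np.array([10.0, 0.0])
print(apply_sequence([scale, translation], [4.0, 6.0]))  # -> [12.  3.]
```

Note that applying the transformations in list order is what makes `sequence[rotation, translation]` differ from `sequence[translation, rotation]`.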
The transformations included in the `transformations` array may omit their `input` and `output` fields under the conditions
outlined below:

- The `input` and `output` fields MAY be omitted for the following transformation types:
  - `identity`, `scale`, `translation`, `rotation`, `affine`, `displacements`, `coordinates`
- The `input` and `output` fields MAY be omitted for `inverseOf` transformations if those fields may be omitted for the
  transformation it wraps.
- The `input` and `output` fields MAY be omitted for `bijection` transformations if the fields may be omitted for
  both its `forward` and `inverse` transformations.
- The `input` and `output` fields MAY be omitted for `sequence` transformations if the fields may be omitted for
  all transformations in the sequence after flattening the nested sequence lists.
- The `input` and `output` fields MUST be included for transformations of type `mapAxis` and `byDimension` (see the note
  below), and under all other conditions.


transformations
: A non-empty array of transformations.


#### coordinates and displacements

`coordinates` and `displacements` transformations store coordinates or displacements in an array and interpret them as a vector
field that defines a transformation. The arrays MUST have a dimension corresponding to every axis of the input coordinate
system, plus one additional dimension to hold the components of the vector. Applying the transformation amounts to looking up the
appropriate vector in the array, interpolating if necessary, and treating it either as a position directly (`coordinates`) or as a
displacement of the input point (`displacements`).

These transformation types refer to an array at the location specified by the `"path"` parameter. The input and output coordinate
systems for these transformations ("input / output coordinate systems") constrain the array size and the coordinate system
metadata for the array ("field coordinate system").
* If the input coordinate system has `N` axes, the array at the location `path` MUST have `N+1` dimensions.
* The field coordinate system MUST contain an axis identical to every axis of its input coordinate system, in the same order.
* The field coordinate system MUST contain an axis with type `coordinate` or `displacement`, respectively, for transformations of type `coordinates` or `displacements`.
  * This SHOULD be the last axis (contiguous on disk when c-order).
* If the output coordinate system has `M` axes, the array along the `coordinate`/`displacement` dimension MUST have length `M`.

The `i`th value of the array along the `coordinate` or `displacement` axis refers to the coordinate or displacement
of the `i`th output axis. See the example below.

`coordinates` and `displacements` transformations are not invertible in general, but implementations MAY approximate their
inverses. Metadata for these coordinate transformations has the following fields:
path
: The location of the coordinate array in this (or another) container.

interpolation
: The `interpolation` attribute MAY be provided. Its value indicates the interpolation to use when transforming points that are not on the array's discrete grid. Values could be:
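The lookup-and-add rule for `displacements` can be sketched as follows (assumptions: a small 2D field held in a NumPy array with the displacement axis last, and nearest-neighbor lookup only; the helper name and field values are illustrative, not from the specification):

```python
import numpy as np

def apply_displacements(field, point):
    """Nearest-neighbor lookup in a displacement field.

    `field` has one dimension per input axis plus a trailing
    `displacement` dimension of length M (here, shape (X, Y, 2)).
    """
    index = tuple(int(round(c)) for c in point)  # nearest grid position
    return np.asarray(point) + field[index]      # displace the input point

# A field that displaces every point by (+1, -1):
field = np.zeros((4, 4, 2))
field[..., 0] = 1.0
field[..., 1] = -1.0
print(apply_displacements(field, (2.0, 3.0)))  # [3. 2.]
```

A `coordinates` transformation would instead return `field[index]` directly, without adding the input point.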
For both `coordinates` and `displacements`, the array data referred to by `path` MUST define coordinate system and coordinate transform metadata:

* Every axis name in the `coordinateTransform`'s `input` MUST appear in the coordinate system.
* The array dimension corresponding to the `coordinate` or `displacement` axis MUST have length equal to the number of dimensions of the `coordinateTransform`'s `output`.
* If the input coordinate system has `N` axes, then the array data at `path` MUST have `(N + 1)` dimensions.
* The coordinate system SHOULD have a `name` identical to the `name` of the corresponding `coordinateTransform`.

For `coordinates`:

* `coordinateSystem` metadata MUST have exactly one axis with `"type" : "coordinate"`
* the shape of the array along the "coordinate" axis must be exactly `N`

For `displacements`:

* `coordinateSystem` metadata MUST have exactly one axis with `"type" : "displacement"`
* the shape of the array along the "displacement" axis must be exactly `N`
* `input` and `output` MUST have an equal number of dimensions.


#### byDimension

`byDimension` transformations build a high-dimensional transformation from lower-dimensional transformations
on subsets of dimensions.
transformations
: A list of transformations, each of which applies to a (non-strict) subset of the input and output dimensions (axes).
  The values of the `input` and `output` fields MUST be arrays of strings.
  Every axis name in `input` MUST correspond to the name of some axis in the parent object's input coordinate system.
  Every axis name in the parent `byDimension`'s `output` MUST appear in exactly one of its child transformations' `output`.
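To make the axis bookkeeping concrete, here is a hypothetical sketch (names and helpers are illustrative only, not from the specification) of a 3D `(x, y, c)` transformation built from a 2D scale on `(x, y)` and an identity on `(c)`:

```python
def apply_by_dimension(transformations, point, input_axes):
    """Apply each sub-transformation to its own subset of named input axes.

    `transformations` is a list of (input_names, output_names, function)
    triples; each output axis is produced by exactly one sub-transformation.
    """
    values = dict(zip(input_axes, point))
    output = {}
    for in_names, out_names, fn in transformations:
        result = fn([values[name] for name in in_names])
        output.update(zip(out_names, result))
    return output

subs = [
    (["x", "y"], ["x", "y"], lambda p: [2 * p[0], 2 * p[1]]),  # 2D scale
    (["c"], ["c"], lambda p: p),                               # identity on channel
]
print(apply_by_dimension(subs, [1.0, 2.0, 3.0], ["x", "y", "c"]))
# {'x': 2.0, 'y': 4.0, 'c': 3.0}
```

This mirrors the constraint above: every input axis name used by a sub-transformation exists in the parent input coordinate system, and every parent output axis is produced by exactly one sub-transformation.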
#### bijection

A `bijection` transformation is an invertible transformation in which both the `forward` and `inverse` transformations
are explicitly defined. Each direction SHOULD be a transformation type that is not closed-form invertible.
Its input and output spaces MUST have equal dimension. The input and output dimensions for both the forward
and inverse transformations MUST match the bijection's input and output space dimensions.

The `input` and `output` fields MAY be omitted for the `forward` and `inverse` transformations, in which case
the `forward` transformation's `input` and `output` are understood to match the bijection's, and the `inverse`
transformation's `input` (`output`) matches the bijection's `output` (`input`); see the example below.

Practically, non-invertible transformations have finite extents, so bijection transformations should only be expected
to be correct / consistent for points that fall within those extents; they may not be correct for arbitrary points of
the appropriate dimensionality.

## Specific feedback requested

We ask the reviewers for one specific piece of feedback: whether parameters for transformations should
be written as they are currently in the draft pull request, with named parameters at the "top level", e.g.:

```
{
  "type": "affine",
  "affine": [[1, 2, 3], [4, 5, 6]],
  "input": "ji",
  "output": "yx"
}
```

or alternatively in a `parameters` field:

```
{
  "type": "affine",
  "parameters": {
    "matrix": [[1, 2, 3], [4, 5, 6]]
  },
  "input": "ji",
  "output": "yx"
}
```

In discussions, some authors preferred the latter because it makes the "top-level" keys for transformation
objects all identical, which could make serialization / validation simpler.
One downside is that this change is breaking for the existing `scale` and `translation` transformations:

```
{
  "type": "scale",
  "scale": [2, 3],
  "input": "ji",
  "output": "yx"
}
```

would change to:

```
{
  "type": "scale",
  "parameters": {
    "scale": [2, 3]
  },
  "input": "ji",
  "output": "yx"
}
```

The authors would be interested to hear the reviewers' perspectives on this matter.


## Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [IETF RFC 2119][IETF RFC 2119].


## Stakeholders

People who need to represent the results of image registration algorithms, or any imaging
scientist in need of affine or non-linear transformations.

This RFC has been discussed in:

* [PR 138](https://github.com/ome/ngff/pull/138)
* Issues [84](https://github.com/ome/ngff/issues/84), [94](https://github.com/ome/ngff/issues/94), [101](https://github.com/ome/ngff/issues/101), and [146](https://github.com/ome/ngff/issues/146)
* Several OME-Zarr community calls ([one example](https://forum.image.sc/t/ome-ngff-community-call-transforms-and-tables/71792))

## Implementation

Many RFCs have an "implementation" section which details how the implementation
will work. This section should explain the rough specification changes. The
goal is to give an idea to reviewers about the subsystems that require change
and the surface area of those changes.

This knowledge can result in recommendations for alternate approaches that
perhaps are idiomatic to the project or result in fewer packages touched. Or, it
may result in the realization that the proposed solution in this RFC is too
complex given the problem.
For the RFC author, typing out the implementation at a high level often serves
as "[rubber duck debugging][rubber duck debugging]", and you can catch a lot of
issues or unknown unknowns prior to writing any real code.

## Drawbacks, risks, alternatives, and unknowns

Adopting this proposal will add an implementation burden because it adds more transformation types. This drawback is
softened by the fact that implementations will be able to choose which transformations to support (e.g., implementations may choose
not to support non-linear transformations).

An alternative to this proposal would be not to add support for transformations directly, and instead to recommend that software use an
existing format (e.g., ITK's). The downside is that alternative formats will not integrate well with OME-NGFF, as they do
not use JSON or Zarr.

In all, we believe the benefits of this proposal (outlined in the Background section) far outweigh these drawbacks, and that it will
better promote software interoperability than the alternatives.


## Prior art and references

ITK represents many [types of
transformations](https://itk.org/ITKSoftwareGuide/html/Book2/ITKSoftwareGuide-Book2ch3.html#x26-1170003.9),
and can serialize them to either plain text or an HDF5 file. This is a practical approach that works
well for software that depends on ITK, but the solution proposed here for encoding transformations will be more
interoperable.

Displacement fields are typically stored in formats designed for medical imaging (e.g., [NIfTI](https://nifti.nimh.nih.gov/)).
While effective, they can only describe one type of non-linear transformation.

The Saalfeld lab at Janelia developed a [custom
format](https://github.com/saalfeldlab/template-building/wiki/Hdf5-Deformation-fields) for storing affine and displacement field
transformations in HDF5, which is similarly less interoperable than would be ideal.
## Abandoned Ideas

One consideration was to change (reverse) the order of parameters for transformations to match the convention used by many
libraries. We opted not to make this change for two reasons. First, to maintain backward compatibility. Second, the convention
used by those libraries generally applies to 2D and 3D spatial transformations, but the specification should be applicable to
transformations of arbitrary dimension and axis type, where there is no strong convention we are aware of.

An early consideration was to use axis names to indicate correspondence across different coordinate systems (i.e., if two
coordinate systems both have an "x" axis, then it is "the same" axis). We abandoned this for several reasons. It was
restrictive: it is useful to have many coordinate systems with an "x" axis without requiring that they be "identical". Under our
early idea, every set of spatial axes would need unique names ("x1", "x2", ...), and this seemed burdensome. This
approach would also have made transformations less explicit, and likely would have required more complicated implementations.
For example, points in two coordinate systems with re-ordered axis names `["x","y"]` vs. `["y","x"]` would need to be
axis-permuted, even if such a permutation was not explicitly specified.


## Future possibilities

Additional transformation types should be added in the future. Top candidates include:

* thin-plate spline
* b-spline
* velocity fields
* by-coordinate

## Performance

This proposal adds new features and has no effect on the performance of existing functionality.

## Backwards Compatibility

This proposal adds new transformations, but the existing transformations (`scale`, `translation`) remain backward-compatible.

It also adds coordinate systems; these contain axes which are backward-compatible with the [axis specification for version
0.4](https://ngff.openmicroscopy.org/0.4/#axes-md). This proposal adds new fields to the axis metadata.
## Testing

Public examples of transformations with expected input/output pairs will be provided.

## UI/UX

Implementations SHOULD communicate when they encounter an unsupported transformation (e.g., some software may opt not to support
non-linear transformations), and inform users what action will be taken. The details of this choice should be software /
application dependent, but ignoring the unsupported transformation or falling back to a simpler transformation are likely
to be common choices.

Implementations MAY choose to communicate if and when an image can be displayed in multiple coordinate systems. Users might
choose between the different options, or software could choose a default (e.g., the first listed coordinate system). The
[`multiscales` in version 0.4](https://ngff.openmicroscopy.org/0.4/#multiscale-md) has a similar consideration.


## Changelog

| Date       | Description                | Link                                                                         |
| ---------- | -------------------------- | ---------------------------------------------------------------------------- |
| 2024-10-08 | RFC assigned and published | [https://github.com/ome/ngff/pull/255](https://github.com/ome/ngff/pull/255) |