N5 Metadata Dialects
This page contains a list and description of known N5 metadata dialects.

N5 Viewer

The n5-viewer metadata style is briefly described here.

Datasets may contain a pixelResolution field that is either an object:

    "pixelResolution": {
        "unit": "um",
        "dimensions": []
    }

or a plain array:

    "pixelResolution": []

Datasets may also contain a downsamplingFactors field.
- `pixelResolution` : `number[]` or `object` - physical scale factors and units (see Physical space)
- `downsamplingFactors` : `number[]` - factors by which the image is downsampled (see Downsampling)
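Since pixelResolution may appear in either of the two forms above, a reader has to handle both. A minimal sketch of normalizing them (the attribute values here are hypothetical; only the field names come from the description above):

```python
def parse_pixel_resolution(attrs):
    """Return (dimensions, unit) from either form of pixelResolution."""
    res = attrs["pixelResolution"]
    if isinstance(res, dict):
        # object form: {"unit": ..., "dimensions": [...]}
        return list(res["dimensions"]), res.get("unit")
    # plain array form: no unit is recorded
    return list(res), None

# object form (hypothetical values)
attrs = {"pixelResolution": {"unit": "um", "dimensions": [4.0, 4.0, 40.0]}}
print(parse_pixel_resolution(attrs))  # -> ([4.0, 4.0, 40.0], 'um')

# array form (hypothetical values)
attrs = {"pixelResolution": [4.0, 4.0, 40.0]}
print(parse_pixel_resolution(attrs))  # -> ([4.0, 4.0, 40.0], None)
```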
BigCat

This metadata specification is used by BigCat, the predecessor to Paintera.

Example

    {
        "resolution": [ ],
        "offset": [ ],
        "downsamplingFactors": [ ]
    }
- `resolution` : `number[]` - physical scale factors (see Physical space)
- `offset` : `number[]` - physical translation (see Physical space)
- `downsamplingFactors` : `number[]` - factors by which the image is downsampled (see Downsampling)
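A sketch of reading the BigCat-style fields into scale/translate arrays (values are hypothetical; offset is treated as optional, defaulting to zero, which is an assumption rather than part of the specification):

```python
def bigcat_to_transform(attrs):
    """Collect BigCat resolution/offset into (scale, translate) arrays."""
    scale = list(attrs["resolution"])
    # assume a zero offset when the field is absent (an assumption, see above)
    translate = list(attrs.get("offset", [0.0] * len(scale)))
    return scale, translate

attrs = {"resolution": [8.0, 8.0, 8.0], "offset": [0.0, 0.0, 40.0],
         "downsamplingFactors": [2, 2, 2]}
scale, translate = bigcat_to_transform(attrs)
print(scale, translate)  # -> [8.0, 8.0, 8.0] [0.0, 0.0, 40.0]
```

How downsamplingFactors enters the coordinate mapping is described in the Downsampling section below.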
COSEM

The COSEM metadata style is described here.

Datasets shall contain a transform field that describes how the discrete pixel grid should be arranged in physical space:

    "transform": {
        "axes": [ ],
        "units": [ ],
        "scale": [ ],
        "translate": [ ]
    }
All fields below must have lengths equal to the dimensionality of the n5 dataset they describe.
- `axes` : `String[]` - gives a label to each axis index
    - examples: `["x","y","z"]`, `["z","y","x"]`
- `units` : `String[]` - the physical unit of each axis
    - examples: `["nm","nm","nm"]`, `["microns","microns","microns"]`
- `scale` : `number[]` - physical scale factors (see Physical space)
- `translate` : `number[]` - physical translation (see Physical space)
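A sketch of applying a COSEM transform to a discrete pixel coordinate, using the per-axis scale and translate as described above (the transform values are hypothetical):

```python
def apply_transform(transform, pixel):
    """Map a discrete pixel coordinate to physical space, axis by axis."""
    return [s * p + t
            for s, t, p in zip(transform["scale"], transform["translate"], pixel)]

transform = {
    "axes": ["z", "y", "x"],
    "units": ["nm", "nm", "nm"],
    "scale": [4.0, 4.0, 4.0],
    "translate": [0.0, 0.0, 0.0],
}
print(apply_transform(transform, [1, 2, 3]))  # -> [4.0, 8.0, 12.0]
```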
ImageJ

The ImageJ metadata dialect mirrors the metadata stored in ImageJ's ImagePlus class. Images of up to five dimensions are supported, i.e. 3D multi-channel time series can be described with this dialect. This standard does not support multiscale datasets.
- `name` : `string` - the name or title of the image (see ImagePlus.getTitle)
- `fps` : `number` (float) - the frames per second (see Calibration.fps)
- `frameInterval` : `number` (float) - spacing of the time axis (see Calibration.frameInterval)
- `pixelWidth` : `number` (float) - pixel spacing of the x axis (see Calibration.pixelWidth)
    - corresponds to scale[0] in Physical space
- `pixelHeight` : `number` (float) - pixel spacing of the y axis (see Calibration.pixelHeight)
    - corresponds to scale[1] in Physical space
- `pixelDepth` : `number` (float) - pixel spacing of the z axis (see Calibration.pixelDepth)
    - corresponds to scale[2] in Physical space
- `xOrigin` : `number` (float) - the origin of the x axis (see Calibration.xOrigin)
    - corresponds to translate[0] in Physical space
- `yOrigin` : `number` (float) - the origin of the y axis (see Calibration.yOrigin)
    - corresponds to translate[1] in Physical space
- `zOrigin` : `number` (float) - the origin of the z axis (see Calibration.zOrigin)
    - corresponds to translate[2] in Physical space
- `numChannels` : `number` (integer) - the number of channels (see ImagePlus.getNChannels)
- `numSlices` : `number` (integer) - the number of z slices (see ImagePlus.getNSlices)
- `numFrames` : `number` (integer) - the number of frames / time points (see ImagePlus.getNFrames)
- `type` : `number` (integer) - the data type of the image (see ImagePlus.getType)
- `unit` : `string` - the physical unit (see Calibration.getUnit)
- `properties` : `object` - the image's properties (see ImagePlus.getProperties)
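Following the scale/translate correspondences listed above, the spatial calibration fields can be collected into the arrays used elsewhere on this page. A minimal sketch (the metadata dict is a hypothetical example):

```python
def imagej_to_transform(meta):
    """Gather ImageJ calibration fields into (scale, translate, unit)."""
    scale = [meta["pixelWidth"], meta["pixelHeight"], meta["pixelDepth"]]
    translate = [meta["xOrigin"], meta["yOrigin"], meta["zOrigin"]]
    return scale, translate, meta.get("unit")

meta = {"pixelWidth": 0.5, "pixelHeight": 0.5, "pixelDepth": 2.0,
        "xOrigin": 0.0, "yOrigin": 0.0, "zOrigin": 10.0, "unit": "um"}
print(imagej_to_transform(meta))
# -> ([0.5, 0.5, 2.0], [0.0, 0.0, 10.0], 'um')
```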
OME-NGFF

This information refers to version 0.3 of the ome-ngff specification.

Example:
    {
        "multiscales": [
            {
                "axes": [ "z", "y", "x" ],
                "datasets": [
                    { "path": "s0" },
                    { "path": "s1" },
                    { "path": "s2" }
                ],
                "metadata": {
                    "order": 0,
                    "preserve_range": true,
                    "scale": [ 0.5, 0.5, 0.5 ]
                },
                "name": "zyx",
                "type": "skimage.transform._warps.rescale",
                "version": "0.3"
            }
        ]
    }
In this example the dataset pixel scales are:
- `s0` : `[1.0, 1.0, 1.0]`
- `s1` : `[2.0, 2.0, 2.0]`
- `s2` : `[4.0, 4.0, 4.0]`

All of these scales are in arbitrary (pixel) units.
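The per-level scales above follow from the multiscale metadata: with a rescale factor of 0.5 per level, the pixel spacing doubles at each level. A sketch of that derivation, assuming a base spacing of 1.0 in each dimension:

```python
def level_scales(num_levels, factor, base=1.0, ndim=3):
    """Pixel spacing per level: each level grows by 1/factor per axis."""
    return [[base * (1.0 / factor) ** n] * ndim for n in range(num_levels)]

print(level_scales(3, 0.5))
# -> [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]
```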
Physical space

Call the 0th, 1st, and 2nd dimensions of the dataset i, j, and k, respectively. If axes = ["x","y","z"], then the discrete point (i,j,k) is mapped to the physical point (x,y,z) as follows:

    x = (scale[0] * i) + translate[0]
    y = (scale[1] * j) + translate[1]
    z = (scale[2] * k) + translate[2]
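The mapping above can be sketched directly (the scale and translate values here are hypothetical):

```python
def to_physical(point, scale, translate):
    """Map discrete (i, j, k) to physical (x, y, z) via scale and translate."""
    return [s * p + t for p, s, t in zip(point, scale, translate)]

print(to_physical([10, 20, 30], [0.5, 0.5, 2.0], [0.0, 0.0, 5.0]))
# -> [5.0, 10.0, 65.0]
```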
Downsampling

The N5-viewer and BigCat dialects assume downsampling is done in a particular way, such that it affects both the scale and translation of the physical space. Given a scale and downsamplingFactors, discrete coordinates are mapped to physical space, similarly to the above, with:

    x = (scale[0] * downsamplingFactors[0] * i) + (scale[0] * (downsamplingFactors[0] - 1))
    y = (scale[1] * downsamplingFactors[1] * j) + (scale[1] * (downsamplingFactors[1] - 1))
    z = (scale[2] * downsamplingFactors[2] * k) + (scale[2] * (downsamplingFactors[2] - 1))
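Implementing the formulas exactly as written above: the effective per-axis scale is scale * factor and the effective offset is scale * (factor - 1). A sketch with hypothetical values:

```python
def downsampled_to_physical(point, scale, factors):
    """Map a downsampled pixel coordinate to physical space,
    using the effective scale s*f and offset s*(f-1) per axis."""
    return [s * f * p + s * (f - 1)
            for p, s, f in zip(point, scale, factors)]

print(downsampled_to_physical([0, 1, 2], [1.0, 1.0, 1.0], [2, 2, 2]))
# -> [1.0, 3.0, 5.0]
```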