Generic NetCDF data in Python.
Provides fast data exchange between analysis packages, and full control of storage formatting.
Especially : Ncdata exchanges data between Xarray and Iris as efficiently as possible
"lossless, copy-free and lazy-preserving".
This enables the user to freely mix+match operations from both projects, getting the "best of both worlds".
import xarray
import ncdata.iris_xarray as nci
import iris.quickplot as qplt

ds = xarray.open_dataset(filepath)
ds_resample = ds.rolling(time=3).mean()
cubes = nci.cubes_from_xarray(ds_resample)
temp_cube = cubes.extract_cube("air_temperature")
qplt.contourf(temp_cube[0])
- Motivation
- Principles
- Working Usage Examples
- API documentation
- Installation
- Project Status
- References
- Developer Notes
Fast and efficient translation of data between Xarray and Iris objects.
This allows the user to mix+match features from either package in code.
For example:
import xarray
import iris
import iris.analysis

from ncdata.iris_xarray import cubes_to_xarray, cubes_from_xarray
# Apply Iris regridder to xarray data
dataset = xarray.open_dataset("file1.nc", chunks="auto")
(cube,) = cubes_from_xarray(dataset)
cube2 = cube.regrid(grid_cube, iris.analysis.PointInCell())  # grid_cube defines the target grid (not shown)
dataset2 = cubes_to_xarray(cube2)
# Apply Xarray statistic to Iris data
cubes = iris.load("file1.nc")
dataset = cubes_to_xarray(cubes)
dataset2 = dataset.groupby("time.dayofyear").argmin()
cubes2 = cubes_from_xarray(dataset2)
- data conversion is equivalent to writing to a file with one library, and reading it back with the other ..
- .. except that no actual files are written
- both real (numpy) and lazy (dask) variable data arrays are transferred directly, without copying or computing
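For instance, lazy-preservation can be seen directly in a conversion (a minimal sketch : "file1.nc" is a placeholder filename, and has_lazy_data() is the standard Iris check for uncomputed cube data) :

import xarray
from ncdata.iris_xarray import cubes_from_xarray

# opening with dask chunking gives lazy variable data ...
ds = xarray.open_dataset("file1.nc", chunks="auto")
cubes = cubes_from_xarray(ds)
# ... and conversion hands over the same dask arrays, uncopied and uncomputed
print(cubes[0].has_lazy_data())  # --> True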
Ncdata can also be used as a transfer layer between Iris or Xarray file i/o and the
exact format of data stored in files.
I.e. adjustments can be made to file data before loading it into Iris/Xarray, or Iris/Xarray saved output can be adjusted before writing to a file.
This allows the user to work around any package limitations in controlling storage aspects such as : data chunking, reserved attributes, missing-value processing, or dimension control.
For example:
import xarray as xr

from ncdata.xarray import from_xarray
from ncdata.iris import to_iris
from ncdata.netcdf4 import to_nc4, from_nc4
# Rename a dimension in xarray output
dataset = xr.open_dataset("file1.nc")
xr_ncdata = from_xarray(dataset)
dim = xr_ncdata.dimensions.pop("dim0")
dim.name = "newdim"
xr_ncdata.dimensions["newdim"] = dim
for var in xr_ncdata.variables.values():
    var.dimensions = ["newdim" if dim == "dim0" else dim for dim in var.dimensions]
to_nc4(xr_ncdata, "file_2a.nc")
# Fix chunking in Iris input
ncdata = from_nc4("file1.nc")
for var in ncdata.variables.values():
    # custom chunking() mimics the file chunks we want
    # (the "var=var" default binds each variable into its own lambda)
    var.chunking = lambda var=var: [
        100.0e6 if dim == "dim0" else -1 for dim in var.dimensions
    ]
cubes = to_iris(ncdata)
Ncdata can also be used for data extraction and modification, similar in scope to the
CDO and NCO command-line operators, but without any file operations.
However, this type of usage is as yet undeveloped : there is no inbuilt support for
data consistency checking, or for obviously useful operations such as indexing by
dimension.
This could be added in future, but it is also true that many such operations (like
indexing) may be better done using Iris/Xarray.
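For instance, a simple NCO-like edit is already possible by direct manipulation, as sketched here (assumptions : "var_x" is a placeholder variable name, and set_attrval is taken to be ncdata's attribute-setting helper) :

from ncdata.netcdf4 import from_nc4, to_nc4

ncdata = from_nc4("file1.nc")

# drop an unwanted variable, like "ncks -x -v var_x"
ncdata.variables.pop("var_x", None)

# amend an attribute on every remaining variable
for var in ncdata.variables.values():
    var.set_attrval("history", "modified with ncdata")

to_nc4(ncdata, "file1_edited.nc")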
- ncdata represents NetCDF data as Python objects
- ncdata objects can be freely manipulated, independent of any data file (see the sketch following this list)
- ncdata variables can contain either real (numpy) or lazy (Dask) arrays
- ncdata can be losslessly converted to and from actual NetCDF files
- Iris or Xarray objects can be converted to and from ncdata, in the same way that they are read from and saved to NetCDF files
- translation between Xarray and Iris is based on conversion to ncdata, which
is in turn equivalent to file i/o
- thus, Iris/Xarray translation is equivalent to saving from one package into a file, then loading the file in the other package
- ncdata exchanges variable data directly with Iris/Xarray, with no copying of real data or computing of lazy data
- ncdata exchanges lazy arrays with files using Dask 'streaming', thus allowing transfer of arrays larger than memory
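As a sketch of the first two principles, a small dataset can be assembled entirely in memory and then written out (the core classes NcData, NcDimension and NcVariable are part of the ncdata API, but the exact constructor signatures shown here are assumptions) :

import numpy as np
from ncdata import NcData, NcDimension, NcVariable
from ncdata.netcdf4 import to_nc4

# build a dataset from plain Python objects, with no file involved ...
ncdata = NcData()
ncdata.dimensions["x"] = NcDimension("x", 3)
ncdata.variables["vx"] = NcVariable("vx", dimensions=["x"], data=np.array([1.0, 2.0, 3.0]))

# ... which can then be written losslessly as an actual NetCDF file
to_nc4(ncdata, "tiny.nc")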
- mostly TBD
- proof-of-concept script for netCDF4 file i/o
- proof-of-concept script for iris-xarray conversions
- see the ReadTheDocs build
Install from conda-forge with conda
conda install -c conda-forge ncdata
Or from PyPI with pip
pip install ncdata
We intend to follow PEP 440 or (older) SemVer versioning principles.
The current release version is "v0.1".
This is a first complete implementation, with functional operation of all public APIs.
The code is however still experimental, and APIs are not stable (hence no major version yet).
- C.I. tests GitHub PRs and merges, against latest releases of Iris and Xarray
- compatible with iris >= v3.7.0
- see : support added in v3.7.0
Unsupported features : not planned
- user-defined datatypes are not supported
- this includes compound and variable-length types
Unsupported features : planned for future release
- groups (not yet fully supported ?)
- file output chunking control
As-of v0.1
- in conversion from iris cubes with from_iris, use of an unlimited_dims key currently causes an exception
- Iris issue : SciTools/iris#4994
- planning presentation : https://github.com/SciTools/iris/files/10499677/Xarray-Iris.bridge.proposal.--.NcData.pdf
- in-Iris code workings : pp-mo/iris#75
- For a full docs build, a simple make html will do for now.
  - The docs/Makefile wipes the API docs and invokes sphinx-apidoc for a full rebuild.
  - Results are then available at docs/_build/html/index.html.
- The above is just for local testing, if required : we have automatic builds for releases and PRs via ReadTheDocs.
- Cut a release on GitHub : this triggers a new docs version on ReadTheDocs
- Build the distribution
  - if needed, get build
  - run python -m build
- Push to PyPI
  - if needed, get twine
  - run python -m twine upload --repository testpypi dist/*
    - this uploads to TestPyPI
  - if that checks OK, remove "--repository testpypi" and repeat : this uploads to the "real" PyPI
  - check that pip install ncdata can now find the new version
- Update conda to source the new version from PyPI
  - create a PR on the ncdata feedstock
  - update : version number and SHA
    - Note : the PyPI reference will normally look after itself
  - also make any required changes to dependencies -- normally no change required
  - get the PR merged ; wait a few hours ; check that the new version appears in conda search ncdata