
Rewrite OpenEXR python bindings using pybind11 and numpy #1756

Open · wants to merge 5 commits into main
Conversation

cary-ilm (Member)

This introduces an entirely new python API for reading and writing OpenEXR files that supports all file features (or will soon): scanline, tiled, deep, multi-part, etc. It uses numpy arrays for pixel data, and it supports both separate arrays for each channel and interleaving of RGB data into a single composite channel. It leaves the existing binding API in place for backwards-compatibility; there's no overlap between the two APIs.

See src/wrappers/python/README.md for examples of the new API.

The API is simple: the File object holds a py::list of Part objects, each of which has a py::dict for the header and a py::dict for the channels, and each channel holds a numpy array for the pixel data. There's intentionally no support for selective scanline/time reading; to keep the API simple, reading a file reads the entire channel data for all parts. There is, however, an option to read just the headers and skip the pixel data entirely.
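A minimal sketch of that structure, emulated here with plain Python containers and numpy (the real objects are pybind11 classes, so the names and layout below are illustrative, not the binding's literal types):

```python
import numpy as np

# File -> list of Parts; Part -> header dict + channels dict;
# each channel's pixel data is a numpy array.
height, width = 480, 640
part = {
    "header": {"compression": "zip", "type": "scanlineimage"},
    "channels": {
        # interleaved RGB in one composite array...
        "RGB": np.zeros((height, width, 3), dtype=np.float16),
        # ...or separate per-channel arrays:
        "Z": np.zeros((height, width), dtype=np.float32),
    },
}
parts = [part]  # the File object holds a list of Part objects

rgb = parts[0]["channels"]["RGB"]
print(rgb.shape, rgb.dtype)  # (480, 640, 3) float16
```

Reading a file would populate this entire structure in one step; the headers-only option skips filling in the pixel arrays.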

A few things don't work yet but should be fixed before release:

  • Reading and writing of deep data samples isn't finished.
  • ID manifest attributes aren't supported yet.
  • For mip-mapped images, it currently only reads the top level.
  • Channel subsampling is not working.
  • I'm not 100% sure the memory ownership is handled properly.

This also does not (yet) properly integrate the real Imath types from the proper Imath bindings. For now it leaves the internal "Imath.py" module in place, but defines its own internal set of Imath classes. This needs to be resolved once the Imath bindings are distributed via pypi.org.

The test suite downloads images from openexr-images and runs them through a battery of reading/writing tests. Currently, the download is enabled via the OPENEXR_TEST_IMAGE_REPO environment variable that is off by default but is on in the python wheel CI workflow.

This also adds operator== methods to KeyCode and PreviewImage, required in order to implement operator== for the Part and File objects.

- Add __enter__/__exit__ so "with File(name) as f" works
- Allow channel dict to reference pixel array directly
- Fix subsampling

Signed-off-by: Cary Phillips <[email protected]>
@meshula (Contributor) commented May 22, 2024

For the purposes of code review, could you elaborate a little on this:

It leaves the existing binding API in place for backwards-compatibility; there's no overlap between the two APIs.

Specifically, I'm wondering if the existing API implementation is untouched, or if the existing API is re-implemented using pybind11. There's a lot here to read, and before starting I'm wondering whether we also need to verify that the existing API is fulfilled in the new implementation.

@cary-ilm (Member, Author)
Thanks for looking! The existing C-python implementation remains untouched. I renamed the file to PyOpenEXR_old.cpp and didn't change anything other than importing the symbols into the new pybind11-defined module. The old API implements the classes InputFile and OutputFile, which continue to operate as before, while the new implementation has a single new class, File, which does both reading and writing.

The test for the old API, which illustrates the usage, is in src/wrappers/python/tests/test_old.py

This seemed a reasonable approach since I wanted to change so much about the API, primarily the switch to numpy for pixel data, but not entirely break everything.

The other .py files in src/wrappers/python/tests use the new API. Definitely start there, since getting your feedback on the usage is probably more important than on the actual pybind11 implementation.

The API is simple enough that the src/wrappers/python/README.md is a pretty good summary. It doesn't illustrate everything but it covers the basics.

f.write("readme_modified.exr")

with OpenEXR.File("readme_modified.exr") as o:
    assert o.header()["displayWindow"] == OpenEXR.Box2i((3,4),(5,6))
If we want the outward-facing API to be really pythonic, we probably don't need the Imath types?

A more pythonic way would simply be o.header()["displayWindow"] == ((3, 4), (5, 6))

I looked at src/wrappers/python/Imath.py, and there doesn't seem to be a substantial difference from python tuples, except for the print (repr) method. Imath.Box doesn't really type it all the way down: its min and max look like python tuples (although the author's intention is to use Imath.point).

If we really want to enforce the dimensionality of the parameter but remain pythonic (or numpy-pythonic), maybe set this to an ndarray and assert its shape is (2, 2). I tend to think a few classes (esp. point and box) in Imath.py reinvent the wheel in a C style and don't really read like modern numpy/scipy code. These data could all just be ndarrays; that's actually less confusing than letting the user figure out whether box.min should be a plain tuple or an Imath.point. The important things are this parameter's type and shape, which an ndarray enforces for free (upon assignment, arithmetic, and comparison).

what do people think?
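A minimal sketch of the ndarray alternative floated above, assuming a hypothetical convention where row 0 is the min corner and row 1 is the max corner:

```python
import numpy as np

# displayWindow as a plain (2, 2) int array instead of an Imath Box2i
# (hypothetical layout: row 0 = min corner, row 1 = max corner).
display_window = np.array([[3, 4], [5, 6]])
assert display_window.shape == (2, 2)

# Type/shape checks, comparison, and arithmetic come for free with ndarrays:
assert (display_window == np.array([[3, 4], [5, 6]])).all()
size = display_window[1] - display_window[0] + 1  # box dimensions
print(size)  # [3 3]
```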

I'm also inclined to believe that maybe we should accept and promote the built-in Python types (tuples, arrays, dictionaries) for all the usual "short" data types for which they seem a decent fit, and use Numpy arrays for blocks of image pixel data, and call it a day. The Imath types are not Pythonic, but rather look like somebody simply wrapped a C++ library. They're not friendly to people who must pass data back and forth between OpenEXR and almost any other Python library (Alembic is the only Python interface I know of that expects people to use classes from Imath's Python bindings).

Also:
- Use single-element array for DoubleAttribute
- Use fractions.Fraction for Rational
- Use 8-element tuple for chromaticities
- Add __enter__/__exit__ so "with" works with File object
- remove operator== entirely, it's way too hard to implement something that's
  actually useful for testing purposes. The tests do their own validations.

Signed-off-by: Cary Phillips <[email protected]>

pixel comparison

Signed-off-by: Cary Phillips <[email protected]>
@cary-ilm (Member, Author) commented Jun 2, 2024

I removed all the Imath classes and replaced them with numpy arrays and tuples. Vec 2/3 Int/Float/Double attributes appear in the header dict as 2- or 3-element numpy arrays of type int32, float32, or float64, and M33/44 Float/Double as 3x3 or 4x4 numpy arrays. Box2s are tuples of 2-element arrays.

RationalAttributes now appear as Fraction objects from the fractions module. DoubleAttributes appear as single-element numpy arrays of type float64, so they get preserved when written. Chromaticities are 8-element tuples.

I left the TimeCode and KeyCode classes in place, since they have convenient member fields that would be unintuitive in a tuple.

The File object also now implements ``__enter__``/``__exit__`` so that ``with File(...) as f`` works.

The README.md examples have been updated accordingly, so they're more pythonic now.
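The mapping described above can be illustrated with plain numpy/stdlib values. This is a sketch of what header entries would look like under that scheme, not the binding's literal output (the attribute names are standard OpenEXR header attributes, but the values here are made up):

```python
from fractions import Fraction
import numpy as np

# Sketch of the attribute representations described above:
header = {
    "framesPerSecond": Fraction(24000, 1001),               # Rational -> Fraction
    "whiteLuminance": np.array([683.0], dtype=np.float64),  # Double -> 1-elem array
    "chromaticities": (0.64, 0.33, 0.30, 0.60,              # 8-element tuple
                       0.15, 0.06, 0.3127, 0.3290),
    "worldToCamera": np.eye(4, dtype=np.float32),           # M44f -> 4x4 array
    "dataWindow": (np.array([0, 0]),                        # Box2i -> tuple of
                   np.array([639, 479])),                   # 2-element arrays
}

assert float(header["framesPerSecond"]) == 24000 / 1001
print(len(header["chromaticities"]))  # 8
```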

Also works properly for separate channels and when coalescing into
RGB/RGBA arrays.

Signed-off-by: Cary Phillips <[email protected]>
@lji-ilm commented Jun 18, 2024

I'm following up on our discussion from the last meeting about how best to represent deep data in numpy.

I also want to pay a tribute to the A-buffer here, since I think EXR deep is probably addressing the same problem :) : https://dl.acm.org/doi/pdf/10.1145/965139.807360

Now onto the real topic. array:object[array:dtype:float] seems to be the way to go. By keeping both levels as numpy data structures, it best supports numpy's slicing and other array-based operations. The following is an excerpt from the official h5py library's vlen data type implementation for HDF5 (https://docs.h5py.org/en/latest/special.html#arbitrary-vlen-data):

array([array([1, 2, 3], dtype=int32), array([1, 2, 3, 4, 5], dtype=int32)], dtype=object)

A few stackoverflow posts have discussed this question, and the above answer is generally preferred because it doesn't involve native python lists, which don't support numpy operations. The following is just one example:

https://stackoverflow.com/questions/3386259/how-to-make-a-multidimension-numpy-array-with-a-varying-row-size

A special mention goes to the "Awkward Array" library, which specializes in this class of "varying data per sample" problems: https://awkward-array.org/doc/main/getting-started/index.html . I don't think we need to add a dependency, but the first two questions in the FAQ do a good job of reviewing the state of the art of the implementations.

I reviewed a few scientific imaging formats, including FITS (https://fits.gsfc.nasa.gov/fits_primer.html), but their data containers per pixel (or per sample) are homogeneous. HDF5's vlen is the only mechanism I spotted in a major format that deals with deep-like data. We might be quite on the frontier here.
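For concreteness, the h5py-style layout can be sketched directly in numpy: an object array whose elements are per-pixel sample arrays of varying length (values here are arbitrary):

```python
import numpy as np

# Ragged "deep" samples: dtype=object at the outer level, a homogeneous
# float array of per-pixel samples at the inner level.
deep = np.empty(3, dtype=object)
deep[0] = np.array([0.1, 0.5], dtype=np.float32)       # 2 samples
deep[1] = np.array([], dtype=np.float32)               # pixel with no samples
deep[2] = np.array([0.2, 0.4, 0.9], dtype=np.float32)  # 3 samples

counts = [a.size for a in deep]
print(counts)  # [2, 0, 3]
```

Outer-level slicing (e.g. ``deep[1:]``) works as usual, while each inner array still supports ordinary numpy arithmetic.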

@cary-ilm (Member, Author)
Thanks for that consideration, @lji-ilm. I'm satisfied with the dtype=object approach; the implementation is straightforward, and the structure is, I think, what the user would naturally expect.

And for what it's worth, ChatGPT is what originally offered this suggestion, after nobody answered my query on the pybind11 discussion page.
