
exrenvmap: observed imprecise conversion of cubemap to lat/lon #1675

kfjahnke opened this issue Mar 15, 2024 · 11 comments

@kfjahnke

Using exrenvmap with cubemaps as input, I noticed differences from what my own calculations for the resulting lat/lon environment maps produced, so I went to investigate. To see whether exrenvmap might be responsible for these differences, I created a synthetic cubemap and had it processed by exrenvmap. The resulting lat/lon environment suggests to me that there might be a flaw in the conversion process. Here's my reasoning:

  • I am feeding a cubemap of six black 1000x1000 squares, each with one-pixel-wide edges in a different per-cube-face colour
  • the invocation I use is exrenvmap -ci -l -w 4000 -f 0 1 -v cubemap.exr latlon.exr
  • this is using the OpenEXR command-line tools install on Debian testing labeled 3.1.5-5.1+b2; the same happens with a fresh build from master
  • the resulting lat/lon environment map shows inconsistencies in the width of the visible vertical cube edges

Coming from a perfectly symmetrical cubemap, I would expect the rendition of all cube edges to look alike - except for showing different colours - and each edge to show as two one-pixel-wide vertical lines next to each other. Instead I observe that some come out one pixel wide and some two pixels wide. I tried raising the resolution (-w 8000) and still got different widths in the rendition of the edges. Can you confirm that the conversion should indeed produce renditions of the vertical edges which are geometrically identical, and that they should look as I expect?

I have a suspicion about what may go wrong. The output is periodic, and should be treated as if it maps the 360-degree horizontal field of view to a range from a point half a pixel to the left of the first pixel's center to a point half a pixel to the right of the last pixel's center. The output I observe from exrenvmap looks like it might have been calculated disregarding this small offset. Would you care to have a look at my findings, to see if you can reproduce my results? I can upload the cubemap I've used and, if you like, also the result I got from exrenvmap and a lat/lon environment map showing what I think the output should look like.
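To make the convention I mean explicit, here is a minimal sketch (my own illustration, not exrenvmap's code; the function name is hypothetical):

```cpp
// Hypothetical illustration of the pixel-center convention described above:
// with an output width of w pixels covering the full 360 degrees, pixel x
// should sample the longitude at its center, half a step in from the edge.
#include <cmath>

double lon_for_pixel(int x, int w)
{
    const double step = 2.0 * M_PI / w;
    return (x + 0.5) * step;   // first sample at step/2, last at 2*pi - step/2
}
```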

@meshula
Contributor

meshula commented Mar 15, 2024

If you could upload your inputs to exrenvmap, the result, and your expected result, that would be helpful. Because the solid angle per texel of a cube map varies considerably towards the edges, I can't really guess whether your result is as expected or not.

@kfjahnke
Author

Okay. I scaled it down to 100x100 squares to save space. My upload environment.zip contains three images:

  • cubemap.exr contains the initial cubemap
  • latlon.exr is the output from exrenvmap
  • expected.exr is what I expected as output

You can see how two of the vertical edges in the lat/lon rendition made by exrenvmap come out one pixel wide while two others come out two pixels wide. I would expect them all to be two pixels wide, with one colour to the left and another to the right.

Comparing with my expected output, you can also see that the horizontal edges are rendered thinner in latlon.exr. This gives me another hint at what might cause the differences. I use 'reflect' boundary conditions, which look at pixels as small squares and put the point of reflection at the pixel's edge. The thinner rendition in latlon.exr looks as if the cube faces might have been 'looked at' with mirror boundary conditions, mirroring on the pixel center. That is common, but it cuts off half of the marginal pixels (so to say), whereas reflect boundary conditions accommodate all pixels equally. With these two different approaches to boundary conditions for the square cube faces, it's necessary to know beforehand which mode is used, because the 90-degree fov has to be mapped accordingly. I think that mapping the 90 degrees to (-0.5, w-0.5) is more 'natural' than mapping it to (0, w-1). Can you say which is used in exrenvmap?
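To make the two conventions concrete, here is a small sketch of the two mappings (my own illustration; the function names are hypothetical and not taken from any library):

```cpp
// Two ways of placing a 90-degree field of view onto a cube face of width w,
// given an in-face coordinate t in (-1, 1) relative to the face center.

// 'reflect' convention: the 90 degrees span the outer edges of the marginal
// pixels, so t maps to the interval (-0.5, w - 0.5)
double to_image_reflect(double t, int w)
{
    return (t + 1.0) * 0.5 * w - 0.5;
}

// 'mirror' convention: the 90 degrees span the centers of the marginal
// pixels, so t maps to the interval (0, w - 1)
double to_image_mirror(double t, int w)
{
    return (t + 1.0) * 0.5 * (w - 1);
}
```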

@meshula
Contributor

meshula commented Mar 17, 2024

Florian and I wrote this nearly twenty years ago; trying to remember what we did and cross-referencing it to the code :] Yes, we are looking at pixel centers. When we resample, we take samples in a window, and the window does not appear to be compensated for solid angle when resampling a cube. So my intuition is that the sampling code needs correction to bias or unbias samples by projecting onto the cube. I don't think there's an issue with reflection, because the sampling simply sends out rays spherically, then fetches them from the appropriate faces.
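For reference, the solid-angle variation across a cube face can be written out explicitly; this is the standard identity, not a quote from the exrenvmap code:

```cpp
// Relative solid angle subtended by a point at in-face coordinates (u, v)
// in (-1, 1) on a unit cube face: dOmega ~ du dv / r^3 with
// r = sqrt(u*u + v*v + 1), so a corner texel covers only about 19% of the
// solid angle of a face-center texel.
#include <cmath>

double relative_solid_angle(double u, double v)
{
    double r2 = u * u + v * v + 1.0;
    return 1.0 / (r2 * std::sqrt(r2));
}
```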

@kfjahnke
Author

Florian & I wrote this nearly twenty years ago

Looks like a skeleton in the cupboard coming back to haunt you ;-)

So this looks like you agree that there is an issue. Maybe my description of the issue wasn't as clear as it could be; I've thought about it some, and now I'd express it like this: the three squares to the front and sides all appear in the output with visible vertical edges, but the back square is missing its vertical edges.

Geometrically, the output (like the flaw) is symmetric around the vertical, which indicates that there is a problem with the horizontal sampling of the sphere. To sample the sphere for the purpose at hand, you'd iterate over lat/lon coordinates. With target image width w, your step width d is 2pi / w; the first sample is at d/2 and the last at 2pi - d/2. Vertically, you start at d/2 and go to pi - d/2 (measuring from the pole at zero degrees - subtract pi/2 if you're working from the equator).
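A minimal sketch of that sampling scheme, assuming a 2:1 lat/lon target with height h = w/2 (my own illustration):

```cpp
// Sample angles at the pixel centers of a w x h lat/lon image (h = w / 2),
// with step width d = 2 * pi / w as described above: horizontally the samples
// run from d/2 to 2*pi - d/2, vertically from d/2 to pi - d/2 (from the pole).
#include <cmath>

void sample_angles(int x, int y, int w, double& lon, double& lat)
{
    const double d = 2.0 * M_PI / w;   // same angular step in both directions
    lon = (x + 0.5) * d;               // in (0, 2*pi)
    lat = (y + 0.5) * d;               // in (0, pi), measured from the pole
}
```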

Given the sampling of the sphere, the next step is to convert to 3D rays, which is textbook stuff. Next you figure out the axis with the numerically largest coordinate value, and this plus the sign of that coordinate value yields the cube face. You divide the 3D ray by this maximal coordinate value, which gives you 2D x/y coordinates on the cube face (the third component becoming 1.0), relative to the cube face's center. Your cube-face-relative coordinates are now in (-1, 1). Scale to cube-face image coordinates and interpolate at that position to yield the pixel - the precise scaling depends on how you interpret the cube face, but with cube-face width c, I'd recommend mapping the interval to (0, c). If you follow that logic, there is no way you can miss out on a one-pixel-wide part of the cube face, because you 'land' right in the middle of it.
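A minimal sketch of that logic, under my own face numbering and with the per-face orientation of the in-plane axes glossed over (this is an illustration of the steps above, not the exrenvmap code):

```cpp
// From spherical angles (lat measured from the pole) to a cube face index
// and in-face image coordinates in (0, c), following the steps above.
#include <cmath>

void latlon_to_cubeface(double lon, double lat, int c,
                        int& face, double& px, double& py)
{
    // 1. textbook conversion to a 3D ray
    double x = std::sin(lat) * std::cos(lon);
    double y = std::sin(lat) * std::sin(lon);
    double z = std::cos(lat);

    // 2. the axis with the largest absolute value, plus its sign, picks the face
    double ax = std::abs(x), ay = std::abs(y), az = std::abs(z);
    double m, u, v;                    // dominant component and the other two
    if (ax >= ay && ax >= az)      { m = x; u = y; v = z; face = x > 0 ? 0 : 1; }
    else if (ay >= ax && ay >= az) { m = y; u = x; v = z; face = y > 0 ? 2 : 3; }
    else                           { m = z; u = x; v = y; face = z > 0 ? 4 : 5; }

    // 3. divide by the dominant component's magnitude: (u, v) land in (-1, 1);
    //    which in-face axis points 'up' on each face is not resolved here
    u /= std::abs(m);
    v /= std::abs(m);

    // 4. scale (-1, 1) to image coordinates (0, c), so samples always fall
    //    inside the face, never skipping its marginal pixels
    px = (u + 1.0) * 0.5 * c;
    py = (v + 1.0) * 0.5 * c;
}
```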

I do it like this in lux, but currently I work from six separate cube face images. I'm switching to OIIO, and in the process I discovered that OpenEXR has dedicated environment map support, so I thought I might support the 1:6 stripe format as well. With several different ways to deal with environment maps (panotools, lux, OIIO and OpenEXR), I thought it would be interesting to compare the results with respect to sharpness - the approaches differ in what interpolators and filters they apply. But of course the results must agree in geometry before you can look at that aspect, and when I compared the output generated by exrenvmap, I noticed that the geometry was off. Hence this issue.

@meshula
Contributor

meshula commented Mar 17, 2024

For the record, in OpenEXR the vertical strip format originated from a very old DirectX convention and a need to bring HDR imagery into real time. To this day, I think everyone still uses lat/lon, despite lat/lon using half or more of the texels for the least interesting and most distorted part of the environment map!

OpenEXR cube maps are still a good place to store HDR environment data and IBL convolutions, though I feel that application didn't catch on. exrenvmap is very old and needs a rewrite with better math. I would consider the existing code a reasonable reference for how to exercise the API to construct such an image, but the projection math is not exemplary, and the structure of the code is very much how we did C++ twenty years ago - it reflects neither modern practices nor high-performance practices.

@kfjahnke
Author

I think there is one fundamental flaw in the cube map format as it is used in OpenEXR. The individual cube faces are simply cut off at precisely ninety degrees, whereas proper interpolation near the edges would require a certain amount of support. This support can be built up artificially by generating it from adjoining cube faces - and, on the other hand, the artifacts arising from simply reflecting the content for interpolation purposes are not very pronounced - but all of this is a bother.

In lux, I use images for the six cube faces which can have more than ninety degrees field of view. Even with half a degree extra, you get plenty of 'headroom', even for interpolators with large support, and the flaws near the edges resulting from reflecting or mirroring content are no longer an issue. If you pick the 'frame' around the actual ninety-degree square large enough, you can even use filters with very large support - I work with b-splines, which theoretically have infinite support in the prefilter, but you can usually neglect contributions more than a few samples away because their effect vanishes to next to nothing. Given a lat/lon - or, as we say in panorama photography, a 'full spherical' - generating cube faces with slightly more fov is simple enough, and the resulting views are 'clean' around the edges. The only - slight - problem with the lux code is that it uses fixed mip levels rather than the anisotropic filter OIIO uses to cater for pixels in different positions in the cube faces. lux does it for speed, so it can churn out 60fps on a garden-variety four-core, while the OIIO code is quite a mouthful and takes much longer to execute - but it should be ideal for a conversion program with high-fidelity standards.

So we do have this legacy format, and it should be supported. You propose rewriting exrenvmap, which I think is a good idea. You may be interested in work I am currently doing along these lines. I have recently covered the generation of cubemaps from lat/lon environments with what you'd call 'better math'. Here is what I did:

  • To speed up the process, I am using multithreaded SIMD code provided by my own library, zimt
  • The texel data are generated using OIIO's texture system code
  • The code is available (MIT-licensed) from the examples section of the zimt repo

I am currently mulling over the reverse transformation - from a cubemap to a lat/lon environment. AFAICT OIIO does not support cubemaps as texture sources in its texture system code, so I have to do this 'manually', and it will take me a while to figure out how best to deal with the missing support (I'll probably generate it, then use it to generate a better version, and do that a few times - call it 'polishing' - just an idea). I'd also use OIIO here and just do a planar texture pickup, for which OIIO also provides code. Calculating the derivatives to properly steer the anisotropic antialiasing filter is a bit of extra work, but from what I see with using the OIIO code for the lat/lon environment lookup, the results are very nice indeed.

Using two libraries - zimt for the 'stripmining', multithreading and SIMDization, and OIIO for texel generation and I/O - the amount of code needed for the process is surprisingly little, and it relieves you of reinventing the wheel for both of these processes. Have a look if you like and tell me what you think. All my code for this program is MIT-licensed, and OIIO is 'from your own stable'.

@cary-ilm
Member

We'd happily accept a contribution. Realistically, none of the core OpenEXR maintainers are likely to look into this any time soon. While your investigation and analysis are fresh, if you'd like to submit a PR with improvements to exrenvmap, we'd very much appreciate it.

@kfjahnke
Author

I'd prefer not to touch your code, but I'll keep you updated on what I come up with.

@kfjahnke
Author

kfjahnke commented Apr 9, 2024

Slow-ish progress, but now I have two programs to show:

https://github.com/kfjahnke/zimt/blob/main/examples/cubemap.cc
https://github.com/kfjahnke/zimt/blob/main/examples/latlon.cc

The first one converts a lat/lon environment map into a cubemap, and the second one does the inverse conversion. The 'better mathematics' consist of a multi-threaded implementation using SIMD and the use of OIIO's environment and texture lookup code. The problems with the cube face images being cut off at precisely ninety degrees fov are avoided by regenerating some support by interpolating from adjoining cube faces, so the internal representation can be filtered and even mip-mapped correctly. AFAICT, the results are geometrically correct and look appealing. Cubemap lookup is fast: I've thought out an access mode which avoids having to look at the cube faces as separate entities and can instead issue lookups to a single texture. Have a look! Comments welcome.

@kfjahnke
Author

Slow-ish progress, but now I have two programs to show

I have now put together both conversions in a single program, which now lives in a separate repository of its own. I called the program envutil. As it stands now, it can do the conversions using OIIO's quite elaborate filtering, fast bilinear interpolation, and an oversampled variant of bilinear pickup, which is quite fast and still has proper-looking output. The program will use highway, Vc or std::simd if present. It might be interesting to compare its output with exrenvmap's, to see if it has similar scope and does what's needed - now with modern multithreaded SIMD code, which comes from my library zimt, included in source form. The program builds with cmake and has no external dependencies apart from OpenImageIO and, optionally, the SIMD back-end libraries, and the code is MIT-licensed.

@meshula
Contributor

meshula commented Apr 21, 2024

Good thought to split it out; I'll give it a whirl as a replacement for what I use (which, ironically, isn't exrenvmap). I'd say that your program has a different scope than exrenvmap in the sense that exrenvmap doesn't offer control over filtering, and it conflates downsampling with luminance convolution, using a kernel that is no longer popular. I don't see that envutil supports convolutions to create an irradiance cascade for IBL, which exrenvmap was an early (premature) attempt at, so that might be another scope difference. Today, I think of exrenvmap as reference documentation for how to use OpenEXR's cube map interfaces, not as a canonical production tool. If you are hinting at whether envutil could replace exrenvmap, that's more a question for OpenImageIO, although it would be nice to point to envutil from OpenEXR's documentation as a tool supporting EXR environment maps.
