
question regarding color value range enforcement #149

Open

Sam-Izdat opened this issue Dec 17, 2023 · 51 comments

@Sam-Izdat

Hi. Quite excited about the work you're doing.

Reading over the white paper, I noticed that the acceptable range for color values is set to be [0,1]. Does this imply that nonconforming color spaces should be excluded?

For example, ACES 2065-1, CIE LAB/LUV and OKLAB can have negative color values. Now, perceptual color spaces might be a questionable choice here, but it seems conceivable that someone would want to define their materials with AP0 primaries.

@portsmouth
Contributor

This is a non-trivial question.

I would like to think of the color values in the model as defining the properties of the physical material. If the colors were specified in a fully spectral manner, I think they would correspond essentially to "spectral albedos", i.e. physically what fraction of each particular wavelength is absorbed or transmitted on passing through some layer, say. (Except for emission_color which would be the spectrum of emitted light).

Those albedos should physically all be in the range $[0,1]$ (i.e. it would be meaningless to absorb a negative amount of energy, or more than 100%).

However since we use RGB values, the spectral interpretation is ambiguous. The albedo per color channel represents some integral over (unknown) underlying physical albedos, with weights depending on the color space. I would expect that normally the channels should be in $[0,1]$, in standard color spaces and under some reasonable assumptions about the possible underlying spectral albedo curve. But I'm not sure: in the case of the "non-conforming" color spaces you mentioned, perhaps it could make sense to allow the RGB albedo components to be negative, or to exceed 1? It would be strange to work with negative color channels in a renderer, but perhaps it could be done consistently with care (obviously physical radiances must be positive, but RGB renderers don't work directly with physical radiance).

Generalizing this properly might also require re-expressing some of the math in the model (not to mention, in other similar models). For example, in the standard Schlick model for metallic Fresnel:

$$\mathbf{F}(\mu) = \mathbf{F_0} + (\mathbf{1}-\mathbf{F_0})\,(1-\mu)^5 \qquad (\mu = \cos\theta_i)$$

it is assumed that $\mathbf{F_0}=(1, 1, 1)$ corresponds to perfect Fresnel reflectivity. But in these exotic color spaces, is that the case?
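To make the assumption concrete, here is a minimal per-channel sketch (illustrative only; fresnel_schlick is a hypothetical helper, not spec code). With $\mathbf{F_0}$ components in $[0,1]$ the result stays in $[0,1]$ at every angle, but that guarantee breaks down as soon as a channel is negative or exceeds 1:

```python
import numpy as np

def fresnel_schlick(F0: np.ndarray, mu: float) -> np.ndarray:
    """Schlick Fresnel per RGB channel: returns F0 at normal incidence
    (mu = cos(theta) = 1), rising to (1, 1, 1) at grazing (mu = 0)."""
    return F0 + (1.0 - F0) * (1.0 - mu) ** 5

print(fresnel_schlick(np.array([0.9, 0.6, 0.3]), 0.5))   # stays within [0, 1]
print(fresnel_schlick(np.array([-0.1, 0.5, 1.2]), 0.5))  # R < 0 and B > 1 away from grazing
```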

@KelSolaar

KelSolaar commented Jan 26, 2024

Hello,

Thanks to @portsmouth for pointing me to this thread!

This is a great question and I will try to address it by restating what I think the prime objective of physically-based rendering is: Simulating the imaging of the electromagnetic spectrum radiation incident to an observer in a way that the simulation would be indistinguishable from reality to a similar observer.

I took great care not to use the words human and colour (or even light) anywhere in the previous sentence because, often, and especially in the VFX industry, the observer is a motion picture or stills camera. A camera is typically not colorimetric, i.e. its sensitivities are not a linear combination of the human cone responses. Put another way, it does not capture incident light the same way we do, and we need to process the images it captures to make them colorimetric. We might also be required to simulate an infrared camera, which images a portion of the electromagnetic spectrum that is invisible to us.

You might be wondering where I am going with this. It is simple: we should strive for the modelling of the scene in the physically-based rendering process to be agnostic of the observer. To be very specific, defining the physical properties of the objects taking part in the simulation should be independent of the observer. The act of "observing" an object does not change its characteristics in the real world, so why should there be such a correlation in our simulations?

With that in mind, describing colour information in a renderer using a perceptually uniform space correlates the simulation to the observer, thus, changing the space will change the simulation. This is true with RGB colourspaces and especially noticeable with indirect illumination. We have shown that different spaces produce different images. See https://www.colour-science.org/anders-langlands/, https://www.colour-science.org/posts/about-rendering-engines-colourspace-agnosticism/ and https://computergraphics.stackexchange.com/questions/8152/for-shader-math-why-should-linear-rgb-keep-the-gamut-of-srgb/8163#8163 for more information.
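For a minimal sketch of that effect (using the colour-science Python library; the specific colour values are arbitrary): per-channel multiplication, which a renderer performs at every bounce, does not commute with a change of RGB working space.

```python
import numpy as np
import colour  # https://github.com/colour-science/colour

srgb = colour.RGB_COLOURSPACES["sRGB"]     # used here with linear values
acescg = colour.RGB_COLOURSPACES["ACEScg"]

light = np.array([0.9, 0.5, 0.2])   # arbitrary illumination colour, linear sRGB
albedo = np.array([0.2, 0.6, 0.9])  # arbitrary surface colour, linear sRGB

# One "bounce" computed in sRGB, then expressed in ACEScg:
bounce_in_srgb = colour.RGB_to_RGB(light * albedo, srgb, acescg)

# The same bounce computed natively in ACEScg:
bounce_in_acescg = (
    colour.RGB_to_RGB(light, srgb, acescg)
    * colour.RGB_to_RGB(albedo, srgb, acescg)
)

print(bounce_in_srgb)    # the two results differ: the simulation depends
print(bounce_in_acescg)  # on the choice of working space.
```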

The act of using photometric quantities is problematic, especially when describing surface characteristics: They are weighted by our very own human sensitivities.

The ideal renderer would only use radiometric quantities as input, e.g. spectral reflectance, transmittance or absorptance, spectral irradiance, etc. Unfortunately, we have created a world of problems by adopting RGB as the space for most renderers to operate in. There are good reasons for that, e.g. "apparent" simplicity and cost effectiveness. Whilst it is correct that an RGB renderer is typically faster and has readily available material to feed on, the consequential complexity in modelling the real world is problematic.

When we take a photograph and use it as a texture in a renderer, the illuminant is "baked" into it: in simplified terms, it is the result of the camera sensitivities integrating the product of the environment irradiance with the reflectance, transmittance, and absorptance of the objects in the scene. The mechanics are similar for our own visual system.

Let me continue a bit, as I'm also hoping to see more definitions and terminology clarification in the whitepaper: what is albedo? The CIE does not have a definition for it and I could not find any standard one. A tentative one might be as follows:

Albedo refers to the measure of the reflectivity of a surface or object. It is the ratio of the reflected light from an object to the incident light upon it. Albedo is typically expressed as a percentage or a decimal value between 0 and 1, where 0 represents complete absorption of light (no reflection) and 1 represents complete reflection (no absorption). A high albedo means that an object reflects a large portion of the incident light, whereas a low albedo indicates that most of the light is absorbed.

Why are we then surfacing our virtual objects with photographs that have illumination baked in? What illuminant should we be using when creating new textures in Mari or Substance Painter: D60, D65? Why not use the equal-energy illuminant E? How should a blackbody, e.g. fire, be converted to RGB for rendering, and what does this mean for physical camera white balancing?

It should hopefully be clear why I wrote "apparent" simplicity: we introduce observer correlation and coupling everywhere in the virtual scene authoring process when working with RGB values. Everything becomes extremely complicated when you start thinking deeply about it. Fortunately, all those problems disappear under the elegance of spectral rendering :)

To finally reply to your question, and to echo what Jamie said, I would limit albedo to [0, 1] because it is by "definition" a measure of reflectivity.

Cheers,

Thomas

@Sam-Izdat
Author

Thank you both for the detailed responses.

So, if I'm understanding correctly, the design philosophy here is to limit parameters to "computation-ready" values, immediately suitable for rendering, without any additional pre-processing steps.

I'm working on an interchange file format, which will probably not see much use (if any) outside its associated projects, but I wanted to make sure that I'm reading the white paper correctly. If this is the goal, I suppose I might make a version for "OpenPBR-ish" parameters, where color spaces can be arbitrary (and converted for the renderer as needed), and then a strictly-conforming one, where all color values must be in the [0, 1] range, in a scene-linear color space.

@KelSolaar

the design philosophy here is to limit parameters to "computation-ready" values, immediately suitable for rendering, without any additional pre-processing steps.

Yes, this is ideal because the renderer can focus only on rendering and does not have to deal with pre-transformations of all sorts to bring the image resources into the RGB working space.

It is worth noting that your RGB working space is primarily driven by the RGB colourspace the image resources are encoded with. Some conversions require knowledge of the RGB working space, e.g. blackbody irradiance to RGB, but it is mostly an implicit condition.

Cheers,

Thomas

@Sam-Izdat
Author

That sounds reasonable. I suppose my confusion stems from the fact that a scene may include many different materials, potentially using different color spaces. I assumed that, at some point, this would necessitate choosing a rendering space anyway, and doing the necessary transformations.

e.g. scene-linear Rec. 2020 and ACEScg are both fine according to the spec, but you can't exactly use them together side by side as-is.

@KelSolaar

that a scene may include many different materials, potentially using different color spaces

If the renderer exposes a way to conform the textures to the same working space, e.g. OpenColorIO or an internal color transformation engine, it should be fine. As you point out, you would want all the input to be in the same working space, otherwise you would be mixing metrics, akin to mixing meters with inches.

Worth noting that BT.2020 and ACEScg are for practical purposes the same space, and a human observer has a hard time telling the difference between the two, so it would not be catastrophic, in this particular case, to mix them! :)

@portsmouth
Contributor

portsmouth commented Mar 1, 2024

It seems like a good idea to try to formalize the reasoning a bit to clarify what it means exactly to say that the reflectance (i.e. albedo) of a material is described by some (R, G, B) triple.

There may be different possible reasonable interpretations, so I thought it was worth writing down how I interpret the discussion above from a computational/mathematical standpoint, at least to verify that I have the right mental model. (I found this exercise helpful anyway, though this is probably obvious to others). I found the paper "Physically Meaningful Rendering using Tristimulus Colours" by Meng et al. to be quite helpful in understanding the relationship between physical albedos and RGB colors.

If we assume the illumination is uniform with luminance $Y$ (in nits) and has the spectral power distribution (SPD) of (say) the standard illuminant D65, we can express the incident spectral radiance as:

$$L_e(\lambda) = \left(\frac{Y}{Y_\mathrm{D65}}\right) \; S_\mathrm{D65}(\lambda)$$

where

$$Y_\mathrm{D65} = \int \bar{y}(\lambda) \; S_\mathrm{D65}(\lambda) \; \mathrm{d}\lambda \ .$$

Say for simplicity that the surface has a Lambertian BRDF with spectral reflectance (albedo) $\rho(\lambda)$, where physically, $0 \le \rho(\lambda) \le 1$ for every wavelength $\lambda$, in order to conserve energy.

Given that, we can write down the rendering equation at the surface point, for the reflected radiance $L_o$:

$$L_o(\lambda) = \frac{1}{\pi} \int \rho(\lambda) \, L_e(\lambda) \; \mathrm{d}\omega^\perp = \rho(\lambda) \, L_e(\lambda) \ .$$

Thus the tristimulus values of the reflected light (as observed by the measuring/recording apparatus, i.e. camera or eye) are given by

$$X_o = \left(\frac{Y}{Y_\mathrm{D65}}\right) \; \int \bar{x}(\lambda) \; \rho(\lambda) \; S_\mathrm{D65}(\lambda) \; \mathrm{d}\lambda$$

$$Y_o = \left(\frac{Y}{Y_\mathrm{D65}}\right) \; \int \bar{y}(\lambda) \; \rho(\lambda) \; S_\mathrm{D65}(\lambda) \; \mathrm{d}\lambda$$

$$Z_o = \left(\frac{Y}{Y_\mathrm{D65}}\right) \; \int \bar{z}(\lambda) \; \rho(\lambda) \; S_\mathrm{D65}(\lambda) \; \mathrm{d}\lambda$$

using the CIE color matching functions $(\bar{x}(\lambda), \bar{y}(\lambda), \bar{z}(\lambda))$.

Then we can interpret the albedo colors in our model (and in input textures) as corresponding to the resulting measured tristimulus $(X_o, Y_o, Z_o)$ when illuminated by the D65 illuminant normalized to luminance $Y = 1$ nit. The equations above then guarantee that $Y_o \le 1$, since $\rho(\lambda) \le 1$. Presumably this constraint corresponds to the color-theory version of the BRDF energy conservation requirement.

This tristimulus can be converted to/from the RGB values $(\rho_r, \rho_g, \rho_b)$ of reflectances in the model via the usual transformation depending on the specified color space (assumed ACEScg by default, and presumably generally always linear / "scene-referred"). Note, though, that it is not guaranteed (I think) that these RGB albedos are necessarily in the range $[0,1]$.
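For concreteness, this chain (spectral albedo, to tristimulus under normalized D65, to working-space RGB) can be sketched numerically with the colour-science Python library (a sketch assuming its 0.4.x API names; the flat 18% albedo is just an example):

```python
import numpy as np
import colour

cmfs = colour.MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]
d65 = colour.SDS_ILLUMINANTS["D65"]

# Example spectral albedo: flat 18% grey, rho(lambda) = 0.18.
wavelengths = np.arange(380, 781, 5)
rho = colour.SpectralDistribution(
    dict(zip(wavelengths, np.full(wavelengths.size, 0.18)))
)

# (X_o, Y_o, Z_o) per the integrals above; sd_to_XYZ normalizes by Y_D65 so
# a perfect reflector (rho = 1) gives Y_o = 100, hence the division by 100.
XYZ = colour.sd_to_XYZ(rho, cmfs, illuminant=d65) / 100

# To working-space RGB via the colourspace matrix (chromatic adaptation
# between D65 and the ACEScg white point is ignored in this sketch).
rho_rgb = colour.RGB_COLOURSPACES["ACEScg"].matrix_XYZ_to_RGB @ XYZ
print(XYZ, rho_rgb)  # rho_rgb ~ (0.18, 0.18, 0.18) up to white-point details
```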

Given the RGB reflectance values in the model defined in that fashion, we then have a well-defined transformation to RGB albedo in the renderer, leading to the usual operations in RGB, e.g. the calculation of the R channel of the reflected light:

$$R = \int \bar{r}(\lambda) \; \rho(\lambda) \; L_e(\lambda) \; \mathrm{d}\lambda = Y \; \rho_r \ .$$

RGB values in textures can be converted to the model color space as usual, although this strictly only makes sense if the standard illuminant of the color space matches the assumed SPD of all light sources.

As Meng notes, if the illuminant of the lights can vary, then it would make more sense to stipulate e.g. that the model albedo colors are defined assuming a flat SPD (standard illuminant E). But this only seems to make sense for a spectral renderer, which does "uplifting" from RGB albedos to spectral.

For a tristimulus renderer, it makes more sense to just assume that the same illuminant (e.g. D65) is used consistently, as @anderslanglands proposed in UsdLux. This is essentially baking the illumination into the definition of the albedo which is not ideal, but if doing tristimulus rendering there are limited options. (The renders are also physically inaccurate, as they do not account for the true colors generated by different metameric spectra, as discussed by Meng; but unless one is doing predictive rendering with measured spectra -- most likely not the use case of OpenPBR -- this is probably not a practical problem.)

For OpenPBR, I'm not sure if we need to elaborate on any of this, though perhaps to be complete we should include some kind of discussion. We do say that color values are enforced to be in $[0,1]$, though (as pointed out in the original question by @Sam-Izdat) this may technically be violated in certain color spaces -- even if the underlying spectral albedos are physical in $[0,1]$.

@anderslanglands

anderslanglands commented Mar 1, 2024

This is essentially baking the illumination into the definition of the albedo which is not ideal

The illuminant is already baked into the definition of whatever RGB space you're using to store your texture data in.

@portsmouth
Contributor

portsmouth commented Mar 1, 2024

The illuminant is already baked into the definition of whatever RGB space you're using to store your texture data in.

Right yes, so given texture data in a given (linear) RGB space, the R value means (I assume)

$$R = \int \bar{r}(\lambda) \; \rho(\lambda) \; S(\lambda) \; \mathrm{d}\lambda$$

given the spectral albedo $\rho$ and (normalized) standard illuminant SPD $S$ (i.e. the color of the surface is operationally defined as what you see when it is illuminated with a known light).

I meant this is problematic in the sense that once this is baked in, the dependence on that particular SPD can't be disentangled. If the surface is assigned that RGB albedo but illuminated with a different SPD, there is no way to compute the corresponding integral to get the reflected RGB. The only way to complete the integral is to assume that the SPD of all lights in the scene is the same.

Which seems a workable, if artificial, assumption for an RGB renderer. That's all I meant by "not ideal". (For a spectral renderer you can do everything correctly/ideally though, in principle).

@KelSolaar

It seems like a good idea to try to formalize the reasoning a bit to clarify what it means exactly to say that the reflectance (i.e. albedo) of a material is described by some (R, G, B) triple.

I never liked talking about reflectance in that case because it is integrated reflectance for a given observer and illuminant. I don't really like albedo either, because there are multiple definitions for it and I haven't seen a standard one that can be pointed at: are we talking about spectral albedo, average albedo, etc.? I suggested that OpenPBR clarify the definition it uses.

When we talk about scene-referred rendering, the RGB values are colour estimates of the scene and not its irradiance. The critical aspect is the linear relationship between the scene and the scene-referred image: we know that a doubling of scene exposure yields a doubling of image luminance. In that sense, and I understand it is a mouthful, maybe "referred-reflectance" is a better term? Or this could be given as an explanatory example, tying together the logic that the scene is in the radiometric domain while the scene-referred image is necessarily in the photometric domain.

@anderslanglands

Do you mean we're looping back around and actually "diffuse colour" was the best nomenclature after all? :P

@anderslanglands

anderslanglands commented Mar 2, 2024

Agreed that albedo is a horrible term for this. I'm fine with reflectivity though, personally, as the referred- or integrated- nature being implicit in this context is OK imo.

I do prefer reflectivity over reflectance as reflectance implies it's the output quantity rather than the factor, but anyway...

@portsmouth
Contributor

portsmouth commented Mar 2, 2024

I think "color" works ... these are colors aren't they (RGB triples)? We just need to specify a color space (as we say the metadata should provide, otherwise default to ACEScg).

The only potentially non-obvious thing is how exactly to interpret the color as a reflectance. We may want to say explicitly that it's this relationship (if you agree that is right):

$$R = \int \bar{r}(\lambda) \; \rho(\lambda) \; S(\lambda) \; \mathrm{d}\lambda$$

$$G = \int \bar{g}(\lambda) \; \rho(\lambda) \; S(\lambda) \; \mathrm{d}\lambda$$

$$B = \int \bar{b}(\lambda) \; \rho(\lambda) \; S(\lambda) \; \mathrm{d}\lambda$$

and then some brief discussion about how this works in a traditional tristimulus renderer, versus a spectral renderer (along the lines of the discussion above).

What about this (the original question):

We do say that color values are enforced to be in $[0,1]$, though (as pointed out in the original question by @Sam-Izdat) this may technically be violated in certain color spaces -- even if the underlying spectral albedos are physical in $[0,1]$.

That does seem to be the case actually -- i.e. it could be totally legit for a material to have a diffuse color of (-0.1, 0.5, 1.0) in some color space (ACES 2065-1 was mentioned) according to the definition above, and this be meaningful, though weird. If that's allowed, we should mention it in the spec, since right now we forbid that (color ranges are said to be $[0,1]$).

@KelSolaar

The only potentially non-obvious thing is how exactly to interpret the color as a reflectance.

What about the other cases? Including all the .*_color for completeness:

  • base_color : Integrated reflectance
  • specular_color : ?
  • subsurface_color : ?
  • transmission_color : Integrated transmittance
  • coat_color : ?
  • fuzz_color : ?
  • emission_color : Integrated irradiance

@anderslanglands

anderslanglands commented Mar 2, 2024

ACES 2065-1 covers the entire spectral locus so you're not going to get negative RGB values there, but you could get negative values if your colours are stored in a wide gamut space (e.g. ACEScg) and your rendering space is narrower (e.g. sRGB). Negative reflectances don't make any sense, so you have to adapt the gamut somehow, and the easiest thing to do there is to clip to [0, 1).
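A sketch of that baseline handling (a hypothetical helper; production pipelines may prefer hue-preserving gamut mapping over a plain clip):

```python
import numpy as np

def conform_reflectance(rgb: np.ndarray) -> np.ndarray:
    """Clip an out-of-gamut reflectance colour into the renderable range;
    negative channels and channels > 1 are both meaningless as albedos."""
    return np.clip(rgb, 0.0, 1.0)

# E.g. pure ACEScg red expressed in linear sRGB is out of gamut:
print(conform_reflectance(np.array([1.26269, -1.67573, -0.31223])))  # [1. 0. 0.]
```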

@portsmouth
Contributor

portsmouth commented Mar 4, 2024

Negative reflectances don't make any sense

Agreed that the spectral reflectance $\rho(\lambda)$ must be non-negative, but I think a negative RGB reflectance does make sense in principle since e.g.

$$R = \int \bar{r}(\lambda) \; \rho(\lambda) \; S(\lambda) \; \mathrm{d}\lambda$$

can be negative even if $\rho(\lambda)$ is positive for all wavelengths (e.g. make $\rho(\lambda)$ a delta-function spike somewhere where $\bar{r}(\lambda) < 0$). I assume that under reasonable assumptions about the shape of $\rho(\lambda)$ this is unusual, though. In practice I expect one probably doesn't run into issues by clipping to [0, 1].
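A numeric illustration of that point (the XYZ-to-sRGB matrix is the IEC 61966-2-1 one; the CMF values are approximate tabulated numbers): a reflectance concentrated near 500 nm lands outside the sRGB gamut, with a negative R channel.

```python
import numpy as np

# Linear sRGB from XYZ (IEC 61966-2-1, D65 white).
M = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# CIE 1931 colour matching functions at 500 nm (approximate values).
xyz_bar_500 = np.array([0.0049, 0.3230, 0.2720])

# A reflectance that is a narrow spike at 500 nm has tristimulus proportional
# to the CMFs there (up to a positive factor from the spike width and SPD).
XYZ = xyz_bar_500 / xyz_bar_500[1]  # scale so Y_o = 1, for illustration
print(M @ XYZ)  # ~[-1.91, 1.90, 0.69]: R is negative, i.e. out of the sRGB gamut
```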

@portsmouth
Contributor

portsmouth commented Mar 4, 2024

The only potentially non-obvious thing is how exactly to interpret the color as a reflectance.

What about the other cases? Including all the .*_color for completeness:

  • base_color : Integrated reflectance
  • specular_color : ?
  • subsurface_color : ?
  • transmission_color : Integrated transmittance
  • coat_color : ?
  • fuzz_color : ?
  • emission_color : Integrated irradiance

I would interpret all of these except emission_color as albedos for reflection or transmission (i.e. reflectances or transmittances). Connecting the spectral quantities (albedos/transmittances) to the RGB colors could be done via a similar line of reasoning to the discussion above for a diffuse surface.

Though coat_color doesn't quite fit into that picture since we say that technically it specifies the square of the coat transmittance, so the RGB quantity is really the integral over the spectral transmittance squared.

I don't think we necessarily need to elaborate on this in the spec though.

We are discussing how to interpret emission_color in #85.

@KelSolaar

We should not talk about "RGB reflectance" once reflectance has been integrated by the Observer: reflectance is a radiometric quantity, RGB is a photometric one.

I think that you might be confusing the RGB CMFS that were measured during the colour matching experiments, which have negative lobes, with the XYZ CMFS, e.g. the CIE 1931 2 Degree Standard Observer, which do not, and which are used for conversion from the spectral domain to CIE XYZ tristimulus values. Thus, CIE XYZ tristimulus values cannot be negative.

Straight from my slides:

[slides showing the RGB and XYZ colour matching functions]

@portsmouth
Contributor

portsmouth commented Mar 4, 2024

I meant that the RGB can be negative, even if the XYZ must be positive (as you pointed out).

I think the RGB components corresponding to e.g. the base_color in OpenPBR, can in principle be negative if you assume they were derived from some arbitrary underlying physical spectral reflectance $\rho(\lambda)>0$. Because of the negative lobes in the $(\bar{r}, \bar{g}, \bar{b})$ color matching functions you show. Do you see any mathematical reason that cannot happen?

As noted though, in practice it might be reasonable to just assume that negative RGB values are forbidden, and clamp to make that so if they are generated.

We should not talk about "RGB reflectance" once reflectance has been integrated by the Observer, reflectance is a radiometric quantity, RGB is a photometric one.

OK, though we do obviously informally inside RGB renderer implementations use RGB albedos and transmittances as if they are physical albedos and transmittances. So maybe we can at least use that terminology as a shorthand. If we're dealing with RGB quantities at all, it's understood I think that they actually represent quantities being averaged over a whole band of wavelengths.

@KelSolaar

KelSolaar commented Mar 4, 2024

Do you see any mathematical reason that cannot happen?

They cannot be negative because we never ever use the RGB CMFS in computations. The RGB space defined by the RGB CMFS is not the same as the RGB colourspace derived from the CIE XYZ colourspace. The red negative lobe is a by-product of the method used to assemble the curves. The cone fundamentals are entirely positive:

[figure showing the cone fundamentals]

Thus, there cannot be anything that produces negative tristimulus values. It is also possible to define an RGB space using an identity matrix from CIE XYZ; this RGB space, which covers the visible spectrum entirely (in a wasteful way), is always positive, like ACES 2065-1.

we do obviously informally inside RGB renderer implementations use RGB albedos and transmittances

I think that this is damaging terminology and that we should not mix radiometric and photometric terms like that:
How will we explain reflectance recovery from RGB values when we have talked about RGB reflectance for years incorrectly?

https://scholar.google.com.au/scholar?hl=en&q=reflectance+recovery+rgb&btnG=&oq=reflectance+recovery

@portsmouth
Contributor

Not sure I'm familiar enough with color science to fully understand, but don't these "cone fundamentals" mean essentially the spectral response of the different kinds of cones to different frequencies, so by definition they are positive and give a positive color (in "LMS" color space)?

How does that guarantee that the corresponding RGB will not be negative in some other color space? (Maybe it does though). Should we not just say that regions of RGB space that have negative color components exist but are "out of gamut", so if they are generated we have to do some kind of correction?

I think that this is damaging terminology and that we should not mix radiometric and photometric terms like that:
How will we explain reflectance recovery from RGB values when we have talked about RGB reflectance for years incorrectly?

We do currently talk about RGB albedos in multiple places in the spec. So in your view this terminology is bad? What alternative do you have in mind, maybe replacing albedo with "integrated albedo", say? It is a color, representing the diffuse reflectance/albedo of the surface, so not sure I see the issue. Can we not just make a note that by RGB albedo we mean the physical albedo integrated over appropriate CMFs, etc.?


@KelSolaar

so by definition they are positive and give a positive color (in "LMS" color space)?

Yes, and thus in CIE XYZ also. My point is that we never use the equation you showed pertaining to the RGB CMFS in computations, and it is certainly not used to integrate reflectances, so we should probably never mention it unless there is a will to discuss the history of colour science. Modern CMFS such as the CIE 2015 XYZ CMFS are actually derived from the Cone Fundamentals directly, thus there is no CIE 2015 RGB CMFS.

So in your view this terminology is bad?

"RGB reflectance" is bad imho, reflectance should be left to the spectral radiometric domain, it is not a quantity we are manipulating in a RGB renderer. Albedo is very much similar, I don't like it, but easier to sacrifice under the "Totem of Approximation" as it is used a lot already in the literature. I do think though that it should be defined properly in the white paper. It is also worth noting that "Reflection albedo" in your screenshot is tautologic as albedo is a measure of reflectivity.

It is a color, representing the diffuse reflectance/albedo of the surface, so not sure I see the issue.

But this is the entire problem: It is not representing the reflectance of the surface at all, it is modelling the colour of the surface for the Standard Human Observer under an illuminant, i.e., that of the encoding colourspace. We cannot even say "average reflectance" to discard the spectral component, because it has been weighted by the Standard Human Observer and the illuminant.

@portsmouth
Contributor

portsmouth commented Mar 5, 2024

Modern CMFS such as the CIE 2015 XYZ CMFS are actually derived from the Cone Fundamentals directly, thus there is no CIE 2015 RGB CMFS.

OK sure, but I take it you would agree that it is at least possible for there to be negative RGB values (for "referred-reflectance") in certain color spaces? For example, if base_color is (1, 0, 0) in ACEScg, this tells me that the sRGB value for that is (1.26269, -1.67573, -0.31223).

I was only saying that in principle that sRGB color is still a valid description (of the "referred-reflectance") that could be used inside the renderer. In practice though since the color is out of the sRGB gamut, it should probably be clamped to the closest in-gamut color (or something like that). Maybe we should say that explicitly in the spec, since it's a practical thing that can happen (as in the (1, 0, 0) example above).
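(For reference, that conversion can be reproduced with the colour-science Python library; small numerical differences may come from the chromatic adaptation transform used:)

```python
import numpy as np
import colour

rgb_srgb = colour.RGB_to_RGB(
    np.array([1.0, 0.0, 0.0]),          # base_color in ACEScg
    colour.RGB_COLOURSPACES["ACEScg"],
    colour.RGB_COLOURSPACES["sRGB"],    # linear sRGB; no encoding CCTF applied
)
print(rgb_srgb)  # ~[1.26, -1.68, -0.31]: outside the sRGB gamut
```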

"RGB reflectance" is bad imho, reflectance should be left to the spectral radiometric domain, it is not a quantity we are manipulating in a RGB renderer.

But this is the entire problem: It is not representing the reflectance of the surface at all, it is modelling the colour of the surface for the Standard Human Observer under an illuminant, i.e., that of the encoding colourspace.

I see your point of view I think. I just think the terminology of "RGB reflectance/albedo" or "reflectance/albedo color" is not really misleading, at least as a short-hand, since it will be understood that the fact that we're working with RGB quantities at all implies that these are perceptually-based quantities, not radiometric ones.

I disagree that the RGB value is "not representing the reflectance of the surface at all"; it is representing the combined effect of the physics of the light emitted and scattered from the surface, plus the physics/physiology of the effect of that light on the human visual system. All that is implied whenever colors are involved; I thought that should be implicitly understood. Thus it seems reasonable to me to talk about the reflectance color of an object.

We could have a brief discussion in the spec to make sure there is no confusion about that though, perhaps.

@KelSolaar

OK sure, but I take it you would agree that it is at least possible for there to be negative RGB values (for "referred-reflectance") in certain color spaces?

Absolutely.

since it will be understood that the fact that we're working with RGB quantities at all implies that these are perceptually-based quantities

I don't think this is true; I would think that most people not familiar with radiometry, photometry, colour science and spectral rendering don't make the distinction. Also, "perceptual" is a keyword with a specific meaning in colour science, i.e. perceptual uniformity, so I would probably stick to "photometric" for as long as possible.

I disagree that the RGB value is "not representing the reflectance of the surface at all"; it is representing ...

It is akin to saying that iron is the same thing as steel, or that cacao is the same thing as a chocolate cake. Surface reflectance is one of the ingredients that contributes to the sensation in the HVS; the full, albeit simplistic, recipe requires the CMFS and the illuminant.

We really do need precise terminology and if we take shorthand, we need to explain why. I would like to avoid the "linear" situation from 20 years ago where no-one knew what linear meant.

Is there a source for the white-paper? I would be keen to take a crack at putting in a few things to help clarify all that.

@portsmouth
Contributor

Is there a source for the white-paper? I would be keen to take a crack at putting in a few things to help clarify all that.

Sure, that would be great to see your proposal.

You would just fork the repo, make your changes (it's just an index.html with markdown), and create a PR.

@KelSolaar

Ah, I totally missed it! https://github.com/AcademySoftwareFoundation/OpenPBR/blob/main/index.html

@portsmouth
Contributor

portsmouth commented Mar 5, 2024

OK sure, but I take it you would agree that it is at least possible for there to be negative RGB values (for "referred-reflectance") in certain color spaces?

Absolutely.

So we currently state that color components are in $[0,1]$, but we don't restrict the choice of color space. Do you think we should require negative values to be removed (e.g. clamped), or allow negative values? (That was the original query from @Sam-Izdat at the top of the thread, I think.)


@KelSolaar

The colourspace is implicitly defined by the encoding used by the values, so it is up to the user, and it is a good idea that the spec does not enforce a working colourspace.

Negative values should be removed because they violate energy conservation, and I would certainly enforce that on the implementation side, e.g., base_color is clamped to [0, 1], systematically.

@portsmouth
Contributor

portsmouth commented Mar 5, 2024

Negative values should be removed because they violate energy conservation

I'm not sure that's correct, as per the formulas above. The only thing that energy conservation (of the BRDF) guarantees seems to be that the reflected luminance is less than the input luminance, i.e. $Y_o \le Y$ (as above). The RGB color of the surface can still have negative components.

I can see that negative values should be removed practically though, as many systems will assume non-negative color channels.

@KelSolaar

KelSolaar commented Mar 5, 2024

Which formula are you referring to specifically?

I'm talking in general terms about using negative values in rendering to describe reflectance, transmittance, absorptance, and irradiance. How are we expecting a system simulating light transport, even in a photometric domain like RGB, to behave properly with negative values for the aforementioned quantities? As far as I know, there is no such thing as an anti-photon in particle physics that would contribute to making an object invisible.

If we allow negative reflectance, by the same principle, why would we not allow negative irradiance values?

@portsmouth
Contributor

Of course there can't be negative energy photons (or negative radiance), but I thought we agreed that RGB values are not directly related to physical quantities like photon energy (or radiance).

It doesn't seem inconsistent to imagine the renderer working with negative RGB albedos, generating a negative RGB stimulus, which is a correct representation of the observed color.

But anyway, I agree that in practice negative luminances in a renderer or negative "albedos" in a shader model are bad, and we need to clamp them away. We don't say that in the spec currently, though, so we need to add something to that effect.

@anderslanglands

anderslanglands commented Mar 5, 2024

It doesn't seem inconsistent to imagine the renderer working with negative RGB albedos, generating a negative RGB stimulus, which is a correct representation of the observed color.

Just imagine what happens at each vertex when bouncing a diffuse colour of [-0.9, 0.9, 0.9] around multiple times.
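A quick sketch of exactly that failure mode: the R channel flips sign on every bounce while the other channels decay monotonically.

```python
import numpy as np

albedo = np.array([-0.9, 0.9, 0.9])
radiance = np.ones(3)  # unit white "light"

for bounce in range(1, 5):
    radiance = albedo * radiance  # per-channel multiply, as an RGB renderer does
    print(bounce, radiance)
# 1 [-0.9     0.9     0.9   ]
# 2 [ 0.81    0.81    0.81  ]
# 3 [-0.729   0.729   0.729 ]
# 4 [ 0.6561  0.6561  0.6561]
```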

@portsmouth
Contributor

portsmouth commented Mar 5, 2024

Yep, no doubt a negative albedo is pretty disastrous if used inside an RGB renderer, I'm not disputing that. I think that happens essentially because multiplying two RGB quantities, e.g. input luminance and albedo inside the rendering equation, does not correspond to a valid spectral calculation. (But not because negative color values make no sense in general).

So perhaps we can just say that the color components in the assumed color space must be positive (i.e. "within gamut")?

@anderslanglands

anderslanglands commented Mar 5, 2024

Yes, this goes back to @KelSolaar's point that colours are not reflectances, I think.

I think it's reasonable just to say that it's expected that colours will be in gamut.

@portsmouth
Contributor

Currently we explicitly say all color components are in $[0,1]$. That seems more restrictive than in-gamut (is any positive-valued color in-gamut?). Presumably a color channel > 1 for an albedo is also disastrous though, in RGB rendering.


@anderslanglands

Yes. I would relax that on emission_color, though.

@portsmouth
Contributor

@anderslanglands and @KelSolaar This may be slightly off-topic, but do you think it is feasible to use OpenPBR in the context of a spectral rendering engine? Or would it require some changes to the parametrization and spec for that to make sense?

@anderslanglands

anderslanglands commented Mar 6, 2024 via email

@portsmouth
Contributor

No it all works fine.

For example, base_color=(1.0, 0.5, 0.1) in ACEScg does not define the spectral reflectance of the base uniquely (obviously). But I take it a spectral renderer would just need to do its own conversion to the spectral albedo consistent with the supplied color, according to some reasonable scheme.
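For illustration, one such scheme via the colour-science Python library (a sketch assuming its 0.4.x API names; the recovery method and white-point handling are renderer-specific choices):

```python
import numpy as np
import colour

acescg = colour.RGB_COLOURSPACES["ACEScg"]
rgb = np.array([1.0, 0.5, 0.1])  # base_color from the example above

# To XYZ via the colourspace matrix (ignoring chromatic adaptation details).
XYZ = acescg.matrix_RGB_to_XYZ @ rgb

# Recover *a* smooth spectral reflectance consistent with this colour;
# "Meng 2015" is one published method among several (Jakob 2019, Otsu 2018, ...).
sd = colour.XYZ_to_sd(XYZ, method="Meng 2015")
print(sd)  # one metamer among infinitely many that map back to this XYZ
```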

@KelSolaar

Agreeing with @anderslanglands, I don't see any issue. The renderer would recover reflectance, irradiance, etc., i.e., upsample according to the input type, if the input is not represented spectrally.

@meshula

meshula commented Mar 6, 2024

Ok, I'm gonna be That Guy.

OpenPBR is new, not preserving back compatibility with an existing standard surface. It's derived-from and inspired-by existing standards obviously, but must we be beholden to olden assumptions we made when we were young and naive?

Why not just bite the bullet, and say OpenPBR inputs are fundamentally spectral, and that tristimulus values can only be observations on values produced by OpenPBR?

Until tools produce spectral data, a hypothetical spectralizing NN could bridge existing content. Machine learning with image understanding could likely produce good enough spectral inputs from texture data, by virtue of hallucinating physical materials to motivate the observed colors. OpenPBR could provide a reference color picker (and possibly other required user interface elements) to help applications produce spectral color workflows for artists.

I realise this is completely contrary to where the project started not that long ago, with the idea of being simply a best-practices consolidation of well-respected industry models, but are assumptions about being stuck with RGB forever etched in stone?

@KelSolaar

Why not just bite the bullet, and say OpenPBR inputs are fundamentally spectral, and that tristimulus values can only be observations on values produced by OpenPBR?

I would LOVE to see that!

@anderslanglands

Why not just bite the bullet, and say OpenPBR inputs are fundamentally spectral

What do you actually mean by that? That every colour input on OpenPBR should be changed to an array of [wavelength, reflectance] pairs?

@meshula

meshula commented Mar 8, 2024

I have a feeling the answer is more interesting than that, because I think if I go ask three different people for what they think the answer is, I'll get three different answers, and they may all be correct. Thomas already suggested two standard ways to do it in another forum, and in a real time rendering forum there's a debate raging right now about it and they are all showing each other "look at how this breaks if I use a spikey illuminant!" and "yeah but look at the third bounce in my Cornell box!" and they are arguing about color spaces I've never heard of before. I didn't prompt that one, it's coincidental. It seems spectral rendering is about to be all the rage :]

@portsmouth
Contributor

portsmouth commented Mar 8, 2024

I think we could add some sentence or two to the spec, indicating that the color parameters are specifically RGB (with a defined color space) and how a spectral renderer should interpret this (I assume doing "upsampling"/"uplifting" from RGB to spectral reflectance/emission, in a renderer-specific way).

I take it Nick you mean that some real time (and/or more offline) renderers are switching to spectral, so they do this uplifting etc. and internally work with spectra? If so, I think it still makes sense for OpenPBR to work with RGB, as in practice presumably all renderers currently have to deal with assets which are authored with RGB colors and textures, and do such uplifting. In the future, perhaps we will instead all work with assets whose pixels contain a full spectral representation (in some standard way), and maybe instead of color pickers we manipulate spectra in some way? (You could still think of RGB colors as a very natural human-artist-friendly way to work with spectra..).

At that point it would be reasonable to switch from explicit RGB colors to spectra in the OpenPBR model, but that seems like it would be over-complicated or confusing right now to state it that way.

@anderslanglands

You don’t have to convince me that spectral rendering is the right way :) but just because you render spectrally doesn’t mean you want to specify reflectances spectrally.

At a large, spectral-enjoying VFX company I worked at, the number of times artists specified spectra was basically zero, to the point where it wasn't even possible to specify spectra in the artist-facing shading system, because no-one had found it necessary to implement.

It turns out that for the vast majority of materials, RGB “reflectances” (sorry Thomas) work just fine. What’s more, sRGB reflectances work just fine in most cases.

More to the point: there’s no readily available tools for capturing or authoring spectral data. We’d need spectral cameras, spectral photoshop, spectral-aware shading systems etc etc. this is unlikely to happen, ever imo. The vast majority of image data out there is still 8-bit PNG and JPG!

Where I do think spectral data matters and is practical is in specifying illumination, which goes to the discussion we were having recently about adding illuminant specification to UsdLux, which is something I will absolutely drive.

@portsmouth I don’t think you need to specify anything about spectral handling: any spectral renderer is already doing uplifting, and specifying methodology would be restrictive to the point of being ignored. Colours are already defined clearly as to what space they’re in which is exactly the information that’s needed.

@meshula

meshula commented Mar 8, 2024

Let me double check in the forum whether they are talking about uplifting or something more fundamental. I do agree of course that it makes sense to deliver an OpenPBR based on RGB. I also agree that spectrally describing lights is low-hanging fruit that should probably be plucked first. I humbly disagree that no one is ever going to create spectral-anything tools, and that artists will never work that way. I'm being a bit provocative to say so, but it calls to mind the moment twenty years ago when much argument was spent on whether we could ever convince artists not to paint shading into their textures, so that we could start the transition to PBR in the first place. Nonetheless, to repeat, I do agree that it makes sense to first deliver a spec based on RGB.

@KelSolaar

I cannot see a future where there won't be tools allowing you to author data spectrally; it is already trivial to define spectral input for emission, e.g., is my light an LED, tungsten or fluorescent one? We also use spectral transmission or diffusion gels and filter lights with them.

I don't see any reason why we could not extend Mari, for example, and add a type channel, e.g., wood, brick, leaf, plastic, guiding the reflectance recovery model. Mari could then either output hyperspectral images (HSI) or guide the renderer during its upsampling process.

I think that it should be possible to load an HSI directly into the renderer: a key use case would be IBL, e.g. on set with mixed lighting such as LED walls and HMIs.

@anderslanglands

I disagree with both of you, but will be very happy to be proven wrong.

@KelSolaar

but it calls to mind the moment twenty years ago

20 years ago, many people told me that realtime raytracing would never be a thing!

@meshula

meshula commented Mar 9, 2024

The answer from games folks is that spectral rendering in real-time engines is done to support creating training data for ML to simulate the cameras vehicles have, which tend to be infrared-sensitive. Spectral light sources are also in their infancy, but rendering in that case is still for the most part done in a wide-gamut space like ACEScg. Still early days.
