
Feature request: Tool to create combined image by gathering channels from input images #839

Open
tksuoran opened this issue Jan 16, 2024 · 8 comments

@tksuoran commented Jan 16, 2024

Use cases:

  • RGB from one image, A from a second image
  • RG from one image, BA from a second image. One (or both) of the images could be a normal map encoded into two channels.

In theory, these could also be possible:

  • Four individual images, each providing a single channel
  • R and G from one image each, BA from a third image (which could be a normal map)
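
All of these are instances of a per-output-channel mapping. As a sketch of how a generic interface might describe such a gathering (hypothetical types, not an existing KTX-Software API):

```c
/* Hypothetical channel-gathering spec: for each output channel, which
 * input image to read and which of its channels to take. */
typedef struct {
    int srcImage;   /* index into the list of input images */
    int srcChannel; /* 0 = R, 1 = G, 2 = B, 3 = A within that image */
} ChannelSource;

/* Example: RG from image 0 (a two-channel normal map), BA from image 1. */
static const ChannelSource rgPlusBa[4] = {
    { 0, 0 },  /* out.R <- image 0, R */
    { 0, 1 },  /* out.G <- image 0, G */
    { 1, 2 },  /* out.B <- image 1, B */
    { 1, 3 }   /* out.A <- image 1, A */
};
```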
@MarkCallow (Collaborator)

@tksuoran are you the same person I worked with in the OpenGL ES (and M3G?) working groups a lifetime ago?

The use cases say what you want to do but not why. I'm curious why you would want to, e.g., put two normal maps in a single texture object. Please provide further explanation.

@donmccurdy (Contributor) commented Jan 17, 2024

@MarkCallow I've had a few people request this feature from glTF Transform, and can share the use cases behind that. The glTF format and its material-related extensions define a significant number of texture inputs to the PBR shading model. Many of these require ≤3 channels, and so can be combined. The most common example would be occlusion/roughness/metalness, in red/green/blue channels.

Current texture inputs, and their assigned channels in glTF:

[Screenshot: table of glTF texture inputs and their assigned channels, regex'd from the glTF Transform codebase.]

I realize that combining uncorrelated data channels into a single texture with Basis Universal compression would require some care, so I'm unsure how practical this is, or how great the memory savings would be when all is said and done. Nevertheless, it is a fairly popular request. I may eventually implement it with a pre-process that merges the textures to PNGs before encoding them to KTX2, but direct APIs to do this in KTX Software would be very welcome!

@tksuoran (Author)

@tksuoran are you the same person I worked with in the OpenGL ES (and M3G?) working groups a lifetime ago?

Hi Mark, yes I am, indeed it has been a while.

The use cases say what you want to do but not why. I'm curious why you would want to, e.g., put two normal maps in a single texture object. Please provide further explanation.

My initial thinking was that I would like to avoid artificial limitations and promote a generic interface, where the user would have the freedom to choose inputs and outputs as they wish. One remotely plausible use case would be a material/shader that uses both a normal map (encoded in RG) and a tangent texture (encoded in BA), with the same encoding used for both. However, I am not sure whether using the normal map encoding scheme for tangent textures is a good idea.

@MarkCallow (Collaborator)

I may eventually implement it with a pre-process that merges the textures to PNGs before encoding them to KTX2, but direct APIs to do this in KTX Software would be very welcome!

The libktx API gives access to the memory where the images are stored in the ktxTexture2 object and has functions to query the offsets within that memory of each layer, level, and face or slice. Using that together with the format info, from the DFD or the vkFormat field, you can write code that takes an input image and writes its components to the desired components of the final texture, then uses the standard libktx API functions to compress the texture and write it to disk. If you implement it this way, your code could become the basis of a PR to add the feature you want. Writing it this way will not take more time than merging the inputs to a PNG. It may even take less.
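
A minimal sketch of that approach for the "RGB from one image, A from a second image" case, assuming uncompressed RGBA8 inputs of identical size with a single level, layer, and face (gatherChannels is illustrative, not an existing libktx function, and most error checking is elided):

```c
#include <ktx.h>
#include <string.h>

KTX_error_code gatherChannels(const char* rgbPath, const char* alphaPath,
                              const char* outPath)
{
    ktxTexture2 *rgbTex, *alphaTex, *outTex;
    ktxTexture2_CreateFromNamedFile(rgbPath,
            KTX_TEXTURE_CREATE_LOAD_IMAGE_DATA_BIT, &rgbTex);
    ktxTexture2_CreateFromNamedFile(alphaPath,
            KTX_TEXTURE_CREATE_LOAD_IMAGE_DATA_BIT, &alphaTex);

    /* Create the output texture with storage allocated up front. */
    ktxTextureCreateInfo info;
    memset(&info, 0, sizeof info);
    info.vkFormat = VK_FORMAT_R8G8B8A8_UNORM;
    info.baseWidth = rgbTex->baseWidth;
    info.baseHeight = rgbTex->baseHeight;
    info.baseDepth = 1;
    info.numDimensions = 2;
    info.numLevels = 1;
    info.numLayers = 1;
    info.numFaces = 1;
    ktxTexture2_Create(&info, KTX_TEXTURE_CREATE_ALLOC_STORAGE, &outTex);

    /* Query the offset of each image within the texture's data memory. */
    ktx_size_t rgbOff, aOff, dstOff;
    ktxTexture_GetImageOffset(ktxTexture(rgbTex), 0, 0, 0, &rgbOff);
    ktxTexture_GetImageOffset(ktxTexture(alphaTex), 0, 0, 0, &aOff);
    ktxTexture_GetImageOffset(ktxTexture(outTex), 0, 0, 0, &dstOff);
    const ktx_uint8_t* rgb = ktxTexture_GetData(ktxTexture(rgbTex)) + rgbOff;
    const ktx_uint8_t* a   = ktxTexture_GetData(ktxTexture(alphaTex)) + aOff;
    ktx_uint8_t* dst       = ktxTexture_GetData(ktxTexture(outTex)) + dstOff;

    /* Gather: RGB from the first input, A from the first channel
     * of the second input. */
    ktx_size_t numPixels = (ktx_size_t)info.baseWidth * info.baseHeight;
    for (ktx_size_t i = 0; i < numPixels; ++i) {
        dst[4*i+0] = rgb[4*i+0];
        dst[4*i+1] = rgb[4*i+1];
        dst[4*i+2] = rgb[4*i+2];
        dst[4*i+3] = a[4*i+0];
    }

    /* Compress and write out using the standard libktx functions. */
    ktxTexture2_CompressBasis(outTex, 0 /* 0 = default quality */);
    KTX_error_code result =
        ktxTexture_WriteToNamedFile(ktxTexture(outTex), outPath);

    ktxTexture_Destroy(ktxTexture(rgbTex));
    ktxTexture_Destroy(ktxTexture(alphaTex));
    ktxTexture_Destroy(ktxTexture(outTex));
    return result;
}
```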

@MarkCallow (Collaborator)

Many of these require ≤3 channels, and so can be combined.

@donmccurdy does glTF require these be combined or is it optional?

@donmccurdy (Contributor)

Optional, there is no requirement that textures be packed.

@javagl (Contributor) commented Nov 12, 2024

To me, this sounds like some sort of "image creation/preprocessing" that, at its core, is fairly independent of KTX-Software itself. (When PBR was introduced in glTF ~8 years ago, I hacked together https://javagl.github.io/MetallicRoughnessCreator/MetallicRoughnessCreator.html, because I thought that something like this might be useful - but I haven't used it since...). But of course, it might very well be a use case that is so common (and so badly supported by other tools) that it fits into KTX-Software.

@wasimabbas-arm (Contributor)

For the general use case I do this packing when I load these textures into the engine. It would be nice not to have to do that and to have it done offline via KTX-Software, but I find this very hard in practice because there is no easy way to store this information within a texture. You need something like glTF to provide those semantics, or some encoding in the texture names, which isn't great - which brings me to my second point.

This on its own isn't very useful because you still have to go and edit your glTF document to point to the packed texture. I would argue this is better done in the glTF exporter or some other glTF-specific post-processing tool. It makes perfect sense in glTF Transform.
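
For example, a packed occlusion/roughness/metalness texture ends up referenced from two material slots (a sketch of the relevant glTF JSON; per the glTF spec, occlusionTexture samples R while metallicRoughnessTexture samples G and B):

```json
{
  "materials": [{
    "occlusionTexture": { "index": 0 },
    "pbrMetallicRoughness": {
      "metallicRoughnessTexture": { "index": 0 }
    }
  }]
}
```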
