Most image channel orders return four components for reading or accept four components for writing, but depth images are different: they support reading or writing only a single scalar. Depth images are currently identified as having either the "Depth" or "DepthStencil" image channel order. Because depth images are identified by their image channel order, the scalar vs. vector difference cannot be checked statically; the "Image Format" field in OpTypeImage must be "Unknown" for images used in the OpenCL environment, so the image type itself carries no format information.
Should the scalar vs. vector difference be determined by the "Depth" field in OpTypeImage instead, which would allow for static checking?
I don't think this would be a behavioral difference in practice, since I suspect most OpTypeImages with "Depth=1" also have an image channel order equal to "Depth" or "DepthStencil", but I don't see this requirement documented anywhere either. Should it be documented explicitly?
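To illustrate the distinction, here is a minimal sketch (in Python, purely for illustration; the function names are hypothetical, and the channel-order values are the ones I believe `cl.h` defines) contrasting the current runtime rule, keyed off the image channel order, with the proposed static rule, keyed off the OpTypeImage "Depth" operand:

```python
# Illustrative OpenCL image channel order values (believed to match cl.h).
CL_DEPTH = 0x10BD
CL_DEPTH_STENCIL = 0x10BE
CL_RGBA = 0x10B5

def components_from_channel_order(channel_order: int) -> int:
    """Current rule: the component count depends on the image's channel
    order, a runtime property, so it cannot be validated statically."""
    if channel_order in (CL_DEPTH, CL_DEPTH_STENCIL):
        return 1
    return 4

def components_from_optypeimage_depth(depth_operand: int) -> int:
    """Proposed rule: key off the OpTypeImage "Depth" operand, which is
    known at module validation time, enabling a static check."""
    return 1 if depth_operand == 1 else 4
```

Under the suspected invariant that "Depth=1" images always have a "Depth" or "DepthStencil" channel order, the two rules would agree; documenting that invariant is what would make the static check sound.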
In the current SPIR-V environment spec, we describe the "data format for reading and writing images":
https://registry.khronos.org/OpenCL/specs/3.0-unified/html/OpenCL_Env.html#_data_format_for_reading_and_writing_images