mandarax8 (4 hours ago):
The pixel position has to be known; how else are you rasterizing something?
dahart (4 hours ago, replying):
Rasterizing and shading are two separate stages. You don't need to know the pixel position when shading. You can wire up the pixel coordinates if you want, and they are often available nearby, but it's not necessary. This becomes even clearer with deferred shading: you store what you need in a G-buffer and run the shaders later, long after all rasterization is complete.
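The split dahart describes can be illustrated with a toy CPU-side sketch (not any real graphics API): a rasterize pass fills a G-buffer with interpolated surface attributes, and a separate shade pass later consumes only those stored attributes, never touching pixel coordinates. The struct fields and the lighting term here are made up purely for illustration.

```rust
// Hypothetical per-pixel attributes a rasterizer might store in a G-buffer.
#[derive(Clone, Copy, Debug, PartialEq)]
struct GSample {
    albedo: [f32; 3],
    n_dot_l: f32, // precomputed lighting term, stands in for normal/light data
}

// Stage 1: rasterization. For each covered pixel, interpolate and store
// surface data. (Here the "interpolation" is faked with a simple pattern.)
fn rasterize(width: usize, height: usize) -> Vec<GSample> {
    (0..width * height)
        .map(|i| GSample {
            albedo: [0.8, 0.2, 0.1],
            n_dot_l: (i % 7) as f32 / 6.0,
        })
        .collect()
}

// Stage 2: shading. Runs long after rasterization, reads only the stored
// attributes -- no (x, y) pixel position appears anywhere in this function.
fn shade(gbuffer: &[GSample]) -> Vec<[f32; 3]> {
    gbuffer
        .iter()
        .map(|g| {
            [
                g.albedo[0] * g.n_dot_l,
                g.albedo[1] * g.n_dot_l,
                g.albedo[2] * g.n_dot_l,
            ]
        })
        .collect()
}

fn main() {
    let gbuffer = rasterize(4, 4);
    let image = shade(&gbuffer);
    println!("shaded {} pixels without pixel coordinates", image.len());
}
```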
corysama (4 hours ago, replying):
Technically, the (pixel) fragment shader stage happens after the rasterization stage.
LoganDark (4 hours ago, replying):
The view transform doesn't necessarily have to be known to the fragment shader, though. That's usually the vertex shader's job, but even the vertex shader doesn't have to know how things correspond to screen coordinates, for example if your API of choice represents coordinates as floats in [-0.5, 0.5) and all you feed it is vertex positions. (I ran into this with wgpu-rs.) You can rasterize things perfectly fine with just vertex positions; in fact, you can even hardcode the vertex positions into the vertex shader and not feed in any coordinates at all.
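Hardcoding vertex positions into the shader is commonly done with the fullscreen-triangle pattern: the shader derives clip-space positions from the built-in vertex index alone, with no vertex buffer bound at all. Below is the index-to-position math in plain Rust as a sketch; in a real wgpu pipeline the same bit-twiddling would live in the WGSL vertex shader, driven by @builtin(vertex_index). It assumes a clip space of [-1, 1] on each axis, which is what wgpu uses.

```rust
// Map vertex indices 0..3 to a single triangle that covers the whole
// clip-space square [-1, 1] x [-1, 1], overshooting to (3, -1) and (-1, 3)
// so one triangle suffices instead of a two-triangle quad.
fn fullscreen_triangle_vertex(index: u32) -> [f32; 2] {
    let u = ((index << 1) & 2) as f32; // 0, 2, 0 for indices 0, 1, 2
    let v = (index & 2) as f32;        // 0, 0, 2 for indices 0, 1, 2
    [u * 2.0 - 1.0, v * 2.0 - 1.0]
}

fn main() {
    for i in 0..3 {
        println!("vertex {i}: {:?}", fullscreen_triangle_vertex(i));
    }
}
```

On the GPU side you would issue a draw call for 3 vertices with no buffers bound, and the rasterizer clips the oversized triangle down to the viewport.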