GuB-42 | 4 hours ago
That, I think, is the most unintuitive part about writing fragment shaders: the idea that you take a couple of coordinates and output a color. Compared to traditional drawing, as with pen and paper, you have to think in reverse. For example, if you want to draw a square with a pen, you put your pen where the square is, draw the outline, then fill it in. With a shader, for each pixel, you look at where you are, calculate where the pixel is relative to the square, and output the fill color if it is inside the square. If you want to draw another square to the right, with the pen you move your pen to the right, but with the shader you move the reference coordinates to the left. Another way to see it is that you don't manipulate objects; you manipulate the space around the objects. Vertex shaders are more natural, as the output is the position of your triangles, like the position of your pen would be if you were drawing on paper.
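The inside-out way of thinking described above can be sketched in plain Python (all names here are hypothetical, just for illustration): each pixel independently asks "where am I relative to the square?", and moving the square right is the same as shifting the reference coordinates left.

```python
# A per-pixel "fragment shader" sketch: the function is called once per
# pixel and only outputs a color, exactly as described above.
WHITE = (255, 255, 255)
RED = (255, 0, 0)

def shade(x, y, square_x=2, square_y=2, size=4):
    # Shift the pixel into the square's local frame. Moving the square
    # to the right is equivalent to moving these coordinates left.
    lx, ly = x - square_x, y - square_y
    inside = 0 <= lx < size and 0 <= ly < size
    return RED if inside else WHITE

# Evaluate every pixel independently, as a GPU would do in parallel.
image = [[shade(x, y) for x in range(8)] for y in range(8)]
```

Note there is no "draw a square" call anywhere; the square only exists implicitly, in the test each pixel performs on its own coordinates.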
Karliss | 3 hours ago
I'd say the unintuitive part is mostly a problem only if you abuse fragment shaders for something they weren't meant to be used for. All the fancy drawings people make on Shadertoy are cool tricks, but you would very rarely do something like that in any practical use case. Fragment shaders weren't meant for making arbitrary drawings; that's why you have high-level graphics APIs and content creation software. They were meant to be a more flexible last stage of a more or less traditional GPU pipeline.

A normal shader would do something like: sample a pixel from a texture using UV coordinates already interpolated by the GPU (you don't even have to convert x,y screen or world coordinates into texture UVs yourself), maybe from multiple textures (normal map, bump map, roughness, ...), combine it with the light direction, and calculate the final color for that specific pixel of the triangle. The actual drawing structure comes mostly from the geometry and textures, not the fragment shader. With the popularity of PBR and deferred rendering, a large fraction of objects can share the same common PBR shader parametrized by textures, with only some special effects using custom stuff.

For any programmable system, people will explore how far it can be pushed, but it shouldn't be a surprise that things get inconvenient and unintuitive once you go beyond the normal use case. I don't think anyone is surprised that computing Fibonacci numbers using C++ templates isn't intuitive.
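The "normal shader" described above can be mocked up in Python (a simplified sketch with hypothetical names, not real GPU code): sample a texture at GPU-interpolated UVs and modulate by a Lambertian light term. There is no drawing logic at all, just per-pixel shading.

```python
def sample(texture, u, v):
    # Nearest-neighbour texture lookup; texture is a 2D list of
    # brightness values, u and v are in [0, 1).
    h, w = len(texture), len(texture[0])
    return texture[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

def fragment(uv, normal, light_dir, albedo_tex):
    # The GPU has already interpolated uv and normal for this pixel.
    albedo = sample(albedo_tex, *uv)
    # Lambert term: max(dot(N, L), 0) -- standard diffuse lighting.
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return albedo * ndotl

# A tiny 2x2 grayscale "texture"; a surface facing the light is lit,
# one facing away goes black.
tex = [[0.5, 1.0], [0.25, 0.75]]
lit = fragment((0.9, 0.1), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), tex)
dark = fragment((0.9, 0.1), (0.0, 0.0, 1.0), (0.0, 0.0, -1.0), tex)
```

All the structure (which texel, which normal) comes from data the pipeline hands the shader; the shader itself only combines it into one color.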
dahart | 4 hours ago
I think what you’re describing is the difference between raster and vector graphics, and doesn’t reflect on shaders directly. It always depends on your goals, of course. The goal of drawing with a pen is to draw outlines, but the goal behind rasterizing or ray tracing, and shading, is not to draw outlines; it is often to render 3d scenes with physically based materials. Achieving that goal with a pen is extremely difficult, tedious, and time consuming, which is why the way we render scenes doesn’t do that; it is closer to simulating bundles of light particles and approximating their statistical behavior. Of course, painting is slightly closer to shading than pen drawing is.
spiralcoaster | 2 hours ago
If you are using fragment shaders to draw squares, you're doing something wrong. Shaders would be more for something like _shading_ the square.