GuB-42 4 hours ago

That, I think, is the most unintuitive part about writing fragment shaders: the idea that you take a couple of coordinates and output a color. Compared to traditional drawing, as with pen and paper, you have to think in reverse.

For example, if you want to draw a square with a pen, you put your pen where the square is, draw the outline, then fill it in. With a shader, for each pixel you look at where you are, calculate where the pixel is relative to the square, and output the fill color if it is inside the square. If you want to draw another square to the right, with the pen you move your pen to the right, but with the shader you move the reference coordinates to the left. Another way to see it is that you don't manipulate objects, you manipulate the space around the objects.
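Concretely, something like this Shadertoy-style sketch (the positions and sizes are just made up for illustration):

    void mainImage(out vec4 fragColor, in vec2 fragCoord)
    {
        vec2 uv = fragCoord / iResolution.xy;  // normalize to 0..1
        vec2 p  = uv - vec2(0.3, 0.5);         // move the *space*, not the square

        // Inside a 0.2 x 0.2 square around that point? Output the fill color.
        bool inside = abs(p.x) < 0.1 && abs(p.y) < 0.1;
        fragColor = inside ? vec4(1.0, 0.0, 0.0, 1.0)   // fill
                           : vec4(0.0, 0.0, 0.0, 1.0);  // background

        // To draw the same square further to the right, subtract a larger x,
        // i.e. shift the reference coordinates to the left.
    }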

Vertex shaders are more natural, as the output is the position of your triangles, like where your pen would go if you were drawing on paper.
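For comparison, a bare-bones vertex shader (GLSL; the uniform name is just illustrative) really does output a position:

    #version 330 core
    layout(location = 0) in vec3 inPosition;   // triangle vertex from the mesh
    uniform mat4 mvp;                          // model-view-projection matrix

    void main()
    {
        // The output *is* a position: where this vertex lands on screen.
        gl_Position = mvp * vec4(inPosition, 1.0);
    }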

Karliss 3 hours ago | parent | next [-]

I'd say the unintuitive part is mostly a problem when you abuse fragment shaders for something they weren't meant to be used for. All the fancy drawings that people make on Shadertoy are cool tricks, but you would very rarely do something like that in any practical use case. Fragment shaders weren't meant to be used for making arbitrary drawings; that's why you have high-level graphics APIs and content creation software.

They were meant to be a more flexible last stage of the more or less traditional GPU pipeline. A normal shader would do something like sample a texture using UV coordinates already interpolated by the GPU (you don't even have to convert x,y screen or world coordinates into texture UVs yourself), maybe from multiple textures (normal map, bump map, roughness, ...), combine that with the light direction, and calculate the final color for that specific pixel of the triangle. But the actual structure of the drawing comes mostly from the geometry and textures, not the fragment shader. With the popularity of PBR and deferred rendering, a large fraction of objects can share the same common PBR shader parametrized by textures, with only some special effects using custom code.
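Something in that spirit (GLSL, with illustrative variable and uniform names) looks roughly like:

    #version 330 core
    in vec2 vUV;          // UV coordinates already interpolated by the rasterizer
    in vec3 vNormal;      // interpolated surface normal

    uniform sampler2D albedoMap;   // illustrative name; a real material has several maps
    uniform vec3 lightDir;         // direction toward the light

    out vec4 fragColor;

    void main()
    {
        // Sample the material color at this pixel's interpolated UV.
        vec3 albedo = texture(albedoMap, vUV).rgb;

        // Simple Lambert term: how directly this pixel faces the light.
        float ndotl = max(dot(normalize(vNormal), normalize(lightDir)), 0.0);

        fragColor = vec4(albedo * ndotl, 1.0);
    }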

For any programmable system people will explore how far it can be pushed, but it shouldn't be a surprise that things get inconvenient and unintuitive once you go beyond the normal use case. I don't think anyone is surprised that computing Fibonacci numbers with C++ templates isn't intuitive.

dahart 4 hours ago | parent | prev | next [-]

I think what you’re describing is the difference between raster and vector graphics, which doesn’t reflect on shaders directly.

It always depends on your goals, of course. The goal of drawing with a pen is to draw outlines, but the goal behind rasterizing or ray tracing, and shading, is not to draw outlines; it is often to render 3D scenes with physically based materials. Achieving that goal with a pen is extremely difficult, tedious, and time consuming, which is why the way we render scenes doesn’t work that way: it is closer to simulating bundles of light particles and approximating their statistical behavior.

Of course, painting is slightly closer to shading than pen drawing is.

Kiro 3 hours ago | parent [-]

I think their explanation is great. The shader is run on all the pixels within the quad, and your shader code needs to figure out whether the pixel is within the shape you want to draw or not, as opposed to just drawing it pixel by pixel as you would with a pen or on the CPU.

For a red line between A and B:

CPU/pen: for each pixel between A and B: draw red

GPU/shader: for all pixels: draw red if it's on the segment between A and B
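Sketched in Shadertoy-style GLSL (the endpoints and thickness are arbitrary):

    // Distance from pixel p to the segment A-B.
    float segDist(vec2 p, vec2 a, vec2 b)
    {
        vec2 ab = b - a;
        float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
        return length(p - a - ab * t);
    }

    void mainImage(out vec4 fragColor, in vec2 fragCoord)
    {
        vec2 A = vec2(100.0, 100.0);   // endpoints in pixels, arbitrary
        vec2 B = vec2(500.0, 300.0);

        // Every pixel asks: am I close enough to the segment to count as red?
        bool onLine = segDist(fragCoord, A, B) < 1.0;
        fragColor = onLine ? vec4(1.0, 0.0, 0.0, 1.0) : vec4(0.0, 0.0, 0.0, 1.0);
    }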

dahart 3 hours ago | parent [-]

Figuring out whether a pixel is within a shape, or lies on the A-B segment, is part of the rasterizing step, not the shading. At least in the parent’s analogy. There are quite a few different ways to draw a red line between two points.

Also using CPU and GPU here isn’t correct. There is no difference in the way CPUs and GPUs draw things unless you choose different drawing algorithms.

Kiro 2 hours ago | parent [-]

While (I presume) technically correct, I don't think your clarifications are helpful for someone trying to understand shaders. The only thing that made me understand (fragment) shaders was something similar to the parent's explanation. Do you have anything better?

It's not about the correct way to draw a square or a line; it's about using something simple to illustrate the difference. How would you write a shader that draws a 10x10 pixel red square on Shadertoy?

dahart an hour ago | parent [-]

You’re asking a strange question that doesn’t get at why shaders exist. If you actually want to understand them, you must understand the bigger picture of how they fit into the pipeline, and what they are designed to do.

You can do line drawing on a CPU or GPU, and you don’t need to reach for shaders to do that. Shaders are not necessarily the right tool for that job, which is why comparing shaders to pen drawing makes it seem like someone is confused about what they want.

ShaderToy is fun and awesome, but it’s fundamentally a confusing abuse of what shaders were intended for. When you ask how to make a 10x10 pixel square, you’re asking how to make a procedural texture containing a red square, imposing a non-standard method of rendering on the question, and skipping over the way shaders work normally. To draw a red square the easy way, you render a quad (a pair of triangles) and assign a shader that returns red unconditionally, as in the sketch below. You tell the rasterizer the pixel coordinates of your square’s corners, and it figures out which pixels are in between the corners before the shader is ever called.
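The fragment shader in that setup is essentially one line (GLSL); everything about where the square sits lives in the vertex positions handed to the rasterizer:

    #version 330 core
    out vec4 fragColor;

    void main()
    {
        // The rasterizer has already decided which pixels the 10x10 quad covers;
        // this shader just colors them.
        fragColor = vec4(1.0, 0.0, 0.0, 1.0);   // unconditionally red
    }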

spiralcoaster 2 hours ago | parent | prev [-]

If you are using fragment shaders to draw squares, you're doing something wrong.

Shaders would be more for something like _shading_ the square.

adastra22 2 hours ago | parent [-]

Ray casting shaders are a thing. A very performant thing too.
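One simple example of the idea: a fragment shader that analytically ray casts a sphere, with every pixel computing its own ray (Shadertoy-style sketch; the camera and light constants are arbitrary):

    void mainImage(out vec4 fragColor, in vec2 fragCoord)
    {
        // Shoot a ray through this pixel and intersect it with a unit sphere.
        vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;
        vec3 ro = vec3(0.0, 0.0, -3.0);        // ray origin (camera)
        vec3 rd = normalize(vec3(uv, 1.5));    // ray direction through this pixel

        // Solve |ro + t*rd|^2 = 1 for t (rd is normalized).
        float b = dot(ro, rd);
        float c = dot(ro, ro) - 1.0;
        float h = b * b - c;
        if (h < 0.0) { fragColor = vec4(0.0, 0.0, 0.0, 1.0); return; }  // miss

        float t = -b - sqrt(h);                // nearest hit
        vec3 n = normalize(ro + t * rd);       // normal of a unit sphere at the origin
        float diff = max(dot(n, normalize(vec3(1.0, 1.0, 1.0))), 0.0);
        fragColor = vec4(vec3(diff), 1.0);
    }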