chuckadams | 4 hours ago
Pipes are parallelized when you have unidirectional data flow between stages. They really kind of suck for fan-out and joining, though. I do love a good long pipeline of do-one-thing-well utilities, but that design still has major limits. To me, the main advantage of pipelines is not so much the parallelism as the fact that they are streams that process lazily. On the other hand, unix sockets combined with socat can perform some real wizardry, but I never quite got the hang of that style.
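A minimal sketch of what I mean by lazy streaming, assuming a hypothetical access.log.gz: every stage is its own process, and because head exits after ten lines, the upstream stages stop early instead of decompressing the whole file.

    # Each stage runs concurrently in its own process; data streams
    # through the pipe buffers, nothing is materialized in full.
    # head exits after 10 matches and the earlier stages get SIGPIPE.
    gzip -dc access.log.gz | grep ' 500 ' | head -n 10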
mdavidn | 2 hours ago
Pipelines are indeed one flow, and that works most of the time, but shell scripts make parallel tasks easy too. The shell provides tools to spawn subshells in the background and wait for their completion, and on top of that there are utilities like xargs -P and make -j.
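Something like this, with a made-up ./process command standing in for the real work:

    # Fan out to background subshells, then join with wait.
    ( ./process part1.dat > part1.out ) &
    ( ./process part2.dat > part2.out ) &
    ( ./process part3.dat > part3.out ) &
    wait    # blocks until every background job has finished

    # Or let xargs manage a pool of 4 workers, one file per invocation.
    printf '%s\n' *.dat | xargs -n 1 -P 4 ./process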
Linux-Fan | 2 hours ago
UNIX provides the Makefile as the go-to tool when a simple pipeline is not enough. GNU Make makes this even more powerful by being able to generate rules on the fly. If the tool of interest works with files (like the classic UNIX tools do), it fits very well. If the tool doesn't produce a single output file, I have had some success using Makefiles for generic processing tasks by having the target create a marker file that records the task's completion.
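A rough sketch of that marker-file trick, with load-into-db and make-report as stand-ins for whatever the real tools are (recipe lines must be indented with a tab):

    # The import step loads rows into a database rather than writing a
    # file, so the recipe touches a stamp file to record completion.
    import.done: input.csv
    	load-into-db input.csv
    	touch import.done

    # Downstream targets can depend on the stamp like any normal file,
    # so make only reruns the import when input.csv changes.
    report.txt: import.done
    	make-report > report.txt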