imiric | 4 days ago
Hey, apologies for the late response.

> using `timep -t` to run the code this increased to 38 seconds. So +10%.

Thanks. I suppose this will depend on each script, as there is another commenter here claiming that the overhead is much higher.

Re: the binary, that's fine. Your approach is surely easier to use than asking users to compile it themselves, but I would still prefer to have that option. After all, how do I know that the binary came from that source code? A simple Make target to build it would put my concerns to rest. It only has to be done once anyway, so it's not a huge inconvenience.

In any case, it's pretty cool that you wrote the CPU time tracking in C. I wasn't even aware that Bash was so easily extensible. You've clearly put a lot of thought and effort into this project, which is commendable. Good luck!
jkool702 | 4 days ago | parent
> Thanks. I suppose this will depend on each script, as there is another commenter here claiming that the overhead is much higher.

The better way to think about overhead with timep is "average overhead per command run" (or, more specifically, per DEBUG trap fire). That value won't change all that much between profiling different bash scripts.

The code that commenter was profiling was a Rubik's cube solver that was impressively well optimized: 100% builtins, no forking, all the expensive operations pre-computed and saved in huge lookup tables (some of which had over 400,000 elements), vars passed by reference to avoid making copies, etc. timep's overhead was about 230 microseconds per command, but that code was averaging only a microsecond or two per command.

To put it in perspective, bash's overhead any time it calls an external binary is 1-2 ms, so in a script that did nothing but call `/bin/true` repeatedly, timep's overhead would probably be a little under 20%.

> Re: the binary, that's fine. Your approach is surely easier to use than asking users to compile it themselves, but I would still prefer to have that option.

Technically you can, though I'll grant that it isn't really documented unless you read through all the comments in the code. A Makefile is probably doable and would make the process more straightforward. That said, it's on my to-do list to figure out how to set up a GitHub Actions workflow that automatically builds the .so files for all the different architectures whenever timep.c changes. Perhaps that would alleviate your concern.

> After all, how do I know that that binary came from that source code?

You can say that about virtually any compiled binary. Sure, some of them (like those from your distro's official repos) have been "signed off" on by someone you trust, but that is a leap of faith you have to make with anything you install from a 3rd party.
And, in general, I feel like "compiling it yourself" doesn't really make it safer unless you personally (or someone you trust personally) look through the source to check that it doesn't do anything malicious.

> I wasn't even aware that Bash was so easily extensible.

Bash supporting loadable builtins isn't a well-known feature. It's really quite handy when you want to do something (e.g., access a syscall) that bash doesn't support and you can't / don't want to use an external tool for.

IMO, the biggest hurdle to using them is that they make scripts much less portable: you either need to set up a distribution system for the binary (and require that the target system has internet access, at least briefly) and/or require that the target system has a full build environment and can compile it. Both are sorta crappy options. Unless, of course, you were to base64 encode the binary and directly include it inside the script. ;)
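The "overhead per DEBUG trap fire" framing above is easy to explore yourself. This sketch (my own, not part of timep) times a loop of no-op builtins with and without a minimal DEBUG trap set and reports the per-command difference; it needs bash 5+ for `$EPOCHREALTIME`, and the numbers will vary by machine:

```shell
#!/usr/bin/env bash
# Rough estimate of per-command DEBUG-trap overhead -- the mechanism a
# trap-based profiler hooks into. A minimal no-op trap is a lower bound;
# a real profiler's trap handler does far more work per fire.
n=100000

t0=$EPOCHREALTIME
for ((i = 0; i < n; i++)); do :; done    # untrapped baseline
t1=$EPOCHREALTIME

trap ':' DEBUG                           # minimal trap, fires per command
for ((i = 0; i < n; i++)); do :; done
trap - DEBUG
t2=$EPOCHREALTIME

# Per-iteration difference between the trapped and untrapped loops:
awk -v a="$t0" -v b="$t1" -v c="$t2" -v n="$n" \
    'BEGIN{printf "per-command trap overhead: ~%.2f us\n", ((c-b)-(b-a))/n*1e6}'
```

Even this empty trap adds measurable per-command cost, which is why a heavily optimized all-builtin script (microseconds per command) sees a much larger relative slowdown than a typical script.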
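The "1-2 ms per external binary" figure quoted above is also easy to sanity-check. This sketch (again mine, needing bash 5+ for `$EPOCHREALTIME`) times fork/exec of `/bin/true` against the no-op builtin `:`; exact numbers depend heavily on the system:

```shell
#!/usr/bin/env bash
# Compare the cost of invoking an external binary (fork + exec) with the
# cost of a shell builtin. This is the gap that makes all-builtin scripts
# so much faster per command.
n=200

t0=$EPOCHREALTIME
for ((i = 0; i < n; i++)); do /bin/true; done   # fork/exec each time
t1=$EPOCHREALTIME
for ((i = 0; i < n; i++)); do :; done           # builtin, no fork
t2=$EPOCHREALTIME

awk -v a="$t0" -v b="$t1" -v c="$t2" -v n="$n" 'BEGIN{
    printf "per external call: %.3f ms\n", (b-a)/n*1000
    printf "per builtin call:  %.6f ms\n", (c-b)/n*1000
}'
```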
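The base64-embedding trick mentioned at the end can be sketched like this. A short text payload stands in for the compiled `.so` here (so the example is self-contained); for a real loadable builtin you would embed the shared object the same way and load the decoded file with `enable -f`:

```shell
#!/usr/bin/env bash
# Sketch: embed a binary payload in a script as base64, decode at runtime.
# The payload below is illustrative text, not a real .so.

# Generated once, offline, with something like:  base64 -w0 timep.so
# (-w0 is GNU coreutils; it just suppresses line wrapping)
payload=$(printf 'hello from the embedded payload\n' | base64)

# At runtime: decode to a temp file and use it.
tmp=$(mktemp)
printf '%s' "$payload" | base64 -d > "$tmp"
decoded=$(<"$tmp")
echo "$decoded"                     # → hello from the embedded payload

# For a real shared object you would instead do:
#   enable -f "$tmp" my_builtin
rm -f "$tmp"
```

This makes the script fully self-contained at the cost of shipping an opaque blob inline, which is exactly the trust trade-off discussed above.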