j1elo, 4 hours ago:
Whenever you have this kind of impression about some development, here are my 2 cents: just think "I'm not the target audience." And that's fine. The difference between 2ms and 0.2ms might sound unneeded, or even silly, to you. But somebody, somewhere, is doing stream processing of TB-sized JSON objects, and they will care. This news is for them.
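(For the curious: the usual way to do that with jq is its --stream mode, which parses input incrementally instead of loading the whole document into memory. A minimal sketch; huge.json is a hypothetical multi-terabyte file containing one top-level array:)

    # Emit each element of a huge top-level array as its own compact
    # JSON line, without ever holding the full document in memory.
    jq -cn --stream 'fromstream(1 | truncate_stream(inputs))' huge.json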
alsetmusic, 41 minutes ago:
I remember when I was coming up on the command line and I'd browse the forums at unix.com. Someone would ask how to do a thing, and CFAJohnson would come in with a far less readable solution that was more performant (probably replacing calls to external tools with Bash internals, but I didn't know enough then to speak intelligently about it now). People would say, "Why use this when it's harder to read and only saves N ms?" He'd reply that you'd care about those ms when you had to read a database from 500 remote servers. (I'm paraphrasing; he probably had a much better example.)

Turns out, he wrote a book that I later purchased. It appears to have been taken over by a different author, but the first release was all him, and I bought it immediately when I recognized the name / unix.com handle. Though it was over my head when I first bought it, I later learned enough to love it. I hope he's on HN and knows that someone loved his posts / book.

https://www.amazon.com/Pro-Bash-Programming-Scripting-Expert...
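(For anyone wondering what that kind of substitution looks like, a hedged sketch, not necessarily his actual examples: each external command forks a new process, and those milliseconds compound in loops.)

    path=/var/log/syslog          # hypothetical input

    name=$(basename "$path")      # forks an external process per call
    name=${path##*/}              # pure Bash expansion: same result, no fork

    dir=$(dirname "$path")        # forks again
    dir=${path%/*}                # near-equivalent builtin expansion
                                  # (differs when $path has no slash)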
mememememememo, 3 hours ago:
Also, as someone who looks at latency charts too much: a request does a lot in series, and any little ms you can knock off adds up. You save 10ms by saving 10 x 1ms. And if you are a proxyish service, then you are 10ms in a chain that might be taking 200 or 300ms. It is like saving money: you have to cut lots of small expenses to make an impact (unless you move, etc., but once you've done that, it is back to small, numerous things that add up).

Also, performance improvements on heavily used systems unlock:

- Cost savings
- Stability
- Higher reliability
- Higher throughput
- Fewer incidents
- Lower scaling-out requirements
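(A quick way to watch those per-call milliseconds compound; the file name and call count are placeholders:)

    # 1000 sequential calls: a 1ms-per-call difference becomes a full
    # second end-to-end. response.json stands in for whatever payload
    # each hop actually parses.
    time for i in $(seq 1 1000); do
        jq '.status' response.json > /dev/null
    done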
NoSalt, an hour ago:
> "somebody, somewhere, is doing stream processing of TB-sized JSON objects" That's crazy to think about. My JSON files can be measured in bytes. :-D | |||||||||||||||||||||||
tclancy, 3 hours ago:
Which is fine, but the vast majority of the things that get presented aren't bothering to benchmark against my use case (for a whole lotta mes). They come from someone scratching an itch, solving it for a target audience of one, and then extrapolating and bolting on some benchmarks. And at the sizes you're talking about, how many tooling authors have the computing power on hand to test that?
7bit, an hour ago:
Who is the target audience? I truly wonder who would process TB-sized data using jq. Either it's in a database already, in which case you're using the database to process the data, or you're putting it into a database. Either way, I really doubt there will ever be a significant number of people who'd choose jq for that.
Chris2048, 3 hours ago:
But even in this example, the 2ms vs 0.2ms is irrelevant; what matters is whatever the timings are for TB-sized objects. So why not compare that case directly? We'd also want to see the performance of the assumed overheads, i.e. how it scales.
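(Something like this would make that concrete; a rough sketch, with the filter and sizes as placeholders. You'd run it once per jq build being compared:)

    # Generate growing inputs and time the same filter on each,
    # to see whether the speedup holds as input size scales up.
    for n in 10000 1000000 100000000; do
        jq -n "[range($n)] | map({i: .})" > "test_$n.json"
        time jq 'map(.i) | add' "test_$n.json" > /dev/null
    done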