bob1029 | 7 hours ago
> Logs were designed for a different era. An era of monoliths, single servers, and problems you could reproduce locally.

I worked with enterprise message bus loggers in a semiconductor manufacturing context where we had thousands of participants on the message bus. It generated something like 300-400 megabytes per hour. Despite the insane volume, we made this work really well using just grep and other basic CLI tools.

The logs were mere time series of events. Figuring out the details of a specific event (e.g. a list of all the tools a lot visited) required writing queries against the Oracle monster. You could derive that history from the event logs if you had enough patience and disk space, but that would have been very silly given the alternative. We used the logs predominantly to establish a causal chain between events while the details were still preliminary: identifying suspects and such. Actually resolving really complicated business usually requires more than a perfectly detailed log file.
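A minimal sketch of the grep-over-event-logs workflow described above. The log format and field names (`TOOL=`, `LOT=`) are invented for illustration, not from the original post: the point is that a time-ordered event stream answers "what happened when" directly, and derived questions like "which tools did this lot visit" fall out of a pipeline.

```shell
# Hypothetical event-log format (my assumption, not from the post):
#   <timestamp> TOOL=<tool-id> LOT=<lot-id> EVENT=<event-name>
cat > events.log <<'EOF'
2024-05-01T08:12:03Z TOOL=ETCH-07 LOT=LOT1234 EVENT=LotArrived
2024-05-01T08:45:10Z TOOL=ETCH-07 LOT=LOT1234 EVENT=LotDeparted
2024-05-01T09:02:44Z TOOL=LITHO-02 LOT=LOT9999 EVENT=LotArrived
2024-05-01T09:07:31Z TOOL=LITHO-02 LOT=LOT1234 EVENT=LotArrived
EOF

# Timeline for one lot: the file is already in time order, so grep suffices.
grep 'LOT=LOT1234' events.log

# Deriving "all tools this lot visited" from the raw event stream.
grep 'LOT=LOT1234' events.log | grep -o 'TOOL=[^ ]*' | sort -u
```

At 300-400 MB/hour this stays workable because grep streams the file sequentially; the Oracle query is only needed when you want the pre-aggregated answer.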
fsniper | 5 hours ago | parent | next
At last, a sane person. Logs are for identifying the event timeline, not for capturing the whole request/response data. Putting every detail into the logs, in my experience, makes understanding issues harder. Logs tell a story: when, and what happened, not how or why it happened. The why is in the code; the how is in the combination of data, logs, events, and code.

Loosely related: I also dislike log interfaces like the ELK stack. They make following the trail of events really hard. Most of the time you do not know what you are looking for, just a vague sense of why you are looking at the logs. So a line logged 3 microseconds earlier may be your eureka moment, one that no search could surface; only intuition and diligently reading the logs will find it.
iLoveOncall | 5 hours ago | parent | prev
> It generated something like 300-400 megabytes per hour. Despite the insane volume we made this work really well using just grep and other basic CLI tools.

400MB of logs an hour is nothing at all; that's why a naive grep can work. You don't even need to rotate your log files frequently at that volume.