| ▲ | XorNot a day ago |
The problem with this is generally that you have logs from years ago, but no way to get a live stream of the logs happening right now. (One of my immense frustrations with Kubernetes: none of the commands for viewing logs seem to accept logical aggregates like "show me everything from this deployment".)
|
| ▲ | Sayrus a day ago | parent | next [-] |
| Stern[1] does that. You can tail deployments, filter by labels and more. [1] https://github.com/stern/stern |
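For example (a sketch assuming a deployment named mydep with an app=mydep label; recent stern versions also accept a resource/name query):

    # tail all pods of a deployment
    stern deployment/mydep

    # tail everything matching a label selector
    stern -l app=mydep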
|
| ▲ | ofrzeta a day ago | parent | prev | next [-] |
| What about "kubectl logs deploy/mydep --all-containers=true" but I guess you want more than that? Maybe https://www.kubetail.com? |
|
| ▲ | shikhar 21 hours ago | parent | prev | next [-] |
We have a customer using s2.dev for this capability: tail-able streams with granular access control (e.g. letting an end user of a job tail it with a read-only access token). We'll be shipping an OTel endpoint soon to make this even easier.
|
| ▲ | knutzui a day ago | parent | prev | next [-] |
Maybe not via kubectl directly, but it is rather trivial to build this by combining the log streams from all pods of a deployment (or whatever else). k9s (k9scli.io) supports this directly.
|
| ▲ | AlecBG a day ago | parent | prev | next [-] |
This sounds pretty easy to hack together with tens of lines of Python.
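Something like this, with the official kubernetes Python client (a minimal sketch; the deployment name, namespace, and lack of error handling are all placeholders):

    # pip install kubernetes
    import threading
    from kubernetes import client, config, watch

    NAMESPACE = "default"  # placeholder
    DEPLOYMENT = "mydep"   # placeholder

    def tail_pod(core, name):
        # Stream the pod's log line by line as it arrives
        for line in watch.Watch().stream(core.read_namespaced_pod_log,
                                         name=name, namespace=NAMESPACE,
                                         follow=True):
            print(f"[{name}] {line}")

    config.load_kube_config()
    apps, core = client.AppsV1Api(), client.CoreV1Api()

    # Reuse the deployment's own label selector to find its pods
    dep = apps.read_namespaced_deployment(DEPLOYMENT, NAMESPACE)
    selector = ",".join(f"{k}={v}" for k, v in dep.spec.selector.match_labels.items())
    pods = core.list_namespaced_pod(NAMESPACE, label_selector=selector)

    # One thread per pod, interleaving every stream onto stdout
    threads = [threading.Thread(target=tail_pod, args=(core, p.metadata.name), daemon=True)
               for p in pods.items]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

(Pods that appear after startup aren't picked up, and restarts aren't handled; that housekeeping is what tools like stern do for you.)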
|
| ▲ | madduci a day ago | parent | prev [-] |
And what is the point of keeping years of logs? I could understand it for very sensitive industries, but in general it seems like a pure waste of resources. At most you need 60-90 days of logs.

| ▲ | sureglymop a day ago | parent | next [-] |
It makes sense to keep a high-fidelity history of what happened and why. However, I think the issue is more that this data is not refined correctly. Even when it comes to logging in the first place, I have rarely seen developers do it well; instead they log things that make no sense just because it was convenient during development.

But that touches on something else. If your logs are important data, maybe logging is the wrong way to go about it. Instead, think about how to clean, refine, and persist the data you need, like your other application data. In that light, I see log and trace collection almost as a legacy compatibility thing, analogous to how Kubernetes and containerization let you wrap any old legacy application process into a uniform format: just collecting all logs and traces is backwards compatible with every application. But in order not to be wasteful and to keep only what is valuable, a significant effort would be required afterwards. Well, storage and memory happen to be cheap enough to never have to care about that.

| ▲ | Sayrus a day ago | parent | prev | next [-] |
Access logs and payment information for compliance; troubleshooting and evaluating trends around something you didn't know existed until months or years later; finding out whether an endpoint was exploited in the past via a vulnerability you only now discovered; tracking events that may span months. Logs are a very useful tool for many non-dev or longer-term uses.

| ▲ | fc417fc802 a day ago | parent | prev | next [-] |
My home computer has well over 20 TB of storage. I have several LLMs, easily half a TB worth. The combined logs generated by every single program on my system might total 100 GB per year, but I doubt it. And that's before compression. Would you delete a text file that's a few KB from a modern device in order to save space? It just doesn't make any sense.

| ▲ | brazzy a day ago | parent | prev [-] |
One nice side effect of the GDPR is that you're not allowed to keep logs indefinitely if there is any chance at all that they contain personal information. The easiest way to comply is to throw away logs after a month (accepted as the maximum justifiable for general error analysis) and be more deliberate about what you keep longer.
|