| ▲ | Bringing Observability to Claude Code: OpenTelemetry in Action (signoz.io) |
| 47 points by pranay01 3 days ago | 20 comments |
| |
|
| ▲ | CuriouslyC 3 days ago | parent | next [-] |
| Kind of amazes me how many people bitch about agent performance but don't hook their agents up to OTel, crack open Phoenix, and get to work, and instead randomly tweak prompts in response to team vibes. |
| |
| ▲ | chrisweekly 3 days ago | parent | next [-] | | Good point. Also (tangent), I followed your profile link to https://sibylline.dev and am thoroughly impressed. Stoked to have found your treasure trove of repos and insights. | | |
| ▲ | CuriouslyC 3 days ago | parent [-] | | Don't play with them unless you're good at debugging alpha code (Claude/Codex can handle it fine). I haven't ironed out env-specific stuff or clarified the installation/usage, and I'm still doing UI polish/optimization passes (yay, async SIMD Rust). I'll do showy releases once I've got the tools one-click-install ready; in the meantime, please feel free to drop an issue on any of my projects if you have feature requests or questions. | | |
| |
| ▲ | yahoozoo 3 days ago | parent | prev [-] | | Could you elaborate? How does knowing numerical usage metrics help? | | |
| ▲ | CuriouslyC 3 days ago | parent [-] | | With Phoenix + ClickHouse fed from OTel, you can run queries over your traces for deep analysis. If I want to see which tool calls are failing and why (or just get tool statistics), or find common patterns in flagged/failure traces ("simpler solution") and their causes, it's one query and some wiring. | | |
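The kind of one-query, per-tool failure analysis described above can be sketched in plain Python over in-memory span records. This is only an illustration of the idea; the field names ("tool.name", "status", "error") are invented for the example, not a real OTel or ClickHouse schema.

```python
from collections import Counter

# Toy OTel-style tool-call spans. Field names are hypothetical,
# chosen for illustration rather than taken from a real schema.
spans = [
    {"tool.name": "bash", "status": "OK"},
    {"tool.name": "bash", "status": "ERROR", "error": "timeout"},
    {"tool.name": "edit_file", "status": "OK"},
    {"tool.name": "edit_file", "status": "ERROR", "error": "path not found"},
    {"tool.name": "edit_file", "status": "ERROR", "error": "path not found"},
]

def tool_failure_stats(spans):
    """Aggregate per-tool call counts, failure counts, and error frequencies."""
    stats = {}
    for s in spans:
        tool = stats.setdefault(
            s["tool.name"], {"calls": 0, "failures": 0, "errors": Counter()}
        )
        tool["calls"] += 1
        if s["status"] != "OK":
            tool["failures"] += 1
            tool["errors"][s.get("error", "unknown")] += 1
    return stats

stats = tool_failure_stats(spans)
for name, t in sorted(stats.items()):
    top = t["errors"].most_common(1)
    print(f"{name}: {t['failures']}/{t['calls']} failed,",
          "top error:", top[0][0] if top else None)
```

In practice the same aggregation would be a GROUP BY over the trace table in ClickHouse; the Python version just shows what the query computes.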
|
|
|
| ▲ | oefrha 3 days ago | parent | prev | next [-] |
| Collecting detailed per-request traces and calculating user-specific metrics finer than a total cost feels about as intrusive as one of those periodic screenshot programs forced by really shitty remote jobs or freelancing contracts. It's pretty gross. |
| |
| ▲ | pranay01 3 days ago | parent [-] | | I don't think the primary goal here is "surveillance" but better understanding: where in the team are tools like Claude Code getting adopted, which models are being used, and whether there are best practices in token usage to learn that could make it more efficient
|
|
| ▲ | sofia44 3 days ago | parent | prev | next [-] |
| I think this tackles a really important area - nice job. Looking forward to following. |
| |
| ▲ | pranay01 3 days ago | parent [-] | | Great to hear. Yes, it can help you understand how developers are using Claude Code and also optimise token usage, etc.
|
|
| ▲ | N_Lens 3 days ago | parent | prev | next [-] |
| I’d like to see this leveraged for agent platforms & orchestration rather than for surveillance on human software engineers. Humans don’t perform well in panopticons, but robots do (In my humble opinion). |
| |
| ▲ | pranay01 2 days ago | parent [-] | | > leveraged for agent platforms & orchestration can you share more on what you mean by this? | | |
| ▲ | N_Lens 2 days ago | parent [-] | | Claude Code agents can be integrated into existing platforms such as GitHub. I can envision agents automatically handling issues with certain tags, doing pull request reviews, or other similar trigger-based behaviour. In that kind of orchestration this observability would be invaluable. | | |
| ▲ | pranay01 a day ago | parent [-] | | Interesting. So you mean, say, if an agent is automatically doing PR reviews: how many such agent calls are failing, how much time they're taking, etc.? A lot of this you can do today with traces that capture AI-specific calls
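A minimal, stdlib-only sketch of the kind of per-call record such orchestration could emit: wrap each agent invocation, time it, and log success or failure. The `traced_agent_call` wrapper and its field names are hypothetical stand-ins for real OTel spans, not an actual API.

```python
import time
import uuid

def traced_agent_call(task_fn, *, agent="pr-reviewer", trace_log=None):
    """Run an agent task and record a span-like dict: id, agent name,
    duration, and whether the call failed. Stand-in for a real OTel span."""
    record = {"trace_id": uuid.uuid4().hex, "agent": agent}
    start = time.perf_counter()
    try:
        record["result"] = task_fn()
        record["status"] = "OK"
    except Exception as exc:
        record["status"] = "ERROR"
        record["error"] = str(exc)
    record["duration_s"] = time.perf_counter() - start
    if trace_log is not None:
        trace_log.append(record)
    return record

def flaky_review():
    # Hypothetical failing review call.
    raise TimeoutError("model timeout")

log = []
traced_agent_call(lambda: "LGTM", trace_log=log)
traced_agent_call(flaky_review, trace_log=log)

failures = [r for r in log if r["status"] == "ERROR"]
print(len(log), "calls,", len(failures), "failed")  # prints: 2 calls, 1 failed
```

With records like these exported over OTLP, the failure-rate and latency questions above become simple aggregations in any trace backend.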
|
|
|
|
| ▲ | pdntspa 3 days ago | parent | prev | next [-] |
| aka let's spy on our devs more than we already are and give their pointy-haired bosses even more leverage to harass them with AI-usage KPI BS |
|
| ▲ | tomrod 3 days ago | parent | prev | next [-] |
| Very nice! |
|