| ▲ | LudwigNagasena 11 hours ago |
| I trust that an LLM can fix a problem without the help of other agents that are barely different from it. What it lacks is the context to identify which problems are systemic and the means to fix systemic problems. For that you need aggregate data processing. |
|
| ▲ | layer8 11 hours ago | parent | next [-] |
| What I mean is, how do you identify a “problem” in the first place? |
| |
| ▲ | LudwigNagasena 10 hours ago | parent [-] |
| You analyze each conversation with an LLM: summarize it, add tags, identify problematic tools, and so on. The metrics go to management; some docs are auto-generated and added to the company knowledge base like any other company docs. It’s like what they do in support or sales: they have conversational data and they use it to improve processes. Now the same is possible with code, without any proactive inquiry from chatbots. |
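A minimal sketch of that per-conversation pass, with the LLM call stubbed out (the function name, prompt shape, and report fields here are hypothetical, not from any real system):

```python
# Hypothetical stand-in for an LLM call; a real implementation would
# prompt a model to summarize, tag, and flag problematic tool calls,
# returning structured output.
def analyze_with_llm(transcript: str) -> dict:
    failed = "build failed" in transcript
    return {
        "summary": transcript[:60],
        "tags": ["build-failure"] if failed else [],
        "problem_tools": ["compiler"] if failed else [],
    }

def process_conversation(transcript: str) -> dict:
    report = analyze_with_llm(transcript)
    # The report feeds two sinks: metrics for management, and a
    # generated doc entry for the company knowledge base.
    return {
        "metrics": {"num_tags": len(report["tags"])},
        "kb_entry": report["summary"],
        "report": report,
    }

result = process_conversation("build failed: missing header foo.h")
```

In practice each conversation would be processed this way in a batch job, with the metrics rolled up separately.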
| ▲ | layer8 10 hours ago | parent [-] |
| Who is “you” in the first sentence? A human or an LLM? It seems to me that only the latter would be practical, given the volume. But then I don’t understand how you trust it to identify these problems while simultaneously not trusting LLMs to identify pain points and roadblocks. |
| ▲ | LudwigNagasena 9 hours ago | parent [-] |
| An LLM. A coding LLM writes code with its tools for writing files, searching docs, reading skills for specific technologies, and so on; the analysis LLM processes all interactions, summarizes them, tags issues, tracks token use across task types, and identifies patterns across many sessions. |
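The aggregation step above can be sketched with plain counters; the per-session record fields and the "systemic" threshold here are illustrative assumptions:

```python
from collections import Counter, defaultdict

# Hypothetical per-session records, as the analysis LLM might emit them.
sessions = [
    {"task_type": "refactor", "tokens": 12000, "tags": ["flaky-test"]},
    {"task_type": "bugfix", "tokens": 4000, "tags": ["flaky-test", "missing-docs"]},
    {"task_type": "bugfix", "tokens": 5000, "tags": ["flaky-test"]},
]

def aggregate(records):
    # Count each tag across all sessions.
    tag_counts = Counter(tag for r in records for tag in r["tags"])
    # Track token use per task type.
    tokens_by_task = defaultdict(int)
    for r in records:
        tokens_by_task[r["task_type"]] += r["tokens"]
    # A tag recurring in a majority of sessions is flagged as a
    # candidate systemic issue (threshold is an arbitrary choice).
    majority = len(records) // 2 + 1
    systemic = [tag for tag, n in tag_counts.items() if n >= majority]
    return tag_counts, dict(tokens_by_task), systemic

tag_counts, tokens_by_task, systemic = aggregate(sessions)
# "flaky-test" appears in all 3 sessions, so it is flagged as systemic.
```

The same rollup is what would land on a management dashboard: which issues recur, and which task types burn the most tokens.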
|
| ▲ | cyanydeez 11 hours ago | parent | prev [-] |
| oh man, can you imagine having this much faith in a statistical model that can be torpedoed because it doesn't differentiate consistently between a template, a command, and an instruction? |