behat 3 hours ago
For the debugging workflow you described, we would be a standalone replacement for Cursor or other agents. We don't yet write code, so we can't replace your Cursor agents entirely.

Re: differentiation - yes: faster, more accurate, and more consistent. Partly because of better tools and UX, and partly because we anchor on runbooks. On-call engineers can quickly see which steps the AI ran, what it found at each one, and the time-series graph that supports each finding.

Interesting that you've had great success with the Datadog MCP. Do you mainly look at logs?
verdverm 41 minutes ago
> For the X workflow, we would be a standalone replacement for other agents.

Imo, this is not what users want. They want an extension to their agent. If a project tells me I have to use their interface or agentic setup, it's 95% not going to happen. Consider how many SaaS tools we already have to deal with; that many agents is not desirable, and they all have their little quirks and take time to "get to know".

Instead, build extensions, skills, and subagents that fit into my agentic workflow and tooling. This will also simplify what you need to do, so you can focus on your core competency. For example, you should be able to create a chat participant in VS Code / Copilot and take advantage of the native notebook and diff rendering, sharing the MCPs (et al.) the user has already configured for their agents and internal systems.