Kaliboy · 14 hours ago
I've managed to ignore MCP servers for a long time as well, but recently I found myself creating one to help the LLM agents with my local language (Papiamentu) in the dialect I want. I made a Prolog program that knows the valid words and spelling along with sentence composition rules. Via the MCP server a translated text can be verified. If it's not faultless, the agent enters a feedback loop until it is. The nice thing is that it's implemented once, and I can use it in opencode and Claude without having to explain how to run the Prolog program, etc.
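The verify-then-retry loop described here can be sketched in a few lines. This is a hypothetical illustration, not the author's actual server: the toy lexicon, the `verify_translation` tool name, and the fault format are all assumptions standing in for the Prolog-backed MCP tool.

```python
# Hypothetical sketch of the feedback loop: a verifier tool reports faults,
# and the agent keeps revising until the verifier returns none.

VALID_WORDS = {"bon", "dia", "mi", "ta", "papia"}  # toy stand-in lexicon

def verify_translation(text: str) -> list[str]:
    """Return a list of faulty words; an empty list means the text is valid."""
    return [w for w in text.lower().split() if w not in VALID_WORDS]

def agent_loop(draft: str, revise) -> str:
    """Feedback loop: keep revising until the verifier reports no faults."""
    while True:
        faults = verify_translation(draft)
        if not faults:
            return draft
        draft = revise(draft, faults)  # the agent rewrites using the fault list

# A trivial stand-in "agent" that simply drops the reported faulty words.
final = agent_loop("bon dia mundo", lambda d, f: " ".join(
    w for w in d.split() if w.lower() not in f))
```

In the real setup the `revise` step is the LLM itself; the point is that the verifier is deterministic, so the loop terminates only on a faultless translation.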
CharlieDigital · 15 hours ago
> I have no idea what I'm missing.

The questions I'd ask:

- Do you work in a team context of 10+ engineers?
- Do you all use different agent harnesses?
- Do you need to support the same behavior in ephemeral runtimes (GH Agents in Actions)?
- Do you need to share common "canonical" docs across multiple repos?
- Is it your objective to ensure a higher baseline of quality and output across the eng org?
- Would your workload benefit from telemetry and visibility into tool activation?

If none of those apply, then it's not for you. Server-hosted MCP over streamable HTTP benefits orgs and teams and has virtually no benefit for individuals.
monsieurbanana · 13 hours ago

What I want to know is: what's the difference between a remote MCP and an API with an openapi.json endpoint for self-discovery? It's just as centralized.
fartfeatures · 15 hours ago

MCP is useful for the above, but I work on my own more often than not, and the utility of MCP goes far beyond that (see my other comment above).
fartfeatures · 15 hours ago
I can't go into specifics about exactly what I'm doing, but I can speak generically. I have been working on a system using a Fjall datastore in Rust. I haven't found any tools that directly integrate with Fjall, so even getting insight into what data is there, being able to remove it, etc., is hard. So I used https://github.com/modelcontextprotocol/rust-sdk to create a thin CRUD MCP. The AI can use this to create fixtures, check if things are working how they should, or debug things; e.g., if a query is returning incorrect results and I tell the AI, it can quickly check whether it is a datastore issue or a query-layer issue.

Another example: I have a simulator that lets me create test entities and exercise my system. The AI with an MCP server is very good at exercising the platform this way. It also lets me interact with it using plain English even when the API surface isn't directly designed for human use: "Create a scenario that lets us exercise the bug we think we have just fixed and prove it is fixed; create other scenarios you think might trigger other bugs or prove our fix is only partial."

One more example: I have an Overmind-style task runner that reads a file, starts up every service in a microservice architecture, can restart them, can see their log output, can check if they can communicate with the other services, etc. Not dissimilar to how the AI can use Docker, but without Docker, to get max performance both during compilation and usage.

A last example is using off-the-shelf MCP servers for VCS platforms like GitHub or GitLab. The AI can look at issues, update descriptions, comment, and do code review. This is very useful for your own projects but even more useful for other people's: "Use the MCP tool to see if anyone else is encountering similar bugs to what we just encountered."
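The "thin CRUD MCP" idea boils down to exposing a handful of small, well-documented operations over the datastore. Here is a minimal sketch of that surface, with a plain dict standing in for Fjall and ordinary functions standing in for registered MCP tools; every name here is illustrative, not the author's actual API.

```python
import json

# Sketch of the thin CRUD surface such an MCP server exposes. A dict stands
# in for the Fjall datastore; in the real server each function would be
# registered as an MCP tool via the SDK.

store: dict[str, str] = {}

def put(key: str, value: str) -> str:
    """Create or update a record; the agent uses this to build fixtures."""
    store[key] = value
    return "ok"

def get(key: str) -> str:
    """Read a record so the agent can check whether the data looks right."""
    return store.get(key, "<missing>")

def delete(key: str) -> str:
    """Remove a record, e.g. to reset state between test scenarios."""
    return "ok" if store.pop(key, None) is not None else "<missing>"

def scan(prefix: str = "") -> str:
    """List keys under a prefix, giving the agent insight into what data is there."""
    return json.dumps(sorted(k for k in store if k.startswith(prefix)))
```

With only these four tools, the agent can distinguish "the datastore holds wrong data" from "the query layer mangles correct data," which is exactly the debugging split described above.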
8note · 14 hours ago
It's very similar to the switch from a text editor + command line to having an IDE with a debugger. The AI gets to do two things:

- expose hidden state
- interact with the app, and see before/after/errors

It gives more time where the LLM can verify its own work without you needing to step in. It's also a bit more integration-test-y than unit-test-y.

If you were to add one MCP, make it Playwright or some similar browser-automation MCP. Very little has value-add over just being able to control a browser.
CPLX · 14 hours ago

I've been using Chrome DevTools MCP a lot for this purpose and have been very happy with it.
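For anyone wanting to try this, the wiring is typically a short `mcpServers` entry in the client's config. The entry below follows the common convention for npm-published MCP servers; the package name and keys should be checked against the server's own README rather than taken from here.

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```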
winrid · 15 hours ago
Many products provide MCP servers to connect LLMs. For example, I can have Claude examine things through my Ahrefs account without using the UI, etc.
8n4vidtmkvmk · 14 hours ago

That's also one of the things that worries me the most. What kind of data is being sent to these random endpoints? What if they go rogue or change their behavior? A static set of tools is safer and more reliable.
8note · 14 hours ago

MCP is generally a static set of tools, where auth is handled by deterministic code and not exposed to the agent. The agent sees tools as allowed or not by the harness/your MCP config.

For the most part, the same company that you're connecting to is providing the MCP, so your data isn't going to random places, but you can also just write your own. It's a fairly thin wrapper: a bit of code to call the remote service, and a bit of documentation of when/what/why to do so.
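The "auth handled by deterministic code" point can be made concrete: the tool builds the upstream request itself and injects credentials from its own environment, so the agent can supply a query but never sees or chooses the token. The endpoint, env var, and function name below are hypothetical, not any real VCS provider's API.

```python
import os
import urllib.parse
import urllib.request

# Sketch of the thin-wrapper pattern: credentials come from deterministic
# config (an env var here), never from the agent's tool-call arguments.

def build_issue_search(query: str) -> urllib.request.Request:
    """Construct the authenticated request an MCP tool would send upstream."""
    token = os.environ.get("VCS_API_TOKEN", "")  # auth from config, not from the agent
    url = "https://vcs.example.com/api/issues?q=" + urllib.parse.quote(query)
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

# An MCP tool like "search_issues" would call this, perform the request,
# and return a short summary of matching issues to the agent.
```

Because the wrapper, not the model, assembles the URL and headers, going "rogue" is limited to what the tool's fixed code can do with a query string.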