rco8786 18 hours ago:
I would much, much rather provide discrete APIs directly to the LLM via MCP than just tell it to hit the API and figure it out from the docs.
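The "discrete APIs" idea can be sketched as a single narrow tool definition advertised to the model, rather than a pointer to a whole REST API and its docs. Everything below is illustrative: the function, the schema, and the field names are hypothetical, not from any real MCP server.

```python
# Hypothetical sketch: one narrow, well-scoped tool instead of
# "here's the API, figure it out from the docs".

def get_order_status(order_id: str) -> dict:
    """Return the status of a single order (stubbed for illustration)."""
    # A real MCP server would call the backing API here.
    return {"order_id": order_id, "status": "shipped"}

# The tool description a client advertises to the model: one capability,
# with an explicit input schema, so the model never has to guess endpoints.
TOOL_SCHEMA = {
    "name": "get_order_status",
    "description": "Look up the shipping status of an order by ID.",
    "inputSchema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}
```

The design point is the narrowness: the model sees one named capability with a typed input, not a docs page to interpret.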
|
thunky a day ago:
> You don't need MCP

Depends on who the user is... A difference/advantage of MCP is that it can be completely server-side, which means an average person can "install" MCP tools into their desktop or web app by pointing it at a remote MCP server. This person doesn't want to install and manage skills files locally. And they definitely don't want to run Python scripts locally or run a sandboxed VM.
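As a sketch of what "pointing it to a remote MCP server" looks like in practice: many MCP clients accept a small config entry roughly like the one below. The server name, key layout, and URL are illustrative; the exact format varies by client.

```json
{
  "mcpServers": {
    "acme-tools": {
      "url": "https://mcp.example.com/mcp"
    }
  }
}
```

Nothing runs on the user's machine beyond the client itself; the tools live behind that URL.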
|
Yeroc a day ago:
That's going to be a lot less efficient, both context-wise and compute-wise, than using either a purpose-built MCP or a skill built around executing a script.
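A hedged sketch of the "skill built around executing a script" alternative, loosely following the SKILL.md convention of a short instruction file plus a bundled script. The directory name, frontmatter fields, and script path here are illustrative, not from any published skill.

```markdown
---
name: order-status
description: Look up an order's shipping status via scripts/check_order.py
---

To check an order, run:

    python scripts/check_order.py ORDER_ID

It prints a one-line JSON status. Only this file is loaded into context
up front; the script itself costs no tokens until it is executed.
```

That last point is the efficiency argument: the model's context holds a few lines of instructions, not a tool catalog or API docs.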
|
SV_BubbleTime a day ago:
Am I the only person left who is still impressed that we have a natural-language understanding system so good that its own tooling and additions are themselves natural language?
simonw a day ago:
I still can't believe we can tell a computer to "use Playwright Python to test this new feature page" and it will figure it out successfully most of the time!
qiller a day ago:
Impressive, but I can't believe we went from fixing bugs to coffee-grounds-divination prompt-guessing-and-tweaking when things don't actually go well /s
|
|