augusteo 4 hours ago

Curious about the MCP integration. Are people using this for production workloads or mostly experimentation?

mythz 4 hours ago

MCP support is available via the fast_mcp extension: https://llmspy.org/docs/mcp/fast_mcp

I use llms.py as a personal assistant, and MCP support is required to access tools that are only available via MCP servers.
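Under the hood, an MCP client connection looks roughly like this. A minimal sketch using the official mcp Python SDK; the server command and tool name are placeholders, not llms.py's actual code:

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Placeholder: launch any stdio-based MCP server (command/args are illustrative)
    server = StdioServerParameters(command="python", args=["my_mcp_server.py"])

    async def main():
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()          # MCP handshake
                tools = await session.list_tools()  # discover exposed tools
                print([t.name for t in tools.tools])
                # Call a tool by name with JSON arguments (tool name is hypothetical)
                result = await session.call_tool("generate_image", {"prompt": "a red fox"})
                print(result.content)

    asyncio.run(main())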

MCP is a great way to make features available to AI assistants. Here are a couple of servers I've created after enabling MCP support (a minimal server sketch follows the list):

- https://llmspy.org/docs/mcp/gemini_gen_mcp - Gives AI agents the ability to generate Nano Banana images or TTS audio

- https://llmspy.org/docs/mcp/omarchy_mcp - Manage Omarchy Desktop Themes with natural language
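For a sense of what these servers look like, here's a minimal sketch of an image-generation tool built with the mcp SDK's FastMCP helper. The tool name and the provider call are placeholders, not the actual gemini_gen_mcp implementation:

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("image-gen")  # server name advertised to clients

    def call_provider(prompt: str) -> bytes:
        # Stub so the sketch runs; swap in a real provider SDK call here.
        return b""

    @mcp.tool()
    def generate_image(prompt: str, out_path: str = "out.png") -> str:
        """Generate an image from a text prompt and save it to out_path."""
        image_bytes = call_provider(prompt)  # hypothetical provider call
        with open(out_path, "wb") as f:
            f.write(image_bytes)
        return out_path

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default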

I will say there's a noticeable delay in using MCP vs native tools, which is why I ended up porting Anthropic's Node filesystem MCP to Python [1] to speed up common AI assistant tasks. MCP servers aren't ideal for frequent access to small tasks, but they're great for long-running tasks like image/audio generation.

[1] https://github.com/ServiceStack/llms/blob/main/llms/extensio...
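The port is essentially about replacing the MCP round trip with plain in-process functions. A rough sketch of the idea (not the actual extension code; how the functions get registered as llms.py tools is specific to the extension):

    from pathlib import Path

    # In-process equivalents of common filesystem MCP tools: no subprocess,
    # no stdio handshake, just a direct function call per tool invocation.

    def read_file(path: str) -> str:
        """Return the contents of a text file."""
        return Path(path).read_text()

    def list_directory(path: str = ".") -> list[str]:
        """List entries in a directory."""
        return sorted(p.name for p in Path(path).iterdir())

    def write_file(path: str, content: str) -> str:
        """Write content to a file and return the path."""
        Path(path).write_text(content)
        return path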

storystarling an hour ago

Does the MCP implementation make it easy to swap out the underlying image provider? I've found Gemini is still a bit hit or miss for actual print-on-demand products compared to Midjourney. Since MJ still doesn't have a real API I've been routing requests to Flux via Replicate for higher quality automated flows. Curious if I could plug that in here without too much friction.
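For illustration, swapping providers inside such a tool is usually just a different SDK call. A hedged sketch of routing to Flux on Replicate (the model slug and how this would wire into llms.py are assumptions):

    import urllib.request

    import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set

    def generate_flux_image(prompt: str, out_path: str = "flux.png") -> str:
        """Generate an image with a Flux model on Replicate and save it locally."""
        output = replicate.run(
            "black-forest-labs/flux-schnell",  # assumed model slug
            input={"prompt": prompt},
        )
        first = output[0] if isinstance(output, (list, tuple)) else output
        # Newer replicate clients return FileOutput objects; older ones return URLs.
        data = first.read() if hasattr(first, "read") else urllib.request.urlopen(first).read()
        with open(out_path, "wb") as f:
            f.write(data)
        return out_path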