dannyw 12 hours ago

If you're interested, for Affinity the way we've built it is through exposing our scripting SDK via MCP. Agents like Claude can write scripts to execute actions, and these scripts can be saved and re-run later, as well as have their own UI.

It is a massive SDK though (thousands of functions; feel free to poke around with it; Affinity is free), and so it really shows the ability of LLMs to work effectively across long-horizon tasks with massive context windows.

Personally, I'm really interested in Blender though. I'm working on a game as a hobby/side project, and I'm very much a newbie who often struggles with learning and using Blender.

There are so many ways these integrations help humans & human creatives; your job and role shouldn't depend on how skilled you are at navigating a tool, or on whether you're technically savvy enough to code scripts to improve your workflow.

WillAdams 12 hours ago | parent | next [-]

The thing is, ages ago, I was told by the scripting evangelist at Adobe Systems that a certain process (adding sub-, sub-sub-, and sub-sub-sub-entries to an index entry) was impossible --- the problem was, my boss had already promised a script to do that to a client....

Turns out it is possible; one just has to have the script check whether each level of a given index entry exists or not, and if it does not yet exist, create it before making the next lower level by adding that sub-entry to the one above it.
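The approach described above can be sketched generically. This is a hypothetical illustration (a plain nested dict, not the actual InDesign scripting API): walk each level of a hierarchical index entry, creating any level that doesn't exist yet before descending to attach the next one.

```python
# Hypothetical sketch: ensure every level of a hierarchical index entry
# exists, creating missing levels top-down before adding sub-entries.

def ensure_index_entry(index, levels):
    """index: nested dict mapping entry text -> dict of sub-entries.
    levels: e.g. ["topic", "subtopic", "sub-subtopic"]."""
    node = index
    for level in levels:
        if level not in node:   # this level doesn't exist yet: create it
            node[level] = {}
        node = node[level]      # descend, then attach the next level here
    return index

idx = {}
ensure_index_entry(idx, ["animals", "mammals", "cats"])
# idx == {"animals": {"mammals": {"cats": {}}}}
```

The real script would call the application's create-entry API at each missing level instead of inserting a dict key, but the check-then-create loop is the same.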

An LLM is only going to code what it has documented as possible/working and may not be able to do what needs to be done.

dannyw 7 hours ago | parent [-]

So that was our assumption too while building it, but I'm genuinely surprised by how well frontier models can work with large and 'lightly-documented' SDKs.

I think a big part of it comes from deliberately exposing lowest-level atomic actions; not higher-level wrappers with use-case specific documentation. Instead, we supply very technical/'dry' documentation (inputs, action/effects, return values and types). We leave it to the developer (or the LLM) to write scripts that assemble these pieces together to solve problems.
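To make the distinction concrete, here's a hypothetical sketch (invented function names, not the real Affinity SDK): atomic actions documented only with inputs, effects, and return types, which a developer or an LLM then composes into a saved, re-runnable script.

```python
# Hypothetical atomic actions with "dry" documentation, plus a script
# assembled from them. Not the actual Affinity SDK surface.

def select_layer(doc, name):
    """Inputs: doc (dict), name (str). Effect: none.
    Returns: layer (dict) or None."""
    return next((l for l in doc["layers"] if l["name"] == name), None)

def set_opacity(layer, value):
    """Inputs: layer (dict), value (float, 0..1). Effect: mutates layer.
    Returns: None."""
    layer["opacity"] = value

# A higher-level workflow assembled from the atomic pieces; once saved,
# it can be re-run without further LLM involvement.
def fade_layer(doc, name, value):
    layer = select_layer(doc, name)
    if layer is not None:
        set_opacity(layer, value)

doc = {"layers": [{"name": "bg", "opacity": 1.0}]}
fade_layer(doc, "bg", 0.5)
```

The use-case-specific knowledge lives in the assembled script, not in the SDK documentation, which is the trade-off being described.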

If you try it with Cowork and Opus 4.7 (recommended), you'll probably see it try a few different technical approaches and iterate as it works toward the task. While that's less token-efficient, the benefit is flexibility and power, and once you have a solid script, you can save it and use it again and again without any token costs.

WillAdams 2 hours ago | parent [-]

Right, this interaction was not documented, so it would never have been found by an LLM --- or are you saying that a hallucination will match up with a lacuna in the documentation often enough to make up for errors otherwise?

gedy 12 hours ago | parent | prev [-]

Thanks for the info, this might be off-topic but does the SDK allow calling out to AI like Gemini/Nano Banana for generating fill areas, etc?