nickstinemates · 3 hours ago
Is this a case of doing it wrong, or do you think accuracy is good enough given the amount of context you often need to stuff it with?
kimixa · 3 hours ago
The systems I work on have enough weird custom APIs and internal interfaces that just getting the agent up to speed on them takes a good chunk of the context. I've spent a long time minimizing every input document where I can, keeping references compact and terse, and I still keep hitting similar issues. At this point I think the "success" of many AI coding agents is extremely sector dependent.

Going forward I'd love to experiment to see whether that's actually the problem, or just an easy explanation for failure. I'd like more control over context management than "slightly better models": being able to select, minimize, or compact the sections of context I think are relevant for the immediate task, choose what "depth" of detail each one needs, and drop the sections that aren't likely to be relevant from consideration entirely. Perhaps each chunk could even be cached to save processing. Who knows.
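A rough sketch of what those controls might look like. All names here (`Chunk`, `Depth`, `ContextManager`) are hypothetical, not any real agent's API: each chunk carries renderings at several depths, the caller picks a depth per chunk for the immediate task (omitted chunks are dropped), and rendered chunks are cached by content key.

```python
from dataclasses import dataclass, field
from enum import Enum
from hashlib import sha256


class Depth(Enum):
    SIGNATURES = 1   # just names and type signatures
    DOCS = 2         # plus doc comments
    FULL = 3         # full source


@dataclass
class Chunk:
    name: str
    renderings: dict  # Depth -> text at that level of detail

    def render(self, depth: Depth) -> str:
        return self.renderings[depth]


@dataclass
class ContextManager:
    chunks: dict = field(default_factory=dict)
    cache: dict = field(default_factory=dict)  # cache key -> rendered text

    def add(self, chunk: Chunk) -> None:
        self.chunks[chunk.name] = chunk

    def build(self, plan: dict) -> str:
        """plan maps chunk name -> Depth; chunks not in the plan are excluded."""
        parts = []
        for name, depth in plan.items():
            key = sha256(f"{name}:{depth}".encode()).hexdigest()
            if key not in self.cache:
                self.cache[key] = self.chunks[name].render(depth)
            parts.append(self.cache[key])
        return "\n\n".join(parts)
```

So a task touching the auth API could request it at `Depth.SIGNATURES` while everything else stays out of the prompt, and repeated builds reuse the cached renderings.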
romanovcode · 3 hours ago
In my case the Figma MCP returns ~300k tokens for a medium-sized section of a page. It would be great if the agent could read that and implement the Figma design directly; currently I have to split it up, which is annoying.
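The manual splitting step can be sketched as greedy packing under a token budget. This is a minimal illustration, not the Figma MCP's behavior: it assumes the design has already been broken into text sections and uses a crude ~4 characters/token estimate.

```python
def estimate_tokens(text: str) -> int:
    # crude heuristic: roughly 4 characters per token
    return max(1, len(text) // 4)


def split_sections(sections: list[str], budget: int) -> list[list[str]]:
    """Greedily pack design sections into batches that fit the token budget."""
    batches: list[list[str]] = []
    current: list[str] = []
    used = 0
    for s in sections:
        t = estimate_tokens(s)
        # start a new batch when this section would overflow the current one
        if current and used + t > budget:
            batches.append(current)
            current, used = [], 0
        current.append(s)
        used += t
    if current:
        batches.append(current)
    return batches
```

Each batch then becomes one agent request, which is exactly the annoyance described: the split has to happen outside the model instead of the agent consuming the whole section at once.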