bubblyworld · 5 days ago
I suspect (to use the language of the author) current LLMs have a bit of a "reasoning dead zone" when it comes to images. In my limited experience they struggle with anything more complex than "transcribe the text" or similarly basic tasks.

For example, I tried to build an automated QA agent with Claude Sonnet 3.5 to catch regressions in my frontend, using puppeteer to drive and screenshot a headless browser. It would look at an obviously broken frontend component and confidently proclaim it was working correctly, often making up a supporting argument too. I've had much more success passing the component's code and any console logs directly to the agent in text form.

My memory is a bit fuzzy, but I've seen another QA agent take a similar approach of structured text extraction rather than using images, so I suspect I'm not the only one finding image-based reasoning an issue. It could also be for cost reasons, though, so take that with a pinch of salt.
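
For illustration, a rough sketch of that structured-text approach (here using Playwright's Python API rather than the puppeteer setup described; the dev-server URL and component selector are hypothetical, and the resulting string would be fed to the agent as its prompt):

    from playwright.sync_api import sync_playwright

    def collect_component_evidence() -> str:
        """Gather rendered HTML plus console output for an LLM-based QA check."""
        console_logs = []
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            # Capture everything the page writes to the browser console.
            page.on("console", lambda msg: console_logs.append(f"[{msg.type}] {msg.text}"))
            page.goto("http://localhost:3000")            # hypothetical dev-server URL
            page.wait_for_load_state("networkidle")
            component_html = page.inner_html("#pricing-widget")  # hypothetical selector
            browser.close()

        # Hand the agent structured text instead of a screenshot.
        return (
            "Review this rendered component and its console output for regressions.\n\n"
            f"--- component HTML ---\n{component_html}\n\n"
            "--- console logs ---\n" + "\n".join(console_logs)
        )
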
ACCount37 · 5 days ago
LLM image frontends suck, and a lot of them suck big time. The naive approach of "use a pretrained encoder to massage the input pixels into a bag of soft tokens and paste those tokens into the context window" is good enough to get you a third of the way to humanlike vision performance - but struggles to go much further.

Claude's current vision implementation is also notoriously awful. Like, "a goddamn 4B Gemma 3 beats it" level of awful. For a lot of vision-heavy tasks, you'd be better off using literally anything else.
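
For reference, a schematic sketch of that naive wiring (a PyTorch-style illustration of the general frozen-encoder-plus-projector pattern, not any particular vendor's implementation; the encoder is assumed to return per-patch features):

    import torch
    import torch.nn as nn

    class NaiveVisionAdapter(nn.Module):
        """Frozen vision encoder -> linear projector -> image 'soft tokens'
        pasted in front of the text token embeddings."""

        def __init__(self, vision_encoder: nn.Module, vision_dim: int, lm_dim: int):
            super().__init__()
            self.vision_encoder = vision_encoder            # pretrained, typically frozen
            self.projector = nn.Linear(vision_dim, lm_dim)  # map patch features into LM space

        def forward(self, pixels: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
            # pixels: (batch, 3, H, W); text_embeds: (batch, seq_len, lm_dim)
            patch_feats = self.vision_encoder(pixels)       # (batch, n_patches, vision_dim)
            image_tokens = self.projector(patch_feats)      # (batch, n_patches, lm_dim)
            # The "bag of soft tokens" is simply concatenated into the context window.
            return torch.cat([image_tokens, text_embeds], dim=1)
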