albertzeyer 5 days ago

This sounds interesting.

I would really like to read a full research paper made out of this, which describes the method in more detail, gives some more examples, does more analysis on it, etc.

Btw, does this use LLMs at the pure text level? Why not images? Most of these patterns are easy to detect at the image level, but I assume they're much harder when presented as text.

> LLMs are PhD-level reasoners in math and science, yet they fail at children's puzzles. How is this possible?

I think this argument is a bit flawed. Yes, you can define AGI as being better than (average) humans at every possible task. But isn't that very arbitrary? Isn't it more reasonable to expect that different intelligent systems (including animals and humans) have different strengths, and unreasonable to expect one system to be better at everything? Maybe it makes more sense to define ASI that way, but even for ASI, if a system is already better at a majority of tasks (though not necessarily every task), I think that should already count. Being better at literally every possible task may just not be achievable: you could always design a task specifically tailored to human intelligence.

bubblyworld 5 days ago | parent [-]

I suspect (to use the language of the author) current LLMs have a bit of a "reasoning dead zone" when it comes to images. In my limited experience they struggle with anything more complex than "transcribe the text" or similarly basic tasks. For example, I tried to build an automated QA agent with Claude Sonnet 3.5 to catch regressions in my frontend (using Puppeteer to drive and screenshot a headless browser), and it would look at an obviously broken component and confidently proclaim it was working correctly, often making up a supporting argument too. I've had much more success passing the code for the component and any console logs directly to the agent in text form.
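The text-based alternative described above can be sketched roughly like this. This is a hypothetical illustration, not the actual agent: the function name and prompt layout are made up, and the component source and log line are toy examples.

```python
# Hypothetical sketch: give the agent text (component source + console logs)
# instead of a screenshot. All names here are illustrative.

def build_qa_prompt(component_source: str, console_logs: list[str]) -> str:
    """Assemble a text-only QA context for the agent."""
    logs = "\n".join(console_logs) or "(no console output)"
    return (
        "Review this frontend component for regressions.\n\n"
        "### Component source\n" + component_source + "\n\n"
        "### Console logs\n" + logs
    )

prompt = build_qa_prompt(
    "function Button() { return <button>{undefined.label}</button>; }",
    ["TypeError: Cannot read properties of undefined (reading 'label')"],
)
```

The point is that the failure (the TypeError) is explicit in the text the model sees, whereas in a screenshot the same bug is just a subtly wrong pixel layout the model has to notice.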

My memory is a bit fuzzy, but I've seen another QA agent that takes a similar approach of structured text extraction rather than using images. So I suspect I'm not the only one finding image-based reasoning an issue. Could also be for cost reasons though, so take that with a pinch of salt.

ACCount37 5 days ago | parent [-]

LLM image frontends suck, and a lot of them suck big time.

The naive approach of "use a pretrained encoder to massage the input pixels into a bag of soft tokens and paste those tokens into the context window" is good enough to get you a third of the way to humanlike vision performance - but struggles to go much further.
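The "bag of soft tokens" approach described above can be sketched in a few lines. This is a toy illustration only, assuming a fixed linear projection as a stand-in for the pretrained encoder; real vision frontends use a full ViT-style encoder, but the structural idea (image patches become embeddings that are concatenated with text token embeddings) is the same.

```python
import numpy as np

# Toy sketch of the naive vision-frontend approach: a frozen "encoder"
# maps image patches to soft tokens, which are pasted into the context
# alongside the text embeddings. Shapes and weights are illustrative.

rng = np.random.default_rng(0)
d_model = 64

# Fake 32x32 grayscale image split into 8x8 patches -> 16 patches of 64 pixels
image = rng.standard_normal((32, 32))
patches = image.reshape(4, 8, 4, 8).transpose(0, 2, 1, 3).reshape(16, 64)

# Stand-in for the pretrained encoder: a fixed projection to model width
W_enc = rng.standard_normal((64, d_model))
soft_tokens = patches @ W_enc                         # (16, d_model) image tokens

text_embeddings = rng.standard_normal((10, d_model))  # 10 text tokens

# "Paste" the soft tokens into the context window ahead of the text
context = np.concatenate([soft_tokens, text_embeddings], axis=0)
```

Everything downstream then treats the 16 image tokens like ordinary tokens, which is exactly why this approach hits a ceiling: the language model never sees the pixels, only whatever the frozen encoder chose to keep.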

Claude's current vision implementation is also notoriously awful. Like, "a goddamn 4B Gemma 3 beats it" level of awful. For a lot of vision-heavy tasks, you'd be better off using literally anything else.

bubblyworld 5 days ago | parent [-]

Wild, I found it hard to believe a 4B model could beat Sonnet 3.5 at anything, but at least on the vision arena (https://lmarena.ai/leaderboard/vision) Sonnet 3.5 sits at about the same Elo as the 27B Gemma (~1150), so it's plausible. I guess that says more about how bad vision LLMs are right now than anything else.