sebzim4500 3 days ago
Unless I'm misunderstanding what you are asking the model to do, Gemini 2.5 pro just passed this easily. https://g.co/gemini/share/e2876d310914
osigurdson 3 days ago
As I mentioned, this is not a scientific test but rather something I have tried from time to time and that has always (shockingly, in my opinion) failed until today, when it worked. It takes a minute or two of prompting, is boring to verify, and I don't remember exactly which models I have used. It is purely a personal anecdote, nothing more.

However, looking at the code that Gemini wrote in the link, it does the same thing other LLMs often do, which is to assume we are encoding individual long values. I assume there must be a GitHub repo or Stack Overflow question in the weights somewhere pushing it in this direction, but it is a little odd. Naturally, this isn't the kind of encoder someone would normally want. Typically it should encode a byte array and return a string (or maybe encode/decode UTF-8 strings directly); having the interface take a long is weird and not very useful. In any case, I suspect that with a bit more prompting you could get Gemini to do the right thing.
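For illustration, here is a minimal sketch of the interface shape osigurdson describes: an encoder that takes a byte array and returns a String, rather than operating on a single long. Base64 is used purely as an example encoding, since the thread does not name the exact scheme being tested, and the class and method names are hypothetical.

```java
import java.nio.charset.StandardCharsets;

// Sketch of a byte-array-in, String-out encoder (base64 chosen only as an
// illustrative encoding; the original comment does not specify the scheme).
public class ByteArrayEncoder {
    private static final char[] ALPHABET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
            .toCharArray();

    // Encode an arbitrary byte array: three input bytes -> four output chars.
    public static String encode(byte[] data) {
        StringBuilder out = new StringBuilder((data.length + 2) / 3 * 4);
        int i = 0;
        while (i + 3 <= data.length) {
            // Pack three bytes into a 24-bit group, then emit four 6-bit chars.
            int n = ((data[i] & 0xFF) << 16)
                  | ((data[i + 1] & 0xFF) << 8)
                  |  (data[i + 2] & 0xFF);
            out.append(ALPHABET[(n >>> 18) & 0x3F])
               .append(ALPHABET[(n >>> 12) & 0x3F])
               .append(ALPHABET[(n >>> 6) & 0x3F])
               .append(ALPHABET[n & 0x3F]);
            i += 3;
        }
        int rem = data.length - i;
        if (rem == 1) {
            // One trailing byte: two chars plus "==" padding.
            int n = (data[i] & 0xFF) << 16;
            out.append(ALPHABET[(n >>> 18) & 0x3F])
               .append(ALPHABET[(n >>> 12) & 0x3F])
               .append("==");
        } else if (rem == 2) {
            // Two trailing bytes: three chars plus "=" padding.
            int n = ((data[i] & 0xFF) << 16) | ((data[i + 1] & 0xFF) << 8);
            out.append(ALPHABET[(n >>> 18) & 0x3F])
               .append(ALPHABET[(n >>> 12) & 0x3F])
               .append(ALPHABET[(n >>> 6) & 0x3F])
               .append('=');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Usage per the interface described above: encode a UTF-8 string's bytes.
        System.out.println(encode("hello".getBytes(StandardCharsets.UTF_8))); // aGVsbG8=
    }
}
```

The point of the sketch is the signature, not the encoding itself: `encode(byte[]) -> String` handles arbitrary-length input, whereas a long-based interface caps the input at eight bytes and forces the caller to do the chunking.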
AaronAPU 3 days ago
I’ve been using Gemini 2.5 pro side by side with o1-pro and Grok lately. My experience is that each of them will occasionally offer a significant insight the other two missed. But generally, o1-pro follows my profile instructions WAY better, and it seems better at actually solving problems on the first try. More reliable. Still, they are all quite similar, and so far these new models feel much the same, just faster, IMO.