HPsquared 3 hours ago
I wonder to what extent the Google search LLM is getting smarter, or simply more up-to-date on current hot topics. | ||||||||
mlazowik 3 hours ago
It seems like the search AI results are generally misunderstood; I misunderstood them too for the first weeks/months. They are not just an LLM answer, they are an (often cached) LLM summary of web results. This is why they were often skewed by nonsensical Reddit responses [0]. Depending on the type of input, the result can lean more toward a web summary or a direct LLM answer. So I imagine it can just grab the description of the "car wash" test from web results and then get it right because of that.
PaulHoule 3 hours ago
Presumably it did an actual search and summarized the results, and neither answered "off the cuff" by following gradients to reproduce the text it was trained on, nor by following gradients to reproduce the "logic" of reasoning [1].

[1] e.g. trained on traces of a reasoning process
popalchemist 3 hours ago
It's almost certainly just RAG powered by their crawler.
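For readers unfamiliar with the pattern: RAG (retrieval-augmented generation) means the system first retrieves relevant documents, then asks the LLM to answer using only that retrieved text, rather than from its training data alone. A minimal sketch of the idea, with a toy keyword-overlap retriever and a fake corpus (nothing here reflects Google's actual pipeline):

```python
# Toy RAG sketch: retrieve snippets first, then ground the model's
# prompt in them. The retriever and corpus are purely illustrative.

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of doc words that appear in the query."""
    q = set(query.lower().split())
    return sum(1 for w in doc.lower().split() if w in q)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, snippets: list[str]) -> str:
    """Ask the LLM to summarize retrieved text instead of answering from memory."""
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Summarize the following search results to answer: {query}\n{context}"

corpus = [
    "The car wash test checks whether a model reuses a memorized riddle.",
    "Reddit thread: my cat walked through the car wash, ask me anything.",
    "Unrelated page about gradient descent.",
]
prompt = build_prompt("car wash test", retrieve("car wash test", corpus))
print(prompt)
```

A real system would use a web-scale index and embedding-based retrieval, but the shape is the same: the answer quality depends on what the retriever surfaces, which is why skewed Reddit results skewed the summaries.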