mlazowik 3 hours ago
It seems like the search AI results are generally misunderstood; I also misunderstood them for the first few weeks/months. They are not just an LLM answer: they are an (often cached) LLM summary of web results. This is why they were often skewed by nonsensical Reddit responses [0]. Depending on the type of input, it can lean more toward a web summary or an LLM answer. So I imagine it can just grab the description of the "car wash" test from web results and then get it right because of that.