jorvi · 2 hours ago
Current LLMs often produce much, much worse results than searching manually. If you need to search the internet on a topic that is full of unknown unknowns for you, they're a pretty decent way to get the lay of the land, but beyond that, off to Kagi (or Google) you go.

Even worse, the results are inconsistent. I can ask Gemini five times at what temperature I should take a waterfowl out of the oven and get five different answers, 10°C apart. You cannot trust answers from an LLM.
signatoremo · an hour ago
> I can ask Gemini five times at what temperature I should take a waterfowl out of the oven, and get five different answers, 10°C apart.

Are you sure? Both Gemini and ChatGPT gave me consistent answers 3 times in a row, even if the two models' answers differ slightly from each other. Their answers are in line with this version:
r0x1n1t3 · an hour ago
I created an account just to point out that this is simply not true. I just tried it! The answers were consistent across all 5 samples with both "Fast" mode and Pro. (Which model you used really matters if you're going to post comments like this; I thought it might be inconsistent with the Flash model, but it wasn't.)
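For anyone who wants to repeat this check mechanically, here's a minimal sketch in Python. Note that ask_model is a hypothetical placeholder for whichever client SDK you actually use, and the regex is a naive way to pull degree values out of free-form replies:

    import re
    from collections import Counter

    PROMPT = ("At what internal temperature should I take a duck "
              "out of the oven? Answer with a number in °C.")

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in for a real client call (Gemini SDK,
        # OpenAI SDK, etc.). Returns the model's reply text.
        raise NotImplementedError

    def extract_temps(text: str) -> list[int]:
        # Pull every 2-3 digit number followed by an optional degree
        # sign and a C or F unit out of the reply.
        return [int(n) for n in re.findall(r"(\d{2,3})\s*°?\s*[CF]", text)]

    replies = [ask_model(PROMPT) for _ in range(5)]
    temps = [t for r in replies for t in extract_temps(r)]
    print(Counter(temps))            # e.g. Counter({57: 5}) if consistent
    print(max(temps) - min(temps))   # spread across the five runs

If the spread is 0, the model is answering consistently for that prompt; a 10°C spread would reproduce the complaint above.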
chrysoprace · 2 hours ago
Using something like Perplexity as an aggregator typically gets me better results, because I can click through to the sources. It's not a perfect solution, though: it obviously takes discipline and intuition to actually check those sources rather than blindly trusting the summary.
12345hn6789 · 35 minutes ago
Did you actually ask the model this question, or are you fully strawmanning?