pram 4 days ago

It does citations (Grok and Claude etc. do too), but when I've read the sources for some claims (GitHub discussions and so on), the source sometimes has nothing to do with what the LLM said. I've wasted a lot of time trying to find the spot in a threaded conversation where the example was supposedly stated.

sarchertech 4 days ago | parent [-]

Same experience with Google search AI. The links frequently don't support the assertions; they just point to something that might show up in a Google search for the assertion.

For example, if I ask whether a feature exists in some library, the AI says yes and links to a forum thread where someone is asking the same question I did, but no one answered (this has happened multiple times).

Nemi 4 days ago | parent [-]

It is funny, Perplexity seems to work much better for this use case for me. When I want some sort of "conclusive answer", I use Gemini Pro (just what I have available). It is good at coding, formulating thoughts, rewriting text, and so on.

But when I want to actually search the web for, say, product research or opinions on a topic, Perplexity is so much better than either Gemini or Google search AI. It lists reference links for each block of assertions that are EASILY clicked on, unlike Gemini or search AI, where the references are harder to click on for some reason, not the least of which is that they OPEN IN THE SAME TAB, whereas Perplexity always opens them in a new tab. This is often a Reddit-specific search, since I want people's opinions on something.

Perplexity's search UI is the one thing it does just so much better than Google's offering. I think there is some irony there.

Full disclosure: I don't use Anthropic or OpenAI, so this may not be the case for those products.