schnitzelstoat 6 hours ago

It's quite rare that it gives a wrong answer nowadays, even more so if you ask it to search the internet.

But yeah, it's not infallible: sometimes even when it gives you a source it will incorrectly summarise it. You can double-check the information in the source itself, though.

It just makes it a lot quicker than having to go and find the right Wikipedia article or dig through lots of documentation, just like Wikipedia and online docs made things easier than going to the library or leafing through a 500-page manual.

Gigachad 6 hours ago | parent [-]

Only if you are asking surface-level questions. Certain topics also seem to be worse than others. When asked how to do things in software GUIs, modern LLMs seem to have a high rate of making up features or paths to reach them. When asked for advice in games, I've seen an extremely high rate of hallucinations. Asking why something is broken in my codebase has about a 95% hallucination rate.

If you are just asking basic science questions or about phone reviews, then it's pretty reliable.

schnitzelstoat 5 hours ago | parent | next [-]

I've used it for languages and studying some reinforcement learning stuff, including examples in PyTorch. I haven't had many problems with it really.
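(For context on the kind of reinforcement-learning material being studied: the core quantity most introductory RL examples compute, whether in PyTorch or plain Python, is the discounted return. A minimal pure-Python sketch, with a hypothetical function name, not taken from any specific tutorial:)

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute the discounted return G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
    for every timestep, by accumulating backwards over the episode's rewards."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g  # fold the future return into the current step
        returns.append(g)
    returns.reverse()  # restore chronological order
    return returns

# Example: three steps of reward 1 with gamma=0.5
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```

In a policy-gradient example these returns would then be used to weight the log-probabilities of the actions taken.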

Once when I asked it some questions about a strategy game (Shadow Empire) it got them wrong, but the sources it cited had the correct information.

knowaveragejoe 4 hours ago | parent | prev [-]

> Only if you are asking surface level questions.

I find it pretty accurate well beyond that level. How much of that is actually a problem in K-12 education?