Gigachad 6 hours ago
Only if you are asking surface-level questions. Certain topics also seem to be worse than others. When asking how to do things in software GUIs, modern LLMs seem to have a high rate of making up features or paths to reach them. When asking for advice in games, I've seen an extremely high rate of hallucinations. Asking why something is broken in my codebase has about a 95% hallucination rate. If you are just asking basic science questions or about phone reviews, then it's pretty reliable.
schnitzelstoat 5 hours ago
I've used it for languages and for studying some reinforcement learning material, including examples in PyTorch, and I haven't had many problems with it. Once, when I asked it some questions about a strategy game (Shadow Empire), it got them wrong, but the sources it cited had the correct information.
knowaveragejoe 4 hours ago
> Only if you are asking surface-level questions.

I find it pretty accurate well beyond that level. How much of that is actually a problem in K-12 education?