| ▲ | schnitzelstoat 7 hours ago |
| I think it's better to use books and not have so many distractions in the classroom. But equally it's really helpful to be able to ask ChatGPT or whatever for a different explanation when you get stuck - though that is probably better done at home when studying the homework. It stops you getting frustrated and helps keep you making progress and in the 'flow state'. I guess a big problem for schools now will be how to get kids to use AI to help them learn rather than simply getting it to do their homework so they can go and play video games or whatever. I know if I'd had it as a kid I would've been tempted to do the latter. |
|
| ▲ | tgv 5 hours ago | parent | next [-] |
| Why do you think children will learn anything from a remark on a specific problem? If it were that simple, teaching would be easy. (Notice that teaching smart kids is easy.) Much of education requires making errors until you get it right a few times in a row, and paying attention to those errors. Getting an explanation of your errors is only part of that process. No LLM can provide the rest of it. |
|
| ▲ | nalekberov 7 hours ago | parent | prev | next [-] |
| > But equally it's really helpful to be able to ask ChatGPT or whatever for a different explanation when you get stuck - but that is probably better done at home when studying the homework. It stops you getting frustrated and helps keep you making progress and in the 'flow state'. Yeah sure, then get a (sometimes) wrong answer with high confidence and believe it? |
| |
| ▲ | schnitzelstoat 7 hours ago | parent [-] | It's quite rare that it gives a wrong answer nowadays, even more so if you ask it to use the internet. But yeah, it's not infallible, and sometimes even when it gives you a source it will incorrectly summarise it - but you can double-check the information in the source itself. It just makes it a lot easier to do quickly, rather than having to go and find the right Wikipedia article or dig through lots of documentation. Just like Wikipedia and online docs made things easier than having to go to the library or leaf through a 500-page manual. |
| ▲ | Gigachad 6 hours ago | parent [-] | Only if you are asking surface-level questions. There are also certain topics that seem to be worse than others. For asking how to do things in software GUIs, modern LLMs seem to have a high rate of making up features or the paths to reach them. For asking advice in games, I've seen an extremely high rate of hallucinations. Asking why something is broken in my codebase has about a 95% hallucination rate. If you are just asking basic science questions or about phone reviews, then it's pretty reliable. |
| ▲ | schnitzelstoat 5 hours ago | parent | next [-] | I've used it for languages and for studying some reinforcement learning stuff, including examples in PyTorch. I haven't had many problems with it, really. Once, when I asked it some questions about a strategy game (Shadow Empire), it got them wrong, but the sources it cited had the correct information. |
| ▲ | knowaveragejoe 4 hours ago | parent | prev [-] | | > Only if you are asking surface level questions. I find it pretty accurate well beyond that level. How much of that is actually a problem in K-12 education? |
|
| ▲ | rimliu 6 hours ago | parent | prev [-] |
| Using AI is one of the worst ideas for education. |