subhobroto 9 hours ago

> I find, as a parent, when I talk about it at the high school level I get very negative reactions from other parents. Specifically I want high schoolers to be skilled in the use of AI, and particular critical thinking skills around the tools, while simultaneously having skills assuming no AI. I don’t want the school to be blindly “anti AI” as I’m aware it will be a part of the economy our kids are brought into.

This is my exact experience as well and I find it frustrating.

If current technology is creating an issue for teachers, it's the teachers who need to pivot, not block the technology so they can keep doing what they're comfortable with.

Society typically cares about work getting done, not much about how it got done. For some reason, teachers are so deep in the weeds of the "how" that they seem to forget the obvious: if the way to mend roads since 1926 has been to learn how to measure out, mix, and lay asphalt patches by hand, then in 2026, when there are robots that do that perfectly every time, teachers should be training humans to complement those robots or to do something else entirely.

It's possible that, in the past, learning to use an abacus was a critical lesson. But once calculators were invented, do we continue with two semesters of abacus? Do we allow calculators into the abacus course? Should the abacus course be scrapped? Would it be a net positive for society to replace the abacus course with something else?

"AI" is changing society fundamentally and forever, and education needs to change fundamentally with it. I'm personally betting that humans in the future, outside of extreme niches, will be generalists augmented by specialist agents.

netsharc 9 hours ago | parent | next

I'm also in favor of education for AI awareness. A big part of teaching kids about AI should be how unreliable it can be.

I had a discussion with a recruiter on Friday, and I said I guess the issue with AI vs. humans is this: if you give tasks to a human developer who is new to your company, the first few times you'll check their work carefully to make sure the quality is good. After a while you can trust that they'll do a good job, and you can relax. With AI, you can never be sure at any point. Of course a human can also misunderstand the task and "hallucinate," but discussing the problem and the fix before they start coding can alleviate that. You can discuss with an AI as much as you want, but to me, not checking the output would be an insane move...

To return to the point: yeah, people will use AI anyway, so why not teach them about the risks? Also, LLMs feel like Concorde: they'll get you where you want to go very quickly, but at tremendous environmental cost (and they're very costly to the wallet, although the companies are currently subsidizing part of your use in the hopes of getting you hooked)...

cheevly 5 hours ago | parent

Only if you naively throw AI at it carelessly. It sounds like you haven't mastered the basics like fine-tuning, semantic vector routing, agentic skills/tooling generation... dozens of other solutions that robustly solve for your claim.

netsharc 5 hours ago | parent

Gosh, I really should attend LinkedIn University of Buzzwords...

cheevly 3 hours ago | parent

Yes, just buzzwords, totally no backing behind any of this. Your original comment makes so much more sense now.

QuadmasterXLII 5 hours ago | parent | prev

1. Everything you learn about math is completely obsoleted by AI five years from now.

2. Everything you learn about working with chatbots is completely obsoleted by AI five years from now.

Both are possible, but (2) is pretty much guaranteed if we get (1). So learning to chat with Opus is almost always less useful than learning derivatives by hand, unless you're starting job applications within a few months.