| ▲ | English professors double down on requiring printed copies of readings(yaledailynews.com) |
| 113 points by cmsefton 10 hours ago | 149 comments |
| |
|
| ▲ | recursivedoubts 9 hours ago | parent | next [-] |
I have mentioned this in a few comments: for my CS classes I have gone from a historical 60-80% projects / 40-20% quizzes grade split, to a 50/50 split, and have moved my quizzes from being online to being in-person, pen-on-paper, with one sheet of hand-written notes. Rather than banning AI, I'm showing students how to use it effectively as a personalized TA. I'm giving them this AGENTS.md file: https://gist.github.com/1cg/a6c6f2276a1fe5ee172282580a44a7ac And showing them how to use AI to summarize the slides into a quiz review sheet, generate example questions with answer walk-throughs, etc. Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves: the projects are designed to draw them into the art of programming and give them decent, real-world coding experience that they will need, even if they end up working at a higher level in the future. AI can be a very effective tool for education if used properly. I have used it to create a ton of extremely useful visualizations (e.g. how two's complement works) that I wouldn't have otherwise. But it is obviously extremely dangerous as well. "It is impossible to design a system so perfect that no one needs to be good." |
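(Not the professor's actual visualization, but for readers unfamiliar with the example, a minimal sketch of what two's complement does, in Python:)

```python
# Two's complement: an n-bit pattern is read as signed by giving the top
# bit the weight -(2**(n-1)) instead of +(2**(n-1)).

def twos_complement(value: int, bits: int = 8) -> int:
    """Return the signed interpretation of an unsigned n-bit pattern."""
    if value & (1 << (bits - 1)):      # sign bit set?
        return value - (1 << bits)     # subtract 2**bits
    return value

print(twos_complement(0b11111111))  # 255 unsigned -> -1 signed
print(twos_complement(0b10000000))  # 128 unsigned -> -128 signed
print(twos_complement(0b01111111))  # 127 unsigned -> 127 signed
```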
| |
| ▲ | j_french 3 hours ago | parent | next [-] | | I had planned to move towards projects counting towards the majority of my CS class grades until ChatGPT was released; now I've stuck with a 50/50 split. This year I said they were free to use AI all they liked (as if I can do anything about it anyway), then ran interviews with the students about their project work, asking them to explain how it works etc. Took a lot of time with a class of 60 students, but worked pretty well, plus they got some experience developing the important skill of communicating technical ideas. Would like to give them some guidance on how to get AI to help prepare them for their interviews next year, will definitely take a look at your AGENTS.md approach. What's your student feedback on it been like? | | |
| ▲ | alexpotato an hour ago | parent [-] | | > Then ran interviews with the students about their project work, asking them to explain how it works etc. Took a lot of time with a class of 60 students, but worked pretty well, plus they got some experience developing the important skill of communicating technical ideas. This is amazing and I wish professors had done this back when I did CS in the late 1990s. |
| |
| ▲ | bbor 8 hours ago | parent | prev | next [-] | | You seem like a great professor(/“junior baby mini instructor who no one should respect”, knowing American academic titles…). Though as someone who's been on the other end of the podium a bit more recently, I will point out the maybe-obvious: Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves
This is the right thing to say, but even the ones who want to listen can get into bad habits in response to intense schedules. When push comes to shove and Multivariate Calculus exam prep needs to happen but you’re stuck debugging frustrating pointer issues for your Data Structures project late into the night… well, I certainly would’ve caved far too much for my own good. IMO the natural fix is to expand your trusting, “this is for you” approach to the broader undergrad experience, but I can’t imagine how frustrating it is to be trying to adapt while admin & senior professors refuse to reconsider the race for a “””prestigious””” place in a meta-rat race… For now, I guess I’d just recommend you try to think of ways to relax things and separate project completion from diligence/time management — in terms of vibes if not a 100% mark. Some unsolicited advice from a rando who thinks you’re doing great already :) | | |
| ▲ | SoftTalker 3 hours ago | parent | next [-] | | > When push comes to shove and Multivariate Calculus exam prep needs to happen but you’re stuck debugging frustrating pointer issues for your Data Structures project late into the night… Millions of students prior to the last few years figured out how to manage conflicting class requirements. | | |
| ▲ | hackyhacky 32 minutes ago | parent [-] | | > Millions of students prior to the last few years figured out how to manage conflicting class requirements. Sure, and they also didn't have an omniscient entity capable of doing all of their work for them in a minute. The point of the GP comment, in my reading, is that the temptation is too great. |
| |
| ▲ | recursivedoubts 8 hours ago | parent | prev | next [-] | | Yes, I expect that pressure will be there, and project grades will be near 100% going forward, whether the student did the work or not. This is why I'm going to in-person written quizzes to differentiate between the students who know the material and those who are just using AI to get through it. I do seven quizzes during the semester so each one is on relatively recent material and they aren't weighted too heavily. I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz. I hated the high-pressure midterms/finals of my undergrad, so I'm trying to remove that for them. | | |
| ▲ | WalterBright 7 hours ago | parent [-] | | > I hated the high-pressure midterms/finals of my undergrad The pressure was what got me to do the necessary work. Auditing classes never worked for me. > I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz. Isn't that what the lectures and homework are for? |
| |
| ▲ | analog31 7 hours ago | parent | prev [-] | | The irony is that on-time completion is probably the #1 source of project failure in the real world. |
| |
| ▲ | softwaredoug 9 hours ago | parent | prev | next [-] | | Do you find advocating for AI literacy to be controversial amongst peers? I find, as a parent, when I talk about it at the high school level I get very negative reactions from other parents. Specifically I want high schoolers to be skilled in the use of AI, and in particular critical-thinking skills around the tools, while simultaneously having skills assuming no AI. I don’t want the school to be blindly “anti AI” as I’m aware it will be a part of the economy our kids are brought into. There are some head-in-the-sand, very emotional attitudes about this stuff. (And obviously idiotically uncritical pro-AI stances, but I doubt educators risk having those stances) | | |
| ▲ | recursivedoubts 8 hours ago | parent | next [-] | | AI is extremely dangerous for students and needs to be used intentionally, so I don't blame people for just going to "ban it" when it comes to their kids. Our university is slowly stumbling towards "AI Literacy" being a skill we teach, but, frankly, most faculty here don't have the expertise and students often understand the tools better than teachers. I think there will be a painful adjustment period, I am trying to make it as painless as possible for my students (and sharing my approach and experience with my department) but I am just a lowly instructor. | | |
| ▲ | softwaredoug 8 hours ago | parent [-] | | Honestly, defining what to teach is hard. People need to learn to do research with LLMs, code with LLMs, how to evaluate artifacts created by AI. They need to learn how agents work at a high level, the limitations on context, that they hallucinate and become sycophantic. How they need guardrails and strict feedback mechanisms if let loose. AI safety when connecting to external systems, etc. You're right that few high school educators would have any sense of all that. | | |
| ▲ | ndriscoll an hour ago | parent | next [-] | | The sycophancy is an artifact of how they RLHF train the popular chat models to appeal to normies, not fundamental to the tool. I can't remember encountering it at all since I've started using codex, and in fact it regularly fills in gaps in my knowledge/corrects areas that I misunderstand. The professional tool has a vastly more professional demeanor. None of the "that's the key insight!" crap. | |
| ▲ | WalterBright 7 hours ago | parent | prev [-] | | I don't know anyone who learned arithmetic from a calculator. I do know people who would get egregiously wrong answers from misusing a calculator and insisted it couldn't be wrong. | | |
| ▲ | softwaredoug 7 hours ago | parent [-] | | Yes but I was also taught to use a calculator, and particular the advanced graphing calculators. Not to mention programming is a meta skill on top of “calculators” |
|
|
| |
| ▲ | libraryofbabel 8 hours ago | parent | prev | next [-] | | Not OP, but I would imagine (or hope) that this attitude is far less common amongst peer CS educators. It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, both in industry and academia. The best-positioned students will be the ones who can operate these tools effectively but with a critical mindset, while also being able to do without AI as needed (which of course makes them better at directing AI when they do engage it). That said I agree with all your points too: some version of this argument will apply to most white-collar jobs now. I just think this is less clear to the general population and it’s much more of a touchy emotional subject, in certain circles. Although I suppose there may be a point to be made about being slightly more cautious about introducing AI at the high school level, versus college. | | |
| ▲ | hackyhacky 28 minutes ago | parent | next [-] | | > It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, That's true, but you can't use AI in coding effectively if you don't know how to code. The risk is that students will complete an undergraduate CS degree, become very proficient in using AI, but won't know how to write a for loop on their own. Which means they'll be helpless to interpret AI's output or to jump in when the AI produces suboptimal results. My take: learning to use AI is not hard. They can do that on their own. Learning programming is hard, and relying on AI will only make it harder. | |
| ▲ | danaris 8 hours ago | parent | prev [-] | | > It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, both in industry and academia. No, it's not. Nothing around AI past the next few months to a year is clear right now. It's very, very possible that within the next year or two, the bottom falls out of the market for mainstream/commercial LLM services, and then all the Copilot and Claude Code and similar services are going to dry up and blow away. Naturally, that doesn't mean that no one will be using LLMs for coding, given the number of people who have reported their productivity increasing—but it means there won't be a guarantee that, for instance, VS Code will have a first-party integrated solution for it, and that's a must-have for many larger coding shops. None of that is certain, of course! That's the whole point: we don't know what's coming. | | |
| ▲ | verdverm 7 hours ago | parent | next [-] | | It is clear that AI has already transformed how we do our jobs in CS The genie is out of the bottle, never going back It's a fantasy to think it will "dry up" and go away Some other guarantees over the next few years we can make based on history: AI will get better, faster, and more efficient like everything else in CS | | |
| ▲ | tartoran 3 hours ago | parent | next [-] | | Yes, the genie is out of the bottle but could get right back in when it starts costing more, a whole lot more. I'm sure there's an amount of money for a monthly subscription at which you'd either scale back your use or consider other alternatives. LLMs as technology are indeed out of the bottle and here to stay, but the current business around them is not quite clear. | | |
| ▲ | verdverm 2 hours ago | parent [-] | | I've pondered that point, using my monthly car payment and usage as a barometer. I currently spend 5% on AI compared to my car, and I get far more value out of AI |
| |
| ▲ | oblio 7 hours ago | parent | prev | next [-] | | Yeah, like Windows in 2026 is better than Windows in 2010, Gmail in 2026 is better than Gmail in 2010, the average website in 2026 is better than in 2015, Uber is better in 2026 than in 2015, etc. Plenty of tech becomes exploitative (or more exploitative). I don't know if you noticed but 80% of LLM improvements are actually procedural now: it's the software around them improving, not the core LLMs. Plus LLMs have huge potential for being exploitative. 10x what Google Search could do for ads. | | |
| ▲ | verdverm 5 hours ago | parent [-] | | You're crossing products with technology, also some cherry-picking of personal perspectives. I personally think GSuite is much better today than it was a decade ago, but that is separate. The underlying hardware has improved, the network, the security, the provenance. Specific to LLMs: 1. we have seen rapid improvements and there are a ton more you can see in the research that will impact the next round of the model train/release cycle. Both algorithms and hardware are improving 2. Open-weight models are within spitting distance of the frontier. Within 2 years, smaller and open models will be capable of what the frontier is doing today. This has huge democratization potential I'd rather see AI as an opportunity to break the Oligarchy and the corporate hold over the people. I'm working hard to make it a reality (also working on atproto) | | |
| ▲ | oblio 5 hours ago | parent [-] | | Every time I hear "democratization" from a techbro I keep thinking that the end state is technofeudalism. We can't fix social problems with technological solutions. Every scalable solution takes us closer to Extremistan, which is inherently anti democratic. Read the Black Swan by Taleb. | | |
| ▲ | verdverm 2 hours ago | parent [-] | | Jumping from someone using a word to assigning a pejorative label to them is by definition a form of bigotry. Democratization, the way I'm using it without all the bias, is simply most people having access to build with a tool or a technology. Would you also argue everyone having access to the printing press is a bad thing? The internet? Right to repair? Right to compute? Why should we consider AI access differently? |
|
|
| |
| ▲ | danaris 7 hours ago | parent | prev [-] | | OK? Prove it. Show me actual studies that clearly demonstrate that not only does using an LLM code assistant help developers write code faster in the short term, it doesn't waste all that extra benefit by being that much harder to maintain in the long term. | |
| ▲ | jjav 5 hours ago | parent | next [-] | | No such studies can exist since AI coding has not been around for a long term. Clearly AI is much faster and good enough to create new one-off bits of code. Like I tend to create small helper scripts for all kinds of things both at work and home all the time. Typically these would take me 2-4 hours and aside from a few tweaks early on, they receive no maintenance as they just do some one simple thing. Now with AI coding these take me just a few minutes, done. But I believe this is the optimal productivity sweet spot for AI coding, as no maintenance is needed. I've also been running a couple experiments vibe-coding larger apps over the span of months and while initial ramp-up is very fast, productivity starts to drop off after a few weeks as the code becomes more complex and ever more full of special case exceptions that a human wouldn't have done that way. So I spend more and more time correcting behavior and writing test cases to root out insanity in the code. How will this go for code bases which need to continuously evolve and mature over many years and decades? I guess we'll see. | |
| ▲ | shiroiuma an hour ago | parent | prev | next [-] | | >it doesn't waste all that extra benefit by being that much harder to maintain in the long term. If AI just generates piles of unmaintainable code, this isn't going to be any worse than most of the professionally-written (by humans) code I've had to work with over my career. In my experience, readable and maintainable code is unfortunately rather uncommon. | |
| ▲ | verdverm 6 hours ago | parent | prev [-] | | I'll be frank, tried this with a few other people recently and they 1. Open this line of debate similar to you (i.e. the way you ask, the tone you use) 2. Were not interested in actual debate 3. Moved the goalposts repeatedly Based on past experience entertaining inquisitors, I will not be this time. | | |
| ▲ | libraryofbabel 5 hours ago | parent [-] | | Yeah. At this point, at the start of 2026, people that are taking these sorts of positions with this sort of tone tend to have their identity wrapped up in wanting AI to fail or go away. That’s not conducive to a reasoned discussion. There are a whole range of interesting questions here that it’s possible to have a nuanced discussion about, without falling into AI hype and while maintaining a skeptical attitude. But you have to do it from a place of curiosity rather than starting with hatred of the technology and wishing for it to be somehow proved useless and fade away. Because that’s not going to happen now, even if the current investment bubble pops. | | |
| ▲ | verdverm 5 hours ago | parent [-] | | wholehearted agreement If anything, I see this moment as one where we can unshackle ourselves from the oligarchs and corporate overlords. The two technologies are AI and ATProto; I work on both now to give sovereignty back to we the people | | |
| ▲ | somebehemoth 3 hours ago | parent [-] | | > I see this moment as one where we can unshackle ourselves from the oligarchs and corporate overlords. For me, modern AI appears to be controlled entirely by oligarchs and corporate overlords already. Some of them are the same who already shackled us. This time will not be different, in my opinion. I like your optimism. |
|
|
|
|
| |
| ▲ | cirrusfan 6 hours ago | parent | prev | next [-] | | I get a slow-but-usable ~10 tok/s on a 2-bit-ish quant of kimi 2.5 on a high-end-gaming slash low-end-workstation desktop (RTX 4090, 256 GB RAM, Ryzen 7950). Right now the price of RAM is silly, but when I built it it was similar in price to a high-end MacBook — which is to say it isn’t cheap but it’s available to just about everybody in western countries. The quality is of course worse than what the bleeding-edge labs offer, especially since heavy quants are particularly bad for coding, but it is good enough for many tasks: an intelligent duck that helps with planning, generating bog-standard boilerplate, google-less interactive search/stackoverflow ("I ran flamegraph and X is an issue, what are my options here?” etc). My point is, I can get a somewhat-useful AI model running at slow-but-usable speed on a random desktop I had lying around since 2024. Barring nuclear war there’s just no way that AI won’t be at least _somewhat_ beneficial to the average dev. All the AI companies could vanish tomorrow and you’d still have a bunch of inference-as-a-service shops appearing in places where electricity is borderline free, like Straya when the sun is out. | | |
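(For a rough feel of what ~10 tok/s means in practice, a back-of-the-envelope estimate; the response length here is a made-up assumption, not from the comment:)

```python
# Rough latency estimate for local inference at a fixed generation speed.
tokens_per_second = 10    # the local throughput reported above
response_tokens = 600     # hypothetical medium-length answer

wait_seconds = response_tokens / tokens_per_second
print(f"~{wait_seconds:.0f} s to generate a {response_tokens}-token answer")
```

Slow for interactive chat, but fine for a "rubber duck" you leave running while you think.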
| ▲ | danaris 5 hours ago | parent [-] | | Then you're missing my point. Yes, you, a hobbyist, can make that work, and keep being useful for the foreseeable future. I don't doubt that. But either a majority or large plurality of programmers work in some kind of large institution where they don't have full control over the tools they use. Some percentage of those will never even be allowed to use LLM coding tools, because they're not working in tech and their bosses are in the portion of the non-tech public that thinks "AI" is scary, rather than the portion that thinks it's magic. (Or, their bosses have actually done some research, and don't want to risk handing their internal code over to LLMs to train on—whether they're actually doing that now or not, the chances that they won't in future approach nil.) And even those who might not be outright forbidden to use such tools for specific reasons like the above will never be able to get authorization to use them on their company workstations, because they're not approved tools, because they require a subscription the company won't pay for, because etc etc. So saying that clearly coding with LLM assistance is the future and it would be irresponsible not to teach current CS students how to code like that is patently false. It is a possible future, but the volatility in the AI space right now is much, much too high to be able to predict just what the future will bring. | | |
| ▲ | blackcatsec 4 hours ago | parent [-] | | I never understand anyone's push to throw around AI slop coding everywhere. Do they think in the back of their heads that this means coding jobs are going to come back on-shore? Because AI is going to make up for the savings? No, what it means is tech bro CEOs are going to replace you even more and replace at least a portion of the off-shore folks that they're paying. The promise of AI is a capitalist's dream, which is why it's being pushed so much. Do more with less investment. But the reality of AI coding is significantly more nuanced, and particularly more nuanced in spaces outside of the SRE/devops space. I highly doubt you could realistically use AI to code the majority of significant software products (like, say, an entire operating system). You might be able to use AI to add additional functionality you otherwise couldn't have, but that's not really what the capitalists desire. Not to mention, the models have to be continually trained, otherwise the knowledge is going to be dead. Is AI as useful for Rust as it is for Python? Doubtful. What about the programming languages created 10-15 years from now? What about when everyone starts hoarding their information away from the prying eyes of AI scraper bots to keep competitive knowledge in-house? Both from a user perspective and a business perspective? Lots of variability here that literally nobody has any idea how any of it's going to go. |
|
| |
| ▲ | libraryofbabel 8 hours ago | parent | prev [-] | | I agree with you that everything is changing and that we don’t know what’s coming, but I think you really have to stretch things to imagine that it’s a likely scenario that AI-assisted coding will “dry up and blow away.” You’ll need to elaborate on that, because I don’t think it’s likely even if the AI investment bubble pops. Remember that inference is not really that expensive. Or do you think that things shift on the demand side somehow? | | |
| ▲ | saltcured 6 hours ago | parent | next [-] | | I think the "genie" that is out of the bottle is that there is no broad, deeply technical class who can resist the allure of the AI agent. A technical focus does not seem to provide immunity. In spite of obvious contradictory signals about quality, we embrace the magical thinking that these tools operate in a realm of ontology and logic. We disregard the null hypothesis, in which they are more mad-libbing plagiarism machines which we've deployed against our own minds. Put more tritely: We have met the Genie, and the Genie is Us. The LLM is just another wish fulfilled with calamitous second-order effects. Though enjoyable as fiction, I can't really picture a Butlerian Jihad where humanity attempts some religious purge of AI methods. It's easier for me to imagine the opposite, where the majority purges the heretics who would question their saints of reduced effort. So, I don't see LLMs going away unless you believe we're in some kind of Peak Compute transition, which is pretty catastrophic thinking. I.e. some kind of techno/industrial/societal collapse where the state of the art stops moving forward and instead retreats. I suppose someone could believe in that outcome, if they lean hard into the idea that the continued use of LLMs will incapacitate us? Even if LLM/AI concepts plateau, I tend to think we'll somehow continue with hardware scaling. That means they will become commoditized and able to run locally on consumer-level equipment. In the long run, it won't require a financial bubble or dedicated powerplants to run, nor be limited to priests in high towers. It will be pervasive like wireless ear buds or microwave ovens, rather than an embodiment of capital investment. The pragmatic way I see LLMs _not_ sticking around is where AI researchers figure out some better approach. Then, LLMs would simply be left behind as historical curiosities. | | |
| ▲ | danaris 5 hours ago | parent [-] | | The first half of your post, I broadly agree with. The last part...I'm not sure. The idea that we will be able to compute-scale our way out of practically anything is so much taken for granted these days that many people seem to have lost sight of the fact that we have genuinely hit diminishing returns—first in the general-purpose computing scaling (end of Moore's Law, etc), and more recently in the ability to scale LLMs. There is no longer a guarantee that we can improve the performance of training, at the very least, for the larger models by more than a few percent, no matter how much new tech we throw at it. At least until we hit another major breakthrough (either hardware or software), and by their very nature those cannot be counted on. Even if we can squeeze out a few more percent—or a few more tens of percent—of optimizations on training and inference, to the best of my understanding, that's going to be orders of magnitude too little yet to allow for running the full-size major models on consumer-level equipment. | | |
| ▲ | cheevly 4 hours ago | parent [-] | | This is so objectively false. Sometimes I can’t believe im even on HN anymore with the level of confidently incorrect assertions made. | | |
| ▲ | danaris 3 hours ago | parent [-] | | You, uh, wanna actually back that accusation up with some data there, chief? | | |
| ▲ | cheevly 2 hours ago | parent [-] | | Compare models from one year ago (GPT-4o?) to models from this year (Opus 4.5?). There are literally hundreds of benchmarks and metrics you can find. What reality do you live in? |
|
|
|
| |
| ▲ | danaris 7 hours ago | parent | prev | next [-] | | I think that even if inference is "not really that expensive", it's not free. I think that Microsoft will not be willing to operate Copilot for free in perpetuity. I think that there has not yet been any meaningful large-scale study showing that it improves performance overall, and there have been some studies showing that it does the opposite, despite individuals' feeling that it helps them. I think that a lot of the hype around AI is that it is going to get better, and if it becomes prohibitively expensive for it to do that (ie, training), and there's no proof that it's helping, and keeping the subscriptions going is a constant money drain, and there's no more drumbeat of "everything must become AI immediately and forever", more and more institutions are going to start dropping it. I think that if the only programmers who are using LLMs to aid their coding are hobbyists, independent contractors, or in small shops where they get to fully dictate their own setups, that's a small enough segment of the programming market that we can say it won't help students to learn that way, because they won't be allowed to code that way in a "real job". | |
| ▲ | LtWorf 7 hours ago | parent | prev | next [-] | | If they start charging what it costs them for example… | | |
| ▲ | libraryofbabel 7 hours ago | parent [-] | | There is so much confusion on this topic. Please don't spread more of it; the answers are just a quick google away. To spell it out: 1) AI companies make money on the tokens they sell through their APIs. At my company we run Claude Code by buying Claude Sonnet and Opus tokens from AWS Bedrock. AWS and Anthropic make money on those tokens. The unit economics are very good here; estimates are that Anthropic and OpenAI have a gross margin of 40% on selling tokens. 2) Claude Code subscriptions are probably subsidized somewhat on a per token basis, for strategic reasons (Anthropic wants to capture the market). Although even this is complicated, as the usage distribution is such that Anthropic is making money on some subscribers and then subsidizing the ultra-heavy-usage vibe coders who max out their subscriptions. If they lowered the cap, most people with subscriptions would still not max out and they could start making money, but they'd probably upset a lot of the loudest ultra-heavy-usage influencer-types. 3) The biggest cost AI companies have is training new models. That is the reason AI companies are not net profitable. But that's a completely separate set of questions from what inference costs, which is what matters here. | | |
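(To make the unit-economics point concrete, a back-of-the-envelope sketch; the price and cost here are round illustrative numbers chosen to match the ~40% gross-margin estimate above, not any provider's actual figures:)

```python
# Hypothetical inference unit economics (illustrative numbers only).
price_per_mtok = 15.00   # charged per million output tokens
cost_per_mtok = 9.00     # assumed cost to serve those tokens (GPU time, power)

gross_margin = (price_per_mtok - cost_per_mtok) / price_per_mtok
print(f"gross margin on inference: {gross_margin:.0%}")  # prints "gross margin on inference: 40%"
```

The point being that selling tokens can be margin-positive even while training new models keeps the company as a whole unprofitable.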
| ▲ | somewhereoutth 3 hours ago | parent [-] | | without training new models, existing models will become more and more out of date, until they are no longer useful - regardless of how cheap inference is. Training new models is part of the cost basis, and can't be hand waved away. |
|
| |
| ▲ | somewhereoutth 3 hours ago | parent | prev [-] | | LLMs will stop being trained, as that enormous upfront investment will have been found to not produce the required return. People will continue to use the existing models for inference, not least as the (now bankrupt) LLM labs attempt to squeeze the last juice out of their remaining assets (trained LLMs). However these models will become more and more outdated, less and less useful, until they are not worth the electricity to do the inference anymore. Thus it will end. |
|
|
| |
| ▲ | subhobroto 8 hours ago | parent | prev [-] | | > I find, as a parent, when I talk about it at the high school level I get very negative reactions from other parents. Specifically I want high schoolers to be skilled in the use of AI, and particular critical thinking skills around the tools, while simultaneously having skills assuming no AI. I don’t want the school to be blindly “anti AI” as I’m aware it will be a part of the economy our kids are brought into. This is my exact experience as well and I find it frustrating. If current technology is creating an issue for teachers, it's the teachers that need to pivot, not block current technology so they can continue with what they are comfortable with. Society typically cares about work getting done and not much about how it got done. For some reason, teachers are so deep into the weeds of the "how" that they seem to forget that if the way to mend roads since 1926 has been to learn how to measure out, mix, and lay asphalt patches by hand, then in 2026, when there are robots that do that perfectly every time, they should be teaching humans to complement those robots or do something else entirely. It's possible that in the past, learning how to use an abacus was a critical lesson, but once calculators were invented, do we continue with two semesters of abacus? Do we allow calculators into the abacus course? Should the abacus course be scrapped? Will it be a net positive on society to replace the abacus course with something else? "AI" is changing society fundamentally forever and education needs to change fundamentally with it. I am personally betting that humans in the future, outside extreme niches, are generalists and are augmented by specialist agents. | | |
| ▲ | netsharc 7 hours ago | parent | next [-] | | I'm also for education for AI awareness. A big part of teaching kids about AI should be how unreliable it can be. I had a discussion with a recruiter on Friday, and I said I guess the issue with AI vs human is: if you give a human developer who is new to your company tasks, the first few times you'll check their work carefully to make sure the quality is good. After a while you can trust they'll do a good job and be more relaxed. With AI, you can never be sure at any time. Of course a human can also misunderstand the task and hallucinate, but perhaps discussing the issue and the fix before they start coding can alleviate that. You can discuss with an AI as much as you want, but to me, not checking the output would be an insane move... To return to the point, yeah, people will use AI anyway, so why not teach them about the risks. Also LLMs feel like Concorde: they'll get you to where you want to go very quickly, but at tremendous environmental cost (also it's very costly to the wallet, although the companies are now partially subsidizing your use with the hopes of getting you addicted). | | |
| ▲ | cheevly 3 hours ago | parent [-] | | Only if you naively throw AI carelessly at it. It sounds like you haven't mastered the basics like fine-tuning, semantic vector routing, agentic skills/tooling generation…dozens of other solutions that robustly solve for your claim. | | |
| ▲ | netsharc 3 hours ago | parent [-] | | Gosh, I really should attend LinkedIn University of Buzzwords... | | |
| ▲ | cheevly 2 hours ago | parent [-] | | Yes, just buzzwords, totally no backing behind any of this. Your original comment makes so much more sense now. |
|
|
| |
| ▲ | QuadmasterXLII 3 hours ago | parent | prev [-] | | 1. Everything you learn about math is completely obsoleted by AI five years from now. 2. Everything you learn about working with chatbots is completely obsoleted by AI five years from now. Both are possible, but 2 is pretty much guaranteed if we get 1, so learning to chat with Opus is pretty much always less useful than learning derivatives by hand, unless you're starting job applications in less than a few months. |
|
| |
| ▲ | mmooss 7 hours ago | parent | prev | next [-] | | I think that's a great approach. I've thought about how to handle these issues and wonder how you handle several that come to mind: Competing against LLM users, 'honest' students would seem strongly incentivized to use LLMs themselves. Even if you don't grade on a curve, honest students will get worse grades, which will look worse to graduate schools, grant and scholarship committees, etc., in addition to the strong emotional component everyone feels seeing an A or a C. You could give deserving 'honest' work an A, but then all LLM users will get A's with ease. It seems like you need two scales, and how do you know who to put on which scale? And how do students collaborate on group projects? Again, it seems you have two different tracks of education, and they can't really work together. Edit: How do class discussions play out with these two tracks? Also, manually doing things that machines do much better has value, but it also takes valuable time from learning more advanced skills that machines can't handle, and from learning how to use the machines as tools. I can see learning manual statistics calculations, to understand them fundamentally, but at a certain point it's much better to learn R and use a stats package. Are the 'honest' students being shortchanged? | |
| ▲ | buckle8017 9 hours ago | parent | prev | next [-] | | Hopefully you've also modified the quizzes to be handwriting-compatible. I once got "implement a BCD decoder" with about a 1"x4" space to do it. | | |
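For readers who haven't met the term: a BCD decoder takes a 4-bit binary-coded-decimal input and activates exactly one of ten output lines (the classic 7442-style TTL part). A minimal behavioral sketch in Python (the function name is mine; this is just the truth table, not the gate-level circuit such a quiz presumably wanted):

```python
# Behavioral sketch of a BCD decoder: a 4-bit BCD input selects
# exactly one of ten output lines. Inputs 10-15 are invalid BCD,
# so no output line goes active for them.
def bcd_decode(b3, b2, b1, b0):
    value = (b3 << 3) | (b2 << 2) | (b1 << 1) | b0
    return [1 if value == i else 0 for i in range(10)]

print(bcd_decode(0, 1, 0, 1))  # [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
```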
| ▲ | recursivedoubts 9 hours ago | parent | next [-] | | We just had our first set of in-person quizzes and I gave them one question per page, with lots of space for answers. I'm concerned about handwriting, which is a lost skill, and how hard that will be on the TAs who are grading the exams. I have stressed to students that they should write larger, more slowly, and more carefully than normal. I have also given them examples of good answers: terse and to the point, using bulleted lists effectively, what good pseudo-code looks like, etc. It is an experiment in progress: I have rediscovered the joys of printing & the logistics of moving large amounts of paper again. The printer decided halfway through one run to start folding papers slightly at the corner, which screwed up stapling. I suppose this is why we are paid the big bucks. | |
| ▲ | NitpickLawyer 8 hours ago | parent [-] | | > I have also given them examples of good answers: terse and to the point Oh man, this reminds me of one test I had in uni, back in the days when all our tests were in class, pen & paper (what's old is new again?). We had this weird class that taught something like security programming in unix. Or something. Anyway, all I remember is the first two questions being about security/firewall stuff, and the third question was "what is a socket". So I really liked the first two questions, and over-answered for about a page each. Enough text to both run out of paper and out of time. So my answer to the 3rd question was "a file descriptor". I don't know if they laughed at my terseness or just figured since I overanswered on the previous questions I knew what that was, but whoever graded my paper gave me full points. |
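The terse answer above holds up well on POSIX systems: a socket is exposed to the program as an ordinary file descriptor. A minimal Python sketch (variable names are mine) showing that the descriptor behind a socket is just a non-negative integer fd like any other:

```python
import socket

# On POSIX systems a socket is just a file descriptor under the hood;
# fileno() returns the underlying integer fd, which the usual
# read/write/close machinery operates on.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
fd = s.fileno()
print(isinstance(fd, int), fd >= 0)  # True True
s.close()
```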
| |
| ▲ | logicchains 8 hours ago | parent | prev [-] | | Was it a Perl exam? |
| |
| ▲ | thenipper 9 hours ago | parent | prev [-] | | How do you handle kids w/ a learning disability who can't effectively write well? | | |
| ▲ | baubino 8 hours ago | parent | next [-] | | Reasonable accommodations have been made for students with disabilities for decades now. While there might be some cases where AI might be helpful for accommodating students, it is not, nor should it be, a universal application because different disabilities (and different students) require different treatment and support. There's tons of research on disability accommodations and tons of specialists who work on this. Most universities have an entire office dedicated to supporting students with disabilities, and primary and secondary schools usually have at least one person who takes on that role. So how do you handle kids who can't write well? The same way we've been handling them all along — have them get an assessment and determine exactly where they need support and what kind of support will be most helpful to that particular kid. AI might or might not be a part of that, but it's a huge mistake to assume that it has to be a part of that. People who assume that AI can just be thrown at disability support betray how little they actually know about disability support. | |
| ▲ | recursivedoubts 9 hours ago | parent | prev | next [-] | | We have a testing center at Montana State for situations like this. I deliver my tests in the form of a PDF and the testing center administers it in a manner appropriate for the student. | |
| ▲ | leviathant 8 hours ago | parent | prev [-] | | >How do you handle kids w/ a learning disability who can't effectively write well? It's embarrassing to see this question downvoted on here. It's a valid question, there's a valid answer, and accessibility helps everyone. | | |
| ▲ | wredcoll 8 hours ago | parent | next [-] | | It's a question that's too vague to be usefully answered, especially on a forum like this. There's no such thing as "disabled people who can't write well"; there are individuals with specific problems and needs. Maybe there's Jessica, who lost her right hand and is learning to write with the left, who gets extra time. Maybe there's Joe, who has some form of nerve issue and uses a specialized pen that helps cancel out tremors. Maybe Sarah is blind and has an aide who writes for her, or is allowed to use a keyboard, or or or... | |
| ▲ | zajio1am 6 hours ago | parent [-] | | There is a specific condition called dysgraphia that pretty much fits the description "can't write well". |
| |
| ▲ | ThrowawayR2 8 hours ago | parent | prev | next [-] | | In the context of the immediate problems of AI in education, it's not a relevant thing to bring up. Finding ways for students with disabilities to succeed in higher education has been something that institutions have been handling for many decades now. The one I attended had well defined policies for faculty and specialist full time staff plus facilities whose sole purpose was to provide appropriate accommodations to such students and that was long, long ago. There will undoubtedly be some kind of role in the future for AI as well but current students with disabilities are not being left high and dry without it. | |
| ▲ | sjwgjnj 8 hours ago | parent | prev [-] | | Because it’s another nonsensical “think of the children” argument for why nothing should ever change. Your comment really deserves nothing more than an eye roll emoji, but HN doesn’t support them. Reasonable accommodations absolutely should be made for children that need them. But also just because you’re a bad parent and think the rules don’t apply to you doesn’t mean your crappy kid gets to cheat. Parents are the absolute worst snowflakes. | | |
| ▲ | danadam 8 hours ago | parent [-] | | > Your comment really deserves nothing more than an eye roll emoji, but HN doesn’t support them. (◔_◔) | | |
|
|
|
|
|
| ▲ | ageitgey 9 hours ago | parent | prev | next [-] |
| > “Over the years I’ve found that when students read on paper they're more likely to read carefully, and less likely in a pinch to read on their phones or rely on chatbot summaries,” Shirkhani wrote to the News. “This improves the quality of class time by orders of magnitude.” This is the key part. I'm doing a part-time graduate degree at a major university right now, and it's fascinating to watch the week-to-week pressure AI is putting on the education establishment. When your job as a student is to read case studies and think about them, but Google Drive says "here's an automatic summary of the key points" before you even open the file, it takes a very determined student to ignore that and actually read the material. And if no one reads the original material, the class discussion is a complete waste of time, with everyone bringing up the same trite points, and the whole exercise becomes a facade. Schools are struggling to figure out how to let students use AI tools to be more productive while still learning how to think. The students (especially undergrads) are incredibly good at doing as little work as possible. And until you get to the end-of-PhD level, there's basically nothing you encounter in your learning journey that ChatGPT can't perfectly summarize and analyze in 1 second, removing the requirement for you to do anything. This isn't even about AI being "good" or "bad". We still teach children how to add numbers before we give them calculators because it's a useful skill. But now these AI thinking-calculators are injecting themselves into every text box and screen, making them impossible to avoid. If the answer pops up in the sidebar before you even ask the question, what kind of masochist is going to bother learning how to read and think? |
| |
| ▲ | jval43 4 hours ago | parent | next [-] | | I had to take some literature classes in high school, and had a truly exceptional teacher who facilitated great and interesting discussions. Really opened up my mind, and I only later realized how lucky I was. Those summaries have always existed; in the past you could buy them as little books for most of the classic literature we read. Thing is, they contained the same trite points even back then. Our teacher would see right through any BS, but never call it out directly. Instead there would be one precise, nicely asked follow-up question, or they'd simply be asked their opinion on a talking point. Not details, just a regular discussion question. If someone hadn't read the book, they'd stutter and grasp at straws at that point, and everyone knew they hadn't actually read it. On the other hand, if you had read the book the answer was usually pretty easy, and often not what the common summaries contained as talking points. So cheating not only didn't work, the few regular cheaters we had in our class (everybody knew who they were) actually suffered badly. Only in hindsight did I realize that this is not the normal experience. Most other literature classes in fact do just focus on or repeat the same trite points, from what I've heard from many others. It takes a great teacher to make cheating not "work" while making the class easy, intellectually stimulating, and refreshing at the same time. | | |
| ▲ | canpan 3 hours ago | parent [-] | | You must have been blessed with great teachers. My experience was the exact opposite. I loved reading as a child. But I learned very fast in school that my "own opinion" on books results in bad grades, while reading and reiterating the "official summary" results in OK or even good grades. Like you say, the summaries existed long before AI. It is what the teacher and students make of the class. |
| |
| ▲ | thomasfortes 8 hours ago | parent | prev | next [-] | | Last weekend I was arguing with a friend that physical guitar pedals are better for creativity and for exploring the musical space than modelers. Even though modelers offer way more for a fraction of the cost, the physical aspect of knobs and cables and everything else leads to something far more interactive and prone to "happy mistakes" than any digital interface can offer. In my first year of college my calculus teacher said something that stuck with me: "you learn calculus getting cramps in your wrists". Yeah, AI can help you remember things and accelerate learning, but if you don't put in the work to understand things, you'll always be behind people who know, at least from a bird's-eye view, what's happening. | |
| ▲ | subhobroto 7 hours ago | parent [-] | | > but if you don't put the work to understand things you'll always be behind people that know at least with a bird eye view what's happening. Depends. You might end up going quite far without ever opening the hood of a car, even when you drive it every day and depend on it for your livelihood. If you're the kind who likes to argue for a good laugh, you might say "well, I don't need to know how my car works as long as the engineer who designed it does, or the mechanic who fixes it does" - and this is accurate, but it's also true that not everyone ended up being the engineer or the mechanic. And if it turned out to be extremely valuable to you to actually learn how the car worked, you could put in the effort and be very successful at it. All this talk about "you should learn something deeply so you can bank on it when you need it" seems to be a bit of a hoarding disorder. Given the right materials, support, and direction, most smart and motivated people can learn to get competent at something they previously had no clue about. And when it comes to smart and motivated people, the best drop out of education because they find it unproductive and pedantic. | |
| ▲ | thomasfortes 7 hours ago | parent [-] | | Yes, you can, and I know just enough about cars not to be scammed, but not how the whole engine works. I also don't think you should learn everything you can learn - there's no time for that; that's why I made the bird's-eye-view comment. My argument is that when you have at least a basic knowledge of how things work (be it as a musician, a mechanical engineer, or a scientist), you are in a much better place to know what you want/need. That said, smart and motivated people thrive if they are given the conditions to thrive, and I believe that physical interfaces have way less friction than digital interfaces; turning a knob is way less work than clicking through a bunch of menus to set up a slider. If I were to summarize what I think about AI, it would be something like "Let it help you. Do not let it think for you." My issue is not with people using AI as a tool, but with people delegating to AI anything that would demand any kind of effort. |
|
| |
| ▲ | csa 6 hours ago | parent | prev [-] | | > And if no one reads the original material, the class discussion is a complete waste of time, with everyone bringing up the same trite points, and the whole exercise becomes a facade. If reading an AI summary of readings is all it takes to make an exercise a facade, then the exercise was bad to begin with. AI is certainly putting pressure on professors to develop better curricula and evaluations, and they don’t get enough support for this, imho. That said, good instruction and evaluation techniques are not some dark art — they can be developed, implemented, and maintained with a modest amount of effort. |
|
|
| ▲ | sashank_1509 8 hours ago | parent | prev | next [-] |
At some level, this is a problem of unmotivated students and of college mostly being for signaling as opposed to real education. If the sole purpose of college is to rank students and funnel them into high-prestige jobs that have no use for what they actually learn in college, then what the students are doing is rational. If, however, the student is actually there to learn, they know that using ChatGPT accomplishes nothing. In fact, all this proves is that most students at most colleges are not there to learn. Which raises the question: why are they even going to college? Maybe the institution is outdated. Surely there is a cheaper and more time-efficient way to rank students for companies. |
| |
| ▲ | the_snooze 7 hours ago | parent | next [-] | | College is wildly useful for motivated students: the ones who go out of their way to pursue opportunities uniquely available to them like serving as TAs, doing undergrad research, rising up the ranks in clubs and organizations, etc. They graduate not just with a credential but social capital. And it's that social capital that shields you from ChatGPT. College for the "consumer" student isn't worth much in comparison. | |
| ▲ | rr808 8 hours ago | parent | prev | next [-] | | It starts at admissions, where learning is not a rewarded activity. You should be making an impact in the community, doing some performative task that isn't useful for anything except differentiating you from your classmates who naively read the books and do the classwork honestly. | |
| ▲ | testfoobar 8 hours ago | parent | prev | next [-] | | For elite colleges, it is a pithy aphorism that the hardest part is getting in. | |
| ▲ | WalterBright 7 hours ago | parent | prev | next [-] | | > Surely there is a cheaper and more time efficient way to ranking students for companies. This topic comes up all the time. Every method conceivable to rank job candidates gets eviscerated here as being counterproductive. And yet, if you have five candidates for one job, you're going to have to rank them somehow. | | |
| ▲ | jrm4 7 hours ago | parent | next [-] | | As a college instructor, one issue I find fascinating is the idea that I'm supposed to care strongly about this. I do not. This is your problem, companies. Now, I am aware that I have to give out grades and so I walk through the motions of doing this to the extent expected. But my goal is to instruct and teach all students to the best of my abilities to try to get them all to be as educated/useful to society as possible. Sure, you can have my little assessment at the end if you like, but I work for the students, not for the companies. | | |
| ▲ | WalterBright 7 hours ago | parent [-] | | I didn't suggest you should care about company selection processes. But I would have been pretty angry to have been educated in topics that did not turn out to be useful in industry. I deliberately selected courses that I figured would be the most useful in my career. | | |
| ▲ | jval43 3 hours ago | parent | next [-] | | If I could go back in time and change what courses I took for my CS degree, it would be the exact opposite. I wish I'd gone more into theoretical computer science, quantum computing, cryptography, and in general just hard math and proofs. I took a few such courses and some things have genuinely been useful to know about at work but were also mind-expanding new concepts. I would never ever have picked up those on the job. Not to say the practical stuff hasn't been useful too (it has) but I feel confident I could pick up a new language easily anytime. Not so sure about formal proofs. | |
| ▲ | jrm4 5 hours ago | parent | prev [-] | | Right, but that is the thing I pay attention to. Again, I want to hear from former students that I did right by them, not current companies asking for free screening. | | |
| ▲ | rr808 an hour ago | parent | next [-] | | As someone who interviews students for internships and grad programs I mostly agree, however I think you should listen to the best, hardest working students to hear if they're getting picked OK. I suspect the students with the best jobs are the ones who do the minimum classwork and spend their time doing leetcode and applying for jobs - I would think that is sub optimal for everyone including yourself. | |
| ▲ | WalterBright 3 hours ago | parent | prev [-] | | The GPA and course schedule should be sufficient. |
|
|
| |
| ▲ | tolerance 23 minutes ago | parent | prev [-] | | ~4-10 years of social media history; email identifiers; DNA data to confirm pedigree from alumnus. |
| |
| ▲ | subhobroto 7 hours ago | parent | prev [-] | | > At some level, this is a problem of unmotivated students and college mostly being just for signaling as opposed to real education. I think this is mostly accurate. Schools have been able to say "We will test your memory on 3 specific Shakespeares, samples from Houghton Mifflin Harcourt, etc" - the students who were able to perform on these, with some creative dance, violin, piano, or cello thrown in, had very good chances at a scholarship from an elite college. This has been working extremely well, except now you have AI agents that can do the same at a fraction of the cost. There will be a lot of arguments, handwringing, and excuse-making as students go through the flywheel already in motion with the current approach. However, my bet is that it will become apparent this approach no longer works for a large population. It never really did, but there were inefficiencies in the market that kept the game going for a while. For one, college has become extremely expensive. Second, globalization has made it pretty hard for someone paying tuition in the U.S. to compete against someone getting a similar education in Asia when they get paid the same salary. Big companies have been able to enjoy this arbitrage for a long time. > Maybe this institution is outdated. Surely there is a cheaper and more time efficient way to ranking students for companies Now that everyone has access to labor cheaper than in the cheapest English-speaking country in the world, humanity will be forced to adapt, and to rethink what has seemed to work in the past |
|
|
| ▲ | cbfrench 6 hours ago | parent | prev | next [-] |
| Over a decade ago now, I was teaching college English as a grad student, and my colleagues and I were always trying to come up with ways to keep kids from texting and/or being online in class. My strategy was to print out copies of an unassigned shorter poem by an author covered in lecture. Then I’d hand it out at the beginning of class, and we’d spend the whole time walking through a close reading of that poem. It kept students engaged, since it was a collaborative process of building up an interpretation on the basis of observation, and anyone is capable of noticing patterns and features that can be fed into an interpretation. They all had something to contribute, and they’d help me to notice things I’d never registered before. It was great fun, honestly. (At least for me, but also, I think, for some of them.) I’d also like to think it helped in some small way to cultivate practices of attention, at least for a couple of hours a week. Unfortunately, you can’t perform the same exercise with a longer work that necessitates reading beforehand, but you can at least break out sections for the same purpose. |
|
| ▲ | zkmon 9 hours ago | parent | prev | next [-] |
| >This academic year, some English professors have increased their preference for physical copies of readings, citing concerns related to artificial intelligence. I didn't get it. How can printing avoid AI? And more importantly is this AI-resistance sustainable? |
| |
| ▲ | coffeefirst 9 hours ago | parent | next [-] | | The students were reading AI summaries rather than the original text. Does this literally work? It adds slightly more friction, but you can still ask the robot to summarize pretty much anything that would appear on the syllabus. What it likely does is set expectations. This doesn't strike me as being anti-AI or "resistance" at all. But if you don't train your own brain to read and make thoughts, you won't have one. | |
| ▲ | epolanski 9 hours ago | parent [-] | | I was reading summaries online 25 years ago as well. Hell, in Italy we used to have a publisher called Bignami that made summaries of every school topic. https://www.bignami.com/ In any case, I don't know what to think about all of this. School is for learning; if you skip the hard part, you're not gonna learn - your loss. | |
| ▲ | skeptic_ai 9 hours ago | parent [-] | | Instead of learning the things that can be done by ai, learn how to use the ai as that’s the only edge you got left. |
|
| |
| ▲ | mold_aid an hour ago | parent | prev | next [-] | | >How can printing avoid AI? Every online service in the university has an AI summarization tool in it. This includes library services. >And more importantly is this AI-resistance sustainable? It can get in line. English academics have been talking about sustainability for decades. Nobody cared before; professors aren't going to care now. | |
| ▲ | secabeen 9 hours ago | parent | prev | next [-] | | You can't easily copy and paste from a printout into AI. Sure, you can track down the reading yourself online, and then copy and paste in, but not during class, and not without some effort. | | |
| ▲ | xigoi 9 hours ago | parent | next [-] | | LLM services have pretty much flawless OCR for printed text. | |
| ▲ | stephenbez 9 hours ago | parent | prev [-] | | It’s easy to take a picture of a printout and then ask AI about it. Not that hard even when it’s many pages. | | |
| ▲ | layer8 4 hours ago | parent [-] | | It takes more initial effort than just starting reading, or even just skim-reading the material. |
|
| |
| ▲ | Flavius 9 hours ago | parent | prev [-] | | This approach is just cheap theater. It doesn't actually stop AI, it just adds a step to the process. Any student can snap a photo, OCR the text and feed it into an LLM in seconds. All this policy accomplishes is wasting paper and forcing students to engage in digital hoop-jumping. | | |
| ▲ | mbreese 8 hours ago | parent | next [-] | | It’s not theater. It introduces friction into the process. And when there is friction in both choices (read the paper, or take a photo and upload the picture), you’ll get more people reading the physical paper copy. If students want to jump through hoops, they will, but it will require an active choice. At this point auto AI summaries are so prevalent that they are the passive default. By shifting it to require an active choice, you’ve made it more likely for students to choose to do the work. | |
| ▲ | Flavius 8 hours ago | parent | next [-] | | That friction is trivial. You are comparing the effort of snapping a photo against the effort of actually reading and analyzing a text. If anyone chooses to read the paper, it's because they actually want to read it, not because using AI was too much hassle. | | |
| ▲ | randcraw 4 hours ago | parent [-] | | You can certainly make it harder to cheat. AIs will inevitably generate summaries that are very similarly written and formatted -- content, context, and sequence -- making it easy for a prof (and their AI) to detect the presence of AI use, especially if students are also quizzed to validate that they have knowledge of their own summary. Alternately, the prof can require that students write out notes, in longhand, as they read, and require that a photocopy of those notes be submitted, along with a handwritten outline / rough draft, to validate the essays that follow. I think it's inevitable that "show your work" soon will become the mantra of not just the math, hard science, and engineering courses. |
| |
| ▲ | blell 8 hours ago | parent | prev [-] | | Any AI app worth its salt allows you to upload a photo of something, and it processes it flawlessly in the same amount of time. This is absolutely worthless theater. | |
| ▲ | mbreese 8 hours ago | parent | next [-] | | It’s not the time that’s the friction. It’s the choice. The student has to actively take the picture and upload it. It’s a choice. It takes more effort than reading the autogenerated summary that Google Drive or Copilot helpfully made for the digital PDF of the reading they replaced. It’s not much more effort. The level of friction is minimal. But we’re talking about the activation energy of students (in an undergrad English class, likely teenagers). It doesn’t take much to swing the percentage of students who do the reading. | | |
| ▲ | blell 7 hours ago | parent [-] | | Are you really comparing the energy necessary to read something with the energy of taking a photo and having an AI read it for you? You are not comparing zero energy to some energy; you are comparing a whole lot of energy to some energy. |
| |
| ▲ | LtWorf 7 hours ago | parent | prev [-] | | The quotas for summarising text and parsing images and then summarising text aren't the same. As you surely know. | | |
|
| |
| ▲ | ulrashida 9 hours ago | parent | prev | next [-] | | Students tend to be fairly lazy, so this may simply mean another x% of the class reads the material rather than scanning in the 60 pages of reading for the assignment. | |
| ▲ | jrm4 7 hours ago | parent | prev [-] | | You fundamentally misunderstand the value of friction. The digital hoop-jumping, as you call it, is a very very useful signal for motivation. |
|
|
|
| ▲ | 2b3a51 8 hours ago | parent | prev | next [-] |
Quote from the OA: "TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option." And later the OA states that the cost to a student is $0.12 per double-sided sheet of printing. In all of my teaching career here in the UK, the provision of handouts has been a cost borne centrally. Latterly I'd send a pdf file with instructions, and the resulting 200+ packs of 180 sides would be delivered on a trolley, printed and stapled with covers. The cost was a rounding error compared to the cost of providing an hour of teaching in a classroom (wage costs, support staff costs, building costs including amortisation &c). How is this happening? |
| |
| ▲ | lokar 8 hours ago | parent [-] | | Two things: public universities are always underfunded, and universities can get more money by putting the cost on the students, who then cover it with government grants and loans. | |
|
|
| ▲ | anilakar 9 hours ago | parent | prev | next [-] |
| At 150 eurobucks apiece, printed freshman coursebooks were prohibitively expensive in uni. We just pirated everything as a consequence. |
| |
| ▲ | Symbiote 9 hours ago | parent | next [-] | | At my university in actual Europe, many copies of the required textbooks were available in the library. Printing was free. | |
| ▲ | Flavius 9 hours ago | parent | prev [-] | | That's the whole point. They don't care about students or education, they care about wasting resources and making a lot of money in the process. | | |
| ▲ | mistrial9 8 hours ago | parent [-] | | Some do and some don't. The "outrage" button is appropriate for the first part (don't care about students; waste resources to increase profits), but destructive for the second (we do care about students; we use resources in the classroom). It is hard to discuss this important topic when things go straight to "yelling". > They don't care about students or education, they care about wasting resources and making a lot of money in the process. |
|
|
|
| ▲ | hilbert42 6 hours ago | parent | prev | next [-] |
| ""When you read a book or a printed course packet, you turn real pages instead of scrolling, so you have a different, more direct, and (I think) more focused relationship with the words,” Fadiman wrote." I concur completely with Fadiman's comment as that has been my experience despite that I have been using computer screens and computers for many decades and that I am totally at ease with them for reading and composing documentation. Books and printed materials have physical presence and tactility about them that are missing from display screens. It is hard to explain but handling the physical object, pointing to paragraphs on printed pages, underlining text with a pencil and sticking postit notes into page margins adds an ergonomic factor that is more conducive to learning and understanding than when one interacts with screens (including those where one can write directly to the screen with a stylus). I have no doubt about this, as I've noticed over the years if I write down what I'm thinking with my hand onto paper I am more likely to understand and remember it better than when I'm typing it. It's as if typing doesn't provide as tighter coupling with my brain as does writing by hand. There is something about handwriting and the motional feedback from my fingers that makes me have a closer and more intimate relationship with the text. That's not to say I don't use screens—I do but generally to write summaries after I've first worked out ideas on paper (this is especially relevant when mathematics is involved—I'm more cognitively involved when using pencil and paper). |
|
| ▲ | dlcarrier 9 hours ago | parent | prev | next [-] |
| In pretty much any school system, just complain that the printout is not compatible with your text-to-speech engine, and the instructor will be required to provide an electronic version, no questions asked. |
| |
| ▲ | gamblor956 an hour ago | parent | next [-] | | That's not true in any U.S. school system unless the student has a disability which requires the use of a text-to-speech engine. The ADA does allow schools to require the student to prove the disability through medical documentation, which is why the fake-disability doctor market exists. | |
| ▲ | berhunter420 8 hours ago | parent | prev [-] | | Or you can fold your tuition dollars into cranes and burn them as performance art. | | |
| ▲ | jfengel 8 hours ago | parent | next [-] | | Students have never understood the value of school work. It's a hard thing to understand. None of the assignments are given because the teacher wants to know the answer. They already know. So it all closely resembles busy work. AI is perfectly designed to do busy work. Students have always looked for ways to minimize the work load, and often the response has been to increase the load. In some cases it has effectively become a way to teach you to get away with cheating (a lesson that even has some real-world utility). Keeping students from wasting their tuition is an age-old, Sisyphean task for parents. School is wasted on the young. Unfortunately youth is also when your brain is most receptive to it. | |
| ▲ | dlcarrier 5 hours ago | parent | prev [-] | | Isn't that the core of how universities operate, in the first place? Sure, you could get an education for cheap from a community college, or free from various online sources, or, for the best education possible, get paid to learn on the job. If you attend a university though, you're getting prestige by showing how much of your money, or someone else's money, you can burn through. It's not like anyone's taking undergraduate classes at Harvard or Stanford because the teaching assistants actually doing the instructing are going to provide above-average instruction. The schools aren't even concerned with tenured professors' teaching performance; they put publishing metrics first. |
|
|
|
| ▲ | bko 9 hours ago | parent | prev | next [-] |
| Who is behind this over-digitization of primary school? My understanding is that in the US pretty much all homework and tests are done on computers or iPads. This obv isn’t a push by parents, because I can’t imagine the parents I know want their kids in front of a screen all day. At best they’re indifferent. My only guess is the teachers' unions, which don’t want teachers grading and creating lesson plans and doing all the other work they used to do. And over the course of this trend, kids' scores and performance have not gotten better, so what gives? Can anyone comment on whether it’s as bad as this and what’s behind it? |
| |
| ▲ | el_benhameen 9 hours ago | parent | next [-] | | My kids are in elementary school in the SF area (although pretty far in the ‘burbs) and this is not my experience. The older one has a chromebook and uses it for research and production of larger written projects and presentations—the kind of things you’d expect. The younger one doesn’t have any school-supplied device yet. Both kids have math exercises, language worksheets, short writing exercises, etc., all done on paper. This is the majority of homework. I’m fine with this system. I wish they’d spend a little more time teaching computer basics (I did a lot of touch typing exercises in the 90’s; my older one doesn’t seem to have those kinds of lessons). But in general, there’s not too much homework, there’s good emphasis on reading, and I appreciate that the older one is learning how to plan, research, and create projects using the tool he’ll use to do so in future schooling. | |
| ▲ | michaelt 9 hours ago | parent | prev [-] | | A few decades ago: * People needed to be taught digital skills that were in growing demand in the workplace. * The kids researching things online and word-processing their homework were doing well in class (because only upper-middle-class types could afford home PCs) * Some trials of digital learning produced good results. Teaching by the world's greatest teachers, exactly the pace every student needs, with continuous feedback and infinite patience. * Blocking distractions? How hard can that be? | | |
| ▲ | nine_k 8 hours ago | parent [-] | | Reading with AI summaries jumping into your eyes is like writing in a word processor that completes sentences and paragraphs for you. Writing with a word processor that just helps you type, format, and check spelling is great. Blocking distractions on a general-purpose computer (like a phone or a tablet) is as hard as handing locked-down devices set up for the purpose, and banning personal devices. |
|
|
|
| ▲ | azinman2 9 hours ago | parent | prev | next [-] |
| Computers have not advanced education — the data shows the opposite. I think we should just go back to physical books (which can be used!), and pen and paper for notes and assignments. |
| |
| ▲ | randcraw 3 hours ago | parent | next [-] | | At the very least, every school, subject, and teacher should be obliged to conduct experiments during the school year -- A/B/C trials in which various forms of note taking are explored: handwritten, computer-typed, and neither. Then see how it affects the kids' learning speed and retention of the various subjects. Then they need to compare notes with the other teachers to learn what they did differently and what did or didn't work for them. Ideally they'd also assess how this worked for different types of students, those with good vs bad reading skills, with good vs bad grades, esp those who are underperforming their potential. | | |
| ▲ | ChadNauseam 2 hours ago | parent [-] | | The idea that we would A/B test handwritten vs typed to see what would improve retention is focusing on the wrong thing. It's like A/B testing mayo or no mayo on your Big Mac to see which version is a healthier meal. No part of the school system is optimized for retention. It's common for students to take a biology class in 9th grade and then never study biology again for the rest of their lives. Everyone knows they won't remember any biology by the time they graduate, and no one cares. We know what increases retention: it's active recall and (spaced) repetition. These are basic principles of cognitive science that have been empirically proven many times. Please try to implement that before demanding that teachers run A/B tests over what font to write the homework assignments in. |
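For concreteness, the active-recall-plus-spaced-repetition scheme mentioned above can be sketched as a minimal Leitner-style scheduler. This is only an illustration: the box count and review intervals below are arbitrary assumed values, not a tuned system.

```python
from dataclasses import dataclass

# Days until the next review for each Leitner box; cards in higher
# boxes are recalled reliably, so they are reviewed less often.
# (Illustrative values, not a researched schedule.)
INTERVALS = [1, 3, 7, 14, 30]

@dataclass
class Card:
    prompt: str
    answer: str
    box: int = 0  # index into INTERVALS

    def review(self, recalled: bool) -> int:
        """Apply one active-recall attempt: promote the card on a
        successful recall, reset it on a failure. Returns the number
        of days until the card should be shown again."""
        if recalled:
            self.box = min(self.box + 1, len(INTERVALS) - 1)
        else:
            self.box = 0  # forgotten: back to daily review
        return INTERVALS[self.box]

card = Card("Organelle that produces ATP?", "mitochondrion")
print(card.review(True))   # 3  (promoted to box 1)
print(card.review(True))   # 7  (promoted to box 2)
print(card.review(False))  # 1  (reset to box 0)
```

The point of the sketch is that the mechanism is tiny; the hard part is the institutional will to schedule reviews across semesters at all.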
| |
| ▲ | ChadNauseam 2 hours ago | parent | prev [-] | | I disagree. I've been using Math Academy for learning math. It's far superior to any way that I've learned in class. |
|
|
| ▲ | bahmboo 2 hours ago | parent | prev | next [-] |
| Seems like an easy job for cheap e-ink tablets. Even with network access there's not much temptation to stray from reading and potentially scribing. |
|
| ▲ | arnavpraneet 9 hours ago | parent | prev | next [-] |
| I might be wrong but I fear this strategy might unfairly punish e-readers which imo offer the best of both worlds |
| |
| ▲ | sodality2 8 hours ago | parent | next [-] | | I've brought my kindle to even the most strict of technology-banned lectures (with punishments like dropping a letter grade after one violation, and failing you after two), and they've never given me a problem when I asked. They realize the issue isn't the silicon or lithium, it's the distractions it enables. I'm sure I could connect to some LLM on it; it's just that no one ever will. |
| ▲ | mmahemoff 8 hours ago | parent | prev | next [-] | | I’ve tried many e-readers since early Kindle but I keep coming back to two fundamental problems with e-ink, both relevant to education. First, extremely cumbersome and error-prone to type compared to swipe-typing on a soft keyboard. Even highlighting a few sentences can be problematic when spanning across a page boundary. Second, navigation is also painful compared to a physical book. When reading non-fiction, it’s vital to be able to jump around quickly, backtrack, and cross-reference material. Amazon has done some good work on the UX for this, but nothing is as simple as flipping through a physical book. Android e-readers are better insofar as open to third-party software, but still have the same hardware shortcomings. My compromise has been to settle on medium-sized (~Kindle or iPad Mini size) tablets and treat them just as an e-reader. (Similar to the “kale phone” concept ie minimal software installed on it … no distractions.) They are much more responsive, hence fairly easy to navigate and type on. | |
| ▲ | PlatoIsADisease 8 hours ago | parent | prev [-] | | It's obvious they don't care. That said, I always thought exams should be the moment of truth. I had teachers that spoke broken English, but I'd do the homework and read the textbook in class. I learned many topics without the use of a teacher. |
|
|
| ▲ | crazygringo 9 hours ago | parent | prev | next [-] |
| > TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option. This made sense a couple of decades ago. Today, it's just bizarre to be spending $150 on a phonebook-sized packet of reading materials. So much paper and toner. This is what iPads and Kindles are for. |
| |
| ▲ | nine_k 9 hours ago | parent [-] | | No, the cost of the paper, toner, and binding is the cost of providing a provably distraction-free environment. To make it more palpable for an IT worker: "It's just bizarre to give a developer a room with a door, so much sheetrock and wood! Working with computers is what open-plan offices are for." | |
| ▲ | crazygringo 6 hours ago | parent [-] | | What kind of distraction are you getting on your Kindle...? Also, the university isn't covering the cost here. The students are. And buying the Kindle would be cheaper than the printing cost of the packet itself. So I stand by my point. If you don't want distraction, get Kindles. And even iPads are pretty good. They tend to sit flat so you're not "hiding" your screen the way you can with a laptop or phone, and people often aren't using messaging or social apps on them so there are no incoming distractions. | | |
| ▲ | dgellow 2 hours ago | parent [-] | | Compared to paper any software is a distraction > people often aren't using messaging or social apps on [ipad] That’s so obviously wrong |
|
|
|
|
| ▲ | edge17 8 hours ago | parent | prev | next [-] |
| This is a bit off topic, but why are used books on AbeBooks, ThriftBooks, Amazon, etc. so expensive compared to book sales? I recall a time when a lot of these online stores were selling them for a few cents (granted, it was a long time ago and it was still called zShops on Amazon). |
| |
| ▲ | rr808 8 hours ago | parent [-] | | Do you mean a few cents plus $5 shipping? I think they still exist but often results are ranked by total cost now which is clearer. |
|
|
| ▲ | raincole 9 hours ago | parent | prev | next [-] |
| If textbooks weren't so expensive I'd be cheering them on more. > TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option. Lol $150 for reading packets? Not even textbooks? Seriously the whole system can fuck off. |
|
| ▲ | Mathnerd314 9 hours ago | parent | prev | next [-] |
| If you are flipping through the reading to find a quote, then printed readings are hard to beat, unless you can search for a word with digital search. But speed reading via RSVP presentation beats any kind of print reading by a mile, if you are aiming for comprehension. So it is hard to say where the technology is going. Nobody has put in the work to really make reading on an iPad as smooth and fluid as print, in terms of rapid page flipping. But the potential is there. It is kind of laughable how the salesman will be saying, oh, it has a fast processor, and then you open up a PDF, scroll a few pages fast, and they start being blank instead of actually having text. |
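RSVP (rapid serial visual presentation) flashes one word at a time in a fixed screen position, so the eyes never have to travel across a line. A minimal sketch of the idea, with function names of my own choosing for illustration:

```python
import time

def rsvp_schedule(text, wpm=300):
    """Pair each word with its display duration; at 300 words per
    minute each word is flashed for 0.2 seconds."""
    per_word = 60.0 / wpm
    return [(word, per_word) for word in text.split()]

def play(text, wpm=300):
    """Overwrite the same terminal position with each word, so the
    reader's eyes stay fixed while the text streams past."""
    for word, seconds in rsvp_schedule(text, wpm):
        print(f"\r{word:^24}", end="", flush=True)
        time.sleep(seconds)
    print()

play("printed readings are hard to beat", wpm=600)
```

Whether this actually beats print for comprehension is contested in the reading-research literature; the sketch only shows why the technique removes eye movement from the reading loop.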
|
| ▲ | globalnode 4 hours ago | parent | prev | next [-] |
| It only struck me recently how geared universities are towards careers; many of the requirements are set by industry, not necessarily by what's good for the student. If you want to enjoy learning for its own sake or enjoy a particular subject I'd suggest ditching uni and going for self-learning. Ofc if you need a job you probably need certification, but not if you're learning for fun. Also, if you want to make sure you're well rounded, browse uni websites, have a look at the syllabus / reading list, and filter out any obvious industry requirements if they don't suit you. |
|
| ▲ | jmclnx 9 hours ago | parent | prev | next [-] |
| While I fully agree with this, this quote bothers me: >Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option Does a student need to print out multiple TYCO Packets? If so, only the very rich could afford this. I think education should go back to printed books and submitting your work to the Prof. on paper. Submitting printed pages back to the Prof. for homework also avoids the school saying "Submit only Word Documents". That way a student can use the method they prefer, avoiding buying expensive software. One can then use just a simple free text editor if they want. Or even a typewriter :) |
|
| ▲ | kkfx 5 hours ago | parent | prev | next [-] |
| Nothing strange or new: the average teacher is reactionary even at top universities, generally incapable of evolving, much like the stereotypical average vegetable seller. We continue to teach children (at least in the EU) to write by hand and to do calculations manually throughout their entire schooling, when in real life, aside from the occasional scrap note, all writing is done on computers and calculations are done by machine as well. And, of course, no one teaches these latter skills. The result on a large scale is that we have an increasingly incompetent population on average, with teaching staff competing to see who can revert the most to the past and refusing to see that the more they do this, the worse the incompetent graduates they produce. The computer (desktop, FLOSS) is the quintessential epistemological tool of the present, just as paper was in the past. The world changes, and those who fall behind are selected out by history; come to terms with that. What's more, those who lag behind ensure that only a few push evolution forward, in their own interest, which typically conflicts with that of the majority. |
|
| ▲ | jrm4 7 hours ago | parent | prev | next [-] |
| College instructor here. One thing I'm seeing here that's kind of funny is how badly so many of you are misunderstanding the value of "friction." You see a policy, and your clever brains come up with a way to get around it, "proving" that the new methodology is not perfect and therefore not valuable. So wrong. Come on people, think about it -- to an extent ALL WE DO is "friction." Any shift towards difficulty can be gamed, but nearly all of the time it still provides a valuable differentiator in terms of motivation, etc. |
|
| ▲ | 6stringmerc 8 hours ago | parent | prev | next [-] |
| My thesis paper about a course for Freshman Composition Writing to stress fundamentals by way of using quill, pencil, pen, and finally a typewriter, was written 20 YEARS AGO in response to Spell Check and Auto Predict at the time...2006... This isn't my article nor do I know this Educator but I like her approach and actions taken: https://www.npr.org/2026/01/28/nx-s1-5631779/ai-schools-teac... |
|
| ▲ | subhobroto 8 hours ago | parent | prev | next [-] |
| I have been thinking about this, and it seems like it's an asset that students want to do as little work as possible to get course credits. They also love playing games of various sorts. So instead of killing trees, printing pages of materials out, and having students pay substantial sums to the printing press so we can inject distance between students reading the material and ChatGPT, why not turn it around completely? 1. Instead of putting up all sorts of barriers between students and ChatGPT, have students explicitly use ChatGPT to complete the homework 2. Then compare the diversity in the ChatGPT output 3. If the ChatGPT output is extremely similar, then the game is to critique that ChatGPT output, find gaps in ChatGPT's work, insights it missed, and what it could have done better 4. If the ChatGPT output is diverse, how do we figure out which is better? What caused the diversity? Are all the outputs accurate or are there errors in some? Similarly, when it comes to coding, instead of worrying that ChatGPT can zero-shot quicksort and memcpy perfectly, why not game it: 1. Write some test cases that could make that specific implementation of `quicksort` or `memcpy` fail 2. Could we design the input data such that quicksort hits its worst-case runtime? 3. Is there an algorithm that would sort faster than quicksort for that specific input? 4. Could there be architectures where the assumptions that make quicksort "quick" fail to hold true, where something simpler and worse on paper, like a "cache-aware sort", actually works faster in practice than quicksort? I have multiple paragraphs more of thought on this topic but will leave it at this for now, to calibrate whether my thoughts are in the minority |
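The worst-case-quicksort exercise above is easy to make concrete. A quicksort that always takes the first element as its pivot degrades to quadratic time on already-sorted input, because every partition is maximally lopsided. A minimal sketch (the function name and input sizes are my own illustrative choices):

```python
import random
import timeit

def naive_quicksort(a):
    """First-element-pivot quicksort: fine on shuffled input,
    quadratic on input that is already sorted."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    return (naive_quicksort([x for x in rest if x < pivot])
            + [pivot]
            + naive_quicksort([x for x in rest if x >= pivot]))

# n is kept small so the lopsided recursion stays within Python's
# default recursion limit (the sorted case recurses ~n levels deep).
n = 400
adversarial = list(range(n))           # already sorted: worst case
shuffled = random.sample(range(n), n)  # typical case

t_bad = timeit.timeit(lambda: naive_quicksort(adversarial), number=20)
t_ok = timeit.timeit(lambda: naive_quicksort(shuffled), number=20)
print(f"sorted input:   {t_bad:.3f}s")  # markedly slower
print(f"shuffled input: {t_ok:.3f}s")
```

A natural follow-up question for students: a random or median-of-three pivot defeats this particular adversarial input, so what input (if any) defeats those variants?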
|
| ▲ | everybodyknows 9 hours ago | parent | prev [-] |
| > This semester, she is requiring all students to have printed options. What could it mean for an "option" to be "required"? |