| ▲ | _jab 5 days ago |
| I'm skeptical of arguments like this. If we look at the most impactful technologies since the year 2000, AI is not even in my top 3. Social networking, mobile computing, and cloud computing have all done more to alter society and daily life than AI has. And yes, I recognize that AI has already created profound change, in that every software engineer now depends heavily on copilots, in that education faces a major integrity challenge, and in that search has been completely changed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, our personal relationships becoming increasingly online, or the enablement of startups to scale without having to maintain physical compute infrastructure. To me, treating AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people are starting to talk about it almost in a way that's reminiscent of Pascal's Wager, as if the slight chance of a godly reward from producing AI means it is rational to devote our all to it. But I'm not holding my breath. |
|
| ▲ | c0balt 5 days ago | parent | next [-] |
| > in that every software engineer now depends heavily on copilots That is maybe an internet bubble. IME most programmers in my environment rarely use copilots and certainly aren't dependent on them. They also don't only do code monkey-esque web programming, so maybe this is sampling bias, though it should be enough to refute this point. |
| |
| ▲ | Raphael_Amiard 4 days ago | parent | next [-] | | Came here to say that. It’s important to remember how biased Hacker News is in that regard. I’ve just come out of ten years in the safety-critical market, and I can assure you that our clients are still a long way from being able to use these tools. I myself work in low-level/runtime/compilers, and the output from AIs is often too erratic to be useful | | |
| ▲ | djeastm 4 days ago | parent | next [-] | | > our clients are still a long way from being able to use those So it's simply a matter of time. > often too erratic to be useful So sometimes it is useful. | | |
| ▲ | layer8 4 days ago | parent [-] | | Too erratic to be net useful. | | |
| ▲ | anuramat 4 days ago | parent [-] | | Even for code reviews/test generation/documentation search? | | |
| ▲ | layer8 4 days ago | parent [-] | | Documentation search I might agree with, but that wasn’t really the context, I think. Code reviews are hit and miss, but maybe don’t hurt too much. They aren’t better at writing good tests than at writing good code in the first place. | | |
| ▲ | anuramat 7 hours ago | parent [-] | | > wasn't the context Yeah, I'm just curious about the vibe in general. > good tests Are there any downsides to adding "bad tests" though? As long as you keep generated tests separate, it's basically free regression testing, and if something meaningfully breaks on a refactor, you can promote it to not-actually-slop (see the sketch below). |
|
|
|
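A minimal sketch of the quarantine workflow described in the comment above, assuming pytest; the `generated` marker name and the file layout are illustrative choices, not anything the commenter specified:

```python
# conftest.py -- register the custom marker so pytest doesn't warn about it.
# ("generated" is an illustrative name, not a pytest built-in.)
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "generated: LLM-written tests, quarantined until a human promotes them",
    )
```

```python
# test_generated_example.py -- generated tests carry the marker.
import pytest

@pytest.mark.generated
def test_int_str_round_trip():
    # If this breaks meaningfully on a refactor, review it and delete
    # the marker to promote it into the trusted suite.
    assert int(str(42)) == 42
```

With that split, `pytest -m "not generated"` runs only the trusted suite, while `pytest -m generated` exercises the quarantined tests on demand (or in a separate, non-blocking CI job); promoting a test is just reviewing it and deleting the marker.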
| |
| ▲ | OldfieldFund 3 days ago | parent | prev [-] | | I would say that the average Hacker News user is negatively biased against LLMs and does not use coding agents to their benefit. At least that's what I can tell from the highly upvoted articles and comments. |
| |
| ▲ | HDThoreaun 4 days ago | parent | prev [-] | | I'm on the core SQL execution team at a database company, and everyone on the team is using AI coding assistants. Certainly not doing any monkey-esque web programming. | | |
| ▲ | c0balt 4 days ago | parent [-] | | > everyone on the team is using AI coding assistants. Then the tool worked for you(r team). That's great to hear and maybe gives some hope for my projects. It has just mostly been more of a time sink than an improvement IME, though it appears to vary strongly by field/application. > Certainly not doing any monkey-esque web programming The point here was not to demean the user (or their usage) but rather to highlight that developers are not dependent on LLMs as a tool. Your team presumably did the same type of work before without LLMs and won't become unable to do it if they were to become unavailable. I likely didn't express that properly in my original comment, sorry. |
|
|
|
| ▲ | galangalalgol 5 days ago | parent | prev | next [-] |
| Add LED lighting to that list. It is easy to forget what a difference that made: the light pollution, but also just how dim houses were. CFLs didn't last very long as a stopgap between incandescent and LED, and houses lit with incandescents have a totally different feel. |
| |
|
| ▲ | atleastoptimal 5 days ago | parent | prev | next [-] |
| AI has already rendered academic take-home assignments moot. No other tech has had an impact like that, not even the internet. |
| |
| ▲ | callc 5 days ago | parent | next [-] | | A pessimistic/realistic view of post-high-school education: credentials are proof of being able to do a certain amount of hard work, used as an easy filter by companies while hiring. I expect universities to adapt quickly, lest they lose their whole business as degrees stop carrying the same meaning to employers. | |
| ▲ | amanaplanacanal 4 days ago | parent [-] | | Maybe universities can go back to being temples of learning instead of credential mills. I can dream, can't I? |
| |
| ▲ | ZYbCRq22HbJ2y7 5 days ago | parent | prev | next [-] | | > AI has already rendered academic take-home assignments moot Not really; there are plenty of things that LLMs cannot do that a professor could make his students do. It is just that doing so is an asymmetric attack on the time of the professor (or whoever is grading). IMO, credentials shouldn't be given to those who test or submit assignments without proctoring (a lot of schools allow this). | |
| ▲ | mofeien 4 days ago | parent | next [-] | | > there are plenty of things that LLMs cannot do that a professor could make his students do. Name three? | | |
| ▲ | c0balt 4 days ago | parent | next [-] | | 1. Make students randomly have to present their results on a weekly basis; if you get caught cheating at this point, at least at my uni with its zero-tolerance policy, you instantly fail the course. 2. Make take-home work only a prerequisite for sitting the final exam; this effectively means cheating on it will only hinder you and not affect your grading directly. 3. Make take-home work optional and completely detached from grading, and put everything into the final exam. My uni does a mix of these across different courses. Two and three especially, though, have a significant negative impact on passing rates, as they tend to push everything onto one single exam instead of spreading the work out over the semester. | |
| ▲ | 4 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | NitpickLawyer 4 days ago | parent | prev [-] | | > Not really, there are plenty of things that LLMs cannot do that a professor could make his students do. Could you offer some examples? I'm having a hard time thinking of what could be at the intersection of "hard enough for SotA LLMs" yet "easy enough for students (who are still learning, not experts in their fields, etc)". | | |
| ▲ | c0balt 4 days ago | parent [-] | | Present the results of your exercises in person, in front of someone. Or really anything in person. A big downer for online/remote learning initiatives, but actually an advantage for older unis that already have physical facilities for students. This does, however, have some problems similar to coding interviews. | |
| ▲ | rootusrootus 4 days ago | parent | next [-] | | > Present the results of your exercises (in person) in front of someone I would not be surprised if we start to see a shift towards this. Interviews instead of written exams. It does not take long to figure out whether someone knows the material or not. Personally, I do not understand how students expect to succeed without learning the material these days. If anything, the prevalence of AI today only makes cheating easier in the very short term -- over the next couple years I think cheating will be harder than it ever was. I tried to leverage AI to push myself through a fairly straightforward Udacity course (in generative AI, no less), and all it did was make me feel incredibly stupid. I had to stop using it and redo the parts where I had gotten some help, so that my brain would actually learn something. But I'm Gen X, so maybe I'm too committed to old-school learning and younger people will somehow get super good at this stuff while also not having to do the hard parts. | |
| ▲ | NitpickLawyer 4 days ago | parent | prev [-] | | Sure, but that's a solution for preventing students from using LLMs, not an example of something a professor can ask of students that "LLMs can't do"... | |
| ▲ | c0balt 4 days ago | parent [-] | | The main challenge is that most (all?) types of submissions can be created with LLMs and multi-model solutions. Written tasks are the obvious case: writing a paper or essay, or answering questions, is part of most LLMs' advertised use cases. The only other format was recorded videos, effectively recorded presentations, and thanks to video/audio/image generation those can probably be forged too. So the simple way to choose something an "LLM can't do" is to choose something where an LLM can't be applied, which moves us away from digital solutions to meatspace. Assuming the goal is to test your knowledge/understanding of a topic, it's the same with any other assistive technology. For example, if an examiner doesn't want you[1] to use a calculator to solve a certain equation, they could try to create an artificially hard problem or just exclude the calculator from the allowed tools. The former is vulnerable to more advanced technology (more compute, etc.); the latter just takes the calculator out of the equation (pun intended). [1]: Because it would relieve you of understanding how to evaluate the equation. |
|
|
|
| |
| ▲ | devmor 5 days ago | parent | prev [-] | | What? The internet did that ages ago. We just pretended it didn't because some students didn't know how to use Google. | | |
| ▲ | atleastoptimal 5 days ago | parent | next [-] | | Everyone knows how to use Google. There's a difference between a corpus of data available online and an intelligent chatbot that can answer any permutation of questions with high accuracy and no manual searching or effort. | |
| ▲ | geraneum 4 days ago | parent | next [-] | | > Everyone knows how to use Google. Everyone knows how to type questions into a chat box, yet whenever something doesn’t work as advertised with the LLMs, the response here is, “you’re holding it wrong”. | |
| ▲ | simianwords 5 days ago | parent | prev [-] | | Do you really think the jump from books to freely accessible, instantly available global data is a smaller jump than from the internet to ChatGPT? This is insane!! | |
| ▲ | raincole 4 days ago | parent [-] | | It's not just smaller, but negligible (in comparison). In the internet era you still had to parse the questions with your own brain; you just didn't necessarily need to solve them yourself. In the ChatGPT era you don't even need to read the questions. At all. The questions could be written in a language you don't understand, and you would still be able to generate plausible answers to them. | |
| ▲ | simianwords 4 days ago | parent [-] | | To a person from the 1920's which one is more impressive? The internet or chatgpt? | | |
| ▲ | raincole 4 days ago | parent | next [-] | | Obviously ChatGPT. I don't know how it's even a question... if you had shown GPT-3.5 to people from before the 20th century, there would've been a worldwide religion around it. | |
| ▲ | simianwords 4 days ago | parent [-] | | Interesting perspective. | | |
| ▲ | mdaniel 4 days ago | parent [-] | | I recall the kerfuffle about (IIRC) LaMDA, where the engineer lost his mind thinking they had spawned life in a machine and felt it was "too dangerous to release," so it's not a ludicrous take. I would hope that the first person to ask "LLM Jesus" how many Rs are in strawberry would have torpedoed the religion, but (a) I've seen dumber mind viruses and (b) it hasn't yet | |
|
| |
| ▲ | devmor 2 days ago | parent | prev [-] | | I don't think that's really a useful question, honestly. If you asked a person from the 1920s to choose what was more impressive between a microwave oven and cellular communication (both consequences of the invention of radar), they'd probably pick the oven. |
|
|
|
| |
| ▲ | klipklop 5 days ago | parent | prev | next [-] | | You are mistaken: Google could not write a bespoke English essay for you, complete with intentional mistakes to throw off the professor. | |
| ▲ | a2128 4 days ago | parent [-] | | In English class we did a lot of book reading and wrote texts about those books. SparkNotes and similar sites allowed you to skip the reading and get a distilled understanding of a book's contents, similar to interacting with an LLM. |
| |
| ▲ | Davidzheng 5 days ago | parent | prev [-] | | disagree? I had to write essays in high school. I don't think the kids now need to if they don't want to. |
|
|
|
| ▲ | Davidzheng 5 days ago | parent | prev | next [-] |
| On current societal impact it might be close to the other three. But do you not think it is different in nature from other technological innovations? |
|
| ▲ | shayief 5 days ago | parent | prev | next [-] |
| > in that every software engineer now depends heavily on copilots With many engineers using copilots, and since LLMs output the most frequent patterns, it's possible that more and more software is going to look the same, which would further reinforce those patterns. The em-dash thing, for example, requires additional prompts and instructions to override. Doing anything unusual would require more effort. |
|
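As a rough illustration of the kind of override mentioned above, a minimal sketch assuming the OpenAI Python SDK (openai>=1.0); the model name and the instruction wording are illustrative choices:

```python
# Sketch: steering a model away from a default stylistic pattern
# via a system prompt. Model choice is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The override must be restated in every new context; the
        # default style is what you get for free.
        {"role": "system",
         "content": "Never use em dashes. Prefer commas or parentheses."},
        {"role": "user",
         "content": "Explain why generated code tends to look alike."},
    ],
)
print(resp.choices[0].message.content)
```

Restating the override for every conversation is the extra effort the comment describes; anything left unstated falls back to the most frequent pattern.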
| ▲ | thomasfromcdnjs 4 days ago | parent | prev | next [-] |
| Pretty sure I read Economics in One Lesson because of HN. Hazlitt makes great arguments about how automation never ruins economies as much as people think. "Chapter 7: The Curse of Machinery" |
|
| ▲ | 5 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | 4 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | mmmore 4 days ago | parent | prev | next [-] |
| LLMs with instruction following have been around for 3 years. Your comment gives me "electricity and gas engines will never replace the horse" vibes. Everyone agrees AI has not radically transformed the world yet. The question is whether we should prepare for the profound impacts current technology pretty clearly presages, if not within 5 years then certainly within 10 or 25 years. |
|
| ▲ | srcreigh 4 days ago | parent | prev | next [-] |
| > Could we get there? Absolutely. We just haven't yet. What else is needed then? |
| |
| ▲ | tymscar 4 days ago | parent [-] | | I don’t know what the answer to the Collatz conjecture is, but I know it’s not “carrot”. |
|
|
| ▲ | legucy 4 days ago | parent | prev [-] |
| I’m skeptical of arguments like this. If we look at the most impactful technologies since the year 1980, the Web is not even in my top 3. Personal computers, spreadsheet software, and desktop publishing have all done more to alter society and daily life than the Web has.
And yes, I recognize that the Web has already created profound change, in that every researcher now depends heavily on online databases, in that commerce faces a major disruption challenge, and in that information access has been completely changed. I just don’t think those changes are on the same level as the normalization of powerful computers on everyone’s desk, our business processes becoming increasingly digitized, or the enablement of small businesses to produce professional-quality documents without having to maintain expensive typesetting equipment.
To me, treating the Web as “different” is still unsubstantiated. Could we get there? Absolutely. We just haven’t yet. But some people are starting to talk about it almost in a way that’s reminiscent of Pascal’s Wager, as if the slight chance of a godly reward from investing in Web technologies means it is rational to devote our all to it. But I’m not holding my breath. |
| |