| ▲ | Waterluvian 15 hours ago |
| I think if your job is to assemble a segment of a car based on a spec using provided tools and pre-trained processes, it makes sense if you worry that giant robot arms might be installed to replace you. But if your job is to assemble a car in order to explore what modifications to make to the design, experiment with a single prototype, and determine how to program those robot arms, you’re probably not thinking about the risk of being automated. I know a lot of counterarguments are a form of, “but AI is automating that second class of job!” But I just really haven’t seen that at all. What I have seen is a misclassification of the former as the latter. |
|
| ▲ | enlyth 15 hours ago | parent | next [-] |
| A software engineer with an LLM is still infinitely more powerful than a commoner with an LLM. The engineer can debug, guide, change approaches, and give very specific instructions if they know what needs to be done. The commoner can only hammer the prompt repeatedly with "this doesn't work, can you fix it". So yes, our jobs are changing rapidly, but that doesn't strike me as making us obsolete any time soon. |
| |
| ▲ | javier_e06 14 hours ago | parent | next [-] | | I listened to a segment on the radio where a college teacher told their class that it was okay to use AI to assist during a test, provided they: 1. Declare in advance that AI is being used. 2. Provide verbatim the question-and-answer session. 3. Explain why the answer given by the AI is a good answer. Part of the grade will include grading 1, 2, and 3. Fair enough. | | |
| ▲ | chasd00 10 hours ago | parent | next [-] | | It’s better than nothing, but the problem is that students will figure out they can feed step 2 right back to the AI, logged in via another session, to get 3. | |
| ▲ | bheadmaster 14 hours ago | parent | prev | next [-] | | This is actually a great way to foster the learning spirit in the age of AI. Even if the student uses AI to arrive at an answer, they will still need to, at the very least, ask the AI for an explanation that will teach them how it arrived at the solution. | | |
| ▲ | jdjeeee 14 hours ago | parent [-] | | No, this is not the way we want learning to work - just as students are banned from using calculators until they have mastered the foundational thinking. | | |
| ▲ | graemep 39 minutes ago | parent | next [-] | | There is research showing that banning calculators impedes the learning of maths. It is certainly not obvious to me that calculators have a negative effect - I always allowed my kids to use them. LLMs are trickier, and their use needs to be restricted to stop cheating, just as my kids had restrictions on what calculators they could use in some exams. That does not mean they are all bad, or even net bad if used correctly. | |
| ▲ | bheadmaster 9 hours ago | parent | prev | next [-] | | That's a fair point, but AI can do much more than just provide you with an answer like a calculator. AI can explain the underlying process of manual computation and help you learn it. You can ask it questions when you're confused, and it will keep explaining no matter how far off topic you go. We don't consider tutoring bad for learning - quite the contrary, we tutor slower students to help them catch up, and advanced students to help them fulfill their potential. If we use AI as if it were an automated, tireless tutor, it may change learning for the better. Not that learning was anywhere near great as it was. | |
| ▲ | Arainach 3 hours ago | parent [-] | | You're assuming the students are reading any of this. They're not, they're just copy/pasting it. |
| |
| ▲ | stevofolife 12 hours ago | parent | prev [-] | | Calculators don't show their work step by step. AI can. | |
| ▲ | sethops1 5 hours ago | parent | next [-] | | Symbolic computation is a thing. How do you think Wolfram Alpha worked for 20 years before AI? | |
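| A small illustration of the point - this is SymPy, not whatever Wolfram Alpha actually runs, but it is deterministic symbolic computation producing exact, stepwise-checkable answers with no LLM involved:
|
|     # Deterministic symbolic computation (SymPy) - exact answers, no LLM
|     from sympy import symbols, solve, diff, integrate
|
|     x = symbols("x")
|     expr = x**3 - 6*x**2 + 11*x - 6
|
|     print(solve(expr, x))      # exact roots: [1, 2, 3]
|     print(diff(expr, x))       # derivative: 3*x**2 - 12*x + 11
|     print(integrate(expr, x))  # antiderivative: x**4/4 - 2*x**3 + 11*x**2/2 - 6*x
| |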
| ▲ | simianparrot 10 hours ago | parent | prev | next [-] | | And it’s making that up as well. | |
| ▲ | danaris 9 hours ago | parent | prev [-] | | Yeah; it gets steps 1-3 right, 4-6 obviously wrong, and then 7-9 subtly wrong such that a student, who needs it step by step while learning, can't tell. |
|
|
| |
| ▲ | aesch 13 hours ago | parent | prev | next [-] | | Props to the teacher for putting in the work to thoughtfully grade an AI transcript! As I typed that, I wondered whether a lazy teacher might then use AI to grade the student's AI transcript? | |
| ▲ | moffkalast 9 hours ago | parent | prev [-] | | That's roughly what we did as well. Use anything you want, but in the end you have to be able to explain the process, and the projects are harder than before. If we can do more now in a shorter time, then let's teach people to get proficient at it, not arbitrarily limit them in ways they won't be limited when doing their jobs later. |
| |
| ▲ | Waterluvian 15 hours ago | parent | prev | next [-] | | I think it's a bit like the Dunning-Kruger effect. You need to know what you're even asking for and how to ask for it. And you need to know how to evaluate whether you've got it. This actually reminds me so strongly of the Pakleds from Star Trek TNG. They knew they wanted to be strong and fast, but the best they could do was say, "make us strong." They had no ability to evaluate that their AI (sorry, Geordi) was giving them something that looked strong, but simply wasn't. | |
| ▲ | JoelMcCracken 9 hours ago | parent [-] | | Oh wow this is a great reference/image/metaphor for "software engineers" who misuse these tools - "the great pakledification" of software |
| |
| ▲ | icedchai 6 hours ago | parent | prev | next [-] | | Yep, I've seen a couple of folks pretending to be junior PMs, thinking they can replace developers entirely. The problem is, they can't write a spec. They can define a feature at a very high level, on a good day. They resort to asking one AI to write them a spec that they feed to another. It's slop all the way down. | | |
| ▲ | graemep 31 minutes ago | parent [-] | | People have tried that with everything from COBOL to low code. It's even succeeded in some problem domains (e.g. things people code with spreadsheet formulas), but there is no general solution that replaces programmers entirely. |
| |
| ▲ | bambax 15 hours ago | parent | prev [-] | | Agree totally. |
|
|
| ▲ | Buttons840 14 hours ago | parent | prev | next [-] |
| My job is to make people who have money think I'm indispensable to achieving their goals. There's a good chance AI can fake this well enough to replace me. Faking it would be good enough in an economy with low levels of competition; everyone can judge for themselves if this is our economy or not. |
| |
|
| ▲ | figassis 12 hours ago | parent | prev | next [-] |
| I don’t think this is the issue “yet”. It’s that no matter what class you are, your CEO does not care. Mediocre AI work is enough to give them immense returns and an exit. They’re not looking out for the unfortunate bag holders. The world has always had tolerance for highly distributed crap. See Windows. |
| |
| ▲ | dasil003 9 hours ago | parent [-] | | This seems like a purely cynical take lacking any substantive analysis. Despite whatever nasty business practices and shitty UX Windows has foisted on the world, there is no denying the tremendous value that it has brought, including impressive backwards compatibility that rivals some of the best platforms in computing history. AI shovelware pump-n-dump is an entirely different short-term game that will never get anywhere near Microsoft levels of success. It's more like the fly-by-nights in the dotcom bubble that crashed and burned without having achieved anything except raising a large investment. | | |
| ▲ | figassis 9 hours ago | parent [-] | | You misunderstand me. While I left Windows over a decade ago, I recognize it was a great OS in some respects. I was referring to the recent AI-fueled Windows developments and ad-riddled experiences. Someone decided that is fine, and you won't see orgs or regular users drop it over it... tolerance. |
|
|
|
| ▲ | crazylogger 14 hours ago | parent | prev | next [-] |
| You are describing traditional (deterministic?) automation before AI. With AI systems as general as today's SOTA LLMs, they'll happily take on the job regardless of whether the task falls into class I or class II. Ask a robot arm "how should we improve our car design this year" and it'll certainly get stuck. Ask an AI, and it'll give you a real opinion that's at least on par with a human's. If a company builds enough tooling to complete the "AI comes up with idea -> AI designs prototype -> AI robot physically builds the car -> AI robot test drives the car -> AI evaluates all prototypes and confirms next year's design" feedback loop (sketched below), then in principle this can work. This is why AI is seen as such a big deal - it's fundamentally different from all previous technologies. To an AI, there is no line that would distinguish class I from class II. |
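| A rough sketch of what that loop looks like as code - everything here is hypothetical, with the AI and robot steps stubbed out as a toy random search, just to make the shape concrete:
|
|     import random
|
|     # Hypothetical sketch of the closed design loop described above.
|     # The "AI" and "robot" steps are toy stubs; a real system would
|     # call out to models and physical tooling.
|     def ai_propose_design(current: float) -> float:
|         # "AI comes up with idea -> AI designs prototype"
|         return current + random.uniform(-0.1, 0.5)
|
|     def robot_build_and_test(design: float) -> float:
|         # "AI robot physically builds the car -> AI robot test drives it"
|         return design + random.uniform(-0.2, 0.2)  # measured performance
|
|     def yearly_iteration(best_design: float, prototypes: int = 5) -> float:
|         # "AI evaluates all prototypes and confirms next year's design"
|         candidates = [ai_propose_design(best_design) for _ in range(prototypes)]
|         return max(candidates, key=robot_build_and_test)
|
|     design = 0.0
|     for year in range(10):
|         design = yearly_iteration(design)
|     print(design)
|
| Whether the evaluation step can actually be trusted without a human in the loop is, of course, the whole open question.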
|
| ▲ | HorizonXP 15 hours ago | parent | prev | next [-] |
| This is actually a really good description of the situation. But I will say, as someone who prided himself on being the second type you described, I am becoming very concerned about how much of my work was misclassified. It does feel like a lot of the work I placed in the second class is being automated; maybe it had previously just overinflated my ego. |
| |
| ▲ | skydhash 15 hours ago | parent [-] | | SWE is more like Formula 1, where each race presents a unique combination of track, car, driver, and conditions. You may have tools to build the thing, but designing the thing is the main issue. Code editors, linters, test runners, and build tools are for building the thing. Understanding the requirements and the technical challenges is designing the thing. | |
| ▲ | Waterluvian 15 hours ago | parent | next [-] | | The other day I said something along the lines of, "be interested in the class, not the instance" and I meant to try to articulate a sense of metaprogramming and meta-analysis of a problem. Y is causing Z and we should fix that. But if we stop and study the problem, we might discover that X causes the class of Y problem, so we can fix the entire class, not just the instance. And perhaps W causes the class of X issue. I find my job is more and more about how far up this causality tree I can reason, how confident I am in my findings, and how far up it makes business sense to address right now, later, or ever. | |
| ▲ | altmanaltman 14 hours ago | parent | prev [-] | | Is it? As an F1 fan, I really fail to see the metaphor. The cars do not change that much; only the setup does, based on track and conditions. The drivers are fairly consistent through the season. Once a car is built and a pecking order is established in the season, it is pretty unrealistic to expect a team with a slower car to outcompete a team with a faster car, no matter what track it is (since the conditions affect everyone equally). Over the last 16 years, Red Bull has won 8 times, Mercedes 7 times, and McLaren once. Which means, regardless of the change in tracks and conditions, the winners are usually the same. So either every other team consistently sucks at "understanding the requirements and the technical challenges" or the metaphor doesn't make a lot of sense. | |
| ▲ | Waterluvian 14 hours ago | parent | next [-] | | I wonder how true this was historically. I imagine race car driving had periods of rapid, exciting innovation. But I can see how a lot of it has probably reached levels of optimization where the rules, safety, and technology change well within the realm of diminishing returns. I'm sure there's still a ridiculous amount of R&D though? (I don't really know race car driving) | |
| ▲ | altmanaltman 13 hours ago | parent [-] | | Sure, there are crazy levels of R&D, but that mostly happens off season or when there is a change in regulations, which usually happens every 4-5 years. Interestingly, this year the entire grid starts with new regs and we don't really know the pecking order yet. But my whole point was that, race to race, it really isn't as different for the teams as the comment implied, and I am still kind of lost as to how it fits SWE unless you're really stretching things. Even then, most teams don't even make their own engines, etc. | |
| ▲ | skydhash 9 hours ago | parent [-] | | Do you really think that rainy Canada is the same as Jeddah, or Singapore? And what is the purpose of the free practice sessions? You’ve got the big bet of designing the car between seasons (which is kind of like the big architectural decisions you make at the beginning of a project). Then you’ve got the refinement over the season, which is like bug fixes and performance tweaks. There are the parts upgrades, which are like small features added on top of the initial software. For the next season, you either improve on the design or start from scratch depending on what you’ve learned. In the first case, it is the new version of the software. In the second, that’s the big refactor. I remember that the reserve drivers may do a lot of simulations to provide data to the engineers. |
|
| |
| ▲ | skydhash 14 hours ago | parent | prev [-] | | Most projects don’t change that much either. Head over to a big open source project, and more often than not you will only see tweaks. Being able to do those tweaks requires a very good understanding of the whole project (Naur’s “Programming as Theory Building”). Also, in software we can do big refactors; F1 teams are restricted to the version they’ve put in the first race. But we do have a lot of projects that were designed well enough that the initial version never changed - everything was just built on top of it. |
|
|
|
|
| ▲ | mips_avatar 10 hours ago | parent | prev | next [-] |
| Well, a lot of managers view their employees as doing the former, but they’re really doing the latter. |
|
| ▲ | raincole 15 hours ago | parent | prev [-] |
| > I know a lot of counterarguments are a form of, “but AI is automating that second class of job!” Uh, that's not the issue. The issue is that there isn't that much demand for the second class of job. At least not yet. The first class of job is what feeds billions of families. Yeah, I'm aware of the lump of labour fallacy. |
| |
| ▲ | Waterluvian 15 hours ago | parent | next [-] | | Discussing what we should do about the automation of labour is nothing new and is certainly a pretty big deal here. But I think you're reframing/redirecting the intended topic of conversation by suggesting that "X isn't the issue, Y is." It wanders off the path, as if I had responded with, "that's also not the issue. The issue is that people need jobs to eat." | |
| ▲ | blktiger 14 hours ago | parent | prev [-] | | It depends a lot on the type of industry, I would think. |
|