| ▲ | shaka-bear-tree 2 days ago |
| Funny the original post doesn’t mention AI replacing the coding part of his job. There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it. I want to be optimistic. But it’s hard to ignore what I’m doing and seeing. As far as I can tell, we haven’t hit serious unemployment yet because of momentum and slow adoption. I’m not replying to argue; I hope you are right. But I look around and can’t shake the feeling of Wile E. Coyote hanging in midair waiting for gravity to kick in. |
|
| ▲ | kace91 a day ago | parent | next [-] |
| >There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it. Yes, it’s a god-of-the-gaps situation. We don’t know what the ceiling is. We might have hit it, there might be a giant leap forward ahead, we might leap back (if there is a rug pull). The most interesting questions are the ones that assume human equivalency. Suppose an AI can produce code like a human. Are you ok with merging that code without human review? Are you ok with having a codebase that is effectively a black box? Are you ok with no human being responsible for how the codebase works, or able to take the reins if something changes? Are you ok with being dependent on the company providing this code generation? Are we collectively ok with the eventual loss of human skills, as our talents rust and the new generation doesn’t learn them? Will we be ok if the well of public technical discussion LLMs are feeding from dries up? Those are the interesting debates, I think. |
| |
| ▲ | Symmetry a day ago | parent | next [-] | | > Are you ok with having a codebase that is effectively a black box? When was the last time you looked at the machine code your compiler was giving you? For me, doing embedded development on an architecture without a mature compiler, the answer is last Friday, but I expect that the vast majority of readers here never look at their machine code. We have abstraction layers that we've come to trust because they work in practice. To do our work we're dependent not only on the companies that develop our compilers, where we can at least see the output, but also on the companies that make our CPUs, which we couldn't debug without a huge amount of specialized equipment. So I expect that mostly people will be ok with it. | | |
| ▲ | kace91 a day ago | parent [-] | | >When was the last time you looked at the machine code your compiler was giving you? You could rephrase that as “when was the last time your compiler didn’t work as expected?”. Never in my whole career, in my case. Can we expect that level of reliability? I’m not making the argument that “the LLM is not good enough”; that would bring us back to the boring discussion of “maybe it will be”. The thing is that human language is ambiguous and subject to interpretation, so I think we will have occasionally wrong output even with perfect LLMs. That makes black-box behavior dangerous. | | |
| ▲ | Symmetry a day ago | parent [-] | | We certainly can't expect that with LLMs now, but neither could compiler users back in the 1970s. I do agree that we probably won't ever have them generating code without more back and forth, where the LLM complains that its instructions were ambiguous, and without testing afterwards. |
|
| |
| ▲ | etherlord a day ago | parent | prev | next [-] | | I don't think it really matters if you or I or regular people are ok with it if the people with power are. There doesn't seem to be much any of us regular folks can do to stop it, especially as AI eliminates more and more jobs, thus further reducing the economic power of everyday people. | | |
| ▲ | kace91 a day ago | parent [-] | | I disagree. There are personal decisions to make: Do you bet on keeping your technical skills sharp, or stop and focus on product work and AI usage? Do you work for a company that goes full AI, or try to find one that stays “manual”? What advice do you offer as a technical lead when asked? Leadership ignoring technical advice is nothing new, but there is still value in figuring out those questions. | | |
| ▲ | bluefirebrand 2 hours ago | parent [-] | | > What advice do you offer as a technical lead when asked Learn to shoot a gun and grow your own food, that's my advice as a technical lead right now |
|
| |
| ▲ | listenallyall a day ago | parent | prev [-] | | Have you ever double-checked (in human fashion, not just using another calculator) the output from a calculator? When calculators were first introduced, I'm sure some people such as scientists and accountants did exactly that. Calculators were new; people likely had to be slowly convinced that these magic devices could be totally accurate. But you and I were born well after the invention of calculators, and in our entire lives nobody has doubted that even a $2 calculator can immediately determine the square root of an 8-digit number and be totally accurate. So nobody verifies, and also, a lot of people can't do basic math. |
|
|
| ▲ | torginus 2 days ago | parent | prev | next [-] |
| I predict by March 2026, AI will be better at writing doomer articles about humans being replaced than top human experts. |
| |
|
| ▲ | twodave a day ago | parent | prev | next [-] |
| Well, I would just say to take into account the fact that we're starting to see LLMs become responsible for substantial electricity use, to the point that AI companies are lobbying for (significant) added capacity. And remember that we're all getting these sub-optimal toys at such a steep discount that it would count as predatory pricing if everyone weren't doing it. Basically, there's an upper limit even to how much we can get out of the LLMs we have, and it's more expensive than it seems to be. Not to mention, poorly-functioning software companies won't be made any better by AI. Right now there's a lot of hype behind AI, but IMO it's very much an "emperor has no clothes" sort of situation. We're all just waiting for someone important enough to admit it. |
|
| ▲ | jakewins 2 days ago | parent | prev | next [-] |
| I’m deeply sceptical. Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations. If anything the quality has gotten worse, because the models are now so good at lying when they don’t know that it’s really hard to review. Is this a safe way to make that syscall? Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect, and it’ll either be right or lying; it never says “I don’t know”. Every time OpenAI or Anthropic or Google announce a “stratospheric leap forward” and I go back and try it and find it’s the same, I become more convinced that the lying is structural somehow, that the architecture they have is fundamentally unable to capture “I need to solve the problem I’m being asked to solve” instead of “I need to produce tokens that are likely to come after these other tokens”. The tool is incredible, I use it constantly, but only for things where truth is irrelevant, or where I can easily verify the answer. So far I have found programming, other than trivial tasks and greenfield ”write some code that does x”, much faster without LLMs. |
| |
| ▲ | NotOscarWilde a day ago | parent | next [-] | | > Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect Fully agree; in fact, this literally happened to me a week ago -- ChatGPT was confidently incorrect about its simple lock structure for my multithreaded C++ program, and wrote paragraphs upon paragraphs about how it works, until I pressed it twice about a (real) possibility of some operations deadlocking, and then it folded. > Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations. As a university assistant professor trying to keep up with AI while doing research/teaching as before, this also happens to me and I am dismayed by it. I am certain there are models out there that can solve IMO problems and generate research-grade papers, but the ones I can get easy access to as a customer routinely mess up stuff, including: * Adding extra simplifications to a given combinatorial optimization problem, so that its dynamic programming approach works. * Claiming some inequality is true, but upon reflection it had derived A >= B from A <= C and C <= B (premises that only give A <= B). (This is all ChatGPT 5, thinking mode.) You could fairly counterclaim that I need to get more funding (tough) or invest much more of my time and energy to get access to models closer to what Terence Tao and other top people trying to apply AI in CS theory are currently using. But at least the models cheap enough for me to access as a private person are not on par with what the same companies claim to achieve. | |
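To make the failure mode described above concrete, here is a minimal, hypothetical C++ sketch of a classic lock-ordering deadlock. It is not the poster's actual program; all names (m_a, m_b, transfer_a_to_b, transfer_b_to_a) are invented for illustration. Each function looks locally reasonable, which is exactly the kind of code a model will confidently declare safe:

// Hypothetical illustration only: a classic lock-ordering deadlock of the
// kind described above. Two threads take the same pair of mutexes in
// opposite order; each function is fine in isolation, but run concurrently
// they can each end up waiting forever for the lock the other holds.
#include <mutex>
#include <thread>

std::mutex m_a, m_b;
int balance_a = 100, balance_b = 100;

void transfer_a_to_b(int amount) {
    std::lock_guard<std::mutex> lock_a(m_a); // locks m_a first...
    std::lock_guard<std::mutex> lock_b(m_b); // ...then m_b
    balance_a -= amount;
    balance_b += amount;
}

void transfer_b_to_a(int amount) {
    std::lock_guard<std::mutex> lock_b(m_b); // locks m_b first...
    std::lock_guard<std::mutex> lock_a(m_a); // ...then m_a: opposite order
    balance_b -= amount;
    balance_a += amount;
}

int main() {
    std::thread t1(transfer_a_to_b, 10);
    std::thread t2(transfer_b_to_a, 20);
    t1.join();
    t2.join();
}

One standard fix is to acquire both mutexes together, e.g. with std::scoped_lock both(m_a, m_b); (C++17), which applies a deadlock-avoidance algorithm instead of relying on every caller remembering the same lock order.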
| ▲ | empiricus a day ago | parent | prev [-] | | I agree that the current models are far from perfect. But I am curious how you see the future. Do you really think/feel they will stop here? | | |
| ▲ | jakewins a day ago | parent [-] | | I mean, I'm just some guy, but in my mind: - They are not making progress, currently. The elephant-in-the-room problem of hallucinations is exactly the same as, or, as I said above, worse than it was 3 years ago. - It's clearly possible to solve this, since we humans exist and our brains don't have this problem. There are then two possible paths: either the hallucinations are fundamental to the current architecture of LLMs, and there's some other aspect of the human brain's configuration that they've yet to replicate; or the hallucinations will go away with better and more training. The latter seems to be the bet everyone is making - that's why all these data centers are being built, right? So, if the latter is right, larger training will solve the problem - provided there's enough training data, silicon and electricity on earth to perform that "scale" of training. There are 86B neurons in the human brain. Each one is a stand-alone living organism, like a biological microcontroller. It has constantly-mutating state and memory: short term through RNA and protein presence or absence, long term through chromatin formation enabling and disabling its own DNA over time, and in theory even permanently through DNA rewriting via TEs. Each one has a vast array of input modes - direct electrical stimulation, chemical signalling through a wide array of signalling molecules, and electrical field effects from adjacent cells. Meanwhile, GPT-4 has 1.1T floats. No billions of interacting microcontrollers, just static floating points describing a network topology. The complexity of the neural networks that run our minds is spectacularly higher than that of the simulated neural networks we're training on silicon. That's my personal bet. I think the 86B interconnected stateful microcontrollers are so much more capable than the 1T static floating points, and the 1T static floating points are already nearly impossibly expensive to run. So I'm bearish, but of course, I don't actually know. We will see. For now all I can conclude is that the frontier model developers lie incessantly in every press release, just like their LLMs. | | |
| ▲ | xmcqdpt2 9 hours ago | parent | next [-] | | The complexity of actual biological neural networks became clear to me when I learned about the different types of neurons. https://en.wikipedia.org/wiki/Neural_oscillation There are clock neurons, ADC neurons that transform analog signal intensity into counts of digital spikes, neurons that integrate signals over time, neurons that synchronize with each other, and so on. Transformer models have none of this. | |
| ▲ | empiricus a day ago | parent | prev [-] | | Thanks, that's a reasonable argument. Some critique: based on this argument it is very surprising that LLMs work so well, or at all. The fact that even small LLMs do something suggests that the human substrate is quite inefficient for thinking. Compared to LLMs, it seems to me that 1. some humans are more aware of what they know; 2. humans have very tight feedback loops to regulate and correct. So I imagine we do not need much more scaling, just slightly better AI architectures. I guess we will see how it goes. |
|
|
|
|
| ▲ | botanrice a day ago | parent | prev | next [-] |
| idk man, I work at a big consulting company and all I'm hearing is dozens of people coming out of their project teams like, "yea im dying to work with AI, all we're doing is talking about it with clients". It's like everyone knows it is super cool, but nobody has really cracked the code for what its economic value truly, truly is yet. |
|
| ▲ | zwnow 2 days ago | parent | prev [-] |
| > There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it. Any sources on that? Except for some big tech companies, I don't see that happening at all. While this isn't empirical, most devs I know try to avoid it like the plague. I can't imagine that many devs actually jumped on the hype train to replace themselves... |
| |
| ▲ | tormeh 2 days ago | parent [-] | | This is what I also see. AI is used sparingly. Mostly for information lookup and autocomplete. It's just not good enough for other things. I could use it to write code if I really babysit it and triple check everything it does? Cool cool, maybe sometime later. | | |
| ▲ | kakacik 2 days ago | parent [-] | | Who does the typical code-sweatshop work, churning out one smallish app at a time and quickly moving on? Certainly not your typical company-hired permanent dev; they (we) drown in tons of complex legacy code that has kept working for the past 10-20 years and that the company sees no reason to throw away. For the folks that do churn out such apps, it's great & horrible long term. For folks like me, development is maybe 10% of my work, and by far the best part - creative, problem-solving, stimulating, actually learning something myself. Why would I want to mildly optimize that 10% and lose all the good stuff, while speed wouldn't even visibly improve? To really improve speed in bigger orgs, the change would have to happen in processes, office politics, management priorities and so on. No help from LLMs there; if anything, trend-chasing managers just introduce more chaos with negative consequences. |
|
|