| ▲ | jstummbillig 5 hours ago |
| > This quote is more sinister than I think was intended; it likely applies to all frontier coding models. As they get better, we quickly come to rely on them for coding. It's like playing a game on God Mode. Engineers become dependent; it's truly addictive. What's the worst potential outcome, assuming that all models get better, more efficient and more abundant (which seems to be the current trend)? The goal of engineering has always been to build better things, not to make building them harder. |
|
| ▲ | Spartan-S63 4 hours ago | parent | next [-] |
At some point, because these models are trained on existing data, significant technological advancement ceases--at least in tech (as it relates to programming languages, paradigms, etc.). You also deskill an entire group of people, to the extent that when an LLM fails to accomplish a task, it becomes nearly impossible to accomplish it manually. It's learned helplessness on a large scale.
| |
| ▲ | doug_durham 15 minutes ago | parent | next [-] | | Do you think that there has been technological advancement in coding in the last 40 years? Programming languages and “paradigms” are crutches to help humans attempt to handle complexity. They are affordances, not a property of nature. | |
| ▲ | mikestorrent 3 hours ago | parent | prev | next [-] | | There's no reason it has to be that way. Imagine e.g. taking an agent and a lesser-known but technically-superior language stack - say you're an SBCL fan. You find that the LLM is less useful because it hasn't been trained on 1000000 Stack Overflow posts about Lisp, so it can't reason about Lisp as well as it can about Python. So, you set up a long-running agent team and give it the job of building up a very complete and complex set of examples and documentation with in-depth tests etc. that produce various kinds of applications and systems using SBCL, write books on the topic, etc. It might take a long time and a lot of tokens, but it would be possible to build a synthetic ecosystem of true, useful information that has been agentically determined through trial-and-error experiments. This is then suitable training data for a new LLM. This would actually advance the state of the art: not in terms of "what SBCL can do" but rather in terms of "what LLMs can directly reason about with regard to SBCL without needing to consume documentation". I imagine this same approach would work fine for any other area of scientific advancement, as long as experimentation is in the loop. It's easier in computer science because the experiment can be run directly by the agent, but there's no reason it can't farm experiments out to lab co-op students somewhere when working in a different discipline. | |
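| (The trial-and-error pipeline described above could be sketched roughly like this. This is a minimal illustration, not a real agent system: `stub_generate` is a hypothetical stand-in for an LLM proposing candidate code, and the verification here just runs Python snippets in-process. The key idea is that only candidates that pass their own tests survive into the synthetic training corpus.) |

```python
def stub_generate():
    """Stand-in for an LLM agent proposing (snippet, test) pairs.

    A real pipeline would call a model here; this stub returns one
    correct and one buggy candidate to show the filtering step.
    """
    return [
        ("def add(a, b):\n    return a + b", "assert add(2, 3) == 5"),
        ("def add(a, b):\n    return a - b", "assert add(2, 3) == 5"),  # buggy
    ]

def verified(snippet, test):
    """Execute the candidate and its test in an isolated namespace."""
    ns = {}
    try:
        exec(snippet, ns)
        exec(test, ns)
        return True
    except Exception:
        return False

def build_corpus(rounds=1):
    """Keep only trial-and-error-verified examples as training data."""
    corpus = []
    for _ in range(rounds):
        for snippet, test in stub_generate():
            if verified(snippet, test):
                corpus.append((snippet, test))
    return corpus
```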
| ▲ | kenjackson 3 hours ago | parent | prev | next [-] | | > At some point, because these models are trained on existing data, you cease significant technological advancement What makes you think that they can't incrementally improve the state of the art... and, by running continuously at scale, do it faster than we humans can? The potentially sad outcome is that we continue to do less and less, because they eventually will build better and better robots, so even activities like building the datacenters and fabs are things they can do w/o us. And eventually most of what they do is to construct scenarios so that we can simulate living a normal life. | |
| ▲ | flemhans 2 hours ago | parent | prev [-] | | Provided you believe LLMs cannot perform research. | | |
| ▲ | 32df an hour ago | parent [-] | | If they could, OAI would be all over it. But they shut down that prism project. So... |
|
|
|
| ▲ | Jtarii 4 hours ago | parent | prev | next [-] |
| >What's the worst potential outcome, assuming that all models get better, more efficient and more abundant Complexity steadily rises, unencumbered by the natural limit of human understanding, until technological collapse, either by slow decay or major systems going down with increasing frequency. |
| |
| ▲ | doug_durham 13 minutes ago | parent | next [-] | | Existing software is already beyond the limits of human understanding. | |
| ▲ | motoxpro 4 hours ago | parent | prev | next [-] | | Why would the systems go down if the models are better than humans at finding bugs? Playing a bit of devil's advocate here, but why would the models be worse at handling the complexity, if you assume they will get better and better? All software has bugs already. | | |
| ▲ | Jtarii 4 hours ago | parent | next [-] | | Adding complexity to software has never been easier than it is right now, and we really have no idea if the models will progress to the point where they can actually write large systems in a maintainable way. Taking the gamble that the models of the future will dig us out of the gigantic hole we are currently digging is bold. | |
| ▲ | cyberax 4 hours ago | parent | prev [-] | | Finding bugs does not equal being able to do good architecting. |
| |
| ▲ | simondotau 4 hours ago | parent | prev | next [-] | | It’s always been thus at lower layers of abstraction. Only a minority of programmers would understand how to write an operating system. Only a tiny number of people would know how a modern CPU logically works, and fewer still could explain the electrical physics. | | |
| ▲ | sho_hn 4 hours ago | parent [-] | | > Only a minority of programmers would understand how to write an operating system. Only a tiny number of people would know how a modern CPU logically works, and fewer still could explain the electrical physics. I'd say this is true for programmers at, say, 20, but they spend the next four decades slowly improving their understanding and mastery of all the things you name; at least the good ones do. The real question is whether that growth trajectory will change for the worse or the better. To be clear, this is not an AI doomerist comment, because none of us have spent enough time with the tech yet. I've gone down multiple lanes of thought on this, and I have cause for both worry and optimism. I'm curious to see what the lives of engineers in an AI world will ultimately look like. |
| |
| ▲ | fdsajfkldsfklds 4 hours ago | parent | prev [-] | | The Anti-Singularity! It's coming for us all. |
|
|
| ▲ | _alternator_ 5 hours ago | parent | prev [-] |
| Worst case? I dunno, maybe the world's oldest profession becomes the world's only profession? Something along those lines. |
| |
| ▲ | FeteCommuniste 4 hours ago | parent [-] | | > the world's oldest profession becomes the world's only profession Until the sexbots come out the other side of the uncanny valley, that is. | | |
|