| ▲ | cromka 2 hours ago |
| We've literally just started, what "over and over" do you refer to? |
|
| ▲ | malfist an hour ago | parent | next [-] |
| I've been told for the past four years that AI is coming for my job. And that's just not true. It's no closer to that than it was 4 years ago. |
| |
| ▲ | laughing_man 2 minutes ago | parent | next [-] |
| I'm not sure how anyone would know if it's closer or not. There's been a lot of progress in LLMs over the last four years. |
| ▲ | KronisLV 22 minutes ago | parent | prev | next [-] |
| > It's no closer to that than it was 4 years ago.
| There are people and companies out there releasing entire vibe-coded projects, and for some, upwards of 80% of the code they develop is AI-assisted/generated. Since around the end of 2025 and models like Opus 4.6, the SOTA has gotten good enough to work agentically on all sorts of dev tasks with pretty good success rates (harnesses and how you use them still matter, of course). |
| ▲ | wiseowise 5 minutes ago | parent [-] |
| > There are people and companies out there releasing entire vibe coded projects and for some upwards of 80% of the code they develop is AI-assisted/generated.
| And how much revenue do they generate? |
| |
| ▲ | Danox 27 minutes ago | parent | prev | next [-] |
| It is the lament of every generation of humans to think that they are the pinnacle of everything that has come before. We are just at the start of the so-called AI era; many very smart people coming up still haven't really gotten their hands on all of the material available from a hardware and software standpoint. We are still at the early stages. I am very optimistic. I just wish I were younger, junior-high or high-school age, to take advantage with my current resources, damn... The oldest lament in the books. |
| ▲ | kakacik 28 minutes ago | parent | prev [-] |
| It feels like it's just around the corner. But when you turn the 20th corner and it's still behind the next one, maybe things are a bit different than they seem, or than our clueless emotions make us believe. Long term it's bleak, but short/medium term, not so much: if I get fired it won't be an LLM replacing me but rather company politics, budget changes, etc. That was the only real (very real) risk for the past 15 years too, consistently. But it helps to not work for a US company. |
|
|
| ▲ | hansmayer an hour ago | parent | prev | next [-] |
| > We've literally just started
| 5+ years in the software world is like 30 years in others... So, given the lacking use-cases and the humongous amounts of capital already wasted on chatbots, it's more like "we" are closer to closing the curtains than to "just started"... |
|
| ▲ | ASalazarMX 2 hours ago | parent | prev | next [-] |
| Hype cycles. AI has made developers obsolete like a dozen times in the last couple of years, at least according to its developers. |
|
| ▲ | luckystarr 2 hours ago | parent | prev [-] |
| Discovery of the best solution in a problem space is not generative but only verificative. Meaning: the LLM can see whether one solution is better than another, but it can't generate the best one from the start. If you trust it blindly, you'll get sub-par solutions. This is definitely an agent problem rather than an LLM problem. Has anybody got something explorative like this working? |
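| The explorative pattern described above, generate candidates and let a verifier keep the winner, can be sketched minimally. This is a toy illustration, not a real agent: `propose` stands in for an LLM "generate a candidate" call, and `score` stands in for a verifier (tests, benchmarks, a critic model); both names are hypothetical.

```python
import random

def propose(seed, rng):
    # Hypothetical stand-in for an LLM generating a candidate:
    # here we just perturb a number; a real agent would mutate code or text.
    return seed + rng.uniform(-1.0, 1.0)

def score(candidate):
    # Hypothetical stand-in for a verifier; higher is better.
    # A real agent would run tests or benchmarks here.
    return -abs(candidate - 3.0)

def explore(seed=0.0, rounds=200, rng=None):
    """Generate-and-verify loop: keep whichever candidate scores best."""
    rng = rng or random.Random(42)
    best, best_score = seed, score(seed)
    for _ in range(rounds):
        cand = propose(best, rng)
        s = score(cand)
        if s > best_score:  # the verification step picks the winner
            best, best_score = cand, s
    return best

print(explore())  # ends up near the optimum at 3.0
```

| The point of the sketch matches the comment: the generator alone never produces the best answer up front; it is the verification step in the loop that ratchets toward it.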
| |
| ▲ | coldtea 2 hours ago | parent [-] |
| So? Hundreds of millions of office and dev jobs aren't about developing "optimal solutions" to begin with. |
|