| ▲ | TeMPOraL 20 hours ago |
| Yeah, that sounds very much like the arguments parents gave to those of us who were kids when the web became a thing. "Cool walls of text. Shame you can't tell if any of that is true. You didn't put in work getting that information, and it's the work that matters." Except it turns out it's not a problem in practice, and "the work" matters in less than 1% of cases, and even then, it's much easier done with the web than without. But it was impossible to convince the older generation of this. It was all apparent from our personal experience, yet we couldn't put it into words that the critics would find credible. It took a few more years and personal experience for the rest to get up to speed with reality. |
|
| ▲ | oxfordmale 19 hours ago | parent | next [-] |
| There remains a significant challenge with LLM-generated code. It can give the illusion of progress, but produce code that has many bugs, even if you craft your LLM prompt to test for such edge cases. I have had many instances where the LLM confidently states that those edge cases and unit tests are passing, while they are failing. Three years ago, would you have hired me as a developer if I had told you I was going to copy and paste code from Stack Overflow and a variety of developer blogs, and glue it together in a spaghetti-style manner? And that I would comment out failing unit tests, as Stack Overflow can't be wrong? LLMs will change Software Engineering, but not in the way we are envisaging right now, and not in the way companies like OpenAI want us to believe. |
| |
| ▲ | vidarh 18 hours ago | parent [-] | | Proper coding agents can already be set up with hooks or other means of forcing linting and tests to actually run, and of preventing the LLM from bypassing them (a sketch of such a gate follows below). Adding extra checks to the workflow works very well to improve quality. Use the tools properly, and while you still need to take some care, these issues are rapidly diminishing, independently of improvements to the models themselves. | | |
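A minimal sketch of the kind of verification gate described above, written as a standalone Python script that an agent hook (or a git pre-commit hook) could invoke. The specific commands (ruff, pytest) are assumptions; substitute your project's own linter and test runner.

    #!/usr/bin/env python3
    """Verification gate: run lint and tests, exit nonzero on any failure.

    A sketch only; wire it into your agent's hook mechanism so that
    "the tests pass" is verified by exit code, not merely asserted.
    """
    import subprocess
    import sys

    # Hypothetical commands -- swap in your project's linter and test runner.
    CHECKS = [
        ["ruff", "check", "."],
        ["pytest", "-q"],
    ]

    def main() -> int:
        for cmd in CHECKS:
            try:
                # Output is not captured, so failures land in the agent's log.
                result = subprocess.run(cmd)
            except FileNotFoundError:
                print(f"GATE ERROR: {cmd[0]} not installed", file=sys.stderr)
                return 127
            if result.returncode != 0:
                # A nonzero exit blocks the step, so the agent cannot simply
                # claim the checks passed or comment them out.
                print(f"GATE FAILED: {' '.join(cmd)}", file=sys.stderr)
                return result.returncode
        print("GATE PASSED: lint and tests ran clean.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Because the only contract is the exit code, a gate like this is agent-agnostic: anything that respects a failing hook (a coding agent, CI, or a pre-commit hook) gets the same enforcement.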
| ▲ | scubbo 9 hours ago | parent [-] | | > Use the tools properly > (from upthread) I was being sold a "self driving car" equivalent where you didn't even need a steering wheel for this thing, but I've slowly learned that I need to treat it like automatic cruise control with a little bit of lane switching. This is, I think, the core of a lot of people's frustrations with the narrative around AI tooling. It gets hyped up as this magnificent wondrous miraculous _intelligence_ that works right out of the box; then when people use it and (correctly!) identify that that's not the case, they get told that it's their own fault for holding it wrong. So which is it - a miracle that "just works", or a tool that people need to learn to use correctly? You (impersonal "you", here, not you-`vidarh`) don't get to claim the former and then retreat to the latter. If this was just presented as a good useful tool to have in your toolbelt, without all the hype and marketing, I think a lot of folks (who've already been jaded by the scamminess of Web3 and NFTs and Crypto in recent memory) would be a lot less hostile. | | |
| ▲ | TeMPOraL 7 hours ago | parent | next [-] | | How about: 1) Unbounded claims of miraculous intelligence don't come from people actually using it. 2) The LLMs really are a "miraculous intelligence that works right out of the box" for simple cases of a very large class of problems that were previously not trivial (or even possible) to solve with computers. 3) Once you move past the simple cases, they require an increasing amount of expertise and hand-holding to get good results from. Most of the "holding it wrong" responses happen around the limits of what current LLMs can reliably do. 4) But still, that they can do any of that at all is not far from a miracle in itself - and they keep getting better. | | |
| ▲ | scubbo 2 hours ago | parent [-] | | With the exception of 1) being "No True Scotsman"-ish, this is all very fair - and if the technology were presented with this kind of grounded and realistic evaluation, there'd be a lot less hostility (IMO)! |
| |
| ▲ | vidarh 8 hours ago | parent | prev [-] | | The problem with this argument is that it is usually not the same people making the different arguments. |
|
|
|
|
| ▲ | clarinificator 19 hours ago | parent | prev | next [-] |
| What gets me the most about the hype and the people arguing about it is: if it is so clearly revolutionary and the inevitable future, each minute you spend arguing about it is a minute you waste. People who stumble upon game-changing technologies don't brag about them online; they use that edge in silence for as long as possible. |
| |
| ▲ | TeMPOraL 17 hours ago | parent [-] | | > People who stumble upon game-changing technologies don't brag about them online; they use that edge in silence for as long as possible. Why? I'm not in this to make money, I'm in this for cool shit. Game-changing technologies are created incrementally, and come from extensive collaboration. |
|
|
| ▲ | oytis 19 hours ago | parent | prev | next [-] |
| > Except it turns out it's not a problem in practice Come on, this problem is now a US president. |
|
| ▲ | danielbarla 19 hours ago | parent | prev [-] |
| I mean, I think the truth is somewhere in the middle, with a sliding scale that moves with time. I got limited access to the internet in the Netscape Navigator era, and while it was absolutely awesome, until around 2010, maybe 2015, I maintained that for technical learning, the best-quality materials were all printed books (well, aside from various newsgroups, where you had access to experts). I think the high barrier to entry and the significant effort required were a pretty good junk filter. I suspect the same is true of LLMs. You're right, they're right, to various degrees, and it's changing in various ways as time goes on. |
| |
| ▲ | vidarh 18 hours ago | parent [-] | | Circa 1994 was the tipping point for me, when I could find research papers in minutes that I wouldn't even have known about if I had to rely on my university library. |
|