| ▲ | Workaccount2 13 hours ago |
| People like communicating in natural language. LLMs are the first step in the movement away from the "early days" of computing where you needed to learn the logic based language and interface of computers to interact with them. That is where the inevitabilism comes from. No one* wants to learn how to use a computer, they want it to be another entity that they can just talk to. *I'm rounding off the <5% who deeply love computers. |
|
| ▲ | layer8 13 hours ago | parent | next [-] |
| People also like reliable and deterministic behavior, like when they press a specific button it does the same thing 99.9% of the time, and not slightly different things 90% of the time and something rather off the mark 10% of the time (give or take some percentage points). It's not clear that LLMs will get us to the former. |
| |
| ▲ | ryankrage77 3 hours ago | parent | next [-] | | To a user, many modern UIs are unpredictable and unreliable anyway. "I've always done it this way, but it's not working...". | | |
| ▲ | layer8 3 hours ago | parent [-] | | I agree, but UIs don't need to be that way. Non-smart light switches, thermostats, household appliances, etc. generally aren't that way, and that’s why many people prefer them, and expect UIs to work similarly — which they overall typically still do. |
| |
| ▲ | hnfong 11 hours ago | parent | prev | next [-] | | You can set the temperature of LLMs to 0 and that will make them deterministic. Not necessarily reliable though, and you could get different results if you typed an extra space or punctuation mark. | | |
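In API terms that is a single parameter (a minimal sketch, assuming the OpenAI Python SDK; the model name is a placeholder):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Make me a sandwich"}],
        temperature=0,  # greedy decoding: always take the most likely next token
        seed=42,        # reduces, but does not guarantee, run-to-run variation
    )
    print(resp.choices[0].message.content)

Even at temperature 0, batching and floating-point effects can still make repeated runs differ slightly.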
| ▲ | sealeck 10 hours ago | parent | next [-] | | Even then, this isn't actually what you want. When people say deterministic, at one level they mean "this thing should be a function" (so input x always returns the same output y). Some people also use determinism to mean they want a certain level of "smoothness" so that the function behaves predictably (and they can understand it). That is, "make me a sandwich" should not return radically different results from "make me a cucumber sandwich". As you note, your scheme largely solves the first problem (which is a pretty weak condition) but fails to solve the second problem. | |
| ▲ | jihadjihad 10 hours ago | parent | prev [-] | | > You can set the temperature of LLMs to 0 and that will make them deterministic. It will make them more deterministic, but it will not make them fully deterministic. This is a crucial distinction. | | |
| ▲ | cookingrobot 10 hours ago | parent | next [-] | | That’s an implementation choice. All the math involved is deterministic if you want it to be. | | |
| ▲ | Jaxan 8 hours ago | parent [-] | | It will still be nondeterministic in this context. Prompts like “Can you do X?” and “Please do X” might result in very different outcomes, even when it’s “technically deterministic”. For the human operating with natural language it’s nondeterministic. |
| |
| ▲ | falcor84 10 hours ago | parent | prev [-] | | Google is significantly less deterministic than AltaVista was. |
|
| |
| ▲ | erikerikson 11 hours ago | parent | prev [-] | | That is a parameter that can be changed, often called temperature. Setting the variance to 0 can be done and you will get repeatability. Whether you would be happy with that is another matter. |
|
|
| ▲ | e3bc54b2 10 hours ago | parent | prev | next [-] |
| > LLMs are the first step in the movement away from (...) the logic based language This dumb thing again... The logic-based language was and remains a major improvement [0] in being able to build abstractions, because it allows the underlying implementations to be 'deterministic'. Natural language misses that mark by such a wide margin that it is impossible to explain in nicer language. And if one wants to make the argument that people achieve that anyway, perhaps reading through one [1] will put that thought to rest :) [0] www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667.html [1] https://www.congress.gov/bill/119th-congress/house-bill/1/te... |
| |
| ▲ | JustBreath 8 hours ago | parent [-] | | Very true, the whole point of logic and programming is that natural language is itself subjective and vague. A deterministic program given the same inputs will always give the same outputs. We can debate about what is cool, cold, or freezing, but a thermometer will present the same numeric value to everyone. |
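The contrast is easy to see in code (a toy sketch):

    def is_freezing(temp_c: float) -> bool:
        # Deterministic: the same input always yields the same output.
        # People can argue over "cool" vs "cold"; this function can't.
        return temp_c <= 0.0

    print(is_freezing(-3.5))  # always True
    print(is_freezing(12.0))  # always False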
|
|
| ▲ | spopejoy 13 hours ago | parent | prev | next [-] |
| > People like communicating in natural language It does puzzle me a little that there isn't more widespread acclaim of this: achieving a natural-language UI has been a failed dream of CS for decades, and now we can just take it for granted. LLMs may or may not be the greatest thing for coding, writing, researching, or whatever, but this UX is a keeper. Being able to really use language to express a problem, have recourse to abbreviations, slang, and tone, and have it all get through is amazing, and amazingly useful. |
|
| ▲ | pera 11 hours ago | parent | prev | next [-] |
| > Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say. - Bertrand Russell, The Scientific Outlook (1931) There is a reason we don't use natural language for mathematics anymore: It's overly verbose and extremely imprecise. |
| |
| ▲ | calvinmorrison 10 hours ago | parent [-] | | Which is why every NoCode platform, iPaaS, or whatever always falls back to implementing DSLs. Programming languages are the most succinct deterministic way to instruct a computer, or even a person, to do something. |
|
|
| ▲ | hinkley 12 hours ago | parent | prev | next [-] |
| If there was a way to explain contracts in natural language, don’t you think lawyers would have figured it out by now? How much GDP do we waste on one party thinking the contract says they paid for one thing but they got something else? |
| |
| ▲ | cootsnuck 11 hours ago | parent | next [-] | | > If there was a way to explain contracts in natural language, don’t you think lawyers would have figured it out by now? Uh...I mean...you do know they charge by the hour, right? Half joking, but seriously, the concept of "job security" still exists even for a $400 billion industry. Especially when that industry commands substantial power across essentially all consequential areas of society. LLMs literally do explain contracts in natural language. They also allow you to create contracts with just natural language. (With all the same caveats as using LLMs for programming or anything else.) I would say law is quietly one of the industries that LLMs have had a larger than expected impact on. Not in terms of job loss (but idk, would be curious to see any numbers on this). But more just like evident efficacy (similar to how programming became a clear viable use case for LLMs). All of that being said, big law, the type of law that dominates the industry, does not continue to exist because of "contract disputes". It exists to create and reinforce legal machinations that advance the interests of their clients and entrench their power. And the practice of doing that is inherently deeply human. As in, the names of the firm and lawyers involved are part of the efficacy of the output. It's deeply relational in many ways. (I'd bet anything though that smart lawyers up and down the industry are already figuring out ways to make use of LLMs to allow them to do more work.) | | |
| ▲ | dmoy 9 hours ago | parent [-] | | > LLMs literally do explain contracts in natural language. They also allow you to create contracts with just natural language. (With all the same caveats as using LLMs for programming or anything else.) I can't generalize, but the last time I tried to use an LLM for looking at a legal document (a month or two ago), it got a section completely wrong. And then when that was pointed out, it dug in its heels and insisted it was right, even though it was very wrong. Interestingly, there was a typo, which was obvious to any human and would have been accepted as intended in a court, but the LLM insisted on using a strict interpretation, accepting the typo as truth. It was weird, because it felt like on the one hand the LLM was trained to handle legal documents with a more strict interpretation of what's written, but then couldn't cope with the reality of how a simple typo would be handled in courts or real legal proceedings. So.... I dunno. LLMs can explain contracts, but they may explain them in a very wrong way, which could lead to bad outcomes if you rely on it. |
| |
| ▲ | 827a 11 hours ago | parent | prev [-] | | There's a potentially interesting idea in the space of: The cryptobros went really deep into trying to describe everything Up To And Including The World in computer code, with things like Ethereum contracts, tokenization of corporate voting power, etc. That's all dead now, but you have to have some respect for the very techno-utopian idea that we can extend the power and predictability of Computer Code into everything; and it's interesting how LLMs were the next techno-trend, yet totally reversed it. Now it's: computer code doesn't matter, only natural language matters, describe everything in natural language including computer code. |
|
|
| ▲ | aksosoakbab 13 hours ago | parent | prev | next [-] |
| Spoken language is a miserable medium for programming. It's one of the major drawbacks of LLMs. Programming languages have a level of specification orders of magnitude greater than natural languages. |
| |
| ▲ | noosphr 13 hours ago | parent | next [-] | | It absolutely is, but 99% of the programs the average person wants to write for their job are some variation of: sort these files, filter between value A and B, search inside for string xyz, change string to abc. LLMs are good enough for that. Just like how spreadsheets are good enough for 99% of numerical office work. | |
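Such a program is often only a handful of lines (a rough sketch; the directory, bounds, and strings are placeholders):

    from pathlib import Path

    for path in sorted(Path("reports").glob("*.txt")):    # sort these files
        text = path.read_text()
        if not (100 <= len(text.splitlines()) <= 1000):   # filter between value A and B
            continue
        if "xyz" in text:                                  # search inside for string xyz
            path.write_text(text.replace("xyz", "abc"))    # change string to abc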
| ▲ | jaza 12 hours ago | parent | prev [-] | | Computer scientists in the ~1970s said that procedural languages are a miserable medium for programming, compared to assembly languages. And they said in the ~1960s that assembly languages are a miserable medium for programming, compared to machine languages. (Ditto for every other language paradigm under the sun since then, particularly object-oriented languages and interpreted languages). I agree that natural languages are a miserable medium for programming, compared to procedural / object-oriented / functional / declarative languages. But maybe I only agree because I'm a computer scientist from the ~2010s! | | |
| ▲ | abagee 12 hours ago | parent | next [-] | | I don't think that's the only difference - every "leap" in languages you mentioned was an increase in the level of abstraction, but no change in the fact that the medium was still deterministic. Programming in natural languages breaks that mold by adding nondeterminism and multiple interpretations into the mix. Not saying it will never happen - just saying that I don't think it's "only" because you're a computer scientist from the 2010s that you find natural languages to be a poor medium for programming. | | |
| ▲ | hnfong 11 hours ago | parent [-] | | > the medium was still deterministic Well, you should participate more in the discussions on Undefined Behavior in C/C++.... |
| |
| ▲ | 11 hours ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | davesque 9 hours ago | parent | prev | next [-] |
| The thing is, people also dislike natural language for its ambiguity. That's why we invented things like legalese and computers: to get more reliable results. There will always be a need for that. |
|
| ▲ | xpe 12 hours ago | parent | prev | next [-] |
| > LLMs are the first step in the movement away from the "early days" of computing where you needed to learn the logic based language and interface of computers to interact with them. Even if one accepts the framing (I don't), LLMs are far from the first step. The article is about questioning "inevitabilism"! To do that, we need to find other anchors and try not to assume the status quo will persist. Think broader: there are possible future scenarios where people embrace unambiguous methods for designing computer programs, and even business processes, social protocols, and governments. |
| |
| ▲ | xpe 8 hours ago | parent [-] | | belated edits: … find other anchors … and try not to assume the status quo will persist, much less be part of a pattern or movement (which may only be clear in retrospect) |
|
|
| ▲ | quantumHazer 5 hours ago | parent | prev | next [-] |
| Dijkstra would like to have a word here https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667... |
|
| ▲ | usrbinbash 13 hours ago | parent | prev | next [-] |
| > No one* wants to learn how to use a computer, they want it to be another entity that they can just talk to. No, we don't. Part of the reason I enjoy programming is that it is a mental exercise allowing me to give precise, unambiguous instructions that either work exactly as advertised or do not. |
| |
| ▲ | jowea 13 hours ago | parent [-] | | Exactly, we are in the *, the 5% (and I think that's an overestimate) who actually like it. Seems tech is at least partly moving on. | | |
| ▲ | xpe 12 hours ago | parent [-] | | > seems like tech is at least partly moving on This framing risks getting it backwards and disempowering people, doesn’t it? Technology does not make its own choices (at least not yet). Or does it? To problematize my own claims… If you are a materialist, “choice” is an illusion that only exists once you draw a system boundary. In other words, “choice” is only an abstraction that makes sense if one defines an “agent”. We have long-running agents, so… | | |
| ▲ | eddythompson80 5 hours ago | parent [-] | | > This framing risks getting it backwards and disempowering people, doesn't it? Technology does not make its own choices (at least not yet). It doesn't, but we rarely chase technology for its own sake. Some do, and I envy them. However, most of us are being paid to solve specific problems, usually using a specific set of technologies. It doesn't matter how much I love the Commodore or BASIC; it'll be very hard to convince someone to pay me to develop a solution for their problem based on it. The choice to use nodejs and react to solve their problem was.... my choice. Will there be a future where I can't really justify paying you to write code by hand, and instead can only really justify paying you to debug LLM-generated code or to debug a prompt? Like, could we have companies selling products and services with fundamentally no one at the helm of writing code? The entire thing is built through prompting, and every now and then you hire someone to take the hammer and keep beating a part until it sorta behaves the way it sorta should, and they add a few ALL CAPS INSTRUCTIONS TO THE AGENT NOT TO TOUCH THIS!!!!! |
|
|
|
|
| ▲ | yoyohello13 9 hours ago | parent | prev | next [-] |
| Logic-based languages are useful because they are unambiguous. Natural language is far less efficient for communicating hard requirements. Why do you think mathematical notation exists? It's not just because the ivory tower elites want to sound smart. It's a more efficient medium for communicating mathematical ideas. |
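One small example of the gap, with the same statement written both ways:

    "For every real number, the square of that number is greater than or equal to zero."

    \forall x \in \mathbb{R},\ x^2 \ge 0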
|
| ▲ | andai 13 hours ago | parent | prev | next [-] |
| Many people love games, and some of those even love making games, but few truly love to code. I'm designing a simple game engine now and thinking, I shall have to integrate AI programming right into it, because the average user won't know how to code, and they'll try to use AI to code, and then the AI will frantically google for docs, and/or hallucinate, so I might as well set it up properly on my end. In other words, I might as well design it so it's intuitive for the AI to use. And -- though I kind of hate to say this -- based on how the probabilistic LLMs work, the most reliable way to do that is to let the LLM design it itself. (With the temperature set to zero.) i.e. design it so the system already matches how the LLM thinks such a system works. This minimizes the amount of prompting required to "correct" its behavior. The passionate human programmer remains a primary target, and it's absolutely crucial that it remains pleasant for humans to code. It's just that most of them won't be in that category, they'll be using it "through" this new thing. |
| |
| ▲ | deltaburnt 12 hours ago | parent [-] | | I'm not sure I see the logic in what you're describing. By the time you run into this "users using AI on my engine" problem, the models will be different from the ones you used to make the design. Design how you like, I would just be surprised if that choice actually ended up mattering 5 years from now. | | |
| ▲ | andai 4 hours ago | parent [-] | | >by the time you run into this problem I'm describing the present day. My friend, who doesn't know anything about programming, made three games in an afternoon with Gemini. |
|
|
|
| ▲ | globular-toast 12 hours ago | parent | prev | next [-] |
| LLMs are nowhere near the first step. This is Python, an almost 35 year old language:

    for apple in sorted(bag):
        snake.eat(apple)

The whole point of high-level programming languages is we can write code that is close enough to natural language while still being 100% precise and unambiguous. |
| |
| ▲ | Workaccount2 7 hours ago | parent | next [-] | | My 65-year-old mother will never use Python. What she wants is to tell her phone to switch its background to the picture she took last night of the family. That is the inevitabilism. Forget about the tiny tech bubble for a moment and see the whole world. | |
| ▲ | 827a 11 hours ago | parent | prev [-] | | I really appreciate this take. High-level programming languages should be able to do much of what LLMs can do when it comes to turning natural-language expressions of ideas into computing behavior, but with the extreme advantage of 100% predictable execution. LLM queries, system prompts, and context of sufficient complexity (the kind required to get reasonably good results out of the LLM) begin to look like computer code and require skills similar to software engineering, but still without the predictable conformance. Why not just write computer code? Our industry developed some insanely high-productivity languages, frameworks, and ways of thinking about systems development in the mid-2000s. Rails is the best example of this; Wordpress, Django, certainly a few others. Then, for some reason, around the early 2010s, we just forgot about that direction of abstraction. Javascript, Go, and Rust took over; React hit in the mid-2010s; then microservices and Kubernetes; and it feels like we forgot about something that we shouldn't ever have forgotten about. |
|
|
| ▲ | techpineapple 7 hours ago | parent | prev | next [-] |
| “People like communicating in natural language” I would actually want to see some research on this. Maybe? But I’d think there would be a lot of exceptions. At its most basic, I’d rather flick my thumb than constantly say “scroll down”. And I think that you’d want to extrapolate that out. |
|
| ▲ | deadbabe 11 hours ago | parent | prev [-] |
| Let's reframe your worldview: No one wants to communicate with a computer. Computers are annoying, vile things. People just want things to work easily and magically. Therefore, for these people, being able to communicate in a natural language isn't going to be any more appealing than a nice graphical user interface. Using a search engine to find stuff you want already requires no logic; the LLM does the same, it just gives you better results. Thus the world of LLMs is going to look much like the world of today: just with lazier people who want to do even less thinking than they do now. It is inevitable. |