| ▲ | alpaylan 4 hours ago |
| I think the point I wanted to make was that even if it were deterministic (which you can technically make it, I guess?), you still shouldn’t live in a world where you’re guided by the “guesses” the model makes when solidifying your intent into concrete code. Discounting hallucinations (I know that’s a big concession; I’m trying to make the argument from a disadvantaged position again), I think you need a stronger argument than determinism against someone who claims they can just write in English and there’s no reason for code anymore, which is the argument I tried to make here. I get your point that I might be taking the discussion too seriously, though. |
|
| ▲ | liveoneggs 4 hours ago | parent | next [-] |
| The future is about embracing absolute chaos. The great reveal of LLMs is that, for the most part, nothing actually mattered except the most shallow approximation of a thing. |
| |
| ▲ | belZaah 2 hours ago | parent | next [-] | | This is true only for a small subset of problems. If you write crypto or hardware drivers, details do matter. | |
| ▲ | ModernMech 10 minutes ago | parent | prev | next [-] | | I think the exact opposite is true: LLMs revealed that when you average everything together, it's really bland and uninteresting no matter how technically good. It's the small choices that bring life into a thing and transform it from slop into something interesting and worthy of attention. | |
| ▲ | wizzwizz4 4 hours ago | parent | prev [-] | | The great reveal of LLMs is that our systems of checks and balances don't really work, and allow grifters to thrive, but despite that most people were actually trying to do their jobs properly. Perhaps nothing matters to you except the most shallow approximation of a thing, but there are usually people harmed by such negligence. | | |
| ▲ | liveoneggs 2 hours ago | parent | next [-] | | I'm just as upset as you are about it, believe me. Unfortunately I have to live in the world as I see it and what I've observed in the last 18-ish months is a complete breakdown of prior assumptions. | |
| ▲ | skydhash 3 hours ago | parent | prev [-] | | Imagine if the amount of a bank transfer didn’t matter and could only ever be an approximation, and the selected account were approximated too. Or the system for monitoring the temperature of blood storage for transfusion… Often it seems like tech maximalists are the ones most opposed to tech reliability. | | |
| ▲ | wavemode 2 hours ago | parent | next [-] | | Well, the person who vibe-coded the banking app also vibe-coded a bunch of test cases, so this will only affect a small percentage of customers. When it does and they lose a bunch of money, well, you have a PR team and they don't, so just sweep the story under the rug. Imagine that - you got your project done ahead of schedule (which looks great on your OKRs) AND finally achieved your dream of no longer being dependent on those stupid overpaid, antisocial software engineers, and all it cost you was the company's reputation. Boeing management would be proud. Lots of business leaders will do the math and decide this is the way to operate from now on. | |
| ▲ | snovv_crash 3 hours ago | parent | prev | next [-] | | No need to be so practical. I suggest that when they dereference a pointer, it can land a bit forward or backward in memory, as long as it is mostly correct. | |
| ▲ | SecretDreams 3 hours ago | parent | prev [-] | | Let's give people a choice. My banking will be deterministic, others can have probabilistic banking. Every so often, they transfer me some money by random chance, but at least they can say their banking is run by LLMs. Totally fair trade. |
|
|
|
|
| ▲ | raw_anon_1111 2 hours ago | parent | prev | next [-] |
| Before LLMs, and now more than a decade ago in my career, I was assigned a task and my job was to translate that task into a working implementation. I was guided by the “guesses” that other developers made; I had to trust that they could do FizzBuzz competently without having to tell them to use the mod operator. Later my job became being assigned a larger implementation and, depending on how large it was, designing specifications for others to do some or all of the work and validating the final product for correctness. I definitely didn’t pore over every line of code - especially not for front-end work, which I stopped doing around the same time. The same is true for LLMs. I treat them like junior developers, and I'm slowly starting to treat them like halfway competent mid-level ticket takers. |
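(As an illustrative aside: the "FizzBuzz with the mod operator" mentioned above is the classic warm-up exercise. A minimal Python sketch, purely for illustration:)

    # FizzBuzz: the mod operator (%) tests divisibility by 3 and 5.
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)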
|
| ▲ | blazinglyfast 4 hours ago | parent | prev [-] |
| > even if it was deterministic (which you can technically make it to be I guess?) No. LLMs are undefined behavior. |
| |
| ▲ | xixixao 4 hours ago | parent [-] | | OP means “given the same input, produce the same output” determinism. This isn’t really much different from normal compilers: you might have a language spec, but at the end of the day the results are determined by the concrete compiler’s implementation. But most LLM services introduce randomness on purpose, so you don’t get the same result for the same user-controlled input. | | |
| ▲ | zaphar an hour ago | parent | next [-] | | You can get deterministic output if you just turn the temperature all the way down. The problem is that you usually get really bad results, deterministically. It turns out the randomness helps in finding solutions. | | |
| ▲ | recursive 3 minutes ago | parent [-] | | You can also get deterministic output at whatever temperature you want if you use an arbitrary fixed RNG seed. |
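(A minimal sketch of what the two comments above describe, assuming the OpenAI Python client; the model name and prompt are placeholders, and even with a fixed seed providers only promise best-effort reproducibility:)

    # Aiming for repeatable output: temperature=0 plus a fixed sampling seed.
    # Assumes the OpenAI Python SDK; other providers expose similar knobs.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Write FizzBuzz in Python."}],
        temperature=0,  # (near-)greedy decoding: always pick the most likely token
        seed=42,        # fixed seed so repeated requests sample the same way
    )
    print(response.choices[0].message.content)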
| |
|
|