113 (2 days ago):
> Because nobody actually wants a "web app". People want food, love, sex or: solutions.

Okay, but when I start my car I want to drive it, not fuck it.

jstummbillig (2 days ago):
Most of us actually drive a car to get somewhere. The car, and the driving, are just a modality. Which is the point.

kennywinker (a day ago):
If this were a good answer to mobility, people would prefer the bus over their car. It's non-deterministic: when will it come? How quickly will I get there? Will I get to sit? And it's operated by an intelligent agent (the driver). Every reason people prefer a car or bike over the bus is a reason non-deterministic agents are a bad interface.

And that analogy works as a glimpse into the future: we're looking at a fast-approaching world where LLMs are the interface to everything for most of us, except for the wealthy, who have access to more deterministic services or actual human agents. How long before the rich-person car rental service is the only one with staff at the desk, and the cheaper options are all LLM-based agents? Poor people ride the bus; rich people get to drive.

aryehof (21 hours ago):
Bus vs. car hit home for me as a great example of non-deterministic vs. deterministic. It has always seemed to me that workflows and processes need to be deterministic, not decided by an LLM.

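The distinction being drawn here can be sketched in code. This is an editor's illustration, not from the thread: `classify_with_llm` is a hypothetical stand-in for a model call, and the point is that the control flow stays fixed code while the model's (possibly non-deterministic) output only fills a value, validated against a closed set before the workflow acts on it.

```python
ALLOWED_CATEGORIES = {"refund", "exchange", "escalate"}

def classify_with_llm(ticket: str) -> str:
    # Hypothetical stand-in for a real model call; imagine this part
    # is non-deterministic and occasionally wrong.
    return "refund" if "money back" in ticket else "escalate"

def handle_ticket(ticket: str) -> list[str]:
    steps = []
    category = classify_with_llm(ticket)
    if category not in ALLOWED_CATEGORIES:  # gate the non-deterministic part
        category = "escalate"               # deterministic fallback
    # The steps themselves are fixed code paths, not chosen by the model.
    steps.append(f"log:{category}")
    if category == "refund":
        steps.append("issue_refund")
    elif category == "exchange":
        steps.append("create_exchange")
    else:
        steps.append("route_to_human")
    return steps
```

The model can mislabel a ticket, but it can never invent a step the workflow doesn't already define.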
soco (5 hours ago):
Here in Switzerland the bus is the deterministic choice. Just saying.

63stack (2 days ago):
Most of us actually want to get somewhere to do an activity. The getting there is just a modality.

jermaustin1 (2 days ago):
Most of us actually want to get somewhere to do an activity to enjoy ourselves. The getting there, and the activity, are just modalities.

tags2k (2 days ago):
Most of us actually want to get somewhere to do an activity so that, for the rest of our lives, we can know we did it, as if to extract some intangible pleasure from its memory. Why don't we just hallucinate that we did it?

shswkna (2 days ago):
This leads us to the deepest question of all: what is the point of our existence? Or, as someone suggests lower down, in our current form all needs could ultimately be satisfied if AI just provided us with the right chemicals (which drug addicts already understand).

This can be answered, though, albeit imperfectly. On a reductionist level, we are the cosmos experiencing itself. There are many ways to approach this, but just providing us with the right chemicals to feel pleasure or satisfaction is a step backwards: all the evolution of a human being, just to end up functionally like an amoeba or a bacterium. So we need to retrace our steps in this thought process.

I could write a long essay on this. But to exist in the first place, and to keep existing against all the constraints of the universe, is already pretty fucking amazing. Whether we do all the things we do just to stay alive and keep existing, or whether the point is to be the cosmos "experiencing itself", is pretty much two sides of the same coin.

narrator (2 days ago):
> Or as someone suggests lower down, in our current form all needs could ultimately be satisfied if AI just provided us with the right chemicals. (Which drug addicts already understand)

When you suddenly realize, walking down the street, that the very high fentanyl zombie is having a better day than you are. Yeah, you can push the button in your brain that says "You won the game." However, all those buttons are there so you would self-replicate energy-efficient compute. Your brain runs on 10 watts, after all. It's going to take a while for AI to get there, especially without the capability for efficient self-repair.

tags2k (2 days ago):
Indeed: stick me in my pod and inject those experience chemicals into me, and what's the difference? But also, what would be the point? What's the point anyway? In one scenario, every atom's trajectory was destined from the creation of time and we're just sitting in the passenger seat watching. In another, if we do have free will, then we control the "real world" underneath (the quantum and particle realms) as if through a UI. In the pod scenario, we are just blobs experiencing chemical reactions through some kind of translation device, but aren't we the same in the other scenarios too?

63stack (16 hours ago):
This was actually my point as well. You can follow this thought process all the way up to "make those specific neuron pathways in my brain fire"; everything else is just the getting-there part.

GTP (2 days ago):
But I want that somewhere to be deterministic, i.e. I want to arrive at the place I choose. With this kind of non-determinism, I instead have a good chance of getting to the place I choose, but every now and then I will end up somewhere different.

113 (2 days ago):
Yeah, but in this case your car is non-deterministic, so...

mikodin (2 days ago):
Well, the need is to arrive where you are going. Imagine you are headed to work: you walk out your door and there is a self-driving car, or a train waiting for you, or a helicopter, or a literal wormhole. Say they all take the same amount of time, are equally safe, cost the same, have the same amenities, and "feel the same": would you care if it were different every day? I don't think I would. Maybe the wormhole causes slight nausea ;)

didericis (2 days ago):
> Well the need is to arrive where you are going.

In order to get to your destination, you need to explain where you want to go. Whatever you call that "imperative language", in order to actually get the thing you want, you have to explain it. That's an unavoidable aspect of interacting with anything that responds to commands, computer or not.

If the AI misunderstands those instructions and takes you to a slightly different place than you want to go, that's a huge problem. But it's bound to happen if you're writing machine instructions in a natural language like English, in an environment where the same instructions aren't consistently or deterministically interpreted. It's even more likely if the destination or task is particularly difficult or complex to explain at the desired level of detail.

There's a certain irreducible complexity involved in directing and translating a user's intent into machine output simply and reliably, which people keep trying to "solve", but the issue keeps reasserting itself generation after generation. COBOL was "plain English", and over half a century ago people assumed it would make interacting with computers like giving instructions to another employee. The primary difficulty is not the language used to articulate intent; the primary difficulty is articulating intent.

simianwords (2 days ago):
This is a weak argument. I use normal taxis and ask the driver to take me to a place in natural language, a process which is certainly non-deterministic.

didericis (a day ago):
> a process which is certainly non deterministic

The specific events that follow when asking a taxi driver where to go may not be exactly repeatable, but reality enforces a physical determinism that is not explicitly understood by probabilistic token predictors. If you drive into a wall, you will obey deterministic laws of momentum. If you drive off a cliff, you will obey deterministic laws of gravity. These are certainties, not high probabilities. A physical taxi cannot have a catastrophic instant change in implementation and have its wheels or engine disappear when it stops to pick you up. A human taxi driver cannot instantly swap their physical taxi for a submarine; they cannot swap New York with Paris; they cannot pass through buildings. The real world has a physically determined option-space that symbolic token predictors don't understand yet.

And the reason humans are good at interpreting human intent correctly is not just that we've had billions of years of training with direct access to physical reality, but that we all share the same basic structure of inbuilt assumptions and "training history". When interacting with a machine, many of those basic unstated shared assumptions are absent, which is why it takes more effort to explicitly articulate what it is you want.

We're getting much better at getting machines to infer intent from plain English, but even if we created a machine that could perfectly interpret our intentions, that still wouldn't solve the need, for most tasks, to explain what you want in enough detail to actually get it. Moving from point A to point B is a pretty simple task to describe. Many tasks aren't like that, and the complexity comes as much from explaining what it is you want as from the implementation.

chii (2 days ago):
And the taxi driver has an intelligence that enables them to interpret your destination, even if it's ambiguous. And even then, mistakes happen all the time, with taxis going to a different place than the passenger intended because the names were similar.

simianwords (2 days ago):
Yes, so a bit of non-determinism doesn't hurt anyone. Current LLMs are pretty accurate when it comes to this sort of thing.

hyperadvanced (2 days ago):
I think it's pretty obvious, but most people would prefer a regular schedule, not a random and potentially psychologically jarring transportation event to start the day.

chii (2 days ago):
> your car is non-deterministic

It's not, as far as your experience goes: you press the pedal, it accelerates; you turn the steering wheel, it goes the way you turn. What the car does is deterministic. More importantly, it does this every time, and the amount of turning (or accelerating) is the same today as it was yesterday.

If an LLM interpreted those inputs, could you say with confidence that you would accelerate in the way you predicted? If so, then I would be fine with LLM-interpreted input to drive. Otherwise, how do you know, for sure, that pressing the brakes will stop the car before you hit somebody in front of you?

Of course, you could argue that the input is no longer your moving the brake pedal etc.; you just name a destination and you get there, and that is supposed to be deterministic, as long as you describe your destination correctly. But is that where LLMs are today, or is that the imagined future of LLMs?

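The fixed input-output mapping described above can be made concrete with a toy sketch. This is an editor's illustration, not from the thread; the force constant and the ±10% "interpretation noise" are made-up numbers. The deterministic mapping returns the same force for the same pedal position every time, while the interpreted version, modeled here as a random perturbation, does not.

```python
import random

def brake_force(pedal: float) -> float:
    # Deterministic mapping: the same pedal position always yields
    # the same braking force (pedal clamped to the 0..1 range).
    return 1000.0 * max(0.0, min(pedal, 1.0))

def brake_force_interpreted(pedal: float, rng: random.Random) -> float:
    # Stand-in for an LLM-interpreted input: usually close to the
    # deterministic answer, but with run-to-run variation (+/-10%).
    return brake_force(pedal) * (0.9 + 0.2 * rng.random())
```

Calling `brake_force(0.5)` twice gives the identical result; calling `brake_force_interpreted(0.5, rng)` twice generally does not, which is the whole objection.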
iliaxj (a day ago):
Sometimes it doesn't, though. Sometimes the engine seizes because a piece of tubing broke and you left your coolant down the road two turns ago. Or you steer off a cliff because there was coolant on the road for some reason. Or the meat sack in front of the wheel just didn't get enough sleep, your response time is degraded, and you can't quite get the thing to feel how it usually does. Ultimately the failure rate is low enough to trust your life to it, but that's just a matter of degree.

pepoluan (a day ago):
The situations you describe reflect a system that has changed. And if the system has changed, then a change in output is to be expected. It's the same as having a function called "factorial" but changing the multiplication operation to addition.

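That factorial analogy, spelled out as an editor's sketch: change one operation inside the function and you have a different system, so the different output is expected behaviour rather than non-determinism.

```python
def factorial(n: int) -> int:
    # The original system: multiply the integers 2..n together.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_changed(n: int) -> int:
    # The "system" has changed: multiplication swapped for addition.
    # A different answer here is a changed system, not flakiness.
    result = 1
    for i in range(2, n + 1):
        result += i
    return result
```

Both functions are still perfectly deterministic; they just compute different things, which is the commenter's point about a changed system.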
chii (a day ago):
All of those situations are the driver's own fault, because they could have checked to ensure none of that happened before driving. Not true with an LLM (at least, not as of today).

crote (2 days ago):
Tesla's "self-driving" cars have been working very hard to change this. That piece of road it has been handling flawlessly for months? You're going straight into the barrier today, just because it feels like it.

nurettin (2 days ago):
I mean, as long as it works and it is still technically "my car", I would welcome the change.

ozim (2 days ago):
I feel like this is the point where we start to make jokes about Honda owners.

bfkwlfkjf (2 days ago):
Go on, what about Honda owners? I don't know the meme.

hathawsh (2 days ago):
The "Wham Baam" YouTube channels have a running joke about Hondas bumping into other cars with concerning frequency.

stirfish (2 days ago):
But do you want to drive, or do you want to be wherever you need to be to fuck?

codebje (2 days ago):
For me personally, the latter, but there are definitely people out there who just love driving. Either way, these silly reductionist games aren't addressing the point: if I just want to get from A to B, then I want the absolute minimum of unpredictability in how I do it.

theendisney (2 days ago):
That would ruin the brain's plasticity. I wonder now: if everything were always different and then suddenly every day were the same, how many times as terrifying would that be compared to the opposite?

mewpmewp2 (2 days ago):
Only because you think the driving is what you want. The point is that what you want is determined by our brain chemicals. Many steps could be skipped if we could just give you the chemicals in your brain that you craved.

lambdaone (2 days ago):
Sadly, this is not true of an (admittedly very small) number of individuals.

hinkley (2 days ago):
Christine didn't end well for anyone.

OJFord (2 days ago):
...so that you can get to the supermarket for food, to meet someone you love, to meet someone you may or may not love, or to solve the problem of how to get to work, etc. Your ancestors didn't want horses and carts, bicycles, or shoes; they wanted the solutions of the day to the same scenarios above.

sublinear (2 days ago):
As much as I love your point, this is where I must ask whether you even want a corporeal form to contain the level of ego you're describing. Would you prefer to be an eternal ghost? To dismiss the entire universe, its hostilities towards our existence, and the workarounds we invent in response as mere means to an end, rather than our essence, is truly wild.

anonzzzies (2 days ago):
Most people need to go somewhere (in a hurry) to make money or food, etc., which most people wouldn't do if they didn't have to, so yeah, it is mostly a means to an end.

sublinear (2 days ago):
And yet that money is ultimately spent on more means to ends that are just as inconvenient from another perspective? My point was that there is no true end goal as long as whims continue; the need to craft yet more means is equally endless. The crafting is the primary human experience, not the using. The use of a means inevitably becomes transparent and boring.

mewpmewp2 (2 days ago):
It should finalize into introducing satisfaction to the whims directly, with the AI directly managing the chemicals in our brains that trigger feelings of reward and satisfaction.

lazide (2 days ago):
Even if it purred real nice when it started up? (I'm sorry)

zahrevsky (2 days ago):
Weird kink

mjevans (2 days ago):
Food -> "basic needs"... so yeah: shelter, food, etc. That's why most of us drive. You are also correct to separate philia and eros ( https://en.wikipedia.org/wiki/Greek_words_for_love ). A job is better if your coworkers are of a caliber that they become a secondary family.