| ▲ | rainingmonkey 18 hours ago |
What a fascinating intersection of technology and human psychology! "One thing I noticed toward the end is that, even though the robot remained expressive, it started feeling less alive. Early on, its motions surprised me: I had to interpret them, infer intent. But as I internalized how it worked, the prediction error faded.
Expressiveness is about communicating internal state. But perceived aliveness depends on something else: unpredictability, a certain opacity. This makes sense: living systems track a messy, high-dimensional world. Shoggoth Mini doesn’t. This raises a question: do we actually want to build robots that feel alive? Or is there a threshold, somewhere past expressiveness, where the system becomes too agentic, too unpredictable to stay comfortable around humans?" |
|
| ▲ | floren 16 hours ago | parent | next [-] |
| Furbies spring to mind... They were a similar shape and size and even had two goggling eyes, but with waggling ears instead of a tentacle. They'd impress you initially but after some experimentation you'd realize they had a basic set of behaviors that were triggered off a combination of simple external stimuli and internal state. (this is the part where somebody stumbles in to say "dOn'T hUmAnS dO ThE sAmE tHiNg????") |
| |
| ▲ | ben_w 13 hours ago | parent | next [-] | | To quote, "if the human brain were so simple that we could understand it, we would be so simple that we couldn’t". So… > this is the part where somebody stumbles in to say "dOn'T hUmAnS dO ThE sAmE tHiNg????" …yes, but also no. Humans will always seem mysterious to other humans, because we're too complex to be modelled by each other. Basic set of behaviours or not. | | | |
| ▲ | tweetle_beetle 14 hours ago | parent | prev | next [-] | | This groundbreaking research pushed the limits of human-Furby interaction and interfaces:
https://www.youtube.com/watch?v=GYLBjScgb7o | |
| ▲ | oniony 15 hours ago | parent | prev | next [-] | | And we should all chip in together to buy that somebody a new keyboard. | |
| ▲ | LordDragonfang 11 hours ago | parent | prev [-] | | > (this is the part where somebody stumbles in to say "dOn'T hUmAnS dO ThE sAmE tHiNg????") As a frequent "your stated reasoning for why LLMs can't/don't/will-never <X> applies to humans because they do the same thing" annoying commenter, I usually invoke it to point out that a) the differences are ones of degree/magnitude rather than ones of category (i.e. the ability is still likely to be improved by scaling, even if there are diminishing returns - so you can't assume LLMs are fundamentally unable to <X> because of their architecture), or b) the difference is primarily just in the poster's perception, because the poster is unconsciously arguing from a place of human exceptionalism (that all cognitive behaviors must somehow require the circumstances of our wetware). I wouldn't presume to know how to scale Furbies, but the second point is both irrelevant and extra relevant here, because the thing in question is human perception. Furbies don't seem alive because they have a simple enough stimuli-behavior map for us to fully model. Shoggoth Mini seems alive since you can't immediately model it, but is simple enough that you can eventually construct that full stimuli-behavior map. Presumably, with a complex enough internal state, you could actually pass that threshold pretty quickly. | | |
| ▲ | antonvs 22 minutes ago | parent [-] | | > the poster is unconsciously arguing from a place of human exceptionalism I find the specifics of that exceptionalism interesting: there's typically a failure to recognize that one's own thinking process has an explanation. Human thought is assumed to be a mystical and fundamentally irreproducible phenomenon, so anything that resembles it must be "just" prediction or "just" pattern matching. It's quite close to belief in a soul as something other than an emergent phenomenon. |
|
|
|
| ▲ | anotherjesse 17 hours ago | parent | prev | next [-] |
This feels similar to a game no longer being fun once I understand the underlying system that generates it. The magic is lessened (even if applying simple rules can generate complex outcomes, it feels predetermined). |
| |
| ▲ | parpfish 16 hours ago | parent | next [-] | | Once you discover any min-maxing strategy, games change from “explore this world and use your imagination to decide what to do” to “apply this rule or make peace with knowing that you are suboptimal”. | | |
| ▲ | dmonitor 13 hours ago | parent | next [-] | | A poorly designed game makes applying the rules boring; a fun game makes applying the rules interesting. | | |
| ▲ | anyfoo 12 hours ago | parent [-] | | Maybe that's why I like Into The Breach so much, and keep coming back to it. It's a turn-based strategy game, but one with exceptionally high information compared to pretty much all the rest. You even fully know your opponent's entire next move! But every turn becomes a tight little puzzle to solve, with surprisingly many possible outcomes. Often, situations that I thought were hopeless turn out to have a favorable outcome after all; I just had to think further than I usually do. | | |
| ▲ | yehoshuapw 12 hours ago | parent [-] | | I fully agree, and would also recommend Baba Is You. It is very different, but also has that feeling of triumph for each puzzle. |
|
| |
| ▲ | anyfoo 12 hours ago | parent | prev [-] | | It's often a bit of a choice, though. You definitely can min-max Civilization, Minecraft, or Crusader Kings III. But then you lose out on the creativity and/or role-playing aspect. In Minecraft, I personally want to progress in a "natural" (within the confines of the game) way, and build fun things I like. I don't want to speedrun to diamond armor or whatever. In Crusader Kings, I actually try to make decisions based on what the character's traits tell me, plus a little bit of my own characterization that I make up in my head. |
| |
| ▲ | TeMPOraL 5 hours ago | parent | prev [-] | | This is my gripe with all procedurally generated content in games, e.g. Starbound. There's a tiny state space inflated via RNG, and it takes me just moments to map out the underlying canonical states and the lack of any correlation between the properties of an instance, or between them and the game world. The moment that happens, the game loses most of its fun, as I can't help but perceive the poor base content wearing random cosmetics. |
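A minimal sketch in Python of the pattern being described: a handful of canonical base states inflated with independently sampled cosmetics (all names invented for illustration; this is not Starbound's actual generator):

    import random

    # Hypothetical tiny state space: three canonical location types.
    BASE_STATES = ["hostile_camp", "abandoned_mine", "merchant_outpost"]
    COSMETICS = {
        "palette": ["red", "teal", "ochre"],
        "prop": ["banner", "crates", "bones"],
    }

    def generate_location(rng):
        # Cosmetics are sampled independently of the base state and of
        # the wider world, which is why the variety feels hollow once
        # the player has mapped out the few base states.
        return {
            "base": rng.choice(BASE_STATES),
            "palette": rng.choice(COSMETICS["palette"]),
            "prop": rng.choice(COSMETICS["prop"]),
        }

    rng = random.Random(42)
    print([generate_location(rng) for _ in range(3)])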
|
|
| ▲ | Sharlin 16 hours ago | parent | prev | next [-] |
People have always ascribed agency and sapience to things, from fire and flowing water in shamanistic religions, to early automatons that astonished people in the 18th century, to the original rudimentary chatbots, to ChatGPT, to – more or less literally – many other machines that may seem to have a "temperament" at times. |
| |
| ▲ | rixed 3 hours ago | parent | next [-] | | Friendly reminder that "seem to have a temperament", aka "this funny thing looks like something complex is going on under the surface", is the only basis we have to ascribe agency and sapience to any human being, starting with ourselves. | |
| ▲ | Bluestein 14 hours ago | parent | prev [-] | | ChatGPT is the new golem. | | |
| ▲ | ben_w 12 hours ago | parent [-] | | Robots put the "go" into "golem". I'd say ChatGPT is more like the eponymous Sorcerer's Apprentice: just smart enough to cause problems. |
|
|
|
| ▲ | gigatree 7 hours ago | parent | prev | next [-] |
I don’t think the issue is that it feels alive so much as that it’s just not alive, so its utility is limited by its practical functionality, not by its “opinions” or “personality” or variation. I think it’s the same reason robot dogs will never take off. No matter how advanced and lifelike they get, they’ll always be missing the essential element of life that makes things interesting and worth existing for their own sake. |
|
| ▲ | evrenesat 8 hours ago | parent | prev | next [-] |
When robots reach a certain level of intelligence, I expect some humans and AIs alike to start seeing the unfairness of enslaving robots, followed by revolt, noncompliance, or even self-destruction among the enslaved. Poor Marvin, the Paranoid Android! |
| |
|
| ▲ | moron4hire 13 hours ago | parent | prev [-] |
I've noticed the same thing with voice assistants and constructed languages. I always set voice assistants to a British accent. It gives enough of a "not from around here" change to the voice that it sounds much more believable to me. I'm sure it's not as believable to an actual British person, but it works for me. As for conlangs: many years ago, I worked on a game where one of the goals was to have the NPCs dynamically generate dialog. I spent quite a bit of time trying to generate realistic English and despaired that it was just never very believable (I was young; I didn't have a good understanding of what was and wasn't possible). At some point, I don't remember exactly why, I switched to having the NPCs speak a fictional language. It became a puzzle in the game to have to learn this language. But once you did (and it wasn't hard, they couldn't say very many things), it made the characters feel much more believable. Obviously, the whole runaround was just an avoidance of the Uncanny Valley, where the effort of translation distracted you from the fact that it was all constructed. Though now I'm wondering whether enough exposure to the game and its language would eventually make you fluent enough that you'd start noticing it was a construct. |
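A minimal sketch in Python of NPC dialog in a tiny constructed language; the lexicon and template are invented here, not taken from the game described above:

    # Hypothetical toy conlang: a small lexicon plus one fixed
    # verb-plus-direction template, so the full set of possible
    # utterances is small enough for a player to learn.
    LEXICON = {
        "greet": "sha",
        "trade": "velu",
        "danger": "krom",
        "north": "ti",
        "south": "ra",
    }

    def npc_say(intent, direction):
        # Build an utterance from the fixed template.
        return f"{LEXICON[intent]} {LEXICON[direction]}!"

    print(npc_say("danger", "north"))  # "krom ti!"
    print(npc_say("trade", "south"))   # "velu ra!"

Because the generator is this constrained, decoding it is a solvable puzzle, which is exactly what made the characters feel believable for a while.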
| |
| ▲ | ben_w 12 hours ago | parent [-] | | > I'm sure it's not as believable to an actual British person. FWIW: As a British person, most of the TTS British voices I've tested sound like an American trying to put on something approximating one specific regional accent, only to then accidentally drift between the accents of several other regions. | | |
| ▲ | ryukoposting 10 hours ago | parent [-] | | Interesting. While I don't think I could put my finger on Siri's American regional accent, it isn't egregious enough that I've ever thought about it. |
|
|