floren 15 hours ago

Furbies spring to mind... They were a similar shape and size and even had two goggling eyes, but with waggling ears instead of a tentacle.

They'd impress you initially, but after some experimentation you'd realize they had a basic set of behaviors triggered by a combination of simple external stimuli and internal state. (this is the part where somebody stumbles in to say "dOn'T hUmAnS dO ThE sAmE tHiNg????")
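That "stimuli plus internal state" description is essentially a small state machine. A toy sketch of the idea (purely hypothetical behavior names and state variables, not actual Furby firmware):

```python
# A Furby-like toy as a tiny state machine: a handful of canned behaviors
# selected from the current stimulus plus a couple of internal counters.
# All behaviors, thresholds, and state variables here are made up.
class FurbyLike:
    def __init__(self):
        self.hunger = 0      # internal state
        self.sleepiness = 0  # internal state

    def react(self, stimulus: str) -> str:
        """Map (stimulus, internal state) to one of a few fixed behaviors."""
        self.hunger += 1
        self.sleepiness += 1
        if self.sleepiness > 5:
            self.sleepiness = 0
            return "yawn and fall asleep"
        if stimulus == "pet":
            return "purr"
        if stimulus == "feed":
            self.hunger = 0
            return "chirp happily"
        if stimulus == "loud noise":
            return "squeal"
        return "babble" if self.hunger < 3 else "whine for food"


toy = FurbyLike()
print(toy.react("pet"))   # purr
print(toy.react("feed"))  # chirp happily
```

Seemingly lifelike at first, but with few enough (stimulus, state) combinations that a patient kid can map them all out.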

ben_w 12 hours ago | parent | next [-]

To quote, "if the human brain were so simple that we could understand it, we would be so simple that we couldn’t".

So…

> this is the part where somebody stumbles in to say "dOn'T hUmAnS dO ThE sAmE tHiNg????"

…yes, but also no.

Humans will always seem mysterious to other humans, because we're too complex to be modelled by each other. Basic set of behaviours or not.

tomjakubowski 8 hours ago | parent | next [-]

> "if the human brain were so simple that we could understand it, we would be so simple that we couldn’t".

https://www.lightspeedmagazine.com/fiction/exhalation/

cjbgkagh 9 hours ago | parent | prev [-]

Perhaps there is some definition of ‘understand’ where that quote is true, but it is possible to understand some things without understanding everything.

tweetle_beetle 13 hours ago | parent | prev | next [-]

This groundbreaking research pushed the limits of human-Furby interactions and interfaces: https://www.youtube.com/watch?v=GYLBjScgb7o

oniony 14 hours ago | parent | prev | next [-]

And we should all chip in together to buy that somebody a new keyboard.

LordDragonfang 10 hours ago | parent | prev [-]

> (this is the part where somebody stumbles in to say "dOn'T hUmAnS dO ThE sAmE tHiNg????")

As a frequent "your stated reasoning for why llms can't/don't/will-never <X> applies to humans because they do the same thing" annoying commenter, I usually invoke it to point out that

a) the differences are ones of degree/magnitude rather than ones of category (i.e. performance is still likely to improve with scaling, even if there are diminishing returns - so you can't assume LLMs are fundamentally unable to <X> because of their architecture), or

b) the difference is primarily just in the poster's perception, because the poster is unconsciously arguing from a place of human exceptionalism (that all cognitive behaviors must somehow require the circumstances of our wetware).

I wouldn't presume to know how to scale furbies, but the second point is both irrelevant and extra relevant because the thing in question is human perception. Furbies don't seem alive because they have a simple enough stimuli-behavior map for us to fully model. Shoggoth mini seems alive since you can't immediately model it, but is simple enough that you can eventually construct that full stimuli-behavior map. Presumably, with a complex enough internal state, you could actually pass that threshold pretty quickly.
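The "can you fully model it" threshold can be made concrete: a complete stimulus-behavior map needs one entry per (stimulus, internal state) combination, so it grows multiplicatively with every internal state variable you add. A toy illustration (all stimuli, state variables, and cardinalities here are invented for the sake of the arithmetic):

```python
# How many entries a complete stimulus->behavior table would need.
# One entry per combination of stimulus and internal state values,
# so the table grows multiplicatively with each added state variable.

stimuli = ["pet", "feed", "loud noise", "darkness"]

# Hypothetical internal state variables: name -> number of distinct values.
furby_state = {"hunger": 3, "sleepiness": 3}
shoggoth_state = {"hunger": 3, "sleepiness": 3, "mood": 5,
                  "attention": 4, "memory": 10}

def map_size(stimuli: list, state_vars: dict) -> int:
    """Entries in a complete (stimulus, state) -> behavior table."""
    n = len(stimuli)
    for cardinality in state_vars.values():
        n *= cardinality
    return n

print(map_size(stimuli, furby_state))     # 36 -- learnable by playing with it
print(map_size(stimuli, shoggoth_state))  # 7200 -- past easy modeling
```

Under those made-up numbers, a couple of extra state variables move the toy from "fully mappable in an afternoon" to "effectively unpredictable", which is the threshold the comment is pointing at.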