isaacremuant 2 days ago

No they aren't. Of course you can also call its sonar "eyes," but it isn't.

Anthropomorphizing cars doesn't make them humans either.

hackinthebochs 2 days ago | parent [-]

Why would you think body only refers to flesh?

emodendroket a day ago | parent [-]

Even if I take the most expansive interpretation of “body” typically applied to vehicles, the propeller on the back of it isn’t part of the “body,” and the “body” of a submarine is rigid and immobile.

Is this an intellectual exercise for you or have you ever in your life heard someone say something like “the submarine swam through the water”? It’s so ridiculous I would be shocked to see it outside of a story intended for children or an obvious nonnative speaker of English.

hackinthebochs a day ago | parent [-]

>the propeller on the back of it isn’t part of the “body” and the “body” of a submarine is rigid and immobile.

That's a choice to limit the meaning of the term to the rigid/immobile parts of the external boundary of an object. It's not obviously the correct choice. Presumably you don't take issue with people saying planes fly. The issue of submarines swimming seems analogous.

>Is this an intellectual exercise for you or have you ever in your life heard someone say something like “the submarine swam through the water”?

I don't think I've ever had a discussion about submarines with anyone, outside of the OceanGate disaster. But this whole approach to the issue seems misguided. With terms like this, we should ask what the purpose behind the term is, i.e. its intension (the concept), not the incidental extension of the term (the collection of things it applies to at some point in time). When we refer to something swimming, we mean that it is moving through water under its own power. The reference to "body" is incidental.

emodendroket a day ago | parent [-]

Which parts of the car does a "body shop" service?

hackinthebochs a day ago | parent [-]

Irrelevant, for the reasons mentioned

emodendroket 13 hours ago | parent [-]

It's not really a "choice" to use words how they are commonly understood but a choice to do the opposite. The point of Dijkstra's example is you can slap some term on a fundamentally different phenomenon to liken it to something more familiar but it confuses rather than clarifies anything.

The point that "swim" is not very consistent with "fly" is true enough but not really helpful. It doesn't change the commonly understood meaning of "swim" to include spinning a propeller just because "fly" doesn't imply anything about the particular means used to achieve flight.

hackinthebochs 11 hours ago | parent [-]

>It's not really a "choice" to use words how they are commonly understood but a choice to do the opposite.

I meant a collective choice. Words evolve because someone decides to expand their scope and others find it useful. The question here shouldn't be what other people mean by a term but whether the expanded scope is clarifying or confusing.

The question of whether submarines swim is a trivial verbal dispute, nothing of substance turns on its resolution. But we shouldn't dismiss the question of whether computers think by reference to the triviality of submarines swimming. The question we need to ask is what work does the concept of thinking do and whether that work is or can be applied to computers. This is extremely relevant in the present day.

When we say someone thinks, we are attributing some space of behavioral capacities to that person. That is, a certain competence and robustness with managing complexity to achieve a goal. Such attributions may warrant a level of responsibility and autonomy that would not be warranted without it. A system that thinks can be trusted in a much wider range of circumstances than one that doesn't. That this level of competence has historically been exclusive to humans should not preclude this consideration. When some future AI does reach this level of competence, we should use terms like thinking and understanding as indicating this competence.

emodendroket 4 hours ago | parent [-]

This subthread started on the claim that regular, deterministic code is “thought.” I would submit that the differences between deterministic code and human thought are so big and obvious that insisting on this does nothing but confuse the issue.

hackinthebochs 3 hours ago | parent [-]

I'm not exactly sure what you mean by deterministic code, but I do think there is an obvious distinction between typical code people write and what human minds do. The guy upthread is definitely wrong in thinking that, e.g., any search or minimax algorithm is thinking. But it's important to understand what this distinction is so we can spot when it might no longer apply.

To make a long story short, the distinction is that typical programs don't operate on the semantic features of program state, just on the syntactical features. We assign a correspondence between the syntactical program features (and their transformations) and the real-world semantic features (and the logical transformations on them). The execution of the program then tells us the outcomes of the logical transformations applied to the relevant semantic features. We get meaning out of programs because of this analogical correspondence.
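To make that concrete, here's a toy sketch (a made-up example, not from any real system): the program only ever compares integers, and reading those integers as facts about the world is something we supply from outside.

    # Hypothetical example: the program only compares integers (syntax).
    # The correspondence "170 stands for Alice's height in cm" lives in
    # our heads, not in program state.
    heights_cm = {"alice": 170, "bob": 160, "carol": 180}

    def taller(a: str, b: str) -> bool:
        # A purely syntactic transformation: an integer comparison.
        return heights_cm[a] > heights_cm[b]

    # We read "Carol is taller than Bob" into the output; the program
    # never engages with what 'taller' or 'cm' mean.
    print(taller("carol", "bob"))  # True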

LLMs are a different computing paradigm because they now operate on semantic features of program state. Embedding vectors assign semantic features to syntactical structures of the vector space. Operations on these syntactical structures allow the program to engage with semantic features of program state directly. LLMs engage with the meaning of program state and alter its execution accordingly. It's still deterministic, but it's a fundamentally richer programming paradigm, one that bridges the gap between program state as syntactical structures and the meaning they represent. This is why I am optimistic that current or future LLMs should be considered properly thinking machines.
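A toy illustration of what I mean by operating on semantic features (hand-made three-dimensional vectors standing in for learned embeddings, which in real models have hundreds or thousands of dimensions): closeness in the vector space tracks relatedness in meaning, so operations on that geometric structure are operations on meaning.

    import numpy as np

    # Toy, hand-made vectors standing in for learned embeddings.
    emb = {
        "king":   np.array([0.90, 0.80, 0.10]),
        "queen":  np.array([0.88, 0.82, 0.15]),
        "carrot": np.array([0.10, 0.20, 0.95]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Geometric closeness tracks semantic relatedness.
    print(cosine(emb["king"], emb["queen"]))   # high (~1.0)
    print(cosine(emb["king"], emb["carrot"]))  # low (~0.3)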

emodendroket 3 hours ago | parent [-]

LLMs are not deterministic at all. The same input leads to different outputs at random. But I think there’s still the question of whether this process is more similar to thought or to a Markov chain.

hackinthebochs 2 hours ago | parent [-]

They are deterministic in the sense that the inference process scores every word in the vocabulary in a deterministic manner. This score map is then sampled from according to the temperature setting. Non-determinism is artificially injected for ergonomic reasons.
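Schematically, it's something like this (a simplified sketch, not any particular implementation): the forward pass produces a fixed score per vocabulary item, and randomness only enters at the sampling step, which the temperature controls.

    import numpy as np

    def sample_next_token(logits, temperature, rng):
        # 'logits' is one deterministic score per vocabulary word, produced
        # by the forward pass. Randomness enters only below, at sampling.
        if temperature == 0.0:
            return int(np.argmax(logits))      # greedy decoding: deterministic
        scaled = logits / temperature          # temperature reshapes the distribution
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()                   # softmax
        return int(rng.choice(len(logits), p=probs))

    rng = np.random.default_rng(0)
    logits = np.array([2.0, 1.0, 0.1])         # hypothetical scores, 3-word vocab
    print(sample_next_token(logits, 0.0, rng)) # always index 0
    print(sample_next_token(logits, 1.0, rng)) # stochastic (reproducible with this seed)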

>But I think there’s still the question if this process is more similar to thought or a Markov chain.

It's definitely far from a Markov chain. Markov chains treat the past context as a single unit, an N-tuple that has no internal structure. The next state is indexed by this tuple. LLMs leverage the internal structure of the context, which allows a large class of generalizations that Markov chains necessarily miss.
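For contrast, here's a toy bigram Markov chain (a simplified sketch): the whole context is just an opaque dictionary key, so nothing learned about ("the", "cat") transfers to the unseen key ("a", "cat").

    from collections import defaultdict

    counts = defaultdict(lambda: defaultdict(int))

    def train(tokens, n=2):
        for i in range(len(tokens) - n):
            context = tuple(tokens[i:i + n])    # the context is an opaque key
            counts[context][tokens[i + n]] += 1

    def most_likely_next(context):
        followers = counts[tuple(context)]
        return max(followers, key=followers.get) if followers else None

    train("the cat sat on the mat".split())
    print(most_likely_next(["the", "cat"]))     # 'sat'
    print(most_likely_next(["a", "cat"]))       # None: unseen key, no generalization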