GuinansEyebrows 4 hours ago

> an AI system is literally a machine that can think and do things itself

why do so many writers claim this as a matter of fact? are we losing (or did we never have) a shared definition of the word "think"? can an LLM, at this time, function with zero human input whatsoever?

edit to add: these are genuine questions, not meant to be rhetorical :)

it's hard for me to gauge a broader understanding of AI/LLMs since most of the conversations i experience around them are here, or in negative contexts with people i know. and i'll admit i'm one of those negative people, but my general aversion to AI mostly has to do with my own anxiety around my mental health and cognitive ability in a use-it-or-lose-it sense, along with a disdain for its use in traditionally-creative fields.

derektank 4 hours ago | parent [-]

>are we losing (or did we never have) a shared definition of the word "think"

People have been saying "the computer is thinking" while webpages load or software runs for as long as I've been consciously aware. I agree there's something new about describing AI as "literally a machine that can think," but language has always had fuzzy borders.

TimTheTinker 4 hours ago | parent | next [-]

It's wild to watch documentaries from the 1980s where a primitive computer is said to be "a thinking machine" that is "taking most of the work out of a job".

GuinansEyebrows 4 hours ago | parent | prev [-]

yeah, for sure. i really think some people are under the impression that LLMs are a form of general AI that actually processes thought, rather than being an admittedly impressive statistical autocomplete.
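(for anyone unfamiliar with the "autocomplete" framing: at bottom, an LLM repeatedly predicts the most likely next token given the text so far. here's a toy bigram sketch of that idea — purely illustrative, real LLMs learn vastly richer conditional distributions with transformers, but the interface is the same: context in, likely continuation out.)

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then greedily predict the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(word):
    """Greedy next-word prediction from bigram counts; None if unseen."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(complete("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```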

though i'm not by any means an AI booster, my question wasn't really meant as a gotcha - more a general taking stock of where we're at in terms of broader understanding of these technologies outside the professional/hobbyist AI world.