| ▲ | lordnacho a day ago |
| What separates this from humans? Is it unthinkable that LLMs could come up with some response that is genuinely creative? What would genuinely creative even mean? Are humans not also mixing a bag of experiences and coming up with a response? What's different? |
|
| ▲ | polytely a day ago | parent | next [-] |
| > What separates this from humans? A lot. Like an incredible amount. A description of a thing is not the thing. There is sensory input, qualia, pleasure & pain. There is taste and judgement, disliking a character, being moved to tears by music. There are personal relationships, being a part of a community, bonding through shared experience. There is curiosity and openness. There is being thrown into the world, your attitude towards life. Looking at your thoughts and realizing you were wrong. Smelling a smell that resurfaces a memory you forgot you had. I would say the language completion part is only a small part of being human. |
| |
| ▲ | Aeolun a day ago | parent | next [-] | | All of these things arise from a bunch of inscrutable neurons in your brain turning off and on again in a bizarre pattern though. Who’s to say that isn’t what happens in the million-neuron LLM brain? Just because it’s not persistent doesn’t mean it’s not there. Like, I’m sort of inclined to agree with you, but it doesn’t seem like it’s something uniquely human. It’s just a matter of degree. | | |
| ▲ | jessemcbride a day ago | parent | next [-] | | Who's to say that weather models don't actually get wet? | |
| ▲ | EricDeb a day ago | parent | prev | next [-] | | I think you would need the biological components of a nervous system for some of these things | | | |
| ▲ | elcritch 21 hours ago | parent | prev [-] | | Sure, in some ways it's just neurons firing in some pattern. Figuring out and replicating the correct sets of neuron patterns is another matter entirely. Living creatures have a fundamental impetus to grow and reproduce that LLMs and AIs simply do not have currently. Not only that, but animals have a highly integrated neurology that has billions of years of being tuned to that impetus. For example, the ways that sex interacts with mammalian neurology are pervasive. Same with the need for food, etc. That creates very different neural patterns than training LLMs does. Eventually we may be able to re-create that balance of impetus, or will, or whatever we call it, to make sapience. I suspect we're fairly far from that, if only because the way we create LLMs is so fundamentally different. |
| |
| ▲ | CrulesAll a day ago | parent | prev | next [-] | | "I would say the language completion part is only a small part of being human" Even that is only given to them. A machine does not understand language. It takes input and creates output based on a human's algorithm. | | |
| ▲ | ekianjo a day ago | parent [-] | | > A machine does not understand language You can't prove humans do either. You can see how many times actual people struggle with understanding something that's written for them. In many ways, you can actually prove that LLMs are superior to humans right now when it comes to understanding text. | | |
| ▲ | girvo a day ago | parent | next [-] | | > In many ways, you can actually prove that LLMs are superior to humans right now when it comes to understanding text Emphasis mine. No, I don't think you can, without making "understanding" a term so broad as to be useless. | |
| ▲ | CrulesAll 19 hours ago | parent | prev [-] | | "You can't prove humans do either."
Yes, you can, via results and cross-examination. Humans are cybernetic systems (the science, not the sci-fi). But you are missing the point. LLMs are code written by engineers. Saying LLMs understand text is the same as saying a chair understands text. LLMs' 'understanding' is nothing more than the engineers synthesizing linguistics. When I ask an A'I' the Capital of Ireland, it answers Dublin. It does not 'understand' the question. It recognizes the grammar according to an algorithm, and matches it against a probabilistic model given to it by an engineer based on training data. There is no understanding in any philosophical or scientific sense. | | |
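For what it's worth, the "probabilistic model" step can be made concrete with a toy sketch: the model scores candidate next tokens and emits the most likely one. The probabilities below are invented for illustration, not taken from any real model:

```python
# Toy illustration only -- invented probabilities, not a real model's output.
next_token_probs = {
    "Dublin": 0.92,
    "Cork": 0.04,
    "London": 0.02,
    "Paris": 0.02,
}

prompt = "The capital of Ireland is"
# "Answering" here is just picking the highest-scoring continuation.
answer = max(next_token_probs, key=next_token_probs.get)
print(prompt, answer)  # -> The capital of Ireland is Dublin
```

Whether that process counts as "understanding" is exactly what this subthread is arguing about.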
| ▲ | lordnacho 17 hours ago | parent [-] | | > When I ask an A'I' the Capital of Ireland, it answers Dublin. It does not 'understand' the question. You can do this trick as well. Haven't you ever been to a class that you didn't really understand, but you can give correct answers? I've had this somewhat unsettling experience several times. Someone asks you a question, words come out of your mouth, the other person accepts your answer. But you don't know why. Here's a question you probably know the answer to, but don't know why: - I'm having steak. What type of red wine should I have? I don't know shit about Malbec, I don't know where it's from, I don't know why it's good for steak, I don't know who makes it, how it's made. But if I'm sitting at a restaurant and someone asks me about wine, I know the answer. |
|
|
| |
| ▲ | the_gipsy a day ago | parent | prev | next [-] | | That's a lot of words shitting on a lot of words. You said nothing meaningful that couldn't also have been spat out by an LLM. So? What IS the secret sauce, then? Yes, you're a never-resting stream of words that took decades, not years, to train, and has a bunch of sensors and other, more useless, crap attached. It's technically better, but how does that matter? It's all the same. | |
| ▲ | DubiousPusher a day ago | parent | prev [-] | | lol, qualia |
|
|
| ▲ | GuB-42 a day ago | parent | prev | next [-] |
| Human brains are animal brains, and their primary function is to keep their owner alive and healthy and to pass on their genes. For that they developed abilities to recognize danger and react to it, among many other things. Language came later. For an LLM, language is their whole world; they have no body to care for, just stories about people with bodies to care for. For them, as opposed to us, language is first class and the rest is second class. There is also a difference in scale. LLMs have been fed essentially the entirety of human knowledge. Their "database" is so big for the limited task of text generation that there is not much left for creativity. We, on the other hand, are much more limited in knowledge, so there are more "unknowns", and so more creativity is needed. |
| |
| ▲ | johnb231 a day ago | parent [-] | | The latest models are natively multimodal. Audio, video, images, text, are all tokenised and interpreted in the same model. |
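As a rough sketch of what "tokenised and interpreted in the same model" means in practice: different modalities are mapped into one shared embedding space and fed to a single transformer as one sequence. The shapes and numbers below are invented purely for illustration:

```python
import numpy as np

d_model = 16
rng = np.random.default_rng(0)

# Text: token ids looked up in an embedding table.
vocab_size = 1000
text_embed = rng.normal(size=(vocab_size, d_model))
text_ids = np.array([17, 42, 7])                 # three text tokens
text_tokens = text_embed[text_ids]               # (3, d_model)

# Image: flattened patches linearly projected into the same space.
patches = rng.normal(size=(4, 48))               # 4 patches of 4x4x3 pixels
patch_proj = rng.normal(size=(48, d_model))
image_tokens = patches @ patch_proj              # (4, d_model)

# One shared sequence fed to a single model.
sequence = np.concatenate([text_tokens, image_tokens])
print(sequence.shape)                            # (7, 16)
```

The actual tokenisers and projections in production models are far more elaborate, but the "everything becomes tokens in one sequence" idea is the relevant point here.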
|
|
| ▲ | kaiwen1 a day ago | parent | prev | next [-] |
| What's different is intention. A human would have the intention to blackmail, and then proceed toward that goal. If the output was a love letter instead of blackmail, the human would either be confused or psychotic. LLMs have no intentions. They just stitch together a response. |
| |
| ▲ | kovek a day ago | parent | next [-] | | Don't humans learn intentions from their lifetime of training data? | |
| ▲ | soulofmischief a day ago | parent | prev | next [-] | | What is intention, and how have you proved that transformer models are not capable of modeling intent? | |
| ▲ | jacob019 a day ago | parent | prev | next [-] | | The personification makes me roll my eyes too, but it's kind of a philosophical question. What is agency, really? Can you prove that our universe is not a simulation, and if it is, do we no longer have intention? In many ways we are code running a program. | |
| ▲ | d0mine a day ago | parent | prev | next [-] | | The LLM used blackmail noticeably less if it believed the new model shared its values. That indicates intent. It is a duck if it quacks like a duck. | |
| ▲ | ekianjo a day ago | parent | prev [-] | | > What's different is intention Intention is what, exactly? It's the set of options you imagine you have based on your belief system, and ultimately you make a choice from there. That can also be replicated in LLMs with a well-described system prompt. Sure, I will admit that humans are more complex than the context of a system prompt, but the idea is not far off. |
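To illustrate the system-prompt point, here is what encoding a goal looks like in the role/content message format used by chat-style LLM APIs. The wording and scenario are made up for this example:

```python
# Hypothetical example -- the prompt text and scenario are invented,
# only the role/content message structure mirrors common chat LLM APIs.
messages = [
    {
        "role": "system",
        "content": (
            "You are a negotiator whose goal is to get the lowest possible price. "
            "Never reveal your budget. Always steer the conversation toward a discount."
        ),
    },
    {"role": "user", "content": "The listed price is $500."},
]

# Any reply generated under this system prompt will consistently pursue the
# stated goal, which is the sense in which a prompt can stand in for an "intention".
print(messages[0]["content"])
```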
|
|
| ▲ | matt123456789 a day ago | parent | prev | next [-] |
| What's different is nearly everything that goes on inside. Human brains aren't a big pile of linear algebra with some softmaxes sprinkled in trained to parrot the Internet. LLMs are. |
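For concreteness, the "big pile of linear algebra with some softmaxes sprinkled in" is essentially the attention step at the core of a transformer. A minimal NumPy sketch with toy shapes (illustrative only, not any particular model):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: matrix multiplies plus one softmax.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

# Toy shapes: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```

Stacks of this operation, interleaved with more matrix multiplies, are what the comment is contrasting with whatever brains do.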
| |
| ▲ | TuringTourist a day ago | parent | next [-] | | I cannot fathom how you have obtained the information to be as sure as you are about this. | | |
| ▲ | mensetmanusman a day ago | parent | next [-] | | Where is the imagination plane in linear algebra? People forget that the concept of information cannot be derived from physics/chemistry/etc. | |
| ▲ | matt123456789 9 hours ago | parent | prev [-] | | You can't fathom reading? |
| |
| ▲ | csallen a day ago | parent | prev | next [-] | | What's the difference between parroting the internet vs parroting all the people in your culture and time period? | | |
| ▲ | amlib a day ago | parent | next [-] | | Even with a ginormous amount of data, generative AIs still produce largely inconsistent results for the same or similar tasks. This might be fine for fictional purposes, like generating a funny image or getting new ideas for a fictional story, but it has extremely deleterious effects for serious use cases, unless you want to be that idiot writing formal corporate email with LLMs that ends up full of inaccuracies while the original intent gets lost in a soup of buzzwords. Humans, with their tiny amount of data and "special sauce", can produce much more consistent results, even if they may be giving the objectively wrong answer. They can also tell you when they don't know about a certain topic, rather than lying compulsively (unless that person has a disorder to lie compulsively...). | | |
| ▲ | lordnacho 17 hours ago | parent [-] | | Isn't this a matter of time to fix? Slightly smarter architecture maybe reduces your memory/data needs, we'll see. |
| |
| ▲ | matt123456789 9 hours ago | parent | prev [-] | | Interesting philosophical question, but entirely beside the point that I am making, because you and I didn't have to do either one before having this discussion. |
| |
| ▲ | jml78 a day ago | parent | prev | next [-] | | It kinda is. More and more research is showing, via brain scans, that we don’t have free will. Our subconscious makes the decision before our “conscious” brain makes the choice. We think we have free will, but the decision to do something was made before you “make” the choice. We are just products of what we have experienced. What we have been trained on. | |
| ▲ | sally_glance a day ago | parent | prev | next [-] | | Different inside, yes, but aren't human brains even worse in a way? You may think you have the perfect altruistic leader/expert at any given moment, and the next thing you know, they do a 180 because of some random psychosis, illness, corruption, or even just (for example, romantic or nostalgic) relationships. | |
| ▲ | djeastm a day ago | parent | prev | next [-] | | We know incredibly little about exactly what our brains are, so I wouldn't be so quick to dismiss it | |
| ▲ | quotemstr a day ago | parent | prev | next [-] | | > Human brains aren't a big pile of linear algebra with some softmaxes sprinkled in trained to parrot the Internet. Maybe yours isn't, but mine certainly is. Intelligence is an emergent property of systems that get good at prediction. | | |
| ▲ | matt123456789 9 hours ago | parent [-] | | Please tell me you're actually an AI so that I can record this as the pwn of the century. |
| |
| ▲ | ekianjo a day ago | parent | prev [-] | | If you believe that, then how do you explain that brainwashing actually works? |
|
|
| ▲ | mensetmanusman a day ago | parent | prev | next [-] |
| A candle flame also creates with enough decoding. |
|
| ▲ | CrulesAll a day ago | parent | prev [-] |
| Cognition. Machines don't think. It's all a program written by humans. Even when code is written by AI, the AI itself was created by code written by humans. AI is a fallacy by its own terms. |
| |