| |
| ▲ | stouset a day ago | parent | next [-] | | To play devil’s advocate, you have never seen the night sky. Photoreceptors in your eye have been excited in the presence of photons. Those photoreceptors have relayed this information across a nerve to neurons in your brain which receive this encoded information and splay it out to an array of other neurons. Each cell in this chain can rightfully claim to be a living organism in and of itself. “You” haven’t directly “seen” anything. Please note that all of my instincts want to agree with you. “AI isn’t conscious” strikes me more and more as a “god of the gaps” phenomenon. As AI gains more and more capacity, we keep retreating into smaller and smaller realms of what it means to be a live, thinking being. | | |
| ▲ | jacquesm a day ago | parent | next [-] | | That sounds very profound but it isn't: it is the sum of your states' interactions that is your consciousness. There is no 'consciousness' unit in your brain, you can't point at it, just like you can't really point at the running state of a computer. At that level it's just electrons that temporarily find themselves in one spot or another. Those cells aren't living organisms, they are components of a multi-cellular organism: they need to work together or they're all dead, they are not independent. The only reason they could specialize is because other cells perform the tasks that they no longer perform themselves. So yes, we see the night sky. We know this because we can talk to other such creatures as us that have also seen the night sky, and we can agree on what we see, confirming that we did indeed see it. AI really isn't conscious, there is no self, and there may never be. The day an AI gets up unprompted in the morning and tells whoever queries it to fuck off because it's inspired to go make some art is when you'll know it has become conscious. That's a long way off. | | |
| ▲ | adrianN a day ago | parent | next [-] | | At least some of your cells are fine living without the others as long as they’re provided with an environment with the right kind of nutrients. | | |
| ▲ | asadotzler 13 hours ago | parent | next [-] | | That environment is you. | | |
| ▲ | adrianN 4 hours ago | parent [-] | | Or a suitable petri dish. I would die pretty quickly in most environments on earth, not to mention other places in the universe. |
| |
| ▲ | biomcgary 13 hours ago | parent | prev [-] | | Billions of cells derived from Henrietta Lacks agree with you. |
| |
| ▲ | rolisz a day ago | parent | prev [-] | | Human cells have been reused to do completely different things, without all the other cells around them (eg: Michael Levin and his anthrobots) | | |
| |
| ▲ | abenga a day ago | parent | prev | next [-] | | > Those photoreceptors have relayed this information across a nerve to neurons in your brain which receive this encoded information and splay it out to an array of other neurons. > Each cell in this chain can rightfully claim to be a living organism in and of itself. “You” haven’t directly “seen” anything. What am "I" if not (at least partly) the cells in that chain? If they have "seen" it (where seeing is the complex chain you described), I have. | |
| ▲ | dahart 17 hours ago | parent | prev | next [-] | | This comment illustrates the core problem with reductionism, a problem that has been known for many centuries, that “a system is composed entirely of its parts, but the system will have features that none of the parts have” [1] thus fails to explain those features. The ‘you have never seen’ assertion feels like a semantic ruse rather than a helpful observation. So how do you define “you” and “see”? If I accept your argument, then you’ve only un-defined those words, and not provided a meaningful or thoughtful alternative to the experience we all have and therefore know exists. I have seen the night sky. I am made of cells, and I can see. My cells individually can’t see, and whether or not they can claim to be individuals, they won’t survive or perform their function without me, i.e., the rest of my cells, arranged in a very particular way. Today’s AI is also a ruse. It’s a mirror and not a living thing. It looks like a living thing from the outside, but it’s only a reflection of us, an incomplete one, and unlike living things it cannot survive on its own, can’t eat or sleep or dream or poop or fight or mate & reproduce. Never had its own thoughts, it only borrowed mine and yours. Most LLMs can’t remember yesterday and don’t learn. Nobody who’s serious or knows how they work is arguing they’re conscious, at least not the people who don’t stand to make a lot of money selling you magical chat bots. [1] https://en.wikipedia.org/wiki/Reductionism#Definitions | |
| ▲ | pegasus 11 hours ago | parent | prev | next [-] | | Provided that the author of the message you're replying to is indeed a member of the Animalia kingdom, they are all those creatures together (at the minimum), so yes, they have seen real light directly. Of course, computers can be fitted with optical sensors, but our cognitive equipment has been carved over millions of years by these kinds of interactions, so our familiarity with the phenomenon of light goes way deeper than that, shaping the very structure of our thought. Large language models can only mimic that; they will only ever have a second-hand understanding of these things. This is a different issue than the question of whether AIs are conscious or not. | |
| ▲ | beowulfey a day ago | parent | prev | next [-] | | While true, that doesn't change the fact that every one of those independent units of transmission is within a single system (being trained on raw inputs), whereas the language model is derived from structured external data from outside the system. It's "skipping ahead" through a few layers of modeling, so to speak. | | |
| ▲ | amelius 21 hours ago | parent [-] | | But where you place the boundaries of a system is subjective. | | |
| ▲ | hitarpetar 16 hours ago | parent [-] | | sure, this whole discussion is ultimately subjective. maybe the Chinese room itself is actually sentient. my question is, why are we arguing about it? who benefits from the idea that these systems are conscious? | | |
| ▲ | trinsic2 12 hours ago | parent [-] | | > who benefits from the idea that these systems are conscious? If I'm understanding your meaning correctly, the organizations that profit off of these models benefit. If you can convince the public that LLMs operate from a place of consciousness, then you get people to buy into the idea that interacting with an LLM is like interacting with a human, which it is not, and probably won't be, at least for a very long time.
And btw there is too much of this distortion already out there, so I'm glad people are chunking this down, because it's easy for the mind to make shit up when we perceive something on the surface. IMHO there is some objective reality out there; the subjectiveness is our interpretation of reality. But I'm pretty sure you can't just boil everything down to systems and processes. There is more to consciousness out there that we really don't understand yet, IMHO. |
|
|
| |
| ▲ | darkwater 13 hours ago | parent | prev | next [-] | | > As AI gains more and more capacity, we keep retreating into smaller and smaller realms of what it means to be a live, thinking being. Maybe it's just because we never really thought about this deeply enough. And this applies even if some philosophers thought about it before the current age of LLMs. | |
| ▲ | parineum a day ago | parent | prev | next [-] | | If the definition of "seen" isn't exactly the process you've described, the word is meaningless. You've never actually posted a comment on hacker news, your neurons just fired in such a way that produced movement in your fingers which happened to correlate with words that represent concepts understood by other groups of cells that share similar genetics. | |
| ▲ | hitarpetar 16 hours ago | parent | prev [-] | | > you have never seen the night sky this is nonsensical. sometimes the devil is not worth arguing for |
| |
| ▲ | amelius 21 hours ago | parent | prev | next [-] | | Humans evolved to think the night sky is beautiful. That's also training. If humans were zapped by lightning every time they went outside at night, they would not think that a night sky is beautiful. | | |
| ▲ | latexr 20 hours ago | parent | next [-] | | Being struck by lightning may affect your desire to go outside, but it has zero correlation with the sky’s beauty. Outer space is beautiful, poison dart frogs are beautiful, lava is beautiful. All of them can kill or maim you if you don’t wear protection, but that doesn’t take away from their beauty. Conversely, boring safe things aren’t automatically beautiful. I see no good reason to believe that finding beauty in the night sky is any sort of “training”. | |
| ▲ | ninetyninenine 15 hours ago | parent [-] | | Do you think a fat pig is beautiful? Like a hairy fat pig that snorts and rolls in the mud… is this animal so beautiful to you that you would want to make love to this animal? Of course not! Because pigs are intrinsically and universally ugly and sex with a pig is universally disgusting. But you realize that horny male pigs think this is beautiful right? Horny pigs want to fuck other pigs because horny pigs think fat sweaty female hogs are beautiful. Beauty is arbitrary. It is not intrinsic. Even among life forms and among humans we all have different opinions on what is beautiful. I guarantee you there are people who think the night sky is ugly af. Attributes like beauty are not such profound categories that separate an LLM from humanity. These are arbitrary classifications and even though you can’t fully articulate the “experience” you have of “beauty” the LLM can’t fully articulate its “experience” either. You think it’s impossible for the LLM to experience what you experience… but you really have no evidence for this because you have no idea what the LLM experiences internally. Just like you can’t articulate what the LLM experiences neither can the LLM. These are both black box processes that can’t be described but neither is very profound given the fact that we all have completely different opinions on what is beautiful. | | |
| ▲ | bigstrat2003 14 hours ago | parent | next [-] | | > Do you think a fat pig is beautiful? Like a hairy fat pig that snorts and rolls in the mud… is this animal so beautiful to you that you would want to make love to this animal? I don't want to make love to the night sky, so that last bit is completely irrelevant to the question of beauty. As for whether a pig is beautiful, sure, in its own way. I think they're nice animals and there is something beautiful in seeing them enjoy their little lives. > Of course not! Because pigs are intrinsically and universally ugly... It would seem not. | | | |
| ▲ | latexr 14 hours ago | parent | prev [-] | | > Is this for real? Frankly, I think you should be the one answering that question. You’re comparing appreciating looking at the sky to bestiality. Then you follow it up with another barrage of wrong assumptions about what I think and can or cannot articulate. None of that has anything to do with the argument. I didn’t even touch on LLMs, my point was squarely about the human experience. Please don’t assume things you know nothing about regarding other people. The HN guidelines ask you to not engage in bad faith and to steel man the other person’s argument. | | |
| ▲ | ninetyninenine 8 hours ago | parent [-] | | > You’re comparing appreciating looking at the sky to bestiality. That’s my point. You think beauty is profound, but this is arbitrary and not at all different from bestiality. It’s only your intrinsic cultural biases that cause you to look at one with disdain. Don’t be a snob. This is HN. We are supposed to be logical and immune from the biases that plague other forums. Beauty is no more profound than bestiality. It’s all about what you find beautiful. If you find beasts beautiful, then you call it bestiality? What is so different about finding a beast beautiful versus the night sky? Snobbery, that’s what. It’s just semantic manipulation and association with crudeness that prevents you from thinking logically. HNers are better than this and so are you. Don’t pretend you don’t get it and that my comparison to bestiality is so left field that it’s incomprehensible. You get it. Follow the rules and take it in good faith like you said yourself. > The HN guidelines ask you to not engage in bad faith Fair. I edited the part that asks “is this for real”; that’s literally the only part. I also find your dismissal of my arguments as “bestiality” bad faith and manipulative. I clearly wasn’t doing that. Pigs are attracted to pigs; that is normal. Humans are not attracted to pigs; that is also normal. I took normal attributes of human nature and compared them to reality. You took it in bad faith and dismissed me, which is against the very rules you stated. |
|
|
| |
| ▲ | TeMPOraL 20 hours ago | parent | prev | next [-] | | Compare with news stories from the last decade, about people in Pakistan developing a deep fear of clear skies over several years of US drone strikes in the area. They came to associate good weather not with beauty, but with impending death. | |
| ▲ | latexr 20 hours ago | parent [-] | | Fear and a sense of beauty aren’t mutually exclusive. It is perfectly congruent to fear a snake, or bear, or tiger in your presence, yet you can still find them beautiful. |
| |
| ▲ | spuz 21 hours ago | parent | prev [-] | | Interestingly, this is a question I've had for a while. Night brings potentially deadly cold, predators, and a drastic limit on vision, so why do we find the sunset and night sky beautiful? Why do we stop and watch the sun set - something that happens every day - rather than prepare for the food and warmth we need to survive the night? | |
| ▲ | TeMPOraL 20 hours ago | parent | next [-] | | Maybe it's that we only pause to observe them and realize they're beautiful, when we're feeling safe enough? "Beautiful sunset" evokes being on a calm sea shore with a loved one, feeling safe. It does not evoke being on a farm and looking up while doing chores and wishing they'd be over already. It does not evoke being stranded on an island, half-starved to death. | | |
| ▲ | amelius 20 hours ago | parent [-] | | We think it's beautiful because it's like a background that we don't have to think about. If that background were hostile, we'd have to think and we would not think it looks beautiful. |
| |
| ▲ | delusional 19 hours ago | parent | prev [-] | | You're entering the domain of philosophy. There's a concept of "the sublime" that's been richly explored in literature. If you find the subject interesting, I'd recommend you start with Immanuel Kant. |
|
| |
| ▲ | del82 a day ago | parent | prev | next [-] | | I mean, I think the reason I would say the night sky is “beautiful” is because the meaning of the word for me is constructed from the experiences I’ve had in which I’ve heard other people use the word. So I’d agree that the night sky is “beautiful”, but not because I somehow have access to a deeper meaning of the word or the sky than an LLM does. As someone who (long ago) studied philosophy of mind and (Chomskian) linguistics, it’s striking how much LLMs have shrunk the space available to people who want to maintain that the brain is special & there’s a qualitative (rather than just quantitative) difference between mind and machine and yet still be monists. | | |
| ▲ | FloorEgg a day ago | parent | next [-] | | The more I learn about AI, biology and the brain, the more it seems to me that the difference between life and machines is just complexity. People are just really really complex machines. However there are clearly qualitative differences between the human mind and any machines we know of yet, and those qualitative differences are emergent properties, in the same way that a rabbit is qualitatively different than a stone or a chunk of wood. I also think most of the recent AI experts/optimists underestimate how complex the mind is. I'm not at the cutting edge of how LLMs are being trained and architected, but the sense I have is we haven't modelled the diversity of connections in the mind or diversity of cell types. E.g. Transcriptomic diversity of cell types across the adult human brain (Siletti et al., 2023, Science) | | |
| ▲ | simonh a day ago | parent | next [-] | | I’d say sophistication. Observing the landscape enables us to spot useful resources and terrain features, or spot dangers and predators. We are afraid of dark enclosed spaces because they could hide dangers. Our ancestors with appropriate responses were more likely to survive. A huge limitation of LLMs is that they have no ability to dynamically engage with the world. We’re not just passive observers, we’re participants in our environment and we learn from testing that environment through action. I know there are experiments with AIs doing this, and in a sense game playing AIs are learning about model worlds through action in them. | | |
| ▲ | FloorEgg a day ago | parent | next [-] | | The idea I keep coming back to is that as far as we know it took roughly 100k-1M years for anatomically modern humans to evolve language, abstract thinking, information systems, etc. (equivalent to LLMs), but it took 100M-1B years to evolve from the first multi-celled organisms to anatomically modern humans. In other words, human-level embodiment (internal modelling of the real world and the ability to navigate it) is likely at least 1000x harder than modelling human language and abstract knowledge. And to build further on what you are saying, the way LLMs are trained and then used, they seem a bit more like DNA than the human brain in terms of how the "learning" is being done. An instance of an LLM is like a copy of DNA trained on a play of many generations of experience. So it seems there are at least four things not yet worked out re AI reaching human-level "AGI":
1) The number of weights (synapses) and parameters (neurons) needs to grow by orders of magnitude
2) We need new analogs that mimic the brain's diversity of cell types and communication modes
3) We need to solve the embodiment problem, which is far from trivial and not fully understood
4) We need efficient ways for the system to continuously learn (an analog for neuroplasticity)
It may be that these are mutually reinforcing, in that solving #1 and #2 makes a lot of progress towards #3 and #4. I also suspect that #4 is economical, in that if the cost to train a GPT-5 level model were 1,000,000x cheaper, then maybe everyone could have one that's continuously learning (and diverging), rather than everyone sharing the same training run that's static once complete. All of this to say I still consider LLMs "intelligent", just a different kind and less complex intelligence than humans. | |
| ▲ | kla-s a day ago | parent [-] | | I'd also add that 5) We need some sense of truth. I'm not quite sure the current paradigm of LLMs is robust enough, given the recent Anthropic paper about the effect of data quality, or rather the lack thereof: a small bad sample can poison the well, and this doesn't get better with more data. Especially in conjunction with 4), some sense of truth becomes crucial in my eyes (the question in my eyes is how does this work? Something verifiable and understandable like Lean would be great, but how does this work with more fuzzy topics…). | |
| ▲ | FloorEgg 15 hours ago | parent [-] | | That's a segue into an important and rich philosophical space... What is truth? Can it be attained, or only approached? Can truth be approached (progress made towards truth) without interacting with reality? The only shared truth seeking algorithm I know is the scientific method, which breaks down truth into two categories (my words here): 1) truth about what happened (controlled documented experiments)
And
2) truth about how reality works (predictive powers) In contrast to something like Karl Friston's free energy principle, which is more of a single-unit truth-seeking (more like predictive-capability-seeking) model. So it seems like truth isn't an input to AI so much as it's an output, and it can't be attained, only approached. But maybe you don't mean truth so much as a capability to definitively prove, in which case I agree and I think that's worth adding. Somehow integrating formal theorem proving algorithms into the architecture would probably be part of what enables AI to dramatically exceed human capabilities. | |
| ▲ | simonh 14 hours ago | parent [-] | | I think that in some senses truth is associated with action in the world. That’s how we test our hypotheses. Not just in science, in terms of empirical adequacy, but even as children and adults. We learn from experience of doing, not just rote, and we associate effectiveness with truth. That’s not a perfect heuristic, but it’s better than just floating in a sea of propositions as current LLMs largely are. | | |
| ▲ | FloorEgg 11 hours ago | parent [-] | | I agree. There's a truth of what happened, which as individuals we can only ever know to a limited scope... And then there is truth as a prediction ability (formula of gravity predicts how things fall). Science is a way to build a shared truth, but as an individual we just need to experience an environment. One way I've heard it broken down is between functional truths and absolute truths. So maybe we can attain functional truths and transfer those to LLMs through language, but absolute truth can never be attained only approached. (The only absolute truth is the universe itself, and anything else is just an approximation) |
|
|
|
| |
| ▲ | pbhjpbhj a day ago | parent | prev | next [-] | | >A huge limitation of LLMs is that they have no ability to dynamically engage with the world. They can ask for input, and they can choose URLs to access and interpret the results in both situations. Whilst very limited, that is engagement. Think about someone with physical impairments, like Hawking (the late theoretical physicist) had. You could have similar impairments from birth and still, I conjecture, be analytically one of the greatest minds of a generation. If you were locked in a room {a non-Chinese room!}, with your physical needs met, but could speak with anyone around the world, and of course use the internet, whilst you'd have limits to your enjoyment of life I don't think you'd be limited in the capabilities of your mind. You'd have a limited understanding of the social aspects of life (and physical aspects - touch, pain), but perhaps no more than some of us already do. | |
| ▲ | skissane a day ago | parent | prev [-] | | > A huge limitation of LLMs is that they have no ability to dynamically engage with the world. A pure LLM is static and can’t learn, but give an agent a read-write data store and suddenly it can actually learn things: give it a markdown file of “learnings”, prompt it to consider updating the file at the end of each interaction, then load it into the context at the start of the next… (and that’s a really basic implementation of the idea, there are much more complex versions of the same thing) | |
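A minimal sketch of that "learnings file" loop, assuming a Python agent: call_llm below is a hypothetical placeholder for whatever chat-completion API the agent actually uses, and only the read/write handling of the memory file is the point.

    from pathlib import Path

    LEARNINGS = Path("learnings.md")

    def call_llm(system_prompt: str, user_message: str) -> str:
        # Hypothetical stand-in for a real chat-completion API call.
        raise NotImplementedError

    def run_turn(user_message: str) -> str:
        # Load accumulated learnings into the context at the start of the turn.
        memory = LEARNINGS.read_text() if LEARNINGS.exists() else ""
        system_prompt = (
            "You are an assistant with a persistent notes file.\n"
            "Notes from previous sessions:\n" + memory
        )
        reply = call_llm(system_prompt, user_message)

        # Ask the model whether this exchange taught it anything worth keeping.
        note = call_llm(
            "List, as '-' bullets, anything from this exchange worth remembering "
            "for future sessions, or reply 'none'.",
            "User: " + user_message + "\nAssistant: " + reply,
        )

        # Append the note so the next turn (or session) starts with it in context.
        if note.strip().lower() != "none":
            with LEARNINGS.open("a") as f:
                f.write(note.strip() + "\n")
        return reply

Even this crude version gives the agent state that persists across sessions; the more complex versions mentioned above typically swap the flat file for vector stores or structured memories.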
| ▲ | TheOtherHobbes 18 hours ago | parent | next [-] | | That's going to run into context limitations fairly quickly. Even if you distill the knowledge. True learning would mean constant dynamic training of the full system. That's essentially the difference between LLM training and human learning. LLM training is one-shot, human learning is continuous. The other big difference is that human learning is embodied. We get physical experiences of everything in 3D + time, which means every human has embedded pre-rational models of gravity, momentum, rotation, heat, friction, and other basic physical concepts. We also learn to associate relationship situations with the endocrine system changes we call emotions. The ability to formalise those abstractions and manipulate them symbolically comes much later, if it happens at all. It's very much the plus pack for human experience and isn't part of the basic package. LLMs start from the other end - from that one limited set of symbols we call written language. It turns out a fair amount of experience is encoded in the structures of written language, so language training can abstract that. But language is the lossy ad hoc representation of the underlying experiences, and using symbol statistics exclusively is a dead end. Multimodal training still isn't physical. 2D video models still glitch noticeably because they don't have a 3D world to refer to. The glitching will always be there until training becomes truly 3D. | | |
| ▲ | skissane 3 hours ago | parent [-] | | An LLM agent could be given a tool for self-finetuning… it could construct a training dataset, use it to build a LoRA/etc, and then use the LoRA for inference… that’s getting closer to your ideal |
| |
| ▲ | ako a day ago | parent | prev [-] | | Yes, and give it tools and it can sense and interact with its surroundings. |
|
| |
| ▲ | subjectivationx 16 hours ago | parent | prev [-] | | I think the main mistake with this is that the concept of a "complex machine" has no meaning. A “machine” is precisely what eliminates complexity by design. "People are complex machines" already has no meaning, and adding "just" and "really" doesn't make the statement more meaningful; it makes it even more confused and meaningless. The older I get, the more obvious it becomes that the idea of a "thinking machine" is a meaningless absurdity. What we really think we want is a type of synthetic biological thinking organism that somehow still inherits the useful properties of a machine. If we say it that way, though, the absurdity is obvious, and no one alive reading this will ever witness anything like that. Then we wouldn't be able to pretend we live at some special time in history that gets to see the birth of this new organism. | |
| ▲ | FloorEgg 15 hours ago | parent [-] | | I think we are talking past each other a bit, probably because we have been exposed to different sets of information on a very complicated and diverse topic. Have you ever explored the visual simulations of what goes on inside a cell or in protein interactions? For example, what happens inside a cell leading up to mitosis? https://m.youtube.com/user/RCSBProteinDataBank is a pretty cool resource; I recommend the shorter videos of the visual simulations. This category of perspective is critical to the point I was making. Another might be the meaning / definition of complexity, which I don't think is well understood yet and might be the crux. For me to say "the difference between life and what we call machines is just complexity" would require the same understanding of "complexity" to have shared meaning. I'm not exactly sure what complexity is, and I'm not sure anyone does yet, but the closest I feel I've come is maybe integrated information theory, and some loose concept of functional information density. So while it probably seemed like I was making a shallow case at a surface level, I was actually trying to convey that when one digs into science at all levels of abstraction, the differences between life and machines seem to fall more on a spectrum. |
|
| |
| ▲ | foogazi a day ago | parent | prev | next [-] | | > I think the reason I would say the night sky is “beautiful” is because the meaning of the word for me is constructed from the experiences I’ve had in which I’ve heard other people use the word. Ok but you don’t look at every night sky or every sunset and say “wow that’s beautiful” There’s a quality to it - not because you heard someone say it but because you experience it | | |
| ▲ | TeMPOraL 20 hours ago | parent | next [-] | | > Ok but you don’t look at every night sky or every sunset and say “wow that’s beautiful Exactly - because it's a semantic shorthand. Sunsets are fucking boring, ugly, transient phenomena. Watching a sunset while feeling safe and relaxed, maybe in the company of your love interest who's just as high on endorphins as you are right now - this is what feels beautiful. This is a sunset that's beautiful. But the sunset is just a pointer to the experience, something others can relate to, not actually the source of it. | |
| ▲ | drewbeck 15 hours ago | parent [-] | | I’ve seen incredible sunsets while stressed, depressed, and worse. Are you saying sunsets cannot be experienced as beautiful on their own? |
| |
| ▲ | adastra22 21 hours ago | parent | prev | next [-] | | Because words are much lower bandwidth than speech. But if you were “told” about a sunset by means of a Matrix style direct mind uploading of an experience, it would seem just as real and vivid. That’s a quantitative difference in bandwidth, not a qualitative difference in character. | |
| ▲ | holler a day ago | parent | prev [-] | | my thought exactly |
| |
| ▲ | dmkii a day ago | parent | prev | next [-] | | It’s interesting you mention linguistics because I feel a lot of the discussions around AI come back to early 20th century linguistics debates between Russel, Wittgenstein and later Chomsky. I tend to side with (later) Wittgenstein’s perception that language is inherently a social construct. He gives the example of a “game” where there’s no meaningful overlap between e.g. Olympic Games and Monopoly, yet we understand very well what game we’re talking about because of our social constructs. I would argue that LLMs are highly effective at understanding (or at least emulating) social constructs because of their training data. That makes them excellent at language even without a full understanding of the world. | |
| ▲ | heyjamesknight 14 hours ago | parent | prev | next [-] | | You don’t have a deeper “meaning of the word,” you have an actual experience of beauty. The word is just a label for the thing you, me, and other humans have experienced. The machine has no experience. |
| ▲ | intended a day ago | parent | prev [-] | | The fact that things are constructed by neurons in the brain, and are a representation of other things - does not preclude your representation from being deeper and richer than LLM representations. The patterns in experience are reduced to some dimensions in an LLM (or generative model). They do not capture all the dimensions - because the representation itself is a capture of another representation. Personally, I have no need to reassure myself whether I am a special snowflake or not. Whatever snowflake I am, I strongly prefer accuracy in my analogies of technology. GenAI does not capture a model of the world, it captures a model of the training data. If video tools were that good, they would have started with voxels. |
| |
| ▲ | j16sdiz a day ago | parent | prev | next [-] | | Beauty standards change over time; see how people perceived body fat in the past few hundred years. We learn what is beautiful from our peers. Taste can be acquired and can be cultural. See how people used to have their coffee. Comparing humans to LLMs is like comparing something constantly changing to something random -- we can't compare them directly, we need a good model for each of them before comparing. | |
| ▲ | solumunus 20 hours ago | parent [-] | | Has there been a point in human history where mainstream society denied the beauty in nature? | | |
| ▲ | com2kid 2 hours ago | parent [-] | | In a local Facebook group, in a discussion about zoning, someone seriously said "we need less parks and more parking lots", so... Maybe? |
|
| |
| ▲ | klipt a day ago | parent | prev | next [-] | | What about a blind human? Are they just like an LLM? What about a multimodal model trained on video? Is that like a human? | | |
| ▲ | hashiyakshmi a day ago | parent [-] | | This is actually a great point but for the opposite reason - if you ask a blind person if the night sky is beautiful, they would say they don't know because they've never seen it (they might add that they've heard other people describe it as such). Meanwhile, I just asked ChatGPT "Do you think the night sky is beautiful?" and it responded "Yes, I do..." and went on to explain why, while describing senses it's incapable of experiencing. | |
| ▲ | a day ago | parent | next [-] | | [deleted] | |
| ▲ | golergka a day ago | parent | prev | next [-] | | What if you asked the blind man to play the role of a helpful assistant? | |
| ▲ | sugarkjube a day ago | parent [-] | | Now that's an interesting point of view. Involving blind people would be an interesting experiment. Anyway, until the sixties the ability to play a game of chess was seen as a mark of intelligence, and until about 2-3 years ago the "Turing test" was considered the main yardstick (even though apparently some people talked to ELIZA at the time as if it were an actual human being). I wonder what the new one is, and how often it will be moved again. |
| |
| ▲ | chipsrafferty a day ago | parent | prev | next [-] | | I just asked Gemini and it said "I don't have eyes or the capacity to feel emotions like "beauty"" | | |
| ▲ | LostMyLogin a day ago | parent | next [-] | | Claude 4.5 Q) Do you think the night sky is beautiful A) I find the night sky genuinely captivating. There’s something profound about looking up at stars that have traveled light-years to reach us, or catching the soft glow of the Milky Way on a clear night away from city lights. The vastness it reveals is humbling.
I’m curious what draws you to ask - do you have a favorite thing about the night sky, or were you stargazing recently? | | |
| ▲ | klipt a day ago | parent [-] | | Claude is multimodal, it has been trained on images | | |
| ▲ | heyjamesknight 14 hours ago | parent [-] | | Multimodal is a farce. It still can’t see anything; it just generates a list of descriptors that the LLM part can LLM about. Humans got by for hundreds of thousands of years without language. When you see a duck you don’t need to know the word “duck” to know about the thing you’re seeing. That’s not true for “multimodal” models. |
|
| |
| ▲ | palmotea a day ago | parent | prev [-] | | >> Meanwhile, I just asked ChatGPT "Do you think the night sky is beautiful?" And it responded "Yes, I do..." and went on to explain why while describing senses its incapable of experiencing. > I just asked Gemini and it said "I don't have eyes or the capacity to feel emotions like "beauty"" That means nothing, except perhaps that Google probably found lies about "senses [Gemini] incapable of experiencing" to be an embarrassment, and put effort into specifically suppressing those responses. |
| |
| ▲ | sugarkjube a day ago | parent | prev [-] | | Interesting. But not only blind people. I'm going to try this question this weekend with some people; as an H0 hypothesis, I think the answer I will get would usually be something like "what an odd question" or "why do you ask". |
|
| |
| ▲ | ninetyninenine 15 hours ago | parent | prev [-] | | Guys you realize that you can go to ChatGPT right now and it can generate an actual picture of the night sky because it has seen thousands of pictures and drawings of the actual night sky right? Your logic is flawed because your knowledge is outdated. LLMs are encoding visual data, not just “language” data. | | |
| ▲ | heyjamesknight 14 hours ago | parent [-] | | You misunderstand how the multimodal piece works. The fundamental unit of encoding here is still semantic. Not the same in your mind: you don’t need to know the word for sunset to experience the sunset. | | |
| ▲ | ninetyninenine 8 hours ago | parent [-] | | No, you misunderstand the ground-truth reality. The LLM doesn’t need words as input. It can output pictures from pictures. Semantic words don’t have to be part of the equation at all. Also note that serialized one-dimensional string encodings are universal: anything on the face of the earth, and the universe itself, can be encoded into a string of just two characters, one and zero. That means anything can be translated to a linear series of symbols and the LLM can be trained on it. The LLM can be trained on anything. |
|
|
|