| ▲ | roenxi 2 days ago |
| I haven't heard a good argument for why AGI isn't already here. It has average humans beat and seems generally to be better-than-novice in any given field that requires intelligence. They play Go, they write music, they've read Shakespeare, they are better at empathy and conversation than most. What more are we asking AI to do? And can a normal human do it? |
|
| ▲ | Peritract 2 days ago | parent | next [-] |
| I think you should consider carefully whether AI is actually better at these things (especially any one given model at all of them), or if your ability to judge quality in these areas is flawed/limited. |
| |
| ▲ | roenxi 2 days ago | parent [-] | | So? Do I not count as a benchmark of basic intelligence now? I've got a bunch of tests and whatnot that suggest I'm reasonably above average at thinking. There is this fascinating trend where people would rather bump humans out of the naturally intelligent category than admit AIs are already at an AGI standard. If we're looking for intelligent conversation, AI is definitely above average. Above-average intelligence isn't a high-quality standard. Intelligence is nowhere near sufficient to get to high quality on most things, as seen with the current generation of AGI models. People seem to be looking for signs of wild superintelligence, like being a polymath at the peak of human performance. | | |
| ▲ | Peritract 2 days ago | parent | next [-] | | A lot of people who are also above average according to a bunch of tests disagree with you. Even if we take 'above average' on some tests to mean in every area--above average at literacy, above average at music, above average at empathy--it's still clear that many people have higher standards for these things than you. I'm not saying definitively that this means your standards are unreasonably easy to meet, but I do think it's important to think about it, rather than just assume that--because it impresses you--it must be impressive in general. When AI surprises any one of us, it's a good idea to consider whether 'better than me at X' is the same as 'better than the average human at X', or even 'good at X'. | |
| ▲ | ACCount37 2 days ago | parent | prev [-] | | A major weak point for AIs is long-term tasks and agentic behavior. Which, as it turns out, is its own realm of behavior that's hard to learn from text data, and also somewhat separate from g - the raw intelligence component. An average human still has LLMs beat there, which might be distorting people's perceptions. But task length horizons are going up, so that moat holding isn't a given at all. |
|
|
|
| ▲ | plastic-enjoyer 2 days ago | parent | prev | next [-] |
| > they are better at empathy and conversation than most Imagine the conversations this guy must have with people IRL lol |
| |
| ▲ | roenxi 2 days ago | parent [-] | | Do you not talk to ordinary people? They are not intelligent conversationalists. They tend to be more of the "lol" variety. | | |
| ▲ | irishcoffee 2 days ago | parent | next [-] | | > Do you not talk to ordinary people? They are not intelligent conversationalists. They tend to be more of the "lol" variety. Stating that easygoing people are not also intelligent conversationalists sounds like a _you_ problem dripping with ignorance. Maybe get off the socials for a bit or something; you might need a change of perspective. | |
| ▲ | lawn 2 days ago | parent | prev [-] | | I think you might be onto something. I'm getting serious "lol" vibes from your comment. |
|
|
|
| ▲ | superultra 2 days ago | parent | prev | next [-] |
| I’d say an increasingly common view is that the way LLMs work is so wildly different from how we humans operate that they are effectively an alien intelligence pretending to be human. We have never fully understood, and still don’t fully understand, why LLMs work the way they do. I’m of the opinion that AGI is an anthropomorphizing of digital intelligence. The irony is that as LLMs improve, they will both become better at “pretending” to be human and even more alien in the way they work. This will become even more true once we allow LLMs to train themselves. If that’s the case, then I don’t think human criteria are really applicable here, except as an evaluation of how it relates to us. Perhaps your list is applicable to LLMs relative to humans, but many think we need new metrics for intelligence. |
|
| ▲ | Ekaros 2 days ago | parent | prev | next [-] |
| I would expect sufficient "General Intelligence" to be able to correct itself mid-process. I hear far too often that you need to restart something to get it to work. That doesn't sound sufficient for general intelligence to me. For that, you should be able to leave it running all the time while it learns and progresses at run-time. What we have is a bunch of tools for specific tasks, which again doesn't sound general. |
|
| ▲ | kkapelon 2 days ago | parent | prev | next [-] |
| >What more are we asking AI to do? And can a normal human do it?

1. Learn from and improve yourself with each action you take
2. Create better editions/versions of yourself
3. Solve problems in areas you were not trained for, simply by trial and error, where you yourself decide whether what you are doing is correct or wrong |
|
| ▲ | oxag3n 2 days ago | parent | prev | next [-] |
| > What more are we asking AI to do? And can a normal human do it? Simple - go through an on-boarding training, chat to your new colleagues, start producing value. |
|
| ▲ | lynx97 2 days ago | parent | prev | next [-] |
| > they are better at empathy Are you serious or sarcastic? Do you really consider this empty type of sycophancy as empathy? |
| |
| ▲ | roenxi 2 days ago | parent [-] | | Compared to the average human? Yes. Most people are distressingly bad at empathy to the point where just repeating what they just heard back to an interlocutor in a stressful situation could be considered an advanced technique. The average standard of empathy isn't that far away from someone who sees beatings as a legitimate form of communication. Humans suck at empathy, especially outside a tight in-group. But even in-group they lack ability. | | |
| ▲ | lynx97 2 days ago | parent | next [-] | | I am sorry for you. You must surround yourself with a lot of awful people. That is pretty sad to read. Get out of whatever you are stuck in, it can't be good for you. | | |
| ▲ | roenxi 2 days ago | parent | next [-] | | The stats are something like 1 in 10 people experience domestic violence. Unless someone takes a vow of silence and goes to live in the wilderness, there is no way to avoid awful people. They're just people. The average standard is not high. Although I suppose an argument could be made that wife-beaters are actually just evil rather than low-empathy, I think the point is still clear enough. | | |
| ▲ | dmurvihill 2 days ago | parent | next [-] | | What you are saying is that 9 out of 10 never experience domestic violence despite cohabitating with 10-20 other people during their lifetime. | | |
| ▲ | roenxi a day ago | parent [-] | | No, what I'm saying is that around 6-8 out of 10 people are worse at empathy than a chatbot, in my estimation. And even if that gets knocked down a little I still don't see how people would argue that humans have some unassailable edge. Chatbots are an AGI system. Especially the omni-models. |
| |
| ▲ | lynx97 2 days ago | parent | prev [-] | | I don't know why you picked that particular example to make your point. I do notice, though, that you framed it in a pretty sexist way. You realize the unreported number of men abused by their wives is higher than what the media reports? In any case, my point is that violence in relationships happens both ways. Why that confirms that humans are in general not capable of empathy is beyond me. My point still stands. You can't fix the whole world. BUT you definitely can make sure you surround yourself with decent people, at least to a certain extent. I know the drill. I have a disability, and I had (and have) to deal with people treating me in a very inappropriate way. Patronisation, not being taken seriously, you name it, I know it. But that still didn't make me the frustrated kind of person you seem to be. You have a choice. Just drop toxic people and you will see, most humans can be pretty decent. | |
| ▲ | roenxi 2 days ago | parent [-] | | > You realize the unreported number of men abused by their wives is higher than what the media reports? In any case, my point is that violence in relationships happens both ways. Yes. That is in fact pretty much exactly what I'm arguing. People are often horrible. > BUT you definitely can make sure you surround yourself with decent people... People generally can't. Otherwise there'd be much more noticeable social stratification isolating abusive spouses, instead of their being politely ignored. And if people could, you would - you note just afterwards that you can't avoid being treated inappropriately. And you aren't even trying to identify people who are generally low in empathy; you're just trying to find people who don't treat you badly. > me the frustrated kind of person you seem to be. The irony, in a thread on empathy. What frustration? Being an enthusiastic human-observer isn't usually frustrating. Some days, I suppose. But that sort of guess is the type of thing AIs don't tend to do - they typically focus rather carefully on the actual words used and the ideas being expressed. | | |
| ▲ | lynx97 2 days ago | parent [-] | | An AI (LLM) neither focuses on words nor on ideas. What you are promoting is plain escapism, which sounds rather unhealthy to me. To each their own. But really, get some help. There are ways, many ways, to deal with depression other than waiting for a digital god. |
|
|
| |
| ▲ | 2 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | gregoryl 2 days ago | parent | prev [-] | | Truly, you need to spend time with literally anyone other than the people you currently engage with. | | |
| ▲ | roenxi 2 days ago | parent [-] | | If you object to HN you didn't have to create an account. And I reckon even a sycophantic AI would still have managed more empathy in its response. They tend to be a bit wordy, and they attempt to actually engage with the substance of what people say too. | |
| ▲ | Capricorn2481 2 days ago | parent [-] | | > If you object to HN They didn't even mention HN. Are you saying the people you associate with are just on HN? Don't spend all your time on HN or base your opinions of humanity on it. People on here are probably the least representative slice of society. That's not rejecting it, that's just common sense. |
|
|
|
|
|
| ▲ | kjhkjhksdhksdhk 2 days ago | parent | prev | next [-] |
| Exist in real time. They don't; we do. |
| |
| ▲ | popoflojo 2 days ago | parent | next [-] | | That's an interesting bar. What is real time? One day they are likely to be faster than us at any response. | |
| ▲ | ACCount37 2 days ago | parent | prev [-] | | No, you pretend you do. You've got about 200 ms of round-trip delay across your nervous system. Some of the modern AI robotics systems already have that beat, sensor data to actuator action. | |
| ▲ | irishcoffee 2 days ago | parent [-] | | > Some of the modern AI robotics systems already have that beat, sensor data to actuator action. What do LLMs have to do with this? You ever see a machine beat a speed cube? So we’ve had “AI” all along and never knew it?! Oh right, comparing meatspace messaging speeds to copper or fiber doesn’t make sense. Good point. | | |
| ▲ | ACCount37 2 days ago | parent [-] | | Look up Gemini Robotics-ER 1.5 and the likes. Anyone who's trying to build universal AI-driven robots converges on architectures like that. Larger language-based models driving smaller "executive" models that operate in real time at a high frequency. |
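The architecture ACCount37 describes - a slow, expensive language-model planner steering a small, fast executive loop - can be sketched roughly as below. This is an illustrative toy, not Gemini Robotics' actual API; `slow_planner`, `fast_executive`, and the observation/action dictionaries are all hypothetical stand-ins.

```python
# Toy sketch of a two-tier robot control loop: a slow "planner"
# (stand-in for a large language model) refreshes a high-level goal
# infrequently, while a fast "executive" converts the current goal
# into an actuator command on every control tick.

def slow_planner(observation):
    # Placeholder for an expensive model call; returns a high-level goal.
    return "approach" if observation["distance"] > 1.0 else "grasp"

def fast_executive(goal, observation):
    # Cheap reactive policy that runs at control-loop frequency.
    if goal == "approach":
        return {"move": min(observation["distance"], 0.5)}
    return {"close_gripper": True}

def run(ticks, plan_every=10):
    obs = {"distance": 3.0}   # toy world state: distance to target
    goal, actions = None, []
    for t in range(ticks):
        if t % plan_every == 0:           # slow path: re-plan occasionally
            goal = slow_planner(obs)
        act = fast_executive(goal, obs)   # fast path: act every tick
        if "move" in act:
            obs["distance"] -= act["move"]
        actions.append((t, goal, act))
    return actions
```

The point of the split is latency: the executive keeps acting at high frequency between planner updates, so the robot stays responsive even though the big model is only consulted every `plan_every` ticks.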
|
|
|
|
| ▲ | exasperaited 2 days ago | parent | prev [-] |
| > they are better at empathy and conversation than most. Do you know actual people? Even literal sociopaths are a bit better at empathy than ChatGPT (I know because I have met a couple). And as for conversation? Are you serious? ChatGPT does not converse in a meaningful sense at all. |
| |
| ▲ | roenxi 2 days ago | parent [-] | | Sure, I assume some sociopaths would have extremely high levels of cognitive empathy. It is really a question of semantics - but the issue is that I don't think the people arguing against AGI can define their terms at all without either the current models qualifying as AGI, or falling into the classic Diogenes "Behold, a man!" problem: the definition doesn't really capture anything useful, like intelligence. Traditionally the Turing test has been close to what people mean, but for obvious reasons nobody cares about it any more. |
|