| ▲ | dmurvihill 2 days ago |
| This says it all: > I haven’t met anyone who doesn’t believe artificial intelligence has the potential to be one of the biggest technological developments of all time, reshaping both daily life and the global economy. You’re trying to weigh in on this topic and you didn’t even _talk_ to a bear? |
|
| ▲ | obruchez 2 days ago | parent | next [-] |
| It's difficult to know what people really believe, especially after only a few minutes of discussion, but I would say most people I talk to don't believe AGI is even possible. And they probably think their life won't be changed much by LLMs, AI, etc. |
| |
| ▲ | dmurvihill 2 days ago | parent | next [-] | | I believe AGI is possible. Also that LLMs are a dead end as far as that goes. | |
| ▲ | roenxi 2 days ago | parent | prev [-] | | I haven't heard a good argument for why AGI isn't already here. It has average humans beat and seems generally to be better-than-novice in any given field that requires intelligence. They play Go, they write music, they've read Shakespeare, they are better at empathy and conversation than most. What more are we asking AI to do? And can a normal human do it? | | |
| ▲ | Peritract 2 days ago | parent | next [-] | | I think you should consider carefully whether AI is actually better at these things (especially any one given model at all of them), or if your ability to judge quality in these areas is flawed/limited. | | |
| ▲ | roenxi 2 days ago | parent [-] | | So? Do I not count as a benchmark of basic intelligence now? I've got a bunch of tests and whatnot that suggest I'm reasonably above average at thinking. There is this fascinating trend where people would rather bump humans out of the naturally intelligent category than admit AIs are actually already at an AGI standard. If we're looking for intelligent conversation, AI is definitely above average. Above-average intelligence isn't a high-quality standard. Intelligence is nowhere near sufficient to get to high quality on most things. As seen with the current generations of AGI models. People seem to be looking for signs of wild superintelligence, like being a polymath at the peak of human performance. | | |
| ▲ | Peritract 2 days ago | parent | next [-] | | A lot of people who are also above average according to a bunch of tests disagree with you. Even if we take 'above average' on some tests to mean in every area--above average at literacy, above average at music, above average at empathy--it's still clear that many people have higher standards for these things than you. I'm not saying definitively that this means your standards are unreasonably easy to meet, but I do think it's important to think about it, rather than just assume that--because it impresses you--it must be impressive in general. When AI surprises any one of us, it's a good idea to consider whether 'better than me at X' is the same as 'better than the average human at X', or even 'good at X'. | |
| ▲ | ACCount37 2 days ago | parent | prev [-] | | A major weak point for AIs is long term tasks and agentic behavior. Which is, as it turns out, its own realm of behavior that's hard to learn from text data, and also somewhat separate from g - the raw intelligence component. An average human still has LLMs beat there, which might be distorting people's perceptions. But task length horizon is going up, so that moat holding isn't a given at all. |
|
| |
| ▲ | plastic-enjoyer 2 days ago | parent | prev | next [-] | | > they are better at empathy and conversation than most Imagine the conversations this guy must have with people IRL lol | | |
| ▲ | roenxi 2 days ago | parent [-] | | Do you not talk to ordinary people? They are not intelligent conversationalists. They tend to be more of the "lol" variety. | | |
| ▲ | irishcoffee 2 days ago | parent | next [-] | | > Do you not talk to ordinary people? They are not intelligent conversationalists. They tend to be more of the "lol" variety. Stating that easygoing people are not also intelligent conversationalists sounds like a _you_ problem dripping with ignorance. Maybe get off the socials for a bit or something, you might need a change of perspective. | |
| ▲ | lawn 2 days ago | parent | prev [-] | | I think you might be onto something. I'm getting serious "lol" vibes from your comment. |
|
| |
| ▲ | superultra 2 days ago | parent | prev | next [-] | | I’d say that an increasingly common strand is that the way LLMs work is so wildly different from how we humans operate that it is effectively an alien intelligence pretending to be human. We have never fully understood, and still don't, why LLMs work the way they do. I’m of the opinion that AGI is an anthropomorphizing of digital intelligence. The irony is that as LLMs improve, they will both become better at “pretending” to be human, and even more alien in the way they work. This will become even more true once we allow LLMs to train themselves. If that’s the case then I don’t think that human criteria are really applicable here except in an evaluation of how it relates to us. Perhaps your list is applicable in LLMs' relation to humans, but many think we need some new metrics for intelligence. | |
| ▲ | Ekaros 2 days ago | parent | prev | next [-] | | I would expect sufficient "General Intelligence" to be able to correct itself mid-process. I hear way too often that you need to restart something to get it to work. This to me doesn't sound sufficient yet for general intelligence. For that you should be able to leave it running all the time and have it learn and progress during run-time. We have a bunch of tools for specific tasks. Again, this doesn't sound general. | |
| ▲ | kkapelon 2 days ago | parent | prev | next [-] | | >What more are we asking AI to do? And can a normal human do it? 1. Learn/Improve yourself with each action you take
2. Create better editions/versions of yourself
3. Solve problems in areas that you were not trained for, simply by trial and error, where you yourself decide if what you are doing is correct or wrong | |
| ▲ | oxag3n 2 days ago | parent | prev | next [-] | | > What more are we asking AI to do? And can a normal human do it? Simple - go through an on-boarding training, chat to your new colleagues, start producing value. | |
| ▲ | lynx97 2 days ago | parent | prev | next [-] | | > they are better at empathy Are you serious or sarcastic? Do you really consider this empty type of sycophancy as empathy? | | |
| ▲ | roenxi 2 days ago | parent [-] | | Compared to the average human? Yes. Most people are distressingly bad at empathy, to the point where simply repeating what they just heard back to an interlocutor in a stressful situation could be considered an advanced technique. The average standard of empathy isn't that far away from someone who sees beatings as a legitimate form of communication. Humans suck at empathy, especially outside a tight in-group. But even in-group they lack ability. | | |
| ▲ | lynx97 2 days ago | parent | next [-] | | I am sorry for you. You must surround yourself with a lot of awful people. That is pretty sad to read. Get out of whatever you are stuck in, it can't be good for you. | | |
| ▲ | roenxi 2 days ago | parent | next [-] | | The stats are something like 1 in 10 people experience domestic violence. Unless someone takes a vow of silence and goes to live in the wilderness there is no way to avoid awful people. They're just people. The average standard is not high. Although I suppose an argument could be made that wife-beaters are actually just evil rather than being low-empathy but I think the point is still clear enough. | | |
| ▲ | dmurvihill 2 days ago | parent | next [-] | | What you are saying is that 9 out of 10 never experience domestic violence despite cohabitating with 10-20 other people during their lifetime. | | |
| ▲ | roenxi a day ago | parent [-] | | No, what I'm saying is that around 6-8 out of 10 people are worse at empathy than a chatbot, in my estimation. And even if that gets knocked down a little I still don't see how people would argue that humans have some unassailable edge. Chatbots are an AGI system. Especially the omni-models. |
| |
| ▲ | lynx97 2 days ago | parent | prev [-] | | I don't know why you picked that particular example to make your point. I do notice though that you framed it in a pretty sexist way. You realize the dark figure of men getting abused by their wives is higher than the media reports? In any case, my point is, violence in relationships happens both ways. Why that confirms that humans are in general not capable of empathy is beyond me. My point still stands. You can't fix the whole world. BUT, you definitely can make sure you surround yourself with decent people, at least to a certain extent. I know the drill. I have a disability, and I had (and have) to deal with people treating me in a very inappropriate way. Patronisation, not being taken seriously, you name it, I know it. But that still didn't make me the frustrated kind of person you seem to be. You have a choice. Just drop toxic people and you will see, most humans can be pretty decent. | | |
| ▲ | roenxi 2 days ago | parent [-] | | > You realize the dark figure of men getting abused by their wives is higher than the media reports? In any case, my point is, violence in relationships happens both ways. Yes. That is in fact pretty much exactly what I'm arguing. People are often horrible. > BUT, you definitely can make sure you surround yourself with decent people... People generally can't. Otherwise there'd be a bunch more noticeable social stratification to isolate abusive spouses instead of it being politely ignored. And if people could, you would - you note in the next sentence that you can't avoid being dealt with in an inappropriate way. And you aren't even trying to identify people who are generally low empathy, you're just trying to find people who don't treat you badly. > me the frustrated kind of person you seem to be. The irony in a thread on empathy. What frustration? Being an enthusiastic human-observer isn't usually frustrating. Some days I suppose. But that sort of guess is the type of thing that AIs don't tend to do - they typically do focus rather carefully on the actual words used and ideas being expressed. | | |
| ▲ | lynx97 2 days ago | parent [-] | | An AI (LLM) neither focuses on words nor on ideas. What you are promoting is plain escapism, which sounds rather unhealthy to me. To each their own. But really, get some help. There are ways, many ways, to deal with depression, other than waiting for a digital god. |
|
|
| |
| ▲ | 2 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | gregoryl 2 days ago | parent | prev [-] | | Truly, you need to spend time with literally anyone other than the people you currently engage with. | | |
| ▲ | roenxi 2 days ago | parent [-] | | If you object to HN you didn't have to create an account. And I still reckon even a sycophantic AI would still have managed more empathy in its response. They tend to be a bit wordy and attempt to actually engage with the substance of what people say too. | | |
| ▲ | Capricorn2481 2 days ago | parent [-] | | > If you object to HN They didn't even mention HN. Are you saying the people you associate with are just on HN? Don't spend all your time on HN or base your opinions of humanity on it. People on here are probably the least representative slice of society. That's not rejecting it, that's just common sense. |
|
|
|
| |
| ▲ | kjhkjhksdhksdhk 2 days ago | parent | prev | next [-] | | exist in realtime. they don't, we do. | | |
| ▲ | popoflojo 2 days ago | parent | next [-] | | That's an interesting bar. What is real time? One day they are likely to be faster than us at any response. | |
| ▲ | ACCount37 2 days ago | parent | prev [-] | | No, you pretend you do. You've got 200ms of round-trip delay across your nervous system. Some of the modern AI robotics systems already have that beat, sensor data to actuator action. | | |
| ▲ | irishcoffee 2 days ago | parent [-] | | > Some of the modern AI robotics systems already have that beat, sensor data to actuator action. What do LLMs have to do with this? You ever see a machine beat a speed cube? So we’ve had “AI” all along and never knew it?! Oh right, comparing meatspace messaging speeds to copper or fiber doesn’t make sense. Good point. | | |
| ▲ | ACCount37 2 days ago | parent [-] | | Look up Gemini Robotics-ER 1.5 and the likes. Anyone who's trying to build universal AI-driven robots converges on architectures like that. Larger language-based models driving smaller "executive" models that operate in real time at a high frequency. |
|
|
| |
| ▲ | exasperaited 2 days ago | parent | prev [-] | | > they are better at empathy and conversation than most. Do you know actual people? Even literal sociopaths are a bit better at empathy than ChatGPT (I know because I have met a couple). And as for conversation? Are you serious? ChatGPT does not converse in a meaningful sense at all. | | |
| ▲ | roenxi 2 days ago | parent [-] | | Sure, I assume some sociopaths would have extremely high levels of cognitive empathy. It is really a question of semantics - but the issue is I don't think the people arguing against AGI can define their terms at all without the current models being AGI or falling into the classic Diogenes behold! a man! problem of the definition not really capturing anything useful - like intelligence. Traditionally the Turing test has been close to what people mean, but for obvious reasons nobody cares about it any more. |
|
|
|
|
| ▲ | YetAnotherNick 2 days ago | parent | prev | next [-] |
| > artificial intelligence has the potential to be one of the biggest technological developments of all time, reshaping both daily life and the global economy. This seems like a factually correct sentence. Emphasis on "potential". |
|
| ▲ | tim333 2 days ago | parent | prev | next [-] |
You can be a bear and still think AI will be big one day. It's quite plausible that LLMs will remain limited, that we don't find anything better for decades, and that the stocks crash. But saying AI will never be a big thing is just unrealistic. |
| |
| ▲ | Yizahi 2 days ago | parent [-] | | I think we should split the definition somehow, between what LLMs can do today (or in the next few years) and how big a thing this particular capability can be (a derivative of the capability). And then what some future AI could do, and how big a thing that future capability could be. I regularly see people who distinguish between current and future capabilities, but then still lump societal impact (how big a thing could be) into one projection. The key bubble question is - if that future AI is sufficiently far away (for example if there will be a gap, a new "AI winter", for a few decades), then does this current capability justify the capital expenditures, and if not then by how much? | | |
| ▲ | tim333 2 days ago | parent [-] | | Yeah, and how long can OpenAI etc. hang on without making profits. |
|
|
|
| ▲ | sandworm101 2 days ago | parent | prev | next [-] |
Once upon a time in SF I was told that human-driven cars would be illegal, or too expensive to insure, by the end of the decade. That was last decade. The modern tech economy is all about bubbles built and sustained by hype people. Vertical farming. Pot replacing alcohol. Blockchains replacing lawyers. The metaverse replacing everything. Sure, we are in an AI bubble but we also ride atop a dozen others. AI data centers in space? In five years? Really? No fiber connections? Does any sane person actually believe this? No. But if that is what keeps the billions flowing upwards then who am I to judge. |
| |
| ▲ | lynx97 2 days ago | parent | next [-] | | Not just in SF. "Journalists" love to pick up these inflated futuristic projections and run with 'em, since they sound so cozy and generate clicks. I still remember the "Google Car" craze from the early 2010s. And if you tell people who read and believe this futuristic nonsense that it is inflated, you get pushback, because, yeah, why should a single person know better than an incentivized journalist... |
| ▲ | TheAceOfHearts 2 days ago | parent | prev [-] | | I'm quite skeptical of the data centers in space claim, but I think a proof of concept can certainly be achieved in five years. I'm less convinced that we'll ever see widescale deployment of data center satellites. And to be fair, I've read that Google's timelines for this project extend far beyond a 5 year horizon. I think it's a rational research direction for them, since it gets people excited and historically many space-related innovations have been repurposed to benefit other industries. Best case scenario would be that research done in support of this data centers in space project leads to innovations that can be applied towards normal data centers. | | |
| ▲ | Yizahi 2 days ago | parent | next [-] | | Someone can build a server in space, pairing a puny underpowered rack with a handful of servers to a ginormous football-field-sized solar panel, plus a heat radiator, plus a heavy-as-hell insulated battery to survive spending tens of minutes in the planet's shadow every orbit. We can do that from existing components and launch on existing rockets, no problem. Why though? Why would anyone need a server in space in the first place? What is the benefit of that location, necessitating a cost an order of magnitude higher (or more) compared to a warehouse anywhere on the planet? | |
| ▲ | popoflojo 2 days ago | parent | prev | next [-] | | Do data centers on Earth have no employees present, and none who ever come on site for the life of the data center? Prove that out on earth and I will start to believe your space data center. | | |
| ▲ | dmurvihill 2 days ago | parent [-] | | I'm quite sure that can be done, if you jack up the price and pare down requirements enough. The question is, would the result be useful. |
| |
| ▲ | sandworm101 a day ago | parent | prev [-] | | Try asking for a 24/7 multi-gig data connection to a space server. Space suddenly doesn't seem so big once you start playing around with RF allocations. |
|
|
|
| ▲ | bitwize 2 days ago | parent | prev | next [-] |
AI is changing the world and has changed the world already. See, AI is a field... and it's also a buzzword: once a technology passes out of fashion and becomes part of the fabric of computing, it is no longer called AI in the public imagination. GOFAI techniques, like rules engines and propositional-logic inference, were certainly considered AI in the 1970s and 1980s, and are still used; they're just no longer called that. The statistical methods behind machine learning, transformers, and LLMs are certainly game changers for the field. Whether they will usher in a revolutionary new economy, or simply be accepted as sometimes-useful computational techniques as their limitations and the boundaries of their benefits become more widely known, remains to be seen, but I think it will be closer to the latter than the former. |
|
| ▲ | thenaturalist 2 days ago | parent | prev | next [-] |
| Also equating artificial intelligence with LLMs. I get that laymen and the media do it, but imo this looks really bad for an investor. |
| |
| ▲ | ACCount37 2 days ago | parent | next [-] | | What's the alternative? Is there literally any AI tech more promising and disruptive than LLMs? Or should we buy into that "it's not ackhtually AI" meme? | | |
| ▲ | charcircuit 2 days ago | parent | next [-] | | Visual reasoning models. Having a computer able to understand what is happening in the real world is very useful. | |
| ▲ | ACCount37 2 days ago | parent [-] | | Those are LLMs with an extra modality bolted to them. Which is good - that it works this well speaks of the generality of autoregressive transformers, and the "reasoning over image data" progress with things like Qwen3-VL is very impressive. It's a good capability to have. But it's not a separate thing from the LLM breakthrough at all. Even the more specialized real time robotics AIs often have a bag of transformers backed by an actual LLM. |
| |
| ▲ | ares623 2 days ago | parent | prev [-] | | The alternative is to be f*cking honest | | |
| ▲ | bluebarbet 2 days ago | parent | next [-] | | This contribution adds nothing to the conversation except gratuitous venom. | | |
| ▲ | Peritract 2 days ago | parent | next [-] | | I don't think that's fair; one of the most significant criticisms of the AI industry is the number of misleading claims made by its spokespeople, which has had a significant effect on public perception. The parent comment is a relevant expression of that. | |
| ▲ | dmurvihill 2 days ago | parent | prev [-] | | Well deserved and badly needed venom* |
| |
| ▲ | ACCount37 2 days ago | parent | prev [-] | | "Fucking honest" how? If I'm being fucking honest, then this generation of LLMs might already beat most humans on raw intelligence, AI progress shows no signs of stopping, and "it's not actually thinking" is just another "AI effect" cope that humans come up with to feel more important and more exceptional. Or is this not the "fucking honesty" you want? | | |
|
| |
| ▲ | askl 2 days ago | parent | prev [-] | | > but imo this looks really bad for an investor. Why? Would you expect an investor to understand what they're investing in? | | |
|
|
| ▲ | lm28469 2 days ago | parent | prev | next [-] |
| "My technosolutionist bubble says it's not a bubble, trust me bro" |
| |
| ▲ | paganel 2 days ago | parent | next [-] | | > technosolutionist I'm going to steal this for my arrr rspod conversations. | | | |
| ▲ | thenaturalist 2 days ago | parent | prev [-] | | „Just XYZ more billion, bro, and then we’re gonna have AGI! For real bro, pleaseeee!“ | | |
| ▲ | edhelas 2 days ago | parent | next [-] | | Why can't you just prompt a way to AGI without spending all that money? | | |
| ▲ | popoflojo 2 days ago | parent [-] | | Honestly this is the best response. If the AI was actually so great, it could create better AI, and the future would already be here | | |
| |
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | Yizahi 2 days ago | parent | prev [-] | | "Sam Altman, a man best known for needing a few more billions at any given moment." (c) HN best-of-2025 :) |
|
|
|
| ▲ | lawn 2 days ago | parent | prev | next [-] |
That AI has the potential to be extremely disruptive does not prevent the current speculative boom from being a bubble. People seem to have forgotten about the dotcom bubble. |
|
| ▲ | keybored 2 days ago | parent | prev | next [-] |
| I never talk to people who don’t wear suits. |
|
| ▲ | danybittel 2 days ago | parent | prev | next [-] |
| From the article: ...AI is currently the subject of great enthusiasm. If that enthusiasm doesn’t produce a bubble conforming to the historical pattern, that will be a first. |
|
| ▲ | re-thc 2 days ago | parent | prev [-] |
| > and you didn’t even _talk_ to a bear? You know how to? What language does it speak? |