| ▲ | johndhi 3 days ago |
| I liked this analysis quite a bit. A few reactions: (1) I don't think a tech company needs a monopoly to stop caring about its customers and focus on hype instead. Plenty of public tech companies do this just to chase stock price and investors. (2) It's weirdly the opposite of the mindset Bezos talked about in that famous old video, where he says "creating the best experience for the customer" is his business strategy. Interesting. I think ultimately companies may be misguided -- because, in fact, ChatGPT is succeeding because they created a better search experience. And hype bandwagoners may fail because, long-term, customers don't like their products. In other words, this is a bad strategy. (3) What's weird about AI -- and I guess all hype trains -- is how part of me feels like it's hype, but part of me also sees the value in investing in it and its potential. The hype train itself and the crazy amount of money being spent on it almost de facto mean it IS important and WILL be important. It's as if a market and consumer interest have been created by the hype machine itself. |
|
| ▲ | jerf 3 days ago | parent | next [-] |
| "what's weird about AI -- and I guess all hype trains -- is how part of me feels like it's hype, but part of me also sees the value in investing in it and its potential." The DotCom bubble is an instructive historical example. Pretty much every wild promise made during the bubble has manifested, right down to delivering pet food. It's just that for the bubble to have been worthwhile, we would essentially have had the internet of 2015 or 2020 delivered in 2001. (And because people forget, it is not too far off to say that would be like trying to deliver the internet of 2020 on machines with specs comparable to a Nintendo Wii. I'm trying to pick a game console as a sort of touchpoint, and there probably isn't a perfect comparison, but based on the machines I had in 2000 the specs are roughly inline with a Wii, at least by the numbers. Though the Wii would have murdered my 2000-era laptop on graphics.) I don't know that the AI bubble will have a similar 20-year lag, but I also think it's out over its skis. What we have now is both extremely impressive, but also not justifying the valuations being poured into it in the here & now. There's no contradiction there. In fact if you look at history there's been all sorts of similar cases of promising technologies being grotesquely over-invested in, even though they were transformative and amazing. If you want to go back further in history, the railroad bubble also has some similarities to the Dot Com bubble. It's not that railroad wasn't in fact a completely transformative technology, it's just that the random hodgepodge of a hundred companies slapping random sizes and shapes of track in half-random places wasn't worth the valuations they were given. The promise took decades longer to manifest. |
| |
| ▲ | zahlman 3 days ago | parent | next [-] | | > (And because people forget, it is not too far off to say that would be like trying to deliver the internet of 2020 on machines with specs comparable to a Nintendo Wii. I'm trying to pick a game console as a sort of reference point, and there probably isn't a perfect comparison, but based on the machines I had in 2000, the specs are roughly in line with a Wii, at least by the numbers. Though the Wii would have murdered my 2000-era laptop on graphics.) It depresses me to think how much of the 2020 (or 2025) Internet that is actually of value ought to be able to run on hardware that old. Or so I imagine, anyway. I wonder if anyone's tried to benchmark simple CSS transitions and SVG rendering on ancient CPUs. | | |
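A rough sketch of how the SVG half of that benchmark might look, under loud assumptions: cairosvg is used as a convenient CPU-side rasterizer, not a browser engine, the test image is made up, and CSS transitions would need a real browser to measure at all.

```python
# Crude CPU-side SVG rasterization benchmark. cairosvg is a stand-in
# for a browser's renderer; absolute numbers won't match any real
# browser, but relative timings across old and new CPUs are telling.
import time

import cairosvg  # pip install cairosvg

# A made-up test image: one filled shape plus some text.
SVG = b"""<svg xmlns="http://www.w3.org/2000/svg" width="800" height="600">
  <circle cx="400" cy="300" r="250" fill="steelblue"/>
  <text x="400" y="300" font-size="48" text-anchor="middle">hello</text>
</svg>"""

N = 100
start = time.perf_counter()
for _ in range(N):
    cairosvg.svg2png(bytestring=SVG)  # rasterize to PNG bytes, discard
elapsed = time.perf_counter() - start
print(f"{N} rasterizations in {elapsed:.2f}s ({elapsed / N * 1000:.1f} ms each)")
```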
| ▲ | BobbyTables2 3 days ago | parent [-] | | The gap was also in the amount of data. Remember waiting something like an hour to watch a 60-second movie preview over dialup? I get a reminder every time I load a modern website in an area with very poor reception. It appears not to load at all — not due to lack of connectivity, but due to the speeds and latencies being too slow for the amount of crap being fetched. GPRS and EDGE were many times faster than dialup — they must have seemed like a dream — but they're now utterly unusable. | | |
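Back-of-envelope arithmetic bears this out; the link speeds and file sizes below are ballpark assumptions for illustration, not measured figures.

```python
# Rough download-time arithmetic; all speeds and sizes are assumed,
# round numbers, not measurements.
LINKS_KBPS = {
    "56k dialup (~40 kbps real-world)": 40,
    "GPRS (~50 kbps)": 50,
    "EDGE (~200 kbps)": 200,
}
PAYLOADS_MB = {
    "60-second movie preview, ca. 2000 (~20 MB)": 20.0,
    "typical modern web page (~2.5 MB)": 2.5,
}

for payload, size_mb in PAYLOADS_MB.items():
    for link, kbps in LINKS_KBPS.items():
        seconds = size_mb * 8 * 1000 / kbps  # MB -> kilobits, then / kbps
        print(f"{payload} over {link}: ~{seconds / 60:.1f} min")
```

With these figures the preview really does take about an hour over dialup (~67 min), and even a single modern page needs minutes over dialup and well over a minute on EDGE.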
| ▲ | svachalek 3 days ago | parent | next [-] | | The sad part of it is, pretty much everything on that modern website that makes it too fat to load is completely unnecessary. You can create a beautiful, full-featured page that's only a few kilobytes. | |
| ▲ | zahlman 3 days ago | parent | prev [-] | | That is indeed one of the differences I had in mind. |
|
| |
| ▲ | lubujackson 3 days ago | parent | prev | next [-] | | It is a valuable and relevant lesson - when something wide and structural manifests (personal computing, the internet, smartphones, AI), lots of people will be able to see the coming future with high fidelity, but will tend to overestimate the speed of change, because we gloss over all the many, many small challenges to get from point A to point B. Yes, now it feels like smartphones came of age overnight and were always inevitable. But it really took more than a decade to reach the level of integration and polish that we now take for granted. UI on phone apps was terrible, speeds were terrible, screen resolutions were terrible, processing was minimal, batteries didn't last, roaming charges and spotty 3G coverage were constant problems, etc. For years, you couldn't copy and paste on an iPhone, stuff like that. All these structural problems were rubbed away over time and eventually forgotten. But so many of these small tweaks needed to take place before we could "fill in the blanks" and reach the level of ubiquity where something like an Uber driver using their phone for directions is unremarkable. | | |
| ▲ | mrob 3 days ago | parent | next [-] | | UI on phone apps is still terrible. Have you ever used a desktop with high-end gaming peripherals (fast monitor/keyboard/mouse), running a light desktop environment such as LXQt on Xorg, with animations disabled? The feeling of responsiveness leaves all mobile devices in the dust. Any modern CPU+SSD is fast enough, but good peripherals are still rare and make a huge difference. Most phones are still running 60Hz displays. A touchscreen is inherently clumsy compared to mouse+keyboard. Mobile UI feel is worse than that of desktop computers from the 90s. | | |
| ▲ | spiffotron 3 days ago | parent [-] | | "have you ever done this indescribably niche thing, which I believe renders your argument null and void, but is in fact not relatable in the slightest?" | | |
| ▲ | mrob 2 days ago | parent [-] | | Good UI latency used to be standard: https://danluu.com/input-lag/ (most recent HN discussion: https://news.ycombinator.com/item?id=33683278 ) The fact that it requires niche hardware and software (not "indescribably" niche) to achieve in modern times is a failure of the computer industry, not a problem with my argument. Low standards do not make mobile devices good. And touch screens are impossible to fix even with niche configurations. I have no way to shrink my fingers to pixel size, and no way to make them transparent. |
|
| |
| ▲ | 3 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | derefr 3 days ago | parent | prev [-] | | > And because people forget, it is not too far off to say that would be like trying to deliver the internet of 2020 on machines with specs comparable to a Nintendo Wii. I mean, we could totally have done that. There's nothing stopping you from delivering an experience like modern Amazon or Facebook or whatever in server-rendered HTML4. CSS3 and React get you fancy graphics and animations, and fast, no-repaint page transitions, but that's pretty much all they get you; we had everything else 25 years ago in MSIE6. You could have built a dynamically-computed product listing grid + shopping cart, or a dynamically-computed network-propagated news feed with multimedia post types + attached comment threads (save for video, which would have been impractical back then), on top of Perl CGI-bin scripts — or if you liked, a custom Apache module in C. And, in fact, some people did! There existed web services even in 1998 that did [various fragments of] these things! Most of them built in ASP or ColdFusion, mind you, and so limited to a very specific stack; but still, it was happening! It was just that the results were all incredibly jank, with no UX polish... but not because UX polish would have been impossible with the tools available at the time. (As I said, HTML4 was quite capable!) Rather, it was because all the professional HCI people were still mostly focused on native apps (with the few rare corporate UX vanguards "doing web stuff", working on siloed enterprise products like the MSDN docs); while the new and growing body of art-school "web design" types were all instead being trained mainly on the application of vertically-integrated design tools (ActiveX, Flash, maybe web layout via Photoshop 9-way slice export). | | |
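For flavor, here is a minimal sketch of the kind of server-rendered product listing derefr describes — written in Python for brevity where a 1998 shop would have used Perl, ASP, or ColdFusion; the catalog contents and the `cart.cgi` link target are made up for illustration.

```python
#!/usr/bin/env python3
# Minimal CGI-style "dynamic product grid": the server renders plain
# HTML per request, no client-side framework involved. The catalog
# list stands in for what would have been a database query, and
# cart.cgi is a hypothetical companion script.
import html

CATALOG = [
    ("Pet food, 10 lb bag", 12.99),
    ("56k external modem", 89.00),
    ("24x CD-ROM drive", 45.50),
]

def render_page() -> str:
    rows = "\n".join(
        f"<tr><td>{html.escape(name)}</td><td>${price:.2f}</td>"
        f'<td><a href="cart.cgi?add={i}">Add to cart</a></td></tr>'
        for i, (name, price) in enumerate(CATALOG)
    )
    return (
        "<html><body><h1>Products</h1>"
        f'<table border="1">{rows}</table>'
        "</body></html>"
    )

# The CGI contract: headers, a blank line, then the document.
print("Content-Type: text/html\r\n")
print(render_page())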
| ▲ | lstamour 3 days ago | parent [-] | | I agree with most of this post, except the part where you could actually do it. I'll be the first to admit that I was not in server rooms back then, but I've heard from those who were. The biggest advantage Amazon had over their competitors, for many years, was that they would take your order, tell you it was completed, and wait to charge your card until it shipped, because it was cheaper to write your order down than to spend expensive session compute waiting for the payment to go through. That kind of optimization was necessary because all the networks were slower or flakier then, including payment processing, which often relied on overnight batch processing that has become less visible today. Meanwhile, on the client side, web technologies had a lot of implicit defaults assuming pages on sites rather than apps and experiences. For example, we didn't originally have a way for JS to preserve back/forward button functionality when navigating in a SPA without using hash fragments in the URL. Without CSS features for it, support for RTL and LTR on the same website was basically nonexistent. I won't even get started on character sets, the poor support for dates that persists to this day, limited offline modes in a time when being offline was more common, and how browsers varied tremendously across platforms and versions back then, with their own unique sets of JS APIs and unique ideas of how to render webpages. It took the original Acid test and a bunch more tests that followed before we had anything close to cross-browser standards for newer web features. I still remember the snowman hack to get IE to submit forms with UTF-8 encoding, and that wasn't as bad as quirks mode or IE 5. Actually, maybe I disagree with most of this post. Don't get me wrong, I can see how it could have been done, but it's reductive in the extreme to say the only reason web services were jank is that UX polish didn't exist. If anything, the web is the reason UX is so good today - apps and desktop platforms have continuously copied the web for the past 28 years, from Windows ME with single-click everywhere to Spotify and other Electron apps invading the OS. I'm not going to devalue the HIG or equivalent, but desktop apps tended to evolve slowly, with each new OS release, while web apps evolved quickly, with each new website needing to write its own cross-platform conventions and thus needing its own design language. |
|
|
|
| ▲ | nottorp 3 days ago | parent | prev | next [-] |
| > ChatGPT is succeeding because they created a better search experience Funnily enough, no "AI" prophet mentions that, in spite of it being the most useful thing about LLMs. What I wonder is how long it will last. LLMs are being fed their own content by now, and someone will surely want to "monetize" them once the VC money starts to dry up a bit. At least two paths to enshittification. |
| |
| ▲ | api 3 days ago | parent [-] | | "A junior intern who has memorized the Internet" is how one member of our team described it, and it's still one of the best descriptions of these things I've heard. Sometimes I think these things are more like JPEGs for knowledge expressed as language. They're more AM (artificial memory) than AI (artificial intelligence). It's a blurry line, though. They can clearly do things that involve reasoning, but it's arguably because that's latent in the training data. So a JPEG is an imperfect analogy, since lossy image compressors can't do any reasoning about images. | | |
| ▲ | nottorp 3 days ago | parent | next [-] | | > They can clearly do things that involve reasoning No. > but it's arguably because that's latent in the training data. The internet is just bigger than what a single human can encounter. Plus a single human isn't likely to be able to afford to pay for all that training data the "AI" peddlers have pirated :) | | |
| ▲ | peterlk 3 days ago | parent | next [-] | | A dismissive “no” is not a helpful addition to this discussion. The truth is much more interesting and subtle than “no”. Directed stochastic processes that reach correct conclusions on novel logic problems more often than chance would allow mean that something interesting is happening, and it’s sensible to call that process “reasoning”. Does it mean that we’ve reached AGI? No. Does it mean the process reflects exactly what humans do? No. But dismissing “reasoning” out of hand also dismisses genuinely interesting phenomena. | | |
| ▲ | LamaOfRuin 3 days ago | parent | next [-] | | Only if you redefine "reasoning". This is something that the generative AI industry has succeeded in convincing many people of, but that doesn't mean everyone has to accede to the change. It's true that something interesting is happening. GP did not dispute that. That doesn't make it reasoning, and many people still believe that words should have meaning in order to discuss things intelligently. Language is ultimately a living thing and will inevitably change. This usually involves people fighting the change, and no one knows ahead of time which side will win. | | |
| ▲ | peterlk 3 days ago | parent | next [-] | | I don't think we need to redefine reasoning. Here's the definition of "reason" (the verb): "think, understand, and form judgments by a process of logic" If Claude 4 provides a detailed, logical breakdown in its "reasoning" (yeah, that usage is overloaded), then we could say that there was logical inference involved. "But wait!", I already hear someone saying, "That token output is just the result of yet another stochastic process, and isn't directing the AI in a deterministic, logical way, and thus it is not actually using logic; it's just making something that looks convincingly like logic, but is actually a hallucination of some stochastic process". And I think this is a good point, but I find it difficult to convince myself that what humans are doing is so different that we cannot use the word "reasoning". As a sidenote, I am _very_ tired of the semantic quagmire that is the current AI industry, and I would really appreciate a rigorous guide to all these definitions. | |
| ▲ | zahlman 3 days ago | parent | prev | next [-] | | > Only if you redefine "reasoning". This is something that the generative AI industry has succeeded in convincing many people of, but that doesn't mean everyone has to accede to that change. I agree. However, they can clearly do a reasonable facsimile of many things that we previously believed required reasoning to do acceptably. | | |
| ▲ | quesera 3 days ago | parent [-] | | Right -- we know that LLMs cannot think, feel, or understand. Therefore whenever they produce output that looks like the result of those things, we must either be deceived by a reasonable facsimile, or we simply misapprehended their necessity in the first place. But, do we understand the human brain as well as we understand LLMs? Obviously there's something different, but is it just a matter of degrees? LLMs have greater memory than humans, and lesser ability to correlate it. Correlation is powerful magic. That's pattern matching though, and I don't see a fundamental reason why LLMs won't get better at it. Maybe never as good as (smart) humans are, but with their superior memory, maybe that will often be adequate. | | |
| ▲ | CyberDildonics 2 days ago | parent [-] | | > they produce output that looks like the result of those things Is a cardboard cutout human to some degree? Is a recording a voice? What about a voice recording in a phone menu? > LLMs have greater memory than humans, So does a bank of hard drives by that metric. | | |
| ▲ | quesera 2 days ago | parent [-] | | (Memory Access + Correlation Skills) is a decent proxy for several of the many kinds of human intelligence. HDDs don't have correlation skills, but LLMs do. They're just not smart-human-level "good", yet. I am not sure whether I believe AGI will happen. To be meaningful, it would have to be above the level of a smart human. Building an army of disincorporated average-human-intelligence actors would be economically "productive" though. This is the future I see us trending toward today. Most humans are not special. This is dystopian, of course. Not in the "machines raise humans for energy" sort of way, but probably no less socially destructive. | | |
| ▲ | CyberDildonics 2 days ago | parent [-] | | > HDDs don't have correlation skills, but LLMs do So which is it, the memory or the correlation? I'll give you a hint: this is a trick question. | |
| ▲ | quesera 2 days ago | parent [-] | | I never suggested that it was one or the other. I think it's very obviously both. (and these two qualities are likely necessary, but not sufficient) | | |
| ▲ | CyberDildonics 2 days ago | parent [-] | | So according to you there is a threshold where someone who can't remember enough or correlate things stops being human? | | |
| ▲ | quesera 2 days ago | parent [-] | | Stops exhibiting human intelligence, on at least some of the many axes thereof, yes definitely. I feel like you're trying to gotcha me into some corner, but I'm not sure you're reading my comments fully. Or perhaps I'm being less clear than I think. I don't mean to be ungracious, but am I missing something here? | | |
| ▲ | CyberDildonics 2 days ago | parent [-] | | It's not a gotcha, I just don't think you're thinking through the implications of what you're saying when you think only in terms of being able to fake thought with statistics. | | |
| ▲ | quesera 2 days ago | parent [-] | | I'm saying that recall+correlation is sometimes enough to emulate some level of some forms of human intelligence. How frequently? How high? And which forms? These metrics are in flux. Today is very different from a few months ago. Enough to perform at the level of an ordinary retail service employee? I think this is probably within reach, soon. Do you think that's naive? |
|
|
|
|
|
|
|
|
| |
| ▲ | ToValueFunfetti 3 days ago | parent | prev [-] | | It would be useful to supply a definition if your point is that others' definition is wrong. Are you saying they don't deduce inferences from premises? Is it "deduce" that you take issue with? | |
| ▲ | zahlman 3 days ago | parent [-] | | They do not perform voluntary exploration of the consequences of applying logical rules for deduction; at best they pattern-match. Their model of conceptual meaning (which last I checked still struggles with negation, meta-reference and even simply identifying irrelevant noise) is not grounded in actual observational experience, but only in correlations between text tokens. I think it should be abundantly clear that what ChatGPT does when you ask it to play chess is fundamentally different from what Stockfish does. It isn't just weak and doesn't just make embarrassing errors in generating legal moves (like a blindfolded human might); it doesn't actually "read" and it generates post-hoc rationalization for its moves (which may not be at all logically sound) rather than choosing them with purpose. There are "reasoning models" that improve on this somewhat, but cf. https://news.ycombinator.com/item?id=44455124 from a few weeks ago, and my commentary there https://news.ycombinator.com/item?id=44473615 . | | |
| ▲ | ToValueFunfetti 3 days ago | parent [-] | | Okay, sure. My intuition is that LLMs reason at about a three-year-old level which appears more impressive because of their massive memories. By your definition and criticism, I take it that you wouldn't describe a three-year-old as capable of reasoning, so we're probably on the same page. |
|
|
| |
| ▲ | exasperaited 3 days ago | parent | prev | next [-] | | > A dismissive “no” is not a helpful addition to this discussion. Yes, your "no" must be more upbeat! Even if it's correct. You must be willing to temper the truth of it with something that doesn't hurt the feelings of the masses. > Does it mean that we’ve reached AGI? No. Does it mean the process reflects exactly what humans do? No. But here it's fine to use a "No." because these are your straw men, right? Is it just wrong to use a "No." when it's not in safety padding for the overinvested? | |
| ▲ | GoblinSlayer 3 days ago | parent | prev | next [-] | | I have a hunch it can reflect what humans do: "a junior intern who has memorized the Internet and talks without thinking, on permanent autopilot". We're just surprised how much humans can do without thinking. | |
| ▲ | southernplaces7 3 days ago | parent | prev [-] | | >A dismissive “no” is not a helpful addition to this discussion. Neither are wide-eyed claims stemming from drinking too much LLM-company Kool-Aid. Blatantly mistaken claims don't need more than a curt answer. Why don't I go ahead and claim ChatGPT has a soul, and then get angry when my claim is dismissed? |
| |
| ▲ | sothatsit 3 days ago | parent | prev [-] | | > No. You are missing the forest for the trees by dismissing this so readily. LLMs can solve IMO-level math problems, debug quite difficult bugs in moderately sized codebases, and write prototypes for unusual and weird coding projects. They solve difficult reasoning problems, and so I find it mystifying that people still work so hard to justify their belief that they're "not actually reasoning". They are flawed reasoners in some sense, but it seems ludicrous to me to suggest that they are not reasoning at all when they generalise to new logical problems so well. Do you think humans are logical machines? No, we are not. Therefore, do we not reason? | |
| ▲ | southernplaces7 3 days ago | parent [-] | | >Do you think humans are logical machines? No, we are not. Therefore, do we not reason? No, but we are conscious, and we know we are conscious, which doesn't require being a logical being too. LLMs, on the other hand, aren't conscious, and there's zero evidence that they are. Thus, they don't reason, since this, unlike logic, does require consciousness. Why not avoid re-defining things into a salad mix of poor logic until you can pretend that something with no evidence in its favor is real? | |
| ▲ | sothatsit 2 days ago | parent [-] | | The idea that reasoning requires consciousness is very silly. That's not to mention that consciousness is such a poorly defined term in the first place. |
|
|
| |
| ▲ | zahlman 3 days ago | parent | prev [-] | | > "A junior intern who has memorized the Internet" ... who can also type at superhuman speeds, but has no self-awareness, creativity or initiative. |
|
|
|
| ▲ | Snarwin 3 days ago | parent | prev | next [-] |
| > ChatGPT is succeeding because they created a better search experience Or perhaps because Google created a worse search experience. |
| |
| ▲ | leptons 3 days ago | parent [-] | | The only thing I find bad about Google search now is their "AI" summary, which is often just wrong. I can deal with ads in the search results; I expect them, and I have no doubt ads will be shown in ChatGPT search results too, because they are bleeding money. And ChatGPT is under no obligation to show you anything with any accuracy - guessing is what the underlying tech is based on. Thanks, I'll take my chances with actual search results. | |
| ▲ | sheiyei 3 days ago | parent | next [-] | | I think AI evangelists avoid talking about search because AI is why every internet search engine sucks. I use DuckDuckGo and Startpage, and when trying to find answers to nontrivial questions I get ONLY AI spam sites as results. (Come up with 1,500 semi-specific, SEO-friendly articles vaguely related to the theme of website N; repeat until N = 100.) And soon enough, AI is not even going to help with searching – it will eat its own s*t and offer that as answers, unless the AI giants find some >99.99%-effective way to filter their own toxic waste out of the training data. | |
| ▲ | zahlman 3 days ago | parent | next [-] | | There was a period I remember fondly when the SEO slop was still mostly human generated because AI couldn't quite do it yet, and I was getting much better results from DDG than from Google. Now it's all despair-inducing. | |
| ▲ | strange_quark 3 days ago | parent | prev [-] | | What you're saying is totally correct about AI slop polluting the web, but the reason they don't want to talk about it is because the second they frame "AI" as slightly better search, it invites all sorts of unfavorable financial comparisons. It also deflates the hype because now you're talking about a better version of a thing everyone is familiar with instead of some magical new buzzword like "agent" or "RAG" or whatever we're going to be talking about next year when agents don't work. |
| |
| ▲ | ChrisMarshallNY 3 days ago | parent | prev | next [-] | | That reminds me of this post, here, from a couple of weeks ago[0]. [0] https://news.ycombinator.com/item?id=44615801 | |
| ▲ | 3 days ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | zahlman 3 days ago | parent | prev | next [-] |
| > It's weirdly the opposite of the mindset Bezos talked about in that famous old video, where he says "creating the best experience for the customer" is his business strategy. A big part of hype as a business strategy is to convince potential customers that you intend to create the best experience for them. And the simplest approach to that task is to say it outright. > in fact, ChatGPT is succeeding because they created a better search experience. Sure. But they don't market it like that, and a large fraction of people reporting success with ChatGPT don't seem to be characterizing their experiences that way, even if you discount the people explicitly attempting to, well, chat with it. |
|
| ▲ | jnpnj 3 days ago | parent | prev | next [-] |
| And there's a part of me that kinda thinks "lots of money will mean some large-scale investments in labs that might result in some rare findings". Even though the hype + investor chase feels very shallow to me too. |
|
| ▲ | 3 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | z3ugma 2 days ago | parent | prev | next [-] |
| Even if the hype bandwagon fails in the long term, it is still probably an optimal investment strategy, which is IMO why we see so much enshittification. If there is one ChatGPT- or Amazon-level product per decade, you are not likely to be an early investor in it. A more reliable play is to invest in many companies, enshittify them, and extract small rents reliably, rather than betting the farm on a good product. |
|
| ▲ | mattigames 3 days ago | parent | prev [-] |
| "Created a better search experience" sure, just like eating human meat would create a better dinning experience because there is a lot of humans near you, at the end of the day what "AI" is doing is butchering a bunch of websites and giving you a blend, in the progress making you no longer required to enter those websites that needed you to do so to survive, because they are monetized by third party ads or some paid offering. |