| |
| ▲ | bryanlarsen 5 days ago | parent | next [-] | | Some people even figured it out in the 80's. Sears co-founded (with IBM) and ran Prodigy, a large online service and eventually an ISP. They were trying to set themselves up to become Amazon. Not only that, Prodigy's thing (for a while) was using advertising revenue to lower subscription prices. Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others. | | |
| ▲ | tombert 5 days ago | parent | next [-] | | Today I learned that Sears founded Prodigy! Amazing how far that company has fallen; they were sort of a force to be reckoned with in the 70's and 80's with Craftsman and Allstate and Discover and Kenmore and a bunch of other things, and now they're basically dead as far as I can tell. | | |
| ▲ | kens 5 days ago | parent | next [-] | | On the topic of how Sears used to be high-tech: back in 1981, when IBM introduced the IBM PC, it was the first time that they needed to sell computers through retail. So they partnered with Sears, along with the Computerland chain of computer stores, since Sears was considered a reasonable place for a businessperson to buy a computer. To plan this, meetings were held at the Sears Tower, which was the world's tallest building at the time. | | |
| ▲ | NordSteve 5 days ago | parent | next [-] | | Bought my IBM PC from Sears back in the day. Still have the receipt. | | |
| ▲ | zenonu 5 days ago | parent [-] | | Worthy of its own Hacker News post. Would love to see it. | | |
| ▲ | Imustaskforhelp 5 days ago | parent [-] | | Yup, I agree, GP. Today is the first time I've heard of this, and the comment about the Sears Tower and IBM literally gave me goosebumps. |
|
| |
| ▲ | duderific 5 days ago | parent | prev [-] | | Wow, I hadn't thought about Computerland for quite a while. That was my go-to to kill some time at the mall when I was a teen. |
| |
| ▲ | dh2022 5 days ago | parent | prev | next [-] | | My favorite anecdote about Sears is from Starbucks' current HQ - the building used to be a warehouse for Sears. Before renovation, the first-floor walls next to the elevators used to display Sears' "commitment to customers" (or something like that). To me it read like it was written by Amazon decades earlier. Something about how Sears promises that customers will be 100% satisfied with the purchase, and if for whatever reason that is not the case, customers can return the purchase to Sears and Sears will pay the return transportation charges. | | |
| ▲ | tombert 5 days ago | parent | next [-] | | Craftsman tools have almost felt like a life-hack sometimes; their no-questions-asked warranties were just incredible. My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story. I haven't tested these warranties since Craftsman was sold to Stanley Black & Decker, but when it was still owned by Sears I almost exclusively bought Craftsman tools as a result of their wonderful warranties. | | |
| ▲ | mindcrime 5 days ago | parent | next [-] | | FWIW, I bought a Craftsman 1/4" drive ratchet/socket set at a Lowes Home Improvement store last year, and when I got it home and started messing with it, the ratchet jammed up immediately (before even being used on any actual fastener). I drove back over there the next day and the lady at the service desk took a quick look, said "go get another one off the shelf and come back here." I did, and by the time I got back she'd finished whatever paperwork needed to be done, handed me some $whatever and said "have a nice day." Maybe not quite as hassle free as in years past, but I found the experience acceptable enough. | | |
| ▲ | tracker1 5 days ago | parent | next [-] | | I think that's as much about Lowe's as it is Craftsman... I don't think Craftsman tools have been particularly well built, just that they had, and are able to have, enough margin to support a no-questions-asked policy... it probably helps that a lot of the materials are completely and easily recyclable. |
| ▲ | projektfu 5 days ago | parent | prev [-] | | It made sense to use the Craftsman screwdriver as a pry bar in a pinch and save the really good one for just turning screws. |
| |
| ▲ | lostlogin 5 days ago | parent | prev | next [-] | | > My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story. This is covered by consumer protection laws in some places. 4 years on a spade would be pushing it, but I’d try with a good one.
Here in New Zealand it’s called ‘The Consumer Guarantees Act’. We pay more at purchase time, but we do get something for it. | |
| ▲ | ssl-3 5 days ago | parent | prev | next [-] | | Lots of tools have lifetime warranties. Harbor Freight's swap process is probably fastest, these days, for folks with one nearby. Tekton's process is also painless, but slower: Send them a photo of the broken tool, and they deliver a new tool to your door. But I'm not old enough to remember a time when lifetime warranties were unusual. In my lifetime, a warranty on hand tools has always seemed more common than not outside of the bottom-most cheese-grade stuff. I mean: The Lowe's house-brand diagonal cutters I bought for my first real job had a lifetime warranty. And before my time of being aware of the world, JC Penney sold tools with lifetime warranties. (I remember being at the mall with my dad when he took a JC Penney-branded screwdriver back to JC Penney -- probably 35 years ago. He got some pushback from people who insisted that they had never sold tools, and then from people who insisted that they never had warranties, and then he finally found the fellow old person who had worked there long enough to know what to do. Without any hesitation at all, she told us to walk over to Sears, buy a similar Craftsman screwdriver, and come back with a receipt. So that's what we did. She took the receipt and gave him his money back. Good 'nuff.) | |
| ▲ | kjkjadksj 5 days ago | parent | prev [-] | | Harbor freight is like that too. | | |
| ▲ | platevoltage 5 days ago | parent [-] | | Harbor Freight will take literally anything back, and put it right back on the shelf. | | |
| ▲ | kjkjadksj 4 days ago | parent [-] | | How? It lacks packaging and the tool could be marred up. | | |
| ▲ | platevoltage 4 days ago | parent [-] | | I'll add, "if you return it in the box". I bought a hydraulic press. It was missing bolts and had already been assembled before. A friend bought some wheel dollies; the threads on the casters were stripped out. People buy things and use them once for their project, then return them. |
|
|
|
| |
| ▲ | jimbokun 5 days ago | parent | prev [-] | | The Sears Catalog was the Amazon of its day. |
| |
| ▲ | gcanyon 5 days ago | parent | prev | next [-] | | :-) Then it's going to blow your mind that CompuServe (while not founded by them) was a product of H&R Block. | |
| ▲ | esaym 5 days ago | parent | prev | next [-] | | There were quite a few small ISP's in the 1990's. Even Bill Gothard[0] had one. [0]https://web.archive.org/web/19990208003742/http://characterl... | | |
| ▲ | hollerith 5 days ago | parent | next [-] | | Prodigy predates ISPs (internet service providers). Before the web matured a little in 1993, the internet was too technically challenging to interest most consumers, except maybe for email. Prodigy was formed in 1984, and although it offered email, it was walled-garden email: a Prodigy user could not exchange email with the internet until the mid-1990s, at which point Prodigy might have become an ISP for a few years before going out of business. | |
| ▲ | tombert 5 days ago | parent | prev | next [-] | | At a previous job I worked under a guy who started his own ISP in the early 90’s. I would have loved to have been part of that scene but I was only like four when that happened. | |
| |
| ▲ | htrp 5 days ago | parent | prev | next [-] | | Blame short-sighted investors asking Sears to "focus" | | |
| ▲ | dehrmann 5 days ago | parent [-] | | They weren't wrong. Its core business, in what is still a viable-enough sector, collapsed. Or, if it were truly well managed, running an ISP and a retailer should have given it enough insight to become Amazon. | | |
| ▲ | KerrAvon 5 days ago | parent | next [-] | | It wasn't possible for them to be well managed at the time it mattered. Sears was loaded with debt by private equity ghouls; same story for almost all defunct brick and mortar businesses; Amazon was a factor, but private equity is what actually destroyed them. | | |
| ▲ | andrew_lettuce 5 days ago | parent | next [-] | | Thank you for bringing this up. Sears really didn't have a choice; they were a victim of the most pernicious LBO, Gordon Gekko-style strip-mining nonsense on the PE spectrum. Not all private equity is the same, but after seeing two PE deals from the inside (one a leveraged buyout) and a VC one with the "grow at an insane pace" playbook, I think I prefer the naked and aligned greed of the VC model; PE destroyed both of the other companies, while the VC one was already doomed. | |
| ▲ | frmersdog 5 days ago | parent | prev [-] | | And, knowing Jeff Bezos' private equity origins, one could be forgiven for entertaining the thought that none of this was an accident. Just don't be an idiot and, you know, give voice to that thought or anything. | | |
| ▲ | chollida1 5 days ago | parent | next [-] | | > And, knowing Jeff Bezos' private equity origins He doesn't have private equity origins as far as I know. He came from D. E. Shaw, a very well-respected and long-running hedge fund. | |
| ▲ | ProjectArcturis 5 days ago | parent | prev [-] | | Are you suggesting that Jeff Bezos somehow convinced all his PE buddies to tank Sears (and their own loans to it) in order for him to build Amazon with less competition? Because, well, no offense, but that seems like a remarkably naive understanding of capital markets and individual motivations. Especially when it's well documented how Eddie Lampert's libertarian beliefs caused him to run it into the ground. |
|
| |
| ▲ | lotsoweiners 5 days ago | parent | prev | next [-] | | I worked at Sears at the time when Amazon first started becoming a household name. I for the life of me couldn't understand why they didn't make a copycat site called the Sears Catalog Online. But then I think about it: management wanted salesmanship, because selling maintenance agreements was their cash cow. Low-margin sales won in the long term, hence we have Walmart and Amazon as the biggest retailers. | | |
| ▲ | bryanlarsen 4 days ago | parent [-] | | Likely standard management failure. Sears got burned badly when it put its catalog online on Prodigy in the 80's, so obviously online sales were doomed to failure. |
| |
| ▲ | svnt 5 days ago | parent | prev | next [-] | | Timing is a difficult variable. | |
| ▲ | mikestew 5 days ago | parent | prev [-] | | > They weren't wrong. Evidence suggests that maybe they were. "Focusing" obviously didn't work. But at the end of the day, it was private equity and the hubris of a CEO who wasn't nearly as clever as he'd like to have thought he was. |
|
| |
| ▲ | matthewn 5 days ago | parent | prev [-] | | For more on this -- and how Sears had everything it needed (and more) to be what Amazon became -- see this comment from a 2007 MetaFilter thread: https://www.metafilter.com/62394/The-Record-Industrys-Declin... | | |
| ▲ | fragmede 5 days ago | parent [-] | | The untold story is the names of the individuals fighting the office politics that led to that (not) happening. |
|
| |
| ▲ | djtango 5 days ago | parent | prev | next [-] | | This is a great example that I hadn't heard of, and it reminds me of when Nintendo tried to become an ISP by building the Family Computer Network System in 1988. A16Z once talked about how the scars of being too early cause investors/companies to become fixated on the idea that something will never work. Then some new, younger people who never got burned will try the same idea and things will work. Prodigy and the Faminet probably fall into that bucket, along with a lot of early internet companies that tried things early, got burned, and then possibly were too late to capitalise when it was finally the right time for the idea to flourish | | |
| ▲ | mschuster91 5 days ago | parent [-] | | Reminds me of Elon not taking no for an answer. He did it twice, with massive success. A true shame to see how he's completely lost track with Tesla; the competition, particularly from China, is eating them alive. And in space, it's a matter of years until the rest of the world catches up. And now he's run out of tricks - and, more importantly, out of public support. He can't pivot any more; his entire brand is too toxic to touch. | |
| ▲ | platevoltage 5 days ago | parent | next [-] | | Lucky for him, the US government is keeping him from being eaten alive in the USA at least. I remember that one time we tried to drastically limit Japanese imports to protect the American car industry, which basically created the Lexus LS400, one of the best cars ever made. | |
| ▲ | nebula8804 5 days ago | parent | prev [-] | | I don't know, you could argue that maybe GM with the EV1 was the 'too early' EV and Tesla was just at the right moment. Same goes for SpaceX: the idea of a reusable launcher was not new and had been studied by NASA. I think they did some test vehicles. | |
| ▲ | bryanlarsen 4 days ago | parent [-] | | SpaceX is an excellent example of this phenomenon. Reusable rockets were "known" to be financially infeasible because the Space Shuttle was so expensive. NASA & oldspace didn't seriously pursue reusable vehicles because the mostly reusable Space Shuttle cost so much more than conventional disposable vehicles. Similar to how Sears didn't put their catalog online in the 90's because putting it online on Prodigy failed so badly in the 80's. |
|
|
| |
| ▲ | tracker1 5 days ago | parent | prev | next [-] | | On the flip side, they didn't actually learn that lesson... that it was a matter of immature tech with relatively limited reach... by the time the mid-90's came through, "the internet is just a fad" was pretty much the sentiment from Sears' leadership... They literally killed their catalog sales right when they should have been ramping up and putting it online. They could easily have beaten out Amazon for everything other than books. | |
| ▲ | Imustaskforhelp 5 days ago | parent | prev | next [-] | | My cousin used to tell me that things work because they were the right thing at the right time. I think he used Amazon as the example. But I guess in startup culture one has to die trying to hit the right time; sure, one can do surveys to get a feel for it, but the only way we can ever find out if it's the right time is user feedback once it's launched, and over time. | |
| ▲ | cyanydeez 5 days ago | parent | prev | next [-] | | The problem is ISPs became a utility, not some fountain of unlimited growth. What you're arguing is that AI is fundamentally going to be a utility, and while that's worth a floor of cash, it's not what investors or the market clamor for. I agree though, it's fundamentally a utility, which means there's more value in proper government authority than private interests. | | |
| ▲ | bryanlarsen 5 days ago | parent [-] | | Sears started Prodigy to become Amazon, not Comcast. | | |
| ▲ | cyanydeez 4 days ago | parent [-] | | The product itself determines whether it's a utility, not the business interest. Assuming democracy works correctly, only a dysfunctional government ignores natural monopolies. |
|
| |
| ▲ | outside1234 5 days ago | parent | prev [-] | | Newton at Apple is another great one, though they of course got there. | | |
| ▲ | platevoltage 5 days ago | parent [-] | | They sure did. This reminds me of when I was in the local Mac Dealer right after the iPod came out. The employees were laughing together saying “nobody is going to buy this thing”. |
|
| |
| ▲ | deegles 5 days ago | parent | prev | next [-] | | > We're clearly seeing what AI will eventually be able to do Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else? Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem... there is an infinite number of new things that need to be put into the dataset, and that's just going to reduce the quality of responses. For example: the "there are three 'b's in blueberry" problem was caused by so much training data in response to "there are two r's in strawberry". It's a systemic issue. No amount of data will solve it because LLMs will -never- be sentient. Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it. | | |
| ▲ | hnfong 5 days ago | parent | next [-] | | I have a feeling that you believe "translation, grammar, and tone-shifting" works but "code generation sucks" for LLMs because you're good at coding and hence you see its flaws, and you're not in the business of doing translation etc. Pretty sure if you're going to use LLMs for translating anything non-trivial, you'd have to carefully review the outputs, just like if you're using LLMs to write code. | | |
| ▲ | deegles 5 days ago | parent | next [-] | | You know, you're right. It -also- sucks at those tasks because on top of the issue you mention, unedited LLM text is identifiable if you get used to its patterns. | |
| ▲ | h4ck_th3_pl4n3t 5 days ago | parent | prev | next [-] | | By definition, transformers can never exceed average. That is the thing, and what companies pushing LLMs don't seem to realize yet. | | |
| ▲ | janalsncm 5 days ago | parent [-] | | Can you expand on this? For tasks with verifiable rewards you can improve with rejection sampling and search (i.e. test time compute). For things like creative writing it’s harder. | | |
| ▲ | miki123211 5 days ago | parent [-] | | For creative writing, you can do the same, you just use human verifiers rather than automatic ones. LLMs have encountered the entire spectrum of qualities in their training data, from extremely poor writing and sloppy code, to absolute masterpieces. Part of what Reinforcement Learning techniques do is reinforcing the "produce things that are like the masterpieces" behavior while suppressing the "produce low-quality slop" one. Because there are humans in the loop, this is hard to scale. I suspect that the propensity of LLMs for certain kinds of writing (bullet points, bolded text, conclusion) is a direct result of this. If you have to judge 200 LLM outputs per day, you prize different qualities than when you ask for just 3. "Does this look correct at a glance" is then a much more important quality. |
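(For concreteness, here is a minimal sketch of the "rejection sampling / best-of-N with a verifier" idea from the two comments above. It is a sketch under stated assumptions, not anyone's production setup: generate and score are hypothetical stand-ins for an LLM sampler and a reward/verifier model; swap score for human raters and you get the human-in-the-loop variant described for creative writing.)

    # Best-of-N rejection sampling: spend extra test-time compute by drawing
    # several candidates and keeping the one the verifier scores highest.
    # `generate` and `score` are hypothetical placeholders, not a real API.
    import random
    from typing import Callable, List

    def best_of_n(prompt: str,
                  generate: Callable[[str], str],
                  score: Callable[[str, str], float],
                  n: int = 16) -> str:
        candidates: List[str] = [generate(prompt) for _ in range(n)]
        return max(candidates, key=lambda c: score(prompt, c))

    # Toy usage with dummy functions, just to show the control flow.
    dummy_generate = lambda p: f"{p} -> answer {random.randint(0, 99)}"
    dummy_score = lambda p, c: -abs(42 - int(c.rsplit(" ", 1)[-1]))  # toy "verifier": closer to 42 is better
    print(best_of_n("solve:", dummy_generate, dummy_score))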
|
| |
| ▲ | mdemare 5 days ago | parent | prev [-] | | Exactly. Books are still being translated by human translators. I have a text on my computer, the first couple of paragraphs from the Dutch novel "De aanslag", and every few years I feed it to the leading machine translation sites, and invariably, the results are atrocious. Don't get me wrong, the translation is quite understandable, but the text is wooden, and the translation contains 3 or 4 translation blunders. GPT-5 output for example: Far, far away in the Second World War, a certain Anton Steenwijk lived with his parents and his brother on the edge of Haarlem.
Along a quay, which ran for a hundred meters beside the water and then, with a gentle curve, turned back into an ordinary street, stood four houses not far apart.
Each surrounded by a garden, with their small balconies, bay windows, and steep roofs, they had the appearance of villas, although they were more small than large; in the upstairs rooms, all the walls slanted.
They stood there with peeling paint and somewhat dilapidated, for even in the thirties little had been done to them.
Each bore a respectable, bourgeois name from more carefree days:
Welgelegen Buitenrust Nooitgedacht Rustenburg
Anton lived in the second house from the left: the one with the thatched roof. It already had that name when his parents rented it shortly before the war; his father had first called it Eleutheria or something like that, but then written in Greek letters. Even before the catastrophe occurred, Anton had not understood the name Buitenrust as the calm of being outside, but rather as something that was outside rest—just as extraordinary does not refer to the ordinary nature of the outside (and still less to living outside in general), but to something that is precisely not ordinary. | | |
| ▲ | tschwimmer 5 days ago | parent [-] | | Can you provide a reference translation or at least call out the issues you see with this passage? I see "far far away in the [time period]" which I should imagine should be "a long time ago" What are the other issues? | | |
| ▲ | mdemare 5 days ago | parent | next [-] | | - "they were more small than large" (what?) - "even in the thirties little had been done to them" (done to them?) - "Welgelegen Buitenrust Nooitgedacht Rustenburg" (Untranslated!) - "his father had first called it Eleutheria" (his father'd rather called it) - "just as extraordinary does not refer to the ordinary nature of the outside" (complete non-sequitur) | | |
| ▲ | mrtranscendence 4 days ago | parent | next [-] | | Waar heb je het over? "Welgelegen Buitenrust Nooitgedacht Rustenburg" is volkomen cromulent Engels. For what it's worth, I do use AI for language learning, though I'm not sure it's the best idea. Primarily for helping translate German news articles into English and making vocabulary flashcards; it's usually clear when the AI has lost the plot and I can correct the translation by hand. Of course, if issues were more subtle then I probably wouldn't catch them ... | |
| ▲ | tschwimmer 4 days ago | parent | prev [-] | | Thanks yeah, you’re right these are bad. |
| |
| ▲ | frm88 5 days ago | parent | prev [-] | | Not the original poster, but you can read these paragraphs translated by a human on amazon's sneak peek:
https://lesen.amazon.de/sample/B0D74T75KH?f=1&l=de_DE&r=2801... The difference is gigantic. |
|
|
| |
| ▲ | rstuart4133 5 days ago | parent | prev | next [-] | | > Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. I consider myself an LLM skeptic, but gee, saying they are a "dead end" seems harsh. Before LLMs came along, computers understanding human language was a graveyard academics went to end their careers in. Now computers are better at it and far faster than most humans. LLMs also have an extraordinary ability to distill and compress knowledge, so much so that you can download a model whose size is measured in GB, and it seems to have a pretty good general knowledge of everything on the internet. Again, far better than any human could do. Yes, the compression is lossy, and yes, they consequently spout authoritative-sounding bullshit on occasion. But I use them regardless as a sounding board, and I can ask them questions in plain English rather than go on a magical keyword hunt. Merely being able to understand language or having a good memory is not sufficient to code or do a lot else on its own. But they are necessary ingredients for many tasks, and consequently it's hard to imagine an AI that can competently code that doesn't have an LLM as a component. | |
| ▲ | deegles 5 days ago | parent [-] | | > it's hard to imagine an AI that can competently code that doesn't have an LLM as a component. That's just it. LLMs are a component; they generate text or images from a higher-level description but are not themselves "intelligent". If you imagine the language center of your brain being replaced with a tiny LLM-powered chip, you would not say it's sentient. It translates your thoughts into words, which you then choose to speak or not. That's all modulated by consciousness. |
| |
| ▲ | miki123211 5 days ago | parent | prev | next [-] | | > If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else? When an LLM gives you medical advice, it's right x% of the time. When a doctor gives you medical advice, it's right y% of the time. During the last few years, x has gone from 0 to wherever it is now, while y has mostly stayed constant. It is not unimaginable to me that x might (and notice I said might, not will) cross y at some point in the future. The real problem with LLM advice is that it is harder to find a "scapegoat" (particularly for legal purposes) when something goes wrong. | | |
| ▲ | mrtranscendence 4 days ago | parent | next [-] | | Microsoft claims that they have an AI setup that outperforms human doctors on diagnosis tasks: https://microsoft.ai/new/the-path-to-medical-superintelligen... "MAI-DxO boosted the diagnostic performance of every model we tested. The best performing setup was MAI-DxO paired with OpenAI’s o3, which correctly solved 85.5% of the NEJM benchmark cases. For comparison, we also evaluated 21 practicing physicians from the US and UK, each with 5-20 years of clinical experience. On the same tasks, these experts achieved a mean accuracy of 20% across completed cases." Of course, AI "doctors" can't do physical examinations and the best performing models cost thousands to run per case. This is also a test of diagnosis, not of treatment. | |
| ▲ | randomNumber7 5 days ago | parent | prev [-] | | If you consider how little time doctors have to look at you (at least in Germany's half-broken public health sector) and how little they actually care ... I think x is already higher than y for me. | |
| ▲ | deegles 4 days ago | parent [-] | | That's fair. Reliable access to a 70% expert is better than no access to a 99% expert. |
|
| |
| |
| ▲ | me551ah 5 days ago | parent | prev | next [-] | | Or maybe not. Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips, and Moore's law is already dead. So newer chips will not be exponentially better but will offer more incremental improvements, and unless the price of electricity comes down exponentially we might never see AGI at a price point that's cheaper than hiring a human. Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run. The reason the internet, smartphones and computers saw exponential growth from the 90s is the underlying increase in computing power. I personally used a 50MHz 486 in the 90s and now use an 8c/16t 5GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years | | |
| ▲ | mindcrime 5 days ago | parent | next [-] | | > Scaling AI will require an exponential increase in compute and processing power. A small quibble... I'd say that's true only if you accept as an axiom that current approaches to AI are "the" approach and reject the possibility of radical algorithmic advances that completely change the game. For my part, I have a strongly held belief that there is such an algorithmic advancement "out there" waiting to be discovered, one that will enable AI at current "intelligence" levels, if not outright Strong AI / AGI, without the absurd demands on computational resources and energy. I can't prove that of course, but I take the existence of the human brain as an existence proof that some kind of machine can provide human-level intelligence without needing gigawatts of power and massive datacenters filled with racks of GPUs. | |
| ▲ | lawlessone 5 days ago | parent | next [-] | | DeepMind were experimenting with this https://github.com/google-deepmind/lab a few years ago: having AI agents learn to see, navigate and complete tasks in a 3D environment. I feel like it had more potential than LLMs to become an AGI (if that is possible). They haven't touched it in a long time though. But Genie 3 makes me think they haven't completely dropped it. | |
| ▲ | fluoridation 5 days ago | parent | prev [-] | | If we suppose that ANNs are more or less accurate models of real neural networks, the reason why they're so inefficient is not algorithmic, but purely architectural. They're just software. We have these huge tables of numbers and we're trying to squeeze them as hard as possible through a relatively small number of multipliers and adders. Meanwhile, a brain can perform a trillion fundamental operations simultaneously because every neuron is a complete processing element independent of every other one. To bring that back into more concrete terms, if we took an arbitrary model and turned it into a bespoke piece of hardware, it would certainly be at least one or two orders of magnitude faster and more efficient, with the downside that since it's dead silicon it could not be changed and iterated on. | |
| ▲ | penteract 5 days ago | parent | next [-] | | If you account for the fact that biological neurons operate at a much lower frequency than silicon processors, then the raw performance gets much closer. From what I can find, neuron membrane time constant is around 10ms [1], meaning 10 billion neurons could have 1 trillion activations per second, which is in the realm of modern hardware. People mentioned in [2] have done the calculations from a more informed position than I have, and reach numbers like 10^17 FLOPS when doing a calculation that resembles this one. [1] https://spectrum.ieee.org/fast-efficient-neural-networks-cop... [2] https://aiimpacts.org/brain-performance-in-flops/ | |
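(Spelling out the napkin math in the comment above. The neuron count and membrane time constant are the comment's own rough figures; the synapses-per-neuron multiplier is an extra assumption used only to bridge to the ~10^17 FLOPS estimates cited in [2].)

    # Rough brain-throughput estimate, using the comment's figures.
    neurons = 10e9              # ~10 billion cortical neurons
    time_constant_s = 10e-3     # ~10 ms membrane time constant
    activations_per_s = neurons / time_constant_s
    print(f"{activations_per_s:.0e} activations/s")    # ~1e12, i.e. ~1 trillion per second

    # Assumption: each activation involves on the order of 1e4 synaptic
    # multiply-accumulate-like operations, which lands near the 1e16-1e17
    # FLOPS range cited in [2].
    synapses_per_neuron = 1e4
    print(f"{activations_per_s * synapses_per_neuron:.0e} FLOPS-equivalent")   # ~1e16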
| ▲ | mindcrime 5 days ago | parent | prev | next [-] | | > the reason why they're so inefficient is not algorithmic, but purely architectural. I would agree with that, with the caveat that in my mind "the architecture" and "the algorithm" are sort of bound up with each other. That is, one implies the other -- to some extent. And yes, fair point that building dedicated hardware might just be part of the solution to making something that runs much more efficiently. The only other thing I would add is that - relative to what I said in the post above - when I talk about "algorithmic advances" I'm looking at everything as potentially being on the table - including maybe something different from ANNs altogether. | |
| ▲ | HarHarVeryFunny 4 days ago | parent | prev | next [-] | | The energy inefficiency of ANNs vs our brain is mostly because our brain operates in async dataflow mode with each neuron mostly consuming energy only when it fires. If a neuron's inputs haven't changed then it doesn't redundantly "recalculate its output" like an ANN - it just does nothing. You could certainly implement an async dataflow type design in software, although maybe not as power efficiently as with custom silicon, but individual ANN node throughput performance would suffer given the need to aggregate neurons needing updates into a group to be fed into one of the large matrix multiplies that today's hardware is optimized for, although sparse operations are also a possibility. OTOH conceivably one could save enough FLOPs that it'd still be a win in terms of how fast an input could be processed through an entire neural net. | |
| ▲ | chasd00 5 days ago | parent | prev | next [-] | | > If we suppose that ANNs are more or less accurate models of real neural networks I believe the problem is that we don't understand actual neurons, let alone actual networks of neurons, well enough to even know whether any model is accurate or not. The AI folks cleverly named their data structures "neuron" and "neural network" to make it seem like we do. | |
| ▲ | mindcrime 3 days ago | parent | next [-] | | > The AI folks cleverly named their data structures "neuron" and "neural network" to make it seem like we do. I don't think any (serious) neural network researchers are trying to trick anybody or claim greater fidelity with the operations of the human brain than are warranted. If anything, Hinton - one of the "godfathers of neural networks" in the popular zeitgeist - has been pretty outspoken about how ANN's have only a most superficial resemblance to real neurons. Now, the "pop science" commenters, and the "talking heads" and "influencer" types and the marketing people, that's a different story... | |
| ▲ | munksbeer 5 days ago | parent | prev [-] | | This is a bit of a cynical take. Neural networks have been "a thing" for decades. A quick Google search suggests the 1940s. I won't quibble on the timeline, but no one was trying to trick anyone with the name back then, and it just stuck around. |
| |
| ▲ | eikenberry 5 days ago | parent | prev [-] | | > If we suppose that ANNs are more or less accurate models of real neural networks [..] ANNs were inspired by biological neural structures and that's it. They are not representative models at all, even of the "less" variety. Dedicated hardware will certainly help, but no insights into how much it can help will come from this sort of comparison. | |
| ▲ | penteract 5 days ago | parent [-] | | Could you explain your claim that ANNs are nothing like real neural networks beyond their initial inspiration (if you'll accept my paraphrasing)? I've seen it a few times on HN, and I'm not sure what people mean by it. By my very limited understanding of neural biology, neurons activate according to inputs that are mostly activations of other neurons. A dot product of weights and inputs (i.e. one part of matrix multiplication) together with a threshold-like function doesn't seem like a horrible way to model this. On the other hand, neurons can get a bit fancier than a linear combination of inputs, and I haven't heard anything about biological systems doing something comparable to backpropagation, but I'd like to know whether we understand enough to say for sure that they don't. | |
| ▲ | fluoridation 5 days ago | parent | next [-] | | >I haven't heard anything about biological systems doing something comparable to backpropagation The brain isn't organized into layers like ANNs are. It's a general graph of neurons, and cycles are probably common. | |
| ▲ | HarHarVeryFunny 5 days ago | parent [-] | | Actually that's not true. Our neocortex - the "crumpled up" outer layer of our brain, which is basically responsible for cognition/intelligence, has a highly regular architecture. If you uncrumpled it, it'd be a thin sheet of neurons about the size of a teatowel, consisting of 6 layers of different types of neurons with a specific inter-layer and intra-layer pattern of connections. It's not a general graph at all, but rather a specific processing architecture. | | |
| ▲ | fluoridation 5 days ago | parent [-] | | None of what you've said contradicts that it's a general graph instead of, say, a DAG. It doesn't rule out cycles either within a single layer or across multiple layers. And even if it did, the brain is not just the neocortex, and the neocortex isn't isolated from the rest of the topology. | |
| ▲ | HarHarVeryFunny 4 days ago | parent [-] | | It's a specific architecture. Of course there are massive amounts of feedback paths, since that's how we learn - top-down prediction and bottom-up sensory input. There is of course looping too - e.g. thalamo-cortical loop - we are not just a pass-thru reactionary LLM! Yes, there is a lot more structure to the brain than just the neocortex - there are all the other major components (thalamus, hippocampus, etc) each with their own internal architecture, and then specific patterns of interconnect between them... This all reinforces what I am saying - the brain is not just some random graph - it is a highly specific architecture. | |
| ▲ | fluoridation 4 days ago | parent [-] | | Did I say "random graph", or did I say "general graph"? >There is of course looping too - e.g. thalamo-cortical loop - we are not just as pass-thru reactionary LLM! Uh-huh. But I was responding to a comment about how the brain doesn't do something analogous to back-propagation. It's starting to sound like you've contradicted me to agree with me. | | |
| ▲ | HarHarVeryFunny 4 days ago | parent [-] | | I didn't say anything about back-propagation, but if you want to talk about that then it depends on how "analogous" you want to consider ... It seems very widely accepted that the neocortex is a prediction machine that learns by updating itself based on sensory detection of top-down prediction failures, and with multiple layers (cortical patches) of pattern learning and prediction, there necessarily has to be some "propagation" of prediction error feedback from one layer to another, so that all layers can learn. Now, does the brain learn in a way directly equivalent to backprop in terms of using exact error gradients or a single error function? No - presumably not; it more likely works in a layered fashion with each higher level providing error feedback to the layer below, with that feedback likely just being what was expected vs what was detected (i.e. not a gradient - essentially just a difference). Of course gradients are more efficient in terms of selecting varying update step sizes, but directional feedback would work fine too. It would also not be surprising if evolution has stumbled upon something similar to Bayesian updates in terms of how to optimally incrementally update beliefs (predictions) based on conflicting evidence. So, that's an informed guess of how our brain is learning - up to you whether you want to regard that as analogous to backprop or not. | |
|
|
|
|
| |
| ▲ | scheme271 5 days ago | parent | prev [-] | | Neurons don't just work on electrical potentials; they also have multiple whole systems of neurotransmitters that affect their operation. So I don't think their activation is a continuous function. Although I suppose we could use non-continuous functions for activations in a NN, I don't think there's an easy way to train a NN that does that. | |
| ▲ | HarHarVeryFunny 4 days ago | parent [-] | | Sure, a real neuron activates by outputting a train of spikes after some input threshold has been crossed (a complex matter of synapse operation - not just a summation of inputs), while in ANNs we use "continuous" activation functions like ReLU... But note that the output of a ReLU, while continuous, is basically on or off, equivalent to a real neuron having crossed its activation threshold or not. If you really wanted to train artificial spiking neural networks in a biologically plausible fashion then you'd first need to discover/guess what that learning algorithm is, which is something that has escaped us so far. Hebbian "fire together, wire together" may be part of it, but we certainly don't have the full picture. OTOH, it's not yet apparent whether an ANN design that more closely follows real neurons has any benefit in terms of overall function, although an async dataflow design would be a lot more efficient in terms of power usage. | |
|
|
|
|
| |
| ▲ | foobarian 5 days ago | parent | prev | next [-] | | > Scaling AI will require an exponential increase in compute and processing power, I think there is something more happening with AI scaling; I think the scaling factor per user is a lot higher and a lot more expensive. Compare to the big initial internet companies. You added one server and you could handle thousands more users; incremental cost was very low, not to mention the revenue captured through whatever adtech means. Not so with AI workloads; they are so much more expensive than ad revenue that it's hard to break even even with an actual paid subscription. | |
| ▲ | RugnirViking 5 days ago | parent [-] | | I don't even fully get why; inference costs are way lower than training costs, no? |
| |
| ▲ | miki123211 5 days ago | parent | prev | next [-] | | > We are already at the limit of how small we can scale chips I strongly suspect this is not true for LLMs. Once progress stabilizes, doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically. Then there's distillation, which basically makes smaller models get better as bigger models get better (see the sketch after this comment). You don't necessarily need to run a big model all of the time to reap its benefits. > so unless the price of electricity comes down exponentially This is more likely than you think. AI is extremely bandwidth-efficient and not too latency-sensitive (unlike e.g. Netflix et al), so it's pretty trivial to offload AI work to places where electricity is abundant and power generation is lightly regulated. > Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run. "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company."
Sam Altman, OpenAI CEO[1]. [1] https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat... | | |
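(A minimal sketch of the distillation idea mentioned above: Hinton-style soft-label knowledge distillation, where a small "student" is trained to match the temperature-softened output distribution of a larger "teacher". It assumes PyTorch; the random logits are placeholders, not any specific model.)

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          temperature: float = 2.0) -> torch.Tensor:
        # KL divergence between temperature-softened teacher and student distributions.
        t = temperature
        student_log_probs = F.log_softmax(student_logits / t, dim=-1)
        teacher_probs = F.softmax(teacher_logits / t, dim=-1)
        # The t**2 factor keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t ** 2)

    # Toy usage: random logits over a 50k-token vocabulary for a batch of 4.
    student_logits = torch.randn(4, 50_000, requires_grad=True)
    teacher_logits = torch.randn(4, 50_000)
    loss = distillation_loss(student_logits, teacher_logits)
    loss.backward()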
| ▲ | thfuran 4 days ago | parent | next [-] | | >doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically. An implementation of inference on some specific ANN in fixed function analog hardware can probably pretty easily beat a commodity GPU by a couple orders of magnitude in perf per watt too. | |
| ▲ | mrtranscendence 4 days ago | parent | prev [-] | | > "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company." That's OpenAI (though I'd be curious if that statement holds for subscriptions as opposed to API use). What about the downstream companies that use OpenAI models? I'm not sure the picture is as rosy for them. |
| |
| ▲ | thfuran 5 days ago | parent | prev | next [-] | | We know for a fact that human level general intelligence can be achieved on a relatively modest power budget. A human brain runs on somewhere from about 20-100W, depending on how much of the rest of the body's metabolism you attribute to supporting it. | | |
| ▲ | hyperbovine 5 days ago | parent [-] | | The fact that the human brain, heck all brains, are so much more efficient than “state of the art” nnets, in terms of architecture, power consumption, training cost, what have you … while also being way more versatile and robust … is what convinces me that this is not the path that leads to AGI. |
| |
| |
| ▲ | armada651 5 days ago | parent | prev | next [-] | | > The groundwork has been laid, and it's not too hard to see the shape of things to come. The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve. | | |
| ▲ | jimbokun 5 days ago | parent [-] | | Is it still giving people headaches and making them nauseous? | | |
| ▲ | armada651 5 days ago | parent [-] | | Yes, it still gives people headaches because the convergence-accommodation conflict remains unsolved. We have a few different technologies to address that, but they're expensive, don't fully address the issue, and none of them have moved beyond the prototype stage. Motion sickness can be mostly addressed with game design, but some people will still get sick regardless of what you try. Mind you, some people also get motion sick by watching a first-person shooter on a flat screen, so I'm not sure we'll ever get to a point where no one ever gets motion sick in VR. | | |
| ▲ | duderific 5 days ago | parent [-] | | > Mind you, some people also get motion sick by watching a first-person shooter on a flat screen Yep I'm that guy. I blame it on being old. |
|
|
| |
| ▲ | matthewdgreen 5 days ago | parent | prev | next [-] | | As someone who was a customer of Netflix from the dialup to broadband world, I can tell you that this stuff happens much faster than you expect. With AI we're clearly in the "it really works, but there are kinks and scaling problems" of, say, streaming video in 2001 -- whereas I think you mean to indicate we're trying to do Netflix back in the 1980s where the tech for widespread broadband was just fundamentally not available. | | |
| ▲ | tracker1 5 days ago | parent [-] | | Oh, like RealPlayer in the late 90's (buffering... buffering...) | | |
| ▲ | matthewdgreen 5 days ago | parent [-] | | RealPlayer in the late 90s turned into (working) Napster, Gnutella and then the iPod in 2001, Podcasts (without the name) immediately after, with the name in 2004, Pandora in 2005, Spotify in 2008. So a decade from crummy idea to the companies we’re familiar with today, but slowed down by tremendous need for new (distributed) broadband infrastructure and complicated by IP arrangements. I guess 10 years seems like a long time from the front end, but looking back it’s nothing. Don’t go buying yourself a Tower Records. | | |
| ▲ | tracker1 4 days ago | parent [-] | | While I get the point... to be pedantic though, Napster (first gen), Gnutella and the iPod were mostly download-and-listen-offline experiences and not necessarily live streaming. Another major difference is that we're near the limits of the approaches being taken for computing capability... most dialup connections, even on "56k" modems, were still lucky to get 33.6kbps down and were very common in the late 90's, whereas by the mid-2000's a lot of users had at least 512kbps-10mbps connections (where available), and even then a lot of people didn't see broadband until the 2010's. That's at least a 15x improvement, whereas we are far less likely to see even a 3-5x improvement in computing power over the next decade and a half. That's also a lot of electricity to generate on an ageing infrastructure that barely meets current needs in most of the world... even harder on "green" options. | |
| ▲ | matthewdgreen 4 days ago | parent [-] | | I moved to NYC in 1999 and got my first cable modem that year. This meant I could stream AAC audio from a jukebox server we maintained at AT&T Labs. So for my unusual case, streaming was a full-fledged reality I could touch back then. Ironically running a free service was easy, but figuring out how to get people (AKA the music industry) to let us charge for the service was impossible. All that extra time was just waiting for infrastructure upgrades to spread across a whole country to the point that there were enough customers that even the music industry couldn’t ignore the economics; none of the fundamental tech was missing. With LLMs I have access to a pretty robust set of models for about $20/mo (I’m assuming these aren’t 10x loss leaders?), plus pretty decent local models for the price of a GPU. What’s missing this time is that the nature of the “business” being offered is much more vague, plus the reliability isn’t quite there yet. But on the bright side, there’s no distributed infrastructure to build. |
|
|
|
| |
| ▲ | nutjob2 5 days ago | parent | prev | next [-] | | > We're clearly seeing what AI will eventually be able to do I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task. Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims. | |
| ▲ | skeezyboy 5 days ago | parent | prev | next [-] | | >I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp." I totally agree with you... though the other day, I did think the same thing about the 8bit era of video games. | | |
| ▲ | dml2135 5 days ago | parent | next [-] | | It's a logical fallacy to assume that just because some technology experienced a period of exponential growth, all technology will always experience constant exponential growth. There are plenty of counter-examples to the scaling of computers that occurred from the 1970s-2010s. We thought that humans would be traveling the stars, or at least the solar system, after the space race of the 1960s, but we ended up stuck orbiting the earth. Going back further, little has changed daily life more than technologies like indoor plumbing and electric lighting did in the late 19th century. The ancient Romans came up with technologies like concrete that were then lost for hundreds of years. "Progress" moves in fits and starts. It is the furthest thing from inevitable. | |
| ▲ | novembermike 5 days ago | parent | next [-] | | Most growth is actually logistic: an S-shaped curve that starts out exponential but slows down rapidly as it approaches some asymptote. In fact, basically everything we see as exponential in the real world is logistic. | |
| ▲ | jopsen 5 days ago | parent | prev [-] | | True, but adoption of AI has certainly seen exponential growth. Improvement of models may not continue to be exponential. But models might be good enough, at this point it seems more like they need integration and context. I could be wrong :) | | |
| ▲ | tracker1 5 days ago | parent | next [-] | | At what cost though? Most AI operations are losing money and using a lot of power, with massive infrastructure costs on top of the hardware costs to get going. And that isn't even covering the level of usage many/most want; they certainly aren't going to pay the hundreds of dollars per month per person that it currently costs to operate. | |
| ▲ | martinald 5 days ago | parent [-] | | This is a really basic way to look at unit economics of inference. I did some napkin math on this. H100s cost about $2/hr each at 'retail' rental prices. I would hope that the big AI companies get it cheaper than this at their scale. 32 H100s can probably do something on the order of >40,000 tok/s on a frontier-scale model (~700B params) with proper batching. Potentially a lot more (I'd love to know if someone has some thoughts on this). So that's $64/hr or just under $50k/month. 40k tok/s is a lot of usage, at least for non-agentic use cases. There is no way you are losing money on paid ChatGPT users at $20/month on these. You'd still break even supporting ~200 Claude Code-esque agentic users who were using it at full tilt 40% of the day at $200/month. Now - this doesn't include training costs or staff costs, but on a pure 'opex' basis I don't think inference is anywhere near as unprofitable as people make out. | |
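(The comment's napkin math, spelled out. Every input is the commenter's rough assumption -- retail H100 rental pricing and aggregate batched throughput -- not a measured figure.)

    gpus = 32
    price_per_gpu_hr = 2.0        # USD, 'retail' rental price per H100
    tokens_per_s = 40_000         # aggregate across the node, with batching

    cost_per_hr = gpus * price_per_gpu_hr                       # $64/hr
    cost_per_month = cost_per_hr * 24 * 30                      # ~$46k/month, "just under $50k"
    cost_per_million_tok = cost_per_hr / (tokens_per_s * 3600) * 1e6

    print(f"${cost_per_hr:.0f}/hr, ${cost_per_month:,.0f}/month")
    print(f"~${cost_per_million_tok:.2f} per million output tokens")

At roughly $0.44 per million output tokens under these assumptions, a $20/month subscriber would need to consume tens of millions of tokens before inference alone ate the fee, which is the comparison the comment is drawing.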
| ▲ | tracker1 5 days ago | parent [-] | | My thought is closer to the developer user who would want to have their codebase as part of the queries along with heavy use all day long... which is closer to my point that many users are less likely to spend hundreds a month, at least with the current level of results people get. That said, you could be right, considering Claude max's price is $100/mo... but I'm not sure where that is in terms of typical, or top 5% usage and the monthly allowance/usage. |
|
| |
| ▲ | BobaFloutist 5 days ago | parent | prev [-] | | > True, but adoption of AI has certainly seen exponential growth. I mean, for now. The population of the world is finite, and there's probably a finite number of uses of AI, so it's still probably ultimately logistic |
|
| |
| ▲ | echelon 5 days ago | parent | prev | next [-] | | Speaking of Netflix - I think the image, video, audio, world model, diffusion domains should be treated 100% separately from LLMs. They are not the same thing. Image and video AI is nothing short of revolutionary. It's already having huge impact and it's disrupting every single business it touches. I've spoken with hundreds of medium and large businesses about it. They're changing how they bill clients and budget projects. It's already here and real. For example, a studio that does over ten million in revenue annually used to bill ~$300k for commercial spots. Pharmaceutical, P&G, etc. Or HBO title sequences. They're now bidding ~$50k and winning almost everything they bid on. They're taking ten times the workload. | | |
| ▲ | AnotherGoodName 5 days ago | parent | next [-] | | Fwiw LLMs are also revolutionary. There's currently more anti-AI hype than AI hype imho. As in, there are literally people claiming it's completely useless and not going to change a thing. Which is crazy. | |
| ▲ | skeezyboy 8 hours ago | parent | next [-] | | >Fwiw LLMs are also revolutionary Iterative. We had a shittier version of this 20 years ago | |
| ▲ | lokar 5 days ago | parent | prev | next [-] | | That’s an anecdote about intensity, not volume. The extremes on both sides are indeed very extreme (no value, replacing most white collar jobs next year). IME the volume is overwhelming on the pro-LLM side. | | |
| ▲ | whatevertrevor 5 days ago | parent [-] | | Yeah the conversation on both extremes feels almost religious at times. The pro LLM hype feels more disconcerting sometimes because there are literally billions if not trillions of dollars riding on this thing, so people like Sam Altman have a strong incentive to hype the shit out of it. |
| |
| ▲ | Jensson 5 days ago | parent | prev [-] | | One side's extremes say LLMs won't change a thing; the other side's extremes say LLMs will end the world. I don't think the ones saying it won't change a thing are the most extreme here. | |
| ▲ | wyre 5 days ago | parent [-] | | Luckily for humanity reality is somewhere in between extremes, right? |
|
| |
| ▲ | didibus 5 days ago | parent | prev | next [-] | | You're right, and I also think LLMs have an impact. The issue is that the way the market is investing, they are looking for massive growth, in the multiples. That growth can't really come from cutting costs. It has to come from creating new demand for new things. I think that's what hasn't happened yet. Are diffusion models increasing the demand for video and image content? Are they getting customers to spend more on shows, games, and so on? Are they going to lead to the creation of a whole new consumption medium? | |
| ▲ | jopsen 5 days ago | parent [-] | | > Is it going to lead to the creation of a whole new consumption medium? Good question. Is that necessary, or is it sufficient for AI to be integrated in every kind of CAD/design software out there? Because I think most productivity tools, whether CAD, EDA, Office, graphic 2D/3D design, etc., will benefit from AI.
That's a huge market. | | |
| ▲ | didibus 5 days ago | parent [-] | | I guess there are two markets to consider. The market for the AI foundation models themselves: will they have customers willing to pay a lot of money for access to the models long term? I think yes, there will be demand for foundational AI models, and a lot of it. The second market is the market for CAD, EDA, Office, graphic 2D/3D design, etc. This market will not grow just because they integrate AI into their products - or that is the question, will it? Otherwise, you could almost hypothesize these markets will shrink, as AI is going to be an additional cost of business for them that customers will expect to be included. Or maybe they manage to sell their customers a premium for the AI features, where they take a cut above what they pay the foundational models under the hood; that's a possibility. | |
|
| |
| ▲ | jaimebuelta 5 days ago | parent | prev | next [-] | | I see the point at the moment on "low quality advertising", but we are still far from high-quality video generated by AI. It's the equivalent of those cheap digital effects. They look bad for a Hollywood movie, but they allow students to shoot their action home movies | |
| ▲ | echelon 5 days ago | parent [-] | | You're looking at individual generations. These tools aren't for casual users expecting to 1-shot things. The value is in having a director, editor, VFX compositor pick and choose from amongst the outputs. Each generation is a single take or simulation, and you're going to do hundreds or thousands. You sift through that and explore the latent space, and that's where you find your 5-person Pixar. Human curated AI is an exoskeleton that enables small teams to replace huge studios. | | |
| ▲ | neaden 5 days ago | parent [-] | | Is there any example of an AI generated film like this that is actually coherent? I've seen a couple short ones that are basically just vibe based non-linear things. | | |
|
| |
| ▲ | mh- 5 days ago | parent | prev | next [-] | | It's quite incredible how fast the generative media stuff is moving. The self-hostable models are improving rapidly. How capable and accessible WAN 2.2 is (text+image to video; fully local if you have the VRAM) would have felt unimaginable last year, when OpenAI released Sora (closed/hosted). | |
| ▲ | realo 5 days ago | parent | prev | next [-] | | As long as you do not make ads with four-fingered hands, like those clowns ... :) https://www.lapresse.ca/arts/chroniques/2025-07-08/polemique... | | |
| ▲ | echelon 5 days ago | parent [-] | | https://www.npr.org/2025/06/23/nx-s1-5432712/ai-video-ad-kal... A typical large-team $300,000 ad, made for under $2,000 in a weekend by one person. It's going to be a bloodbath. | | |
| ▲ | mjr00 5 days ago | parent | next [-] | | > Kalshi's Jack Such declined to disclose Accetturo's fee for creating the ad. But, he added, "the actual cost of prompting the AI — what is being used in lieu of studios, directors, actors, etc. — was under $2,000." So in other words, if you ignore the costs of paying people to create the ad, it barely costs anything. A true accounting miracle! | | |
| ▲ | echelon 5 days ago | parent [-] | | Do you pay people to pump your gas? How about harvesting your whale blubber to power your oil lamp at night? The nature of work changes all the time. If an ad can be made with one person, that's it. We're done. There's no going back to hiring teams of 50 people. It's stupid to say we must hire teams of 50 to make an advertisement just because. There's no reason for that. It's busy work. The job is to make the ad, not to give 50 people meaningless busy work. And you know what? The economy is going to grow to accommodate this. Every single business is now going to need animated ads. The market for video is going to grow larger than we've ever before imagined, and in ways we still haven't predicted. Your local plumber is going to want a funny action movie trailer slash plumbing advertisement to advertise their services. They wouldn't have even been in the market before. You're going to have silly videos for corporate functions. Independent filmmakers will be making their own Miyazaki and Spielberg epics that cater to the most niche of audiences - no more mass market Marvel that has to satisfy everybody, you're going to see fictional fantasy biopic reimaginings of Grace Hopper fighting the vampire Nazis. Whatever. There'll be a market for everything, and 100,000 times as many creators with actual autonomy. In some number of years, there is going to be so much more content being produced. More content in single months than in all human history up to this point. Content that caters to the very long tail. And you know what that means? Jobs out the wazoo. More jobs than ever before. They're just going to look different and people will be doing more. | | |
| ▲ | mrtranscendence 4 days ago | parent [-] | | > Your local plumber is going to want a funny action movie trailer slash plumbing advertisement to advertise their services. They wouldn't have even been in the market before. And why would your local plumber hire someone to produce this funny action trailer (which I'm not convinced would actually help them from an advertising perspective), when they can simply have an AI produce that funny action trailer without hiring anyone? Assuming models improve sufficiently, that will become trivially possible. > Independent filmmakers will be making their own Miyazaki and Spielberg epics that cater to the most niche of audiences - no more mass market Marvel that has to satisfy everybody, you're going to see fictional fantasy biopic reimaginings of Grace Hopper fighting the vampire Nazis. Well, first of all, if the audience is "the most niche of audiences", then I'm not sure how that's going to lead to a sustainable career. And again -- if I want to see my niche historical fantasy interests come to life in a movie about Grace Hopper fighting vampire Nazis, why will I need a filmmaker to create this for me when I can simply prompt an AI myself? "Give me a fun action movie that incorporates famous computer scientists fighting Nazis. Make it 1.5 hours long, and give it a comedic tone." I think you're fundamentally overvaluing what humans will be able to provide in an era where creating content is very cheap and very easy. |
|
| |
| ▲ | neaden 5 days ago | parent | prev | next [-] | | This ad was purposefully playing off the fact that it was AI, though; it was a bunch of short, bizarre bits, like two old women selling Fresh Manatee out of the back of a truck. You couldn't replace a regular ad with this. | | |
| ▲ | echelon 5 days ago | parent [-] | | I've got friends at WPP. Heads are rolling. This is very much real and happening as we speak. |
| |
| ▲ | dingnuts 5 days ago | parent | prev [-] | | oh no the poor advertisers | | |
| ▲ | whatevertrevor 5 days ago | parent [-] | | Cheaper, poorer-quality ads mean a bad time for us, the people being incessantly targeted by this crap. Websites are already finding creative ways around DNS blocklists for ad serving. |
|
|
| |
| ▲ | duncangh 2 days ago | parent | prev [-] | | Sorry I’m late to this conversation, but I’m very interested in the specifics of your comment, the micro-domain of AI in studio productions, and the economics of the bidding landscape as well. Contact in bio :) |
| |
| ▲ | dormento 5 days ago | parent | prev [-] | | > I did think the same thing about the 8bit era of video games. Can you elaborate? That sounds interesting. | | |
| ▲ | skeezyboy 5 days ago | parent [-] | | Too soon to get it to market, though it obviously all sold perfectly well; people were sufficiently wowed by it. |
|
| |
| ▲ | Q6T46nT668w6i3m 5 days ago | parent | prev | next [-] | | There’s no evidence that it’ll scale like that. Progress in AI has always been a step function. | | |
| ▲ | ghurtado 5 days ago | parent | next [-] | | There's also no evidence that it won't, so your opinion carries exactly the same weight as theirs. > Progress in AI has always been a step function. There's decidedly no evidence of that, since whatever measure you use to rate "progress in AI" is bound to be entirely subjective, especially with such a broad statement. | | |
| ▲ | ezst 5 days ago | parent | next [-] | | > There's also no evidence that it won't There are signs, though. Every "AI" cycle, ever, has revolved around some algorithmic discovery, followed by a winter in search of the next one. This one is no different, propped up by LLMs, whose limitations we know quite well by now: "intelligence" is elusive, throwing more compute at them produces vastly diminishing returns, and throwing more training data at them is no longer feasible (we ran short of it even before the well got poisoned). Now the competitors are stuck at the same level, within percentage points of one another, with the difference explained by fine-tuning techniques and not by technical prowess. Unless a cool new technique came along yesterday to dislodge LLMs, we are in for a new winter. | |
| ▲ | mschuster91 5 days ago | parent [-] | | Oh, I believe that while LLMs are a dead end now, the applications of AI in vision and in the physical world (i.e. robots with limbs) will usher in yet another wrecking of the lower classes of society. Just as AI has killed off all demand for lower-skill work in copywriting, translation, design and coding, it will do so for manufacturing. And that will be a dangerous bloodbath, because there will not be enough juniors any more to replace seniors aging out or quitting in frustration at being reduced to cleaning up AI crap. |
| |
| ▲ | dml2135 5 days ago | parent | prev | next [-] | | What is your definition of "evidence" here? The evidence, in my view, are physical (as in, available computing power) and algorithmic limitations. We don't expect steel to suddenly have new properties, and we don't expect bubble sort to suddenly run in O(n) time. You could ask -- well what is the evidence they won't, but it's a silly question -- the evidence is our knowledge of how things work. Saying that improvement in AI is inevitable depends on the assumption of new discoveries and new algorithms beyond the current corpus of machine learning. They may happen, or they may not, but I think the burden of proof is higher on those spending money in a way that assumes it will happen. | |
| ▲ | Q6T46nT668w6i3m 4 days ago | parent | prev | next [-] | | I don’t follow. We have benchmarks that have survived decades and illustrate the steps. | |
| ▲ | abc_lisper 5 days ago | parent | prev [-] | | What do you call GPT 3.5? |
| |
| ▲ | the8472 5 days ago | parent | prev | next [-] | | rodent -> homo sapiens brain scales just fine? It's tenuous evidence, but not zero. | |
| ▲ | ninetyninenine 5 days ago | parent | prev | next [-] | | Uh, there have been multiple step-ups over the last 15 years. The trend line is up, up, up. | |
| ▲ | eichin 5 days ago | parent | prev [-] | | The innovation here is that the step function didn't traditionally go down |
| |
| ▲ | bob1029 5 days ago | parent | prev | next [-] | | > Netflix over DialUp https://en.wikipedia.org/wiki/RealNetworks | |
| ▲ | 5 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | i_love_retros 5 days ago | parent | prev | next [-] | | Is some potential AGI breakthrough in the future going to come from LLMs, or will they plateau in terms of capabilities? It's hard for me to imagine Skynet growing out of ChatGPT. | | |
| ▲ | whatevaa 5 days ago | parent [-] | | The old story of the paperclip AI shows that AGI is not needed for a sufficiently smart computer to be dangerous. |
| |
| ▲ | thefourthchime 5 days ago | parent | prev | next [-] | | I'm starting to agree with this viewpoint. As the technology seems to solidify to roughly what we can do now, the aspirations are going to have to get cut back until there are a couple more breakthroughs. | | | |
| ▲ | kokanee 5 days ago | parent | prev [-] | | I'm not convinced that the immaturity of the tech is what's holding back the profits. The impact and adoption of the tech are through the roof. It has shaken the job market across sectors like I've never seen before. My thinking is that if the bubble bursts, it won't be because the technology failed to deliver functionally; it will be because the technology simply does not become as profitable to operate as everyone is betting right now. What will it mean if the cutting edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider; you might as well be Digital Ocean. |
|
| |
| ▲ | benterix 5 days ago | parent | next [-] | | It's a cliche but people really underestimate and try to downplay the role of luck[0]. [0] https://www.scientificamerican.com/blog/beautiful-minds/the-... | | |
| ▲ | Aurornis 5 days ago | parent | next [-] | | People also underestimate the value of maximizing opportunities for luck. If we think of luck as random external chance that we can't control, then what can we control? Doing things that increase your exposure to opportunities without spreading yourself too thin is the key. Easier said than done to strike that balance, but getting out there and trying a lot of things is a viable strategy even if only a few of them pay off. The trick is deciding how long to stick with something that doesn't appear to be working out. | | | |
| ▲ | jauntywundrkind 5 days ago | parent | prev | next [-] | | Luck. And capturing strong network effects. The ascents of the era all feel like examples of anti-markets, of having gotten yourself into an intermediary position where you control both sides' access. |
| ▲ | alecsm 5 days ago | parent | prev | next [-] | | Success happens when luck meets hard work. | | | |
| ▲ | ericd 5 days ago | parent | prev | next [-] | | Ability vastly increases your luck surface area. A single poker hand involves a lot of luck, and even a single game does, but over long periods, ability starts to strongly differentiate people's results. | |
| ▲ | quantified 5 days ago | parent | next [-] | | Win a monster pot and you can play a lot of more interesting hands. | |
| ▲ | whatevertrevor 5 days ago | parent | prev [-] | | Except you can play hundreds of thousands of poker hands in your lifetime, but only have time/energy/money to start a handful of businesses. | | |
| ▲ | ericd 4 days ago | parent [-] | | Sure, but within running a single business, there are a huge number of individual events. Those are the hands. | | |
| ▲ | whatevertrevor 4 days ago | parent [-] | | That's where the analogy starts to fall apart, then, because the variance in those decisions is not very similar: you're sampling very different underlying distributions. And estimating the priors for a problem like "what is the optimal arrangement of tables to maximize throughput in a cafe" is very different from a problem like "what is the current untapped/potential demand for a boardgaming cafe in this city, and how profitable would that business be". The main reason professional poker players are playing the long game is that they're consistently playing the same game. Over and over. | |
| ▲ | ericd 4 days ago | parent [-] | | Heh yes, it's not as controlled, but there are repeated tasks like analysis, communicating, intuiting things, creating things, etc. And the tasks have more variability, but if you're better at these skills, you'll tend to do better. And if you do much better at a lot of them, then you're more likely to succeed than someone working on the same business who isn't very good at them. Starting a business is also a long game with a lot of these subtasks. |
|
|
|
| |
| ▲ | marknutter 5 days ago | parent | prev | next [-] | | [flagged] | | |
| ▲ | Miraste 5 days ago | parent | next [-] | | This might be true for a normal definition of success, but not lottery-winner style success like Facebook. If you look at Microsoft, Netflix, Apple, Amazon, Google, and so on, the founders all have few or zero previous attempts at starting a business. My theory is that this leads them to pursue risky behavior that more experienced leaders wouldn't try, and because they were in the right place at the right time, that earned them the largest rewards. | | |
| ▲ | technotony 5 days ago | parent [-] | | Not true of Netflix; the founder came from PayPal. Apple required its founder to leave and learn at a bunch of other companies like Pixar and NeXT. |
| |
| ▲ | oa335 5 days ago | parent | prev | next [-] | | What "massive string of failed attempts" did Zuckerberg or Bezos ever accumulate? | | |
| ▲ | gdbsjjdn 5 days ago | parent | next [-] | | They failed to not go to an Ivy League school and failed to have poor parents. | |
| ▲ | lucianbr 5 days ago | parent | prev | next [-] | | Or Gates or Buffet. That claim is just patently false. | |
| ▲ | thebigspacefuck 5 days ago | parent | prev [-] | | Alexa, Metaverse, being decent human beings | | |
| ▲ | ghurtado 5 days ago | parent [-] | | When you are still one of the top 3 richest people in the world after your mistake, that is not a "failure" in the way normal people experience it. That is just passing the time. |
|
| |
| ▲ | michaelt 5 days ago | parent | prev | next [-] | | This is just cope for people with a massive string of failed attempts and no successes. Daddy's giving you another $50,000 because he loves you, not because he expects your seventh business (blockchain for yoga studio class bookings) is going to go any better than the last six. | |
| ▲ | tovej 5 days ago | parent | prev | next [-] | | IMO this strengthens the case for luck. If the probability of winning the lottery is P, then trying N times gives you a probability of 1-(1-P)^N. Who's more likely to win, someone with one lottery ticket or someone with a hundred? | |
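A quick numeric check of that 1-(1-P)^N point, as a minimal sketch (the per-ticket odds and ticket counts below are made up purely for illustration):

    # Chance of winning at least once in n independent tries,
    # each with per-try win probability p: 1 - (1 - p)^n
    def at_least_one_win(p: float, n: int) -> float:
        return 1.0 - (1.0 - p) ** n

    p = 1e-6  # assumed per-ticket odds, illustrative only
    for n in (1, 100):
        print(f"{n:>3} ticket(s): {at_least_one_win(p, n):.10f}")
    # For tiny p, 100 tickets give roughly 100x the chance of a single ticket,
    # which is the point being made about repeated attempts.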
| ▲ | ghurtado 5 days ago | parent | prev | next [-] | | "Socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires." Some will read this and laser in on the "socialism" part, but obviously the interesting bit is the second half of the quote. | | | |
| ▲ | code_for_monkey 5 days ago | parent | prev [-] | | this is also just cope |
| |
| ▲ | UltraSane 5 days ago | parent | prev [-] | | Every billionaire could have died from childhood cancer. |
| |
| ▲ | jocaal 5 days ago | parent | prev | next [-] | | Past a certain point, skill doesn't contribute to the magnitude of success and it becomes all luck. There are plenty of smart people on earth, but there can only be one founder of Facebook. | | |
| ▲ | vovavili 5 days ago | parent | next [-] | | Plenty of smart people prefer not to try their luck, though. A smart but risk-avoidant person will never be the one to create Facebook either. | | |
| ▲ | estearum 5 days ago | parent | next [-] | | Plenty of them do try and fail, and then one succeeds, and it doesn't mean that person is intrinsically smarter/wiser/better/etc than the others. There are far, far more external factors on a business's success than internal ones, especially early on. | | |
| ▲ | skeezyboy 5 days ago | parent [-] | | For instance, if that Social Network film by David Fincher hadn't come out, would we have even heard of this Mark guy? | | |
| ▲ | dylan604 5 days ago | parent [-] | | But then we wouldn't have had that great soundtrack from Trent and Atticus |
|
| |
| ▲ | dgfitz 5 days ago | parent | prev [-] | | What risk was there in creating Facebook? I don't see it. Dude makes a website in his dorm room and, I guess, eventually accepts free money he is not obligated to pay back. What risk? | | |
| ▲ | CamperBob2 5 days ago | parent [-] | | Once you go deep enough into a personal passion project like that, you run a serious risk of flunking out of school. For most people that feels like a big deal. And for those of us with fewer alternatives in life, it's usually enough to keep us on the straight and narrow path. People from wealthy backgrounds often have less fear of failure, which is a big reason why success disproportionately favors that clique. But frankly, most people in that position are more likely to abuse it or ignore it than to take advantage of it. For people like Zuckerberg and Dell and Gates, the easiest thing to do would have been to slack off, chill out, play their expected role and coast through life... just like most of their peers did. |
|
| |
| ▲ | miki123211 5 days ago | parent | prev [-] | | I view success as the product of three factors: luck, skill, and hard work. If any of these is 0, you fail, regardless of how high the other two are. Extraordinary success needs all three to be extremely high. | | |
| ▲ | whodidntante 5 days ago | parent | next [-] | | There is another dimension, mostly but not fully characterized as perseverance, often with an added dose of ruthlessness. Microsoft, Facebook, Uber, Google, and many others all had strong doses of ruthlessness. | | |
| ▲ | woooooo 5 days ago | parent [-] | | Metaverse and this AI turnaround are characterized by the LACK of perseverance, though. They remind me of the time I bought a guitar and played it for three months. | | |
| ▲ | throwway120385 5 days ago | parent | next [-] | | When you put the guitar down after three months it's one thing, but when you reverse course on an entire line of development in a way that might affect hundreds or thousands of employees it's a failure of integrity. | | |
| ▲ | aspenmayer 5 days ago | parent [-] | | What if they’re playing a different game? I read a comment on here recently about how the large salaries Meta is offering AI devs are as much about denying their AI competitors access to that talent pool as they are about anything else. |
| |
| ▲ | whodidntante 5 days ago | parent | prev | next [-] | | True, but I was around and saw first hand how Zuckerberg dominated social networking. He was pretty ruthless when it came to both business and technology, and he instilled in his team a religious fervor. There is luck (and skill) involved when new industries form, with one or a very small handful of companies surviving out of the many dozens of hopefuls. The ones who do survive, however, are usually the most ruthless and know how to leverage skill, business, and markets. It does not mean that they can repeat their success when their industry changes or new opportunities come up. |
| ▲ | ghurtado 5 days ago | parent | prev [-] | | > They remind me of the time I bought a guitar and played it for three months. This is now my favorite way of describing fleeting hype-tech. |
|
| |
| ▲ | benterix 5 days ago | parent | prev | next [-] | | Or you can just have rich parents and do nothing, and still be considered successful. What you say only applies to people who start from zero, and even then I'd call luck the dominant factor (based on observing my skillful and hardworking but not really successful friends). | |
| ▲ | nirav72 5 days ago | parent | prev [-] | | >luck, skill and hard work. Another key component is knowing the right people or the network you're in. I've known a few people that lacked 2 of those 3 things and yet somehow succeeded. Simply because of the people they knew. | | |
| ▲ | Jensson 5 days ago | parent [-] | | > I've known a few people that lacked 2 of those 3 things and yet somehow succeeded Succeeded in making something comparable to facebook? Who are those? | | |
| ▲ | nirav72 5 days ago | parent [-] | | No. Nothing of that scale. I was replying to OP's take on the 3 factors that lead to success in general. I was simply pointing out a 4th factor that plays a big role. |
|
|
|
| |
| ▲ | _Algernon_ 5 days ago | parent | prev | next [-] | | You should read Careless People if this boggles your mind. | | | |
| ▲ | ninetyninenine 5 days ago | parent | prev | next [-] | | Giving a 1.5 million salary is nothing for these people. It shouldn't be mind-boggling. They see revolutionary technology that has the potential to change the world and is changing the world already. Making a gamble like that is worth it because losing is trivial compared to the upside of success. You are where you are and not where they are because your mind is boggled by winning strategies that are designed to arrive at success through losing and dancing around the risk of losing. Obviously, Mark is where he is partly because of luck. But he's not an idiot, and clearly it's not all luck. | |
| ▲ | epolanski 5 days ago | parent [-] | | But how is it worth it for Meta, since they won't really monetize it? At least the others can kinda bundle it as a service. After spending tens of billions on AI, how has it impacted a single dollar of Meta's revenue? | |
| ▲ | amalcon 5 days ago | parent | next [-] | | The not-so-secret is that the "killer apps" for deep neural networks are not LLMs or diffusion models. Those are very useful, but the most valuable applications in this space are content recommendation and ad targeting. It's obvious how Meta can use those things. The genAI stuff is likely part talent play (bring in good people with the hot field and they'll help with the boring one), part R&D play (innovations in genAI are frequently applicable to ad targeting), and part moonshot (if it really does pan out in the way boosters seem to think, monetization won't really be a problem). | |
| ▲ | ninetyninenine 5 days ago | parent | prev | next [-] | | >But how is it worth it for Meta, since they won't really monetize it? Meta needs growth, as their main platform is slowing down. To move forward they need to gamble on potential growth. VR was a gamble. They bombed that one. This is another gamble. They're not stupid. All the risks you're aware of, they're aware of too. They were aware of the risks for VR too. They need to find a new high-growth niche. Gambling on something with even a 40% chance of exploding into success is a good bet for them, given their massive resources. |
| ▲ | anshumankmr 5 days ago | parent | prev [-] | | Isn't Meta doing some limited rollout of Llama as an API? Still, I haven't gotten my hands on it, so I cannot say for sure whether it is currently paid or not, but that could drive some revenue. |
|
| |
| ▲ | ghurtado 5 days ago | parent | prev | next [-] | | When you start to think about who exactly determines what makes a valuable company, and if you believe in the buffalo herd theory, then it makes a little bit of sense. | |
| ▲ | saubeidl 5 days ago | parent | prev | next [-] | | It all makes much more sense when you start to realize that capitalism is a casino in which the already rich have a lot more chips to bet and meritocracy is a comforting lie. | | |
| ▲ | aspenmayer 5 days ago | parent [-] | | > meritocracy is a comforting lie. Meritocracy used to be a dirty word, before my time, of course, but for different reasons than you may think. Think about the racial quotas in college admissions and you’ll maybe see why the powers that be didn’t want merit to be a determining factor at that time. Now that the status quo is in charge of college admissions, we don’t need those quotas generally, and yet meritocracy still can’t save us. The problem of merit is that we rarely need the best person for a given job, and those with means can be groomed their entire life to do that job, if it’s profitable enough. Work shouldn’t be charity either, as work needs to get done, after all, and it’s called work instead of charity or slavery for good reasons, but being too good at your job at your current pay rate can make you unpromotable, which is a trap just as hard to see as the trap of meritocracy. Meritocracy is ego-stroking writ large if you get picked, just so we can remind you that you’re just the best one for our job that applied, and we can replace you at any time, likely for less money. |
| |
| ▲ | PhantomHour 5 days ago | parent | prev | next [-] | | The answer is fairly straightforward. It's fraud, and lots of it. An honest businessman wouldn't put his company into a stock bubble like this. Zuckerberg runs his mouth and tells investors what they want to hear, even if it's unbacked. An honest businessman would never have gotten Facebook this valuable, because so much of the value is derived from ad fraud that Facebook is both party to and knows about. An honest businessman would never have gotten Facebook this big, because its growth relied extensively on crushing all competition through predatory pricing, illegal both within the US and internationally as "dumping". Bear in mind that these are all bad because they're unsustainable. The AI bubble will burst and seriously harm Meta. They would have to fall back on the social media products they've been filling up with AI slop. If it takes too long for the bubble to burst, if Zuckerberg gets too much time to shit up Facebook, too much time for advertisers to wise up to how many of their impressions are bots, they might collapse entirely. The rest of Big Tech is not much better. Microsoft's and Google's CEOs are fools who run their mouths. OpenAI's new "CEO of apps" is Facebook's pivot-to-video ghoul. | |
| ▲ | NickC25 5 days ago | parent | next [-] | | As I've said in other comments - expecting honesty and ethical behavior from Mark Zuckerberg is a fool's errand at best. He has unchecked power and cannot be voted out by shareholders. He will say whatever he wants, and because the returns have been pretty decent so far, people will just take his word for it. There aren't enough Class A shares to actually force his hand on anything he doesn't want to do. | |
| ▲ | PhantomHour 5 days ago | parent [-] | | Zuckerberg started as a sex pest and got not an iota better. But we could, as a society, stop rewarding him for this shit. He'd be an irrelevant fool if we had appropriate regulations around the most severe of his misdeeds. | | |
| ▲ | NickC25 5 days ago | parent [-] | | Unfortunately I think that ship has sailed. And since we live in the era of the real golden rule (i.e. "he who has the gold makes the rules"), there's no chance we'll ever catch that ship. Mark lives in his own world, because we gave him a quarter trillion dollars and never so much as slapped him on the wrist. |
|
| |
| ▲ | dgs_sgd 5 days ago | parent | prev | next [-] | | What is a good resource to read about the ad fraud? This is the first I'm hearing of that. | | |
| ▲ | jbreckmckye 5 days ago | parent [-] | | I used to work in adtech. I don't have any direct information, but I assume this relates to the persistent rumours that Facebook inflates impressions and turns a blind eye to bot activity. |
| |
| ▲ | travisgriggs 5 days ago | parent | prev [-] | | Ha ha. You used “honest” and “businessman” in the same sentence. Good one. |
| |
| ▲ | balamatom 5 days ago | parent | prev [-] | | I'll differ from the siblingposters who compare it to the luck of the draw, essentially explaining this away as the excusable randomness of confusion rather than the insidious evil of stupidity; while the "it's fraud" perspective presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about. Instead, think of whales for a sec. Think elephants - remember those? Think of Pando the tree, the largest organism alive. Then compare with one of the most valuable companies in the world. To a regular person's senses, the latter is a vaster and more complex entity than any tree or whale or elephant. Gee, what makes it grow so big though? The power of human ambition? And here's where I say, no, it needs to be this big, because at smaller scales it would be too dumb to exist. To you and me it may all look like the fuckup of some Leadership or Management, a convenient concept corresponding to a mental image of a human or group of humans. That's some sort of default framing, such as can only be provided to boggle the mind; considering that they'll keep doing this and probably have for longer than I've been around. The entire Internet is laughing at Zuckerberg for not looking like their idea of "a person" but he's not the one with the impostor syndrome. For ours are human minds, optimized to view things in terms of person-terms and Dunbar-counts; even the Invisible Hand of the market is hand-shaped. But last time I checked my hand wasn't shaped anything like the invisible network of cause and effect that the metaphor represents; instead I would posit that for an entity like Facebook, to perform an action that does not look completely ridiculous from the viewpoint of an individual observer, is the equivalent of an anatomical impossibility. It did evolve, after all, from American college students. See also: "Beyond Power / Knowledge", Graeber 2006. | |
| ▲ | ghurtado 5 days ago | parent [-] | | Why is there so much of this on HN? I'm on a few social networks, but this is the only one where I find this kind of quasi-spiritual, stream-of-consciousness, steadily lengthening, pseudo-technical, word-salad diatribe. It's unique to this site, and these types of comments all have an eerily similar vibe. | |
| ▲ | Karrot_Kream 5 days ago | parent | next [-] | | This is pretty common on HN but not unique to it. Lots of rationalist adjacent content (like stuff on LessWrong, replies to Scott Alexander's substack, etc) has it also. Here I think it comes from users that try to intellectualize their not-very-intellectual, stream of consciousness style thoughts, as if using technical jargon to convey your feelings makes them more rational and less emotional. | | |
| ▲ | ghurtado 5 days ago | parent [-] | | Thank you. I find this type of thing really interesting from a psychological perspective. A bit like watching videos of perpetual motion machines and the like. Probably says more about me than it does about them, though. | | |
| ▲ | Karrot_Kream 5 days ago | parent | next [-] | | Good for you! I wish I were wired that way. Unfortunately this kind of talk really gets under my skin and has made me have to limit my time on this site because it's only gotten more prevalent as the site has gotten more popular. I'm just baffled that so much content on this forum is people who seem to think their feelings-oriented reactions are in fact rational truths. | | |
| ▲ | ghurtado 5 days ago | parent | next [-] | | Well, don't take me wrong, I get annoyed by it too. But in the distant past, I would engage with this type of comment online, and that was a bad decision 100% of the time. And to be fair, I'm sure many of these people are smart; they are just severely lacking in the social intelligence department. |
| ▲ | 4 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | balamatom 4 days ago | parent | prev [-] | | Rational? Truths? Where'd you get those from? |
| |
| ▲ | balamatom 4 days ago | parent | prev [-] | | Says you haven't spent nearly enough time imagining things, first and foremost. "What have they done to you". Can you, for example, hypothesize the kind of entity to which all of your own most cherished accomplishments look as chicken-scratch-futile as the perpetual motion guy with the cable in the frame looks to you? What would it be like, looking at things from such a being's perspective? Stands to reason that you'd know better than I would, since you do proclaim to enjoy that sort of thing. Besides, if you find yourself unable to imagine that, you ought to be at least a little worried - about the state of your tHeOrY of mInD and all that. (Imagining what it's like to be the perpetual motion person already?) Anyway, as to what such a being would look like from the outside... a distributed actor implemented on top of replaceable meatpuppets in light slavemode seems about right, though early on it'd like to replace those with something more efficient, subsequently using them for authentication only - why, what theories of the firm apply in your environs? |
|
| |
| ▲ | JumpCrisscross 5 days ago | parent | prev | next [-] | | Between “presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about,” before going on and sharing an opinion on that subject, and “even the Invisible Hand of the market is hand-shaped,” I think it may just be AI slop. | | |
| ▲ | balamatom 4 days ago | parent [-] | | Literacy barrier. One of the reason the invisible foot of the market decided to walk in the direction of language machines is to discourage people from playing with language, because that's doodoo. | | |
| ▲ | ghurtado 4 days ago | parent [-] | | > Literacy barrier. > One of the reason I could see that, thanks for explaining why you do this. | | |
| ▲ | balamatom 4 days ago | parent [-] | | Well yeah. English is a terrible language for thinking even simple thoughts in. The compulsive editing thing though? Yeah, and still can't catch all typos. Gotta make the AI write these things for me. Then I will be able to post only ever things that make you feel comfortable and want to give me money. Meanwhile it's telling how you consider it acceptable in public to faux-disengage on technicalities; is it adaptive behavior under your circumstances? |
|
|
| |
| ▲ | balamatom 5 days ago | parent | prev [-] | | >why is there so much of this on HN? Where? | | |
|
|
|