1oooqooq 2 days ago

Railways only lost investor money because everyone was investing to become the national monopoly, so when we did get the monopoly, everyone else lost everything. Sounds like a skill problem. Plenty of value was created and remained in use for decades, which is completely different from slop today.

windexh8er 2 days ago | parent

Not to mention that rail only got better as more was built out. With LLMs, the more you allow them to create, to scrape, and to replace deterministic platforms that can do the same thing better and faster, the further down the rabbit hole we all go.

I look around, and the only people shilling for AI seem to be selling it. Then there are those in a bubble, and it's all they hear day in and day out. We keep hearing how far the 'intelligence' of these models has come (models aren't intelligent). There are some low-hanging-fruit edge cases, but just today I spent an extra hour thinking I could shortcut a PoC by having LLMs bang out the framework. I tried the latest versions of Opus, Kimi, GLM, and Grok. For a very specific ask (it happened to be building a quick testing setup for PaddleOCR), none of them got it right. Even when I asked for very specific aspects of the solution I had in mind, Opus was off the rails and "optimizing" within a turn or two.

I probably ended up using about 20% of the structure it gave me, but I could just as easily have gone back to another project of mine where that framework actually had more thought put into it.

I really wish the state of the art were better. I don't use LLMs for search much, as I believe it's a waste of resources. But the polarization from C-level spin pieces, on top of the poor performance of general models on very specific asks, looks nothing like the age of rail.

Do I believe there are good use cases for small, targeted models built on rich training data? I do. But that's not the look and feel of most of what we're seeing out there today. The bulk of it is prompt engineering on top of general models. And the AI slop from the frontier players is so recognizable and overused by now that I can't believe anyone isn't looking at any of this and immediately second-guessing its validity. And these aren't hallucinations we're seeing, because these LLMs are not intelligent. They lack cognition; they are not truly thinking or reasoning.

Again: if LLMs were capable of mass replacement of workers today, OpenAI wouldn't be selling anyone a $20/month subscription, or even a $200 one. They'd be selling directly to those C-levels the utopia of white-collar replacement that doesn't exist today.