positron26 2 hours ago

> most of the people that were the loudest won't say they were wrong

I was fully expecting this wind-up to be aimed at those peddling the "AI is hype" laziness.

It's laziness because they have little CS fundamentals to base such claims on. The deductions can be made; they just aren't clear to people who would need to study a lot more first.

It's like watching an invisible train (visible to those with strong CS) rolling down the tracks at a leisurely pace. Those sitting in their stalled car on the tracks are busy tweeting about the "AI HYPE TRAIN." Until it wrecks their car, the gimmick is free oxygen. It's a lot easier to write articles than it is to build GPUs and write programs.

plastic-enjoyer an hour ago

> It's laziness because they have little CS fundamentals to base such claims on

So, what CS fundamentals do you need to evaluate whether AI is the real thing or will disappoint in the future? Coding agents were met with skepticism until a few months ago, when Anthropic introduced its new model and, with it, a hype train that cannot be rationally justified.

Look, SOTA LLMs, and coding agents in particular, are impressive. However, current predictions about the future of software development (and the world in general) are speculative. There is little to no data showing whether AI can deliver on its promises. How could there be in this short time frame? No one knows what the future will hold, no one knows how coding agents will be integrated into our work life and everyday life in the long run, or what hard limitations they will reveal.

No one can tell you how professions will change in the coming years; every prediction is purely speculative, and anyone making prophecies is either trying to cope with the uncertainty themselves or has some stake in the AI bet. It would be nice if people were actually humble enough to admit that they have no idea what will happen in the future, instead of writing the hundredth doom-and-gloom post.

positron26 an hour ago

> However, current predictions about the future of software development (and the world in general) are speculative.

It's amazing to me how those willing to seize on the speculative nature of any uncertainty whatsoever cannot recognize the inherent uncertainty of the inverse claim.

> what CS fundamentals do you need

1. Tarski's undefinability theorem
2. Gödel's incompleteness theorems
3. The Curry-Howard correspondence

Plus a lot of exposure to deductive reasoning and at least a passing familiarity with automated theorem proving and formalization.
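For readers unfamiliar with the third item: the Curry-Howard correspondence identifies propositions with types and proofs with programs. A minimal illustration (my own sketch, not from the thread) in Lean, where writing a function of a given type literally is a proof of the corresponding proposition:

```lean
-- Curry-Howard: a proposition is a type; a proof is a program of that type.
-- The identity function proves P → P.
def selfImplies (P : Prop) : P → P :=
  fun p => p

-- Function composition proves transitivity of implication:
-- given P → Q and Q → R, we construct P → R.
def implTrans (P Q R : Prop) : (P → Q) → (Q → R) → (P → R) :=
  fun pq qr p => qr (pq p)
```

This is the bridge between "writing programs" and "proving theorems" that automated theorem proving and formalization build on.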

I won't pretend it's easy, but let's be clear: a small fraction of people who know things are being forced to entertain the hysteria of a vast majority who are unwilling to know things, who just go around beating their chests, and who will continue doing so until the train hits them.

There are 2-3 minor architectural changes between now and what I would identify as a completely unbounded AGI with clearly discernible dynamic, self-defined objective functions and self-defined procedures for training and inference. It can be done in megabytes. Oh god. Get me out of this forum. I wish to return to my code editor.

Flashtoo 33 minutes ago

What exactly are you claiming here? That a handful of theorems about the limits of mathematics and provability somehow combine to show that the current LLM-based AI developments will inevitably live up to what is expected of them? And that this is obvious to a select few? That all seems unlikely, to say the least.