| ▲ | positron26 2 hours ago | |
> However, current predictions about the future of software development (and the world in general) are speculative.

It's amazing to me how those willing to seize on the speculative nature of ANY uncertainty cannot recognize the inherent uncertainty of the inverse.

> what CS fundamentals do you need

1. Tarski's undefinability theorem
2. Gödel's incompleteness theorems
3. The Curry-Howard correspondence

Plus a lot of exposure to deductive reasoning, and at least a vague idea of automated theorem proving and formalization. I won't pretend it's easy, but let's be clear: a small fraction of people who know things are being forced to entertain the hysteria of a vast majority who are unwilling to know things, who just go around beating their chests and will continue doing so until the train hits them.

There are 2-3 minor architectural changes between now and what I would identify as a completely unbounded AGI with clearly discernible dynamic, self-defined objective functions and self-defined procedures for training and inference. It can be done in megabytes.

Oh god. Get me out of this forum. I wish to return to my code editor.
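(An aside for readers who haven't met the Curry-Howard correspondence named above: it identifies types with propositions and programs with proofs. A minimal sketch in Haskell; the function name here is my own illustration, not something from the thread:)

```haskell
-- Under Curry-Howard, a type is a proposition and a program of that
-- type is a proof of it. The type below reads as the proposition
-- "if A implies B, and B implies C, then A implies C"; writing a
-- total function of this type constitutes a proof.
transitivity :: (a -> b) -> (b -> c) -> (a -> c)
transitivity f g = g . f  -- the proof: compose the two implications

main :: IO ()
main = print (transitivity (+1) (*2) (3 :: Int))  -- ((3+1)*2) = 8
```

The same idea, carried much further, is what proof assistants like Coq and Lean are built on.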
| ▲ | Flashtoo 2 hours ago | parent [-] | |
What exactly are you claiming here? That a handful of theorems about the limits of mathematics and provability somehow combine to show that the current LLM-based AI developments will inevitably live up to what is expected of them? And that this is obvious to a select few? That all seems unlikely, to say the least.