hnlmorg 2 hours ago
I’ve worked in the AI space and I understand how LLMs work in principle. But we don’t know the magic contained within a model after it’s been trained. We understand how to design a model, and how models work at a theoretical level, but we cannot know how well one will perform at inference until we test it. So much of AI research is just trial and error, with different dials repeatedly tweaked until we get something desirable.

So no, we don’t understand these models in the same way we might understand how a hashing algorithm works. Or a compression routine. Or an encryption cypher. Or any other hand-programmed algorithm.

I also run Linux. But that doesn’t change how the two major platforms behave, nor the fact that, as software developers, we have to support those platforms.

Open source hardware is great, but it’s not in the same league as proprietary hardware on price and performance.

Agentic AI doesn’t make me feel hopeless either. I’m just describing what I’d personally define as a “golden age of computing”.