| ▲ | noduerme 9 hours ago |
| I think you hit the nail on the head. Without years of learning by doing, experience in the saddle as you put it, who would be equipped to judge or edit the output of AI? And as knowledge workers with hands-on experience age out of the workforce, who will replace us? The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true. We don't usually need to worry that a calculator might be giving us the wrong result, or an inferior one. It simply gives us an objective fact. Whereas the output of LLMs can be subjectively considered good or bad, even when it is accurate. So imagine teaching an architecture student to draw plans for a house with a calculator that spat out incorrect values 20% of the time, or silently developed an opinion about the height of countertops. You'd not just have a structurally unsound plan; you'd also have a student who'd failed to learn anything useful. |
|
| ▲ | hamasho 8 hours ago | parent | next [-] |
| > The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true.
This really resonates with me.
If calculators returned correct answers only 99.9% of the time, it would be impossible to reliably build even a small building with them.
We are using AI for a lot of small tasks inside big systems, or even for designing the entire architecture, and we still need to validate the answers ourselves, at least for the foreseeable future.
But outsourcing that thinking erodes the very brainpower we need for validation, because checking an answer often requires understanding the problem's detailed structure and the reasoning path behind it. In the current situation, by vibing and YOLOing our way through most problems, we are losing the very ability we still need and can't replace with AI or other tools. |
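To make the 99.9% point concrete, here is a rough back-of-the-envelope sketch; the calculation counts are illustrative assumptions, not figures from the thread:

```python
# If each individual calculation is correct with probability p, a design that
# depends on n independent calculations is only fully correct with probability
# p ** n. Even a "99.9% accurate" tool collapses quickly as n grows.

def p_all_correct(p: float, n: int) -> float:
    """Probability that every one of n independent calculations is correct."""
    return p ** n

print(p_all_correct(0.999, 100))     # ~0.90
print(p_all_correct(0.999, 1_000))   # ~0.37
print(p_all_correct(0.999, 10_000))  # ~0.000045
```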
| |
| ▲ | chickensong an hour ago | parent | next [-] | | If you don't have building codes, you can totally yolo build a small house, no calculator needed. It may not be a great house, just like vibeware may not be great, but you still have something. I'm not saying this is ideal, but maybe there's another perspective to consider as well, which is lowering barriers to entry and increasing ownership. Many people can't/won't/don't do what it takes to build things, be it a house or an app, if they're starting from zero knowledge. But if you provide a simple guide they can follow, they might end up actually building something. They'll learn a little along the way, make it theirs, and end up with ownership of their thing. As an owner, change comes from you, and so you learn a bit more about your thing. Obviously whatever a noob builds isn't likely to match the caliber of work from a professional who spent half their life in school and job training, but that might be ok. DIY is a great teacher and a motivator to continue learning. Contrast that with high barriers to entry, where nothing gets built and nothing gets learned, and the user is left dependent on the powers that be to get what he wants, probably overpriced, and with features he never wanted. If you're a rocket surgeon and suddenly outsource all your thinking to a new and unpredictable machine, while you get fat and lazy watching TV, that's on you. But for a lot of people who were never going to put in years of preparation just to do a thing, vibing their idea may be a catalyst for positive change. | |
| ▲ | zephen 7 hours ago | parent | prev [-] | | > If calculators returned correct answers only 99.9% of the time, it would be impossible to reliably build even a small building with them. I think past successes have led to a category error in the thinking of a lot of people. For example, the internet, and many constituent parts of the internet, are built on a base of fallible hardware. But the hardware errors we mitigate, whether equipment failures, alpha particles, or something else, are uncorrelated. If you had three uncorrelated calculators that each worked 99.99% of the time, and you used them to check each other, you'd be fine. But three seemingly uncorrelated LLMs? No fucking way. | | |
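A rough sketch of the redundancy arithmetic behind zephen's point; the error rate and the fully-correlated failure model are illustrative assumptions:

```python
# Majority-voting three independent 99.99%-correct calculators vs. three
# calculators that share a common blind spot (e.g. the same training data).

q = 1e-4  # per-unit error rate (99.99% correct)

# Independent errors: the 2-of-3 vote can only fail if at least two units are
# wrong at the same time (a conservative bound -- the wrong answers would also
# have to agree for the vote to pick a bad value).
p_fail_independent = 3 * q**2 * (1 - q) + q**3
print(p_fail_independent)  # ~3e-08, far better than any single unit

# Fully correlated errors: when one unit is wrong, all three are wrong together,
# so voting buys nothing over a single unit.
p_fail_correlated = q
print(p_fail_correlated)   # 1e-04, no better than a single unit
```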
| ▲ | noduerme 3 hours ago | parent | next [-] | | There's another category error compounding this issue: People think that because past revolutions in technology eventually led to higher living standards after periods of disruption, this one will too. I think this one is the exception for the reasons enumerated by the parent's blog post. | |
| ▲ | firejake308 4 hours ago | parent | prev [-] | | The LLMs are not uncorrelated, though; they're all trained on the same dataset (the Internet) and subject to most of the same biases. |
|
|
|
| ▲ | knollimar 5 hours ago | parent | prev | next [-] |
| It's funny: I'm working on getting LLMs to place electrical devices, and they silently developed the opinion that my switches above countertops should be at 4 feet and not the 3'10" I'm asking for (the top cannot be above 4'). |
| |
| ▲ | noduerme 3 hours ago | parent [-] | | That's quite funny, and almost astonishing, because I'm not an architect, and that scenario just came out of my head randomly as I wrote it. It seemed like something an architect friend of mine who passed away recently, and was a big fan of Douglas Adams, would have joked about. Maybe I just channeled him from the afterlife, and maybe he's also laughing about it. |
|
|
| ▲ | MrDarcy 8 hours ago | parent | prev [-] |
| On the other hand, the incorrect values may drive architects to think more critically about what their tools are producing. |
| |
| ▲ | noduerme 3 hours ago | parent [-] | | On the whole, not trusting one's own tools is a regression, not an advancement. The cognitive load it imposes on even the most capable and careful person can lead to all sorts of downstream effects. |
|