nickysielicki 5 hours ago
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they're very good at treating your inputs to the discussion as amazing genius level insights.

This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.

> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn't happen when LLMs do the thinking. You get shallow, surface-level ideas instead.

Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees; they didn't even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow, surface-level understanding than to deeply contemplating the problem.
notahacker 5 hours ago (parent)
Have to admit I'm really struggling with the idea that the Wright brothers didn't do much thinking because they were self-taught. Never mind the idea that figuring out aeronautics by reading every publication they could get their hands on, intuiting wing warping, and experimenting with hand-built mechanical devices looks much like asking Claude to make a CRUD app...