lo_fye a day ago

>> It won’t design your domain layer with the care and consideration that comes from deep experience and hard-won knowledge.

What if every time you had an Aha! moment, you blogged about it in detail? Many people do. AI ingests those blog posts and uses what they say when writing new code or assessing existing code. It does use hard-won knowledge; it just wasn't hard-won by the AI itself.

DontchaKnowit a day ago | parent | next [-]

The problem is that someone else's aha might not apply to your situation. The AI can't reason and generalize like a human can to apply lessons learned by someone else to your slightly different situation.

ath3nd a day ago | parent | prev [-]

To me, knowledge is about knowing things; intelligence is about being able to apply your knowledge in the right context and for the right reasons.

The current crop of LLMs has a lot of knowledge but severely lacks the "intelligence" part. Sure, an LLM can "guess" how to write a unit test consistent with your codebase (based on examples), but for the 1% of cases where you need to deviate from the rule, it's completely clueless about how to do it properly. Guessing is not intelligence, although it might masquerade as such.

Don't get me wrong, the "guessing" part is sometimes uncannily good, but it just can't replace real reasoning.