kccqzy 8 hours ago
But do you actually treat LLMs as glorified autocomplete, or as puzzle solvers you hand difficult tasks beyond your own intellect? Recently I wrote a data transformation pipeline and added a note that the whole pipeline should be idempotent. I asked Claude to prove it or find a counterexample. It found a counterexample after 25 minutes of thinking; I estimate it would have taken me far longer, perhaps a whole day. I couldn’t care less about using Claude to type code I already knew.
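(Idempotence here means that running the pipeline a second time changes nothing: f(f(x)) == f(x). Below is a minimal Python sketch of the property being checked; the transforms are hypothetical stand-ins, not the commenter's actual pipeline.)

    # Hypothetical transforms; illustrative only, not the commenter's pipeline.
    def normalize(record: dict) -> dict:
        # Idempotent: stripping and lowercasing a second time is a no-op.
        return {k: v.strip().lower() for k, v in record.items()}

    def tag(record: dict) -> dict:
        # NOT idempotent: every run appends another suffix.
        return {k: v + "_processed" for k, v in record.items()}

    def is_idempotent_on(f, sample) -> bool:
        # Check f(f(x)) == f(x) on one concrete input.
        once = f(sample)
        return f(once) == once

    record = {"name": "  Alice  "}
    print(is_idempotent_on(normalize, record))  # True
    print(is_idempotent_on(tag, record))        # False: "_processed_processed"

(A single passing input is only evidence, not a proof; a counterexample like tag above, however, is conclusive, which is why "prove it or find a counterexample" is a well-posed task.)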
CoolGuySteve 8 hours ago
"give them difficult tasks beyond your own intellect?" Lol no, I've yet to find a model with those properties. Sounds like a fast track to AI psychosis. The domain I work in doesn't have enough public documentation for these models to be particularly helpful without a lot of handholding though. | ||||||||||||||||||||||||||
| ||||||||||||||||||||||||||
shimman 8 hours ago
This says more about you than about the "intellect" of these nondeterministic probability programs. Can you provide actual context: what was beyond your ability, and how were you able to determine that the solution was correct? I keep finding that comments referencing the "magical incantation" tend to be full of hot air. Maybe yours is different.
| ||||||||||||||||||||||||||