KoolKat23 2 days ago
I understand your viewpoint. LLMs these days have reasoning and can learn in context. They do touch reality: your feedback. It has even been shown mathematically. Other people's scientific papers are critiqued and corrected as new feedback arrives; this is no different from Claude Code bash-testing and fixing its own output errors recursively until the code works. They already deal with unknown combinations all day: our prompts. Yes, it is brittle, though, and they are not very intelligent yet.