nyeah (2 hours ago):
Good point. We choose certain tests to perform, and we choose certain test results to pay attention to. We don't just keep chatting about (reviewing) the code; we do something else. If lies are all we have, then how is this behavior possible?
ajross (2 hours ago, in reply):
LLMs can write and run tests too, though. You're cherry-picking my little bit of wordsmithing. Obviously we aren't always wrong. I'm saying that our thought processes stem from hallucinatory connections and are routinely wrong on the first cut, just like those of an LLM. Actually, I'm going further than that: I'm saying the first-cut token stream out of an AI is significantly more reliable than our personal thoughts. Certainly more reliable than mine, and I like to think I'm pretty good at this stuff.