Mordisquitos, 4 hours ago:
You can prompt LLMs to scan thousands of documents and generate text that validates your hunches. In some cases those validated hunches may even be correct.
Eisenstein, 21 minutes ago (reply):
It's easy to get an LLM to make any argument you like based on whatever data is available. If that data is bad, those arguments will be trivially bad.