mmooss 5 hours ago
You don't trust it yet, like a new human assistant you might hire - will they be able to handle all the variables? Eventually, they earn your trust and you start offloading everything to their inbox.
paulryanrogers 5 hours ago
No, not like a human assistant. Competent humans use logical reasoning and non-digital signals like body language and audible cues, and they know the limits of their knowledge, so they are more likely to ask for missing input. Humans are also more predictable.
binarymax 5 hours ago
LLMs don’t learn. Their weights are static. You could try to fine-tune, or keep adding longer and longer context, but in the end you hit a wall.
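As a rough illustration of the "longer and longer context" dead end, here is a minimal Python sketch (not from the thread; the 128k-token window and the 4-characters-per-token estimate are assumptions): each turn is appended to a growing history, and because nothing is ever folded into the weights, the only options once the window fills are to drop or summarize older turns.

    # Minimal sketch of accumulating context until a fixed window overflows.
    # MAX_CONTEXT_TOKENS and the token heuristic are assumptions for illustration.

    MAX_CONTEXT_TOKENS = 128_000  # assumed model context limit

    def rough_token_count(text: str) -> int:
        # crude heuristic: roughly 4 characters per token
        return len(text) // 4

    history: list[str] = []

    def add_turn(turn: str) -> None:
        history.append(turn)
        used = sum(rough_token_count(t) for t in history)
        if used > MAX_CONTEXT_TOKENS:
            # the wall: nothing was "learned", so older turns must be
            # dropped or summarized, losing information either way
            raise RuntimeError(f"context window exceeded ({used} tokens)")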