winddude 3 hours ago
> This has very little to do with someone making the LLM too human but rather a core limitation of the transformer architecture itself.

It has almost everything to do with it. Models have been fine-tuned to generate outputs that humans prefer.