liquid_thyme a day ago
I like to think of LLMs as idiot savants: exceptional at certain tasks, but they might also eat the tablecloth if you stop paying attention at the wrong time. With humans, you can at least interview and select for a more normalized distribution of outcomes, with outliers being less probable, though not impossible.