▲ | kuhewa 9 days ago |
> Writing is complex. LLMs once had subhuman performance, and now they can easily replace mediocre human performance; since they are tuned to provide answers that appeal to humans, that is especially true for these subjective-value use cases. Chip design doesn't seem very similar. It seems like a case where specifically trained tools would be of assistance. As much as generalist LLMs have surprised with their skill at specific tasks, for some things it is very hard to see how training on a broad corpus of text could outperform specialized tools. Regarding the first paragraph: do you really think it is not dubious to expect a model trained on text to outperform Stockfish at chess?
▲ | tim333 9 days ago | parent |
When people say LLM, I think they are often thinking of neural-network approaches in general rather than just text-based models, even though the letters do stand for "language model". And there's overlap: e.g. Gemini does language but is multimodal. If you set the text-only restriction aside, you get things like AlphaZero, which did beat Stockfish: https://en.wikipedia.org/wiki/AlphaZero