vntok 4 hours ago
2 years ago, LLMs failed at answering coherently. Last year, they failed at answering fast on optimized servers. Now, they're failing at answering fast on underpowered handheld devices... I can't wait to see what they'll be failing to do next year.

ezst 4 hours ago
Probably the one elephant-in-the-room thing that matters: failing to say they don't know or can't answer.

BirAdam an hour ago
The speed on a constrained device isn't entirely the point. Two years ago, LLMs failed at answering coherently. Now... "You're absolutely right. Now, LLMs are too slow to be useful on handheld devices, and the future of LLMs is brighter than ever." LLMs can be useful, but quite often the responses are about as painful as LinkedIn posts. Will they get better? Maybe. Will they get worse? Maybe.