codebje 7 hours ago
It depends on the purpose of the model. AFAIK LLMs aren't particularly capable at researching answers, relying more on having 'truth' baked into their weights, so if it takes 12 months to train a crowd-trained LLM it'll be 12 months behind the times. How serious is the risk of poisoned weights? Can we leverage the cryptobros into using LLM training as a proof of work?
MarsIronPI 5 hours ago (parent)
What? I use Qwen 3.5 35B-A3B and it definitely knows how and when to do web searches to fill in gaps in its knowledge.
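(For anyone curious what "knows when to search" looks like in practice: a minimal sketch of the decision an agent loop makes. Everything here is hypothetical illustration, not Qwen's actual tool-use API; real models emit tool-call tokens rather than running a heuristic like this.)

```python
# Hypothetical sketch: an agent loop deciding whether to answer from its
# weights or dispatch a web search. Names and the cutoff are assumptions.

CUTOFF_YEAR = 2024  # assumed training cutoff, for illustration only

def needs_web_search(question: str, cutoff_year: int = CUTOFF_YEAR) -> bool:
    """Crude heuristic: search when the question mentions a year past the
    model's training cutoff. Real models make this call internally."""
    years = [int(tok) for tok in question.split()
             if tok.isdigit() and len(tok) == 4]
    return any(y > cutoff_year for y in years)

def answer(question: str) -> str:
    if needs_web_search(question):
        return f"[search] {question}"   # would dispatch to a web-search tool
    return f"[weights] {question}"      # answer from baked-in knowledge
```

The point is that the knowledge cutoff stops being a hard limit once the model can recognize a gap and fetch fresh information, which is what tool-capable local models do.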