ed_elliott_asc 13 hours ago
It all sounds a bit too marketing-y to me: "we have this amazing model that is too good to release," but the goal is still AGI? OK, right.
estearum 13 hours ago
What's incoherent about that?
upmind 13 hours ago
The goal for Anthropic is safe AGI. A) This model is dangerous in the hands of consumers. B) They do not want China to train on these models.
| |||||||||||||||||||||||||||||
9x39 13 hours ago
The missing piece is the reminder that scarcity still exists. Whether it's actual scarcity, hype building, or a bit of column A and a bit of column B is TBD. Then again, the new models seem more expensive, they slashed the tokens thrown around in thinking, and put up rate-limit speedbumps, so it's probably not all gaslighting about compute bottlenecks.