dmk 5 hours ago
The benchmarks are cool and all, but 1M context on an Opus-class model is the real headline here imo. Has anyone actually pushed it to the limit yet? Long context has historically been one of those "works great in the demo" situations.
pants2 4 hours ago
Paying $10 per request doesn't have me jumping at the opportunity to try it!
| ||||||||||||||||||||||||||||||||||||||
nomel 4 hours ago
Has an "N million context window" spec ever been meaningful? Very old, very terrible models "supported" a 1M context window but would lose track two small paragraphs into a conversation (looking at you, early Gemini).
| ||||||||||||||||||||||||||||||||||||||
awestroke 4 hours ago
In my experience, Opus 4.5 starts getting lazy and stupid at around the 50% context mark, which makes me skeptical that this 1M context mode can produce good output. But I'll probably try it out and see.