| ▲ | pennaMan 7 hours ago |
> It’s literally text prediction, isn’t it?
You are discovering that the favorite luddite argument is bullshit.
|
| ▲ | ericjmorey 6 hours ago | parent | next [-] |
| I don't consider these researchers luddites. https://machinelearning.apple.com/research/illusion-of-think... https://arxiv.org/abs/2508.01191 |
|
| ▲ | DrewADesign 7 hours ago | parent | prev [-] |
| Feel free to elucidate if you want to add anything to this thread other than vibes. |

| ▲ | electroglyph 7 hours ago | parent [-] |
After you go from millions of params to billions+, models start to get weird (depending on training). Just look at any number of interpretability research papers; Anthropic has some good ones.

| ▲ | HumanOstrich 6 hours ago | parent | next [-] |
> things start to get weird
> just look at research papers
You didn't add anything other than vibes either.

| ▲ | Barbing 3 hours ago | parent | prev | next [-] |
Interesting, what kind of weird?

| ▲ | DrewADesign 6 hours ago | parent | prev [-] |
Getting weird doesn’t mean that calling it text prediction is ‘bullshit’. Text prediction isn’t pejorative…
|
|