lolz404 2 days ago
This article does little to support its claims, but it's a good primer for diving into some topics. These are cool new tools; use them where you can, but there is still a ton of research left to do. Just lols at the hubris that Silicon Valley will make something so smart it drives humankind extinct. The lack of water and a heated planet will get us there first :)

The stochastic parrot argument is still debated, though the debate is more nuanced than before. The original author still stands by the statement.

Evidence of internal planning varies per model. Anthropic's attribution graphs research, with its rhyming example, did support it, but Gemma didn't.

The idea of "understanding" is still up for debate as well. Sure, when models are directly trained on data, there is some representation. The Othello-GPT studies were one way to support this, but the model was trained directly on game data, so some internal representation was created.

Out-of-distribution tasks will collapse into confabulation. Apple's GSM-Symbolic research seems to support that.

Chain of thought is a helpful tool but untrustworthy at best. Anthropic themselves have shown this: https://www.anthropic.com/research/reasoning-models-dont-say...