SpicyLemonZest 3 hours ago:

I just don't think that's correct. When I ask Claude to solve something for me, it takes a number of actions on my computer which are neither writing text nor interpreting the done token. It executes the build, debugs tests, et cetera. Sometimes it spawns mini-mes when it thinks that would be helpful! I think calling all of this "autocomplete" is a category error, like saying you shouldn't talk about clicking buttons or running programs because it's all just electrically charged silicon under the hood.
happygoose an hour ago:

Technically, it does all that by outputting text, like `run_shell_command("cargo build")` as part of its response. But you could easily say similar things about humans. To me, "autocomplete" describes the purpose of a system more than how it functions, and these agents clearly aren't designed to autocomplete text so typing on a phone keyboard is a bit faster. I suspect people reach for "autocomplete" because it sounds trivial, small, and mundane, and they want LLMs to feel less impressive. It's a rhetorical trick, and a very overused one at this point.
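To make the point concrete: the model only ever emits text; a surrounding harness is what turns a string like `run_shell_command("cargo build")` into a real action. Here is a minimal, hypothetical sketch of such a loop. The names (`run_shell_command`, `fake_model`) are illustrative assumptions, not any vendor's actual API:

```python
import re
import subprocess

# Pattern the harness looks for in the model's output.
# run_shell_command is a made-up tool name for illustration.
TOOL_CALL = re.compile(r'run_shell_command\("([^"]+)"\)')

def fake_model(transcript: str) -> str:
    """Stand-in for an LLM: emits a canned tool call, then a final answer."""
    if "exit code" not in transcript:
        return 'run_shell_command("echo build ok")'
    return "Done: the build succeeded."

def agent_loop(prompt: str, max_steps: int = 5) -> str:
    transcript = prompt
    for _ in range(max_steps):
        reply = fake_model(transcript)
        match = TOOL_CALL.search(reply)
        if match is None:
            return reply  # no tool call in the reply: treat it as the final answer
        # The "action" is just text until the harness executes it.
        result = subprocess.run(match.group(1), shell=True,
                                capture_output=True, text=True)
        transcript += f"\n{reply}\nexit code {result.returncode}: {result.stdout}"
    return transcript

print(agent_loop("Please build the project."))
```

So in one sense the grandparent is right that it's "just text", but the text is an interface to real side effects, which is exactly why "autocomplete" undersells it.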
zzzeek an hour ago:

Yup, or: "I played a first-person shooter and shot lots of bad guys." Wrong! You pushed buttons on your PlayStation in response to graphical simulations, duh.