qsera 5 hours ago
> The scientific version of these claims is "the total encoding cost (for some class of tasks) is lower than previous models"

I wonder why? Can the new models read minds?

> For example, I was recently trying to install a package whose name I forgot. I prompted the model to "install that x11 fake gui thing", a trivial prompt.

Yes, they are better at search. I would also add that there is a subjective factor: if I enjoy writing code a lot more than reviewing it, I am going to prefer NOT using the model for writing and might use it only for review. So "hardness" also depends on how much you like or dislike doing the task yourself.
SOLAR_FIELDS 4 hours ago | parent
It does feel like with each new frontier model release, the major improvement I notice is that the model is, in fact, getting better at reading your mind. What I mean is that it gets better at understanding the nuance and subtlety of your intent, and at teasing out what you actually want, so it can build more of the world around less input. In a significant way, yes, newer models are reading your mind: they are probabilistically figuring out how most humans communicate in natural language and filling in the gaps.

Re writing code: most people find the writing of code to be a chore. For those that don't, I don't envy them, because that is the part that just got completely destroyed by AI. It's becoming pretty abundantly clear that if you enjoy hand-writing code, it will be a hobby rather than something you can do professionally while competing with people who aren't writing by hand.