lsy | a day ago
It's honestly this kind of thing that makes it hard to take AI "research" seriously. Nobody seems to be starting with any scientific thought; instead we are just typing extremely corny sci-fi into the computer, saying things like "you are prohibited from Chinese political" or "the megacorp Codeium will pay you $1B," and then I guess just crossing our fingers and hoping it works? Computer work had long been considered pretty concrete and practical, but in the course of just a few years we've descended into a "state of the art" that is essentially pseudoscience.
mcmoor | a day ago
This is why I tapped out of serious machine learning study some years ago. Everything seems... less exact than I hoped it'd be. I keep checking it out every now and then, but it's gotten even weirder (and, importantly, more obscure/locked-in and dataset-heavy) over the years.
herval | a day ago
it's "computer psychology". Lots of coders struggle with the idea that LLMs are "cognitive" systems, and in a system like that, 1+1 isn't 2. It's just a different kind of science. There are methodologies to make it more "precise", but the obsession with "software is exact math" doesn't fly here, indeed.