twodave | 2 days ago
I mostly agree, though personally I expect LLMs to basically give me whitewashing. They don't innovate. They don't push back enough or take a step back to reset the conversation. They can't even remember something I told them not to do two messages ago unless I twist their arm. This is what they are, as a technology. They'll get better, and I think there's some impact coming from this, but it's not the doomsday scenario people are pretending it is.

We're talking about trying to build a thing we don't even truly understand ourselves. It reminds me of That Hideous Strength, where the scientists try to imitate life by pumping blood into the post-guillotine head of a famous scientist. Like, we can make LLMs do things where we point and say, "See! It's alive!" But in the end people are still pulling all the strings, and there's no evidence that this is going to change.
ben_w | a day ago | parent
Yup, I think that's fair. I'm not sure how many humans know how to be genuinely innovative, nor whether it's learnable; and even assuming it is learnable, whether known ML is sample-efficient enough to learn that skill from however many examples currently exist.

As you say, we don't understand what we're trying to build. It's remarkable how far we've got without understanding what we build: for all that "cargo cult" has been seen as a negative from the 20th century onwards, we didn't understand chemistry for thousands of years yet still managed cement, getting metals from ores, explosives, etc. Then we did figure out chemistry, and one of the Nobel prizes in it led to both chemical weapons and cheap fertiliser. We're all over the place.