hodgehog11 4 days ago
This makes sense on its face, but the flaw in the logic is the implicit assumption that current procedures extract all of the information available in the datasets. We know this is not even remotely close to being true. Many decades ago, statisticians made a similarly erroneous assumption: that maximum likelihood estimators, which also minimize a cross-entropy, are "optimal" in the sense that no estimator could achieve lower error. The fact that you can do better with smarter regularisation is the key to why deep learning works in the first place.

I'm no shill for AI, but you're going to need a better argument for why runaway AI, up to obscene levels of performance, is not theoretically possible. Quite a few people, including some of my colleagues, are looking for one in earnest, but so far no one has found it.
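For what it's worth, the canonical instance of "regularisation beats the MLE" is the James–Stein estimator. A quick numpy sketch (dimension, noise level, and seed chosen arbitrarily for illustration) shows the shrunk estimate achieving lower mean squared error than the MLE, which in this Gaussian setup is just the raw observation:

    # Simulation: James-Stein shrinkage dominates the MLE for d >= 3.
    import numpy as np

    rng = np.random.default_rng(0)
    d, sigma, n_trials = 10, 1.0, 100_000
    theta = rng.normal(size=d)                    # arbitrary true mean vector

    # One noisy observation of theta per trial; the MLE of the mean is the observation itself.
    x = theta + sigma * rng.normal(size=(n_trials, d))
    mle = x

    # James-Stein: shrink each observation towards the origin by a data-dependent factor.
    norm_sq = np.sum(x**2, axis=1, keepdims=True)
    js = (1.0 - (d - 2) * sigma**2 / norm_sq) * x

    risk_mle = np.mean(np.sum((mle - theta)**2, axis=1))
    risk_js  = np.mean(np.sum((js  - theta)**2, axis=1))
    print(f"MLE risk: {risk_mle:.3f}   James-Stein risk: {risk_js:.3f}")
    # The James-Stein risk is strictly smaller for every theta when d >= 3,
    # even though the MLE was long assumed to be "optimal".

The regularised estimator wins uniformly, which is exactly the kind of result the "MLE is optimal" intuition said shouldn't exist.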