krackers 3 days ago

The mean minimizes the L2 loss (expected squared error), so I guess there's some connection there if you derive OLS by treating X and Y as random variables and trying to estimate Y conditioned on X in linear form: L[Y|X] = E[Y] + a*(X - E[X]). If the relationship truly is linear, then we'd like this linear estimator to be equivalent to the conditional expectation E[Y|X], so we use the L2 norm and minimize E[(Y - L[Y|X])^2]. Note that we're forced to use the L2 norm, since only then will the recovered L[Y|X] correspond to the conditional expectation/mean (minimizing over a gives the familiar slope a = Cov(X, Y)/Var(X)). I believe this is similar to the argument the other commenter made about OLS being BLUE. The random-variable formulation makes it easy to see how the L2 norm falls out of trying to estimate E[Y|X] (which is certainly a "natural" target).

I think the Gauss-Markov theorem provides the more rigorous justification: it spells out the conditions under which our estimator is unbiased, i.e. E[Y|X=x] = E[Lhat | X=x] (where L[Y|X] != Lhat[Y|X], because we don't have access to the true population when we compute our variance/covariance/expectation), and shows that under those conditions Lhat is "best": Var[Lhat | X=x] <= Var[Lh' | X=x] for any other unbiased linear estimator Lh'.
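To sanity-check the Cov/Var slope numerically, here's a minimal numpy sketch (my own toy example, with made-up data, not anything from the article): the closed-form best-linear-predictor slope Cov(X, Y)/Var(X) should agree with the slope an OLS fit recovers on a roughly linear sample.

    # Sketch: compare the closed-form best-linear-predictor slope
    # a = Cov(X, Y) / Var(X) with the slope found by an OLS fit.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)
    # Roughly linear data with noise (true slope 2.5, intercept 1.0)
    y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=10_000)

    # Slope and intercept of L[Y|X] = E[Y] + a*(X - E[X])
    a = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    b = y.mean() - a * x.mean()

    # OLS fit (degree-1 polynomial) minimizing the sum of squared residuals
    a_ols, b_ols = np.polyfit(x, y, 1)

    print(a, b)          # roughly 2.5, 1.0
    print(a_ols, b_ols)  # same values up to floating-point noise

The agreement is just the sample version of the population argument above; Gauss-Markov is what tells you when the sample estimate Lhat is also unbiased and minimum-variance among linear estimators.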
The mean minimizes L2 norm, so I guess there's some connection there if you derive OLS by treating X, Y as random variables and trying to estimate Y conditioned on X in a linear form. "L[Y|X] = E[Y] + a*(X - E[X])" If the dataset truly is linear then we'd like this linear estimator to be equivalent to the conditional expectation E[Y|X], so we therefore use the L2 norm and minimize E[(Y - L[Y|X])^2]. Note that we're forced to use the L2 norm since only then will the recovered L[Y|X] correspond to the conditional expectation/mean. I believe this is similar to the argument other commenter mentioned of being BLUE. The random variable formulation makes it easy to see how the L2 norm falls out of trying to estimate E[Y|X] (which is certainly a "natural" target). I think the Gauss-Markov Theorem provides more rigorous justification under what conditions our estimator is unbiased, that E[Y|X=x] = E[Lhat | X=x] (where L[Y|X] != LHat[Y|X] because we don't have access to the true population when we calculate our variance/covariance/expectation) and that under those conditions, Lhat is the "best": that Var[LHat | X=x] <= Var[Lh' | X=x] for any other unbiased linear estimator Lh'. | ||