joeycastillo 15 hours ago
A question for those who think LLMs are the path to artificial intelligence: if a large language model trained on pre-1913 data is a window into the past, how is a large language model trained on pre-2025 data not effectively the same thing?
_--__--__ 15 hours ago
You're a human intelligence with knowledge of the past. Assuming you were alive at the time, could you tell me (without consulting external resources) exactly what happened between arriving at an airport and boarding a plane in the year 2000? What about 2002? Neither human memory nor LLM training creates a perfect snapshot of past information, free of contamination from what came later.
block_dagger 15 hours ago
Counter question: how does a training set, representing a window into the past, differ from your own experience as an intelligent entity? Are you able to see into the future? How?
ex-aws-dude 15 hours ago
A human brain is a window into the person's past?