netdevphoenix 3 hours ago
> If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right. Isn't this obvious? There is not enough latent knowledge of math there to enable current LLMs to approximate anything resembling an integral.
RobRivera 3 hours ago | parent
It's obvious to me. It's obvious to you. It isn't obvious to the person I am responding to, and it isn't obvious to the majority of individuals I speak with on the matter (which is why, for me, AI goes in the same bucket as religion and politics: topics to simply avoid in polite conversation).
Talanes 3 hours ago | parent
Now what if we ask the LLM to write about social media? Do you think the output would be similar to what you'd get if we had a time machine to bring the actual man back and have him form his own thoughts firsthand?