▲ RobRivera 3 hours ago
Actually this is the crux, and the nuance that makes discussing LLM specifics a pain in the general space. If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right. Instead, what you would receive is text that follows the statistically most likely response to such a question (in accordance with the sampling/perplexity tuning).
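A minimal sketch of what "statistically most likely response" means at the decoding step: the model assigns scores (logits) to candidate next tokens, and a sampler picks one, with temperature controlling how strongly it favors the top candidate. The token vocabulary and scores below are invented for illustration; real models work over tens of thousands of tokens.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one next token from raw logit scores.

    Lower temperature sharpens the distribution toward the most
    likely token; temperature -> 0 approaches greedy (argmax) decoding.
    """
    rng = rng or random.Random(0)  # seeded for reproducibility
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = {tok: e / total for tok, e in zip(logits, exps)}
    r = rng.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # fallback for floating-point rounding

# Hypothetical "Steinbeck-only" model: after a prompt like "solve the
# integral of", the high-scoring continuations are literary words,
# not mathematical ones, because that is what the corpus contained.
logits = {"the": 2.0, "a": 1.5, "dust": 1.0, "x^2": -3.0}
print(sample_next_token(logits, temperature=0.7))
```

The point of the toy scores: the model doesn't refuse or fail, it fluently emits whatever its training distribution made probable.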
▲ simianwords 25 minutes ago | parent | next [-]
> If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.

This shows that you have very little idea of how LLMs work. An LLM trained only on John Steinbeck will not work at all. It simply does not have the generalised reasoning ability. It necessarily needs inputs from every source possible, including programming and maths. You have completely ignored that LLMs have _generalised_ reasoning ability that they derive from disparate sources.
▲ netdevphoenix 3 hours ago | parent | prev [-]
> If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.

Isn't this obvious? There is not enough latent knowledge of math there to enable current LLMs to approximate anything resembling an integral.