jgeada 2 days ago

We're training these models on pretty much the entirety of recorded human information, good and bad. We can keep building larger and larger models, but we seem to have hit a fundamental wall: none of them are immune to hallucinations, the constant generation of sentences that sound likely but are false.

The approach is fundamentally flawed: you don't get AGI by building a sentence predictor.
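To make "sentence predictor" concrete, here is a minimal sketch of autoregressive next-token decoding, using GPT-2 via Hugging Face transformers purely for illustration. The model choice and the greedy loop are my assumptions, not anything specified in the thread; the point is only that the objective scores statistical likelihood of the next token, and truth never enters it.

    # Minimal sketch of the "sentence predictor" mechanism:
    # repeatedly predict the most likely next token, nothing more.
    # GPT-2 is an illustrative stand-in for any autoregressive LM.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of Australia is"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(5):
            logits = model(ids).logits        # shape (1, seq_len, vocab_size)
            next_id = logits[0, -1].argmax()  # greedily take the likeliest token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    # The continuation is whatever is statistically plausible given the
    # training corpus; the model has no mechanism for checking whether
    # the completed sentence is true.
    print(tokenizer.decode(ids[0]))

Whether the output happens to be correct depends entirely on what was statistically dominant in the training data, which is exactly the failure mode the comment describes.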

garymarcus a day ago

Exactly. And the counterarguments boil down to “na na” and hope.
