Ray20 · 5 days ago

> But what if it becomes "good enough", that for most intents and purposes, small models can be "good enough"

It's simple: then we'll make our intents and purposes bigger.
Almondsetat · 5 days ago

Because the true goal is AGI, not just nice little tools to solve subsets of problems. The first company which can achieve human-level intelligence will be able to self-improve at such a rate as to create a gigantic moat.
jurgenburgen · 2 days ago

There's no evidence that the current architectures will reach AGI levels. Of course OpenAI wants you to think they will rule the world, but if we've reached the plateau of LLM capabilities regardless of the amount of compute we throw at them, then local models will soon be good enough.
9rx · 5 days ago

> The first company which can achieve human level intelligence will just be able to...

They say prostitution is the oldest industry of all. We know how to achieve human-level intelligence quite well. The outstanding challenge is figuring out how to produce an energy-efficient human-level intelligence.
Dylan16807 · 4 days ago

There's no particular reason to assume a human-level AI would be able to improve itself any better than the thousands of human-level humans that designed it.
Almondsetat · 4 days ago

Sure, but: that single human-level AI with the intelligence of a top-tier engineer or scientist will have immediate access to all human knowledge. Plus, what do you think happens the moment it optimizes itself to run in 2, 4, 8, 16, etc. parallel instances?
Dylan16807 · 4 days ago

Well:

A) "top-tier engineer/scientist" is a significant step above a generic human,

B) the human engineers/scientists also have immediate access to the same database,

C) the humans have been optimizing it for even longer, so what makes us think the AI can optimize itself even a couple percent?

For example, if the number of AIs you can run per petaflop started to scale with the cube root of researcher-years, then even if your researcher AIs are quite fast and you can double your density in a couple of years, hitting 5x will take a decade and hitting 10x will approach half a century.
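The arithmetic in that last scenario can be checked with a short sketch. One reading that reproduces the quoted numbers (an assumed model, not something the comment spells out): density N scales with the cube root of cumulative researcher-years R, and researcher-years accrue at a rate proportional to the current density (the AI instances themselves do the research). Then dR/dt ∝ R^(1/3), so R grows like t^(3/2), N grows like √t, and the time to reach a multiple m of some baseline density grows with m²:

```python
# Sketch of the cube-root scaling scenario (assumed model, not the
# commenter's exact setup): density N = R**(1/3) with R = cumulative
# researcher-years, and dR/dt proportional to N. The closed form is
# N ∝ sqrt(t), so m-times the density at baseline time t0 is reached
# at t = t0 * m**2.

def time_to_multiple(m: float, t0: float = 0.5) -> float:
    """Years until density is m times its value at baseline time t0."""
    return t0 * m ** 2

# t0 = 0.5 is chosen so that doubling lands around the two-year mark:
print(time_to_multiple(2))   # 2.0  -> a couple of years to double
print(time_to_multiple(5))   # 12.5 -> roughly a decade for 5x
print(time_to_multiple(10))  # 50.0 -> approaching half a century for 10x
```

Under this reading the comment's figures are internally consistent: each further doubling of density costs quadratically more calendar time, which is why the curve flattens so quickly.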