| ▲ | atleastoptimal 9 hours ago |
| This will happen with most offerings made by the major AI labs. Inference is expensive, and the closer they get to AGI, the higher the opportunity cost of using compute for inference rather than training, especially when it's spent making what is essentially entertainment that many people hate on principle. |
|
| ▲ | davebranton 9 hours ago | parent [-] |
| Indeed. But they won't get to "AGI", because that goal isn't even remotely defined. A "human-level" intelligence implies a large number of properties that cannot exist inside an inference machine. Dreams, for example, might be considered part of "human-level" intelligence. Will the machine dream? What happens if you turn a "human-level" intelligence off? Did you kill someone? AGI is a pipe dream - and moreover, it's not even something that anyone actually wants. |
| |
| ▲ | supern0va 9 hours ago | parent | next [-] |
> Will the machine dream?

You seem to be mixing up intelligence and consciousness. Not only does intelligence exist outside of humans, and even mammals, but it exists outside of brains and even neurons. For example, slime molds have fascinating problem-solving abilities: https://www.nature.com/articles/nature.2012.11811

It is clear that whatever we are...creating/growing with LLMs is very unlike human intelligence, but it is nonetheless some type of intelligence. |
| ▲ | atleastoptimal 9 hours ago | parent | prev [-] |
AGI just means a machine, system, or whatever that can do anything at least as well as a human. The details don't matter as much as its ability to match humans in everything they are paid money to do. And obviously, if such a system existed, the benefits (and risks) would be enormous, though the risks are smaller if you control it rather than someone else, which is why every company is racing towards it. |
|