pstuart 5 days ago
I understand your sentiment, but AI is the new internet -- despite the hype, it's not going away. The ability to have a true personal AI agent that you would own would be quite empowering. Out of all the industry players, I'd put Apple as the least bad option to have that happen with.
hollerith 5 days ago | parent
> Out of all the industry players I'd put Apple as the least bad option

To be the least bad option, Apple would need to publish a plan for keeping an AI under control so that it stays under control even if it undergoes a sharp increase in cognitive capability (e.g., during training), or alternatively a plan to prevent an AI's capability from ever rising to a level that requires such control. I haven't seen anything out of Apple suggesting that its leaders understand that a plan of either kind is necessary.

Most people who have written about the topic in detail put Anthropic as the least bad option, because out of all the groups with competitive offerings, its leadership has written in the most detail about the need for a plan and about their particular (completely inadequate, IMHO) plan.

I myself put Google as the least bad option -- the slightly less awful option, to be precise -- with large uncertainty, because Google wasn't pushing capabilities hard until OpenAI and Anthropic put it in a situation where it either had to start pushing hard or risk falling so far behind that it could never catch up. Consequently, I use Gemini as my LLM service. In particular, Google risked finding itself unable to create a competitive offering because it lacked access to enough data collected from users of LLMs and generative AIs, and unable to get that data because it couldn't attract users. While it was the leading lab, Google proceeded slowly, and at least one industry insider credibly claims that the slowness was deliberately chosen to reduce the probability of an AI catastrophe.

I must stress that no one has an adequate plan for avoiding an AI catastrophe while continuing to push capabilities, and IMHO no one is likely to devise one in time, so it would be great if no one did any more frontier AI research at all until humanity itself becomes more cognitively capable.