Sivart13, a day ago:
> Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.

Something I've never understood: why do AGI perverts think that a superintelligence is any more likely to "cure cancer" than to "create unstoppable super-cancer"?
|
tensor, a day ago (replying to Sivart13):
AI will do neither of those things, because curing or creating cancer requires physical experiments and trials on real people or animals, as does all science outside of computer science (which is often more math than science). I can see AI being helpful in generating hypotheses, suggesting compounds to synthesize, or assisting with literature search, but science is a physical process. You don't generally do science by sitting there and pondering, despite what the movies suggest.
ozten, a day ago (replying to tensor):
There are a few fully automated wet labs and many semi-autonomous ones. They are called "cloud labs", and they will only become more plentiful. AI can identify and execute the physical experiments after using simulations to filter and score the candidate hypotheses (a sketch of that loop follows at the end of this subthread).
rooftopzen, a day ago (replying to ozten):
Sorry, but your concept of AI is marketing-driven. It's probabilistic; understanding is past your pay grade.
tensor, a day ago (replying to rooftopzen):
They're actually right that there are several attempts to create automated labs to speed up the physical part. In reality, though, there are only a handful, and they are very narrowly scoped. So yes, in some narrow domains this will be possible, but it still only automates part of the whole process when it comes to drugs. How a drug behaves on a molecular test chip is often very different from how it works in the body.
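A rough sketch of the loop ozten describes, to make the division of labor concrete: simulation scores candidates cheaply, and only the top few earn a physical experiment. Everything here (dock_score, submit_to_cloud_lab, the threshold and budget) is a hypothetical stand-in, not any real cloud lab's API.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class Candidate:
    compound_id: str
    score: float  # in-silico score, e.g. predicted binding affinity


def dock_score(compound_id: str) -> float:
    """Stand-in for a docking/ML scoring model; returns a value in [0, 1)."""
    digest = hashlib.md5(compound_id.encode()).hexdigest()
    return int(digest, 16) % 100 / 100.0


def submit_to_cloud_lab(candidate: Candidate) -> None:
    """Stand-in for submitting a synthesis-and-assay job to a cloud lab."""
    print(f"queued wet-lab assay for {candidate.compound_id} (score {candidate.score:.2f})")


def screen(compounds: list[str], threshold: float = 0.8, budget: int = 10) -> None:
    """Score everything in silico, then send only the best few to the lab."""
    scored = [Candidate(c, dock_score(c)) for c in compounds]
    scored.sort(key=lambda cand: cand.score, reverse=True)
    for candidate in scored[:budget]:
        if candidate.score >= threshold:
            submit_to_cloud_lab(candidate)


if __name__ == "__main__":
    screen([f"CHEM-{i:04d}" for i in range(1000)])
```

The budget parameter reflects the asymmetry tensor raises above: simulated scores are cheap but imperfect, while wet-lab runs are scarce and expensive, so the filter decides where to spend them.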
|
rooftopzen, a day ago (replying to tensor):
Exactly. AI allows for intersections of concepts from the training data; it's up to the user to make sense of them. Thanks for stating this (I end up repeating the same thing in every conversation, but it's common sense).
|
|
siva7, a day ago (replying to Sivart13):
Somehow it never crossed my mind before, but human civilization could plausibly end in the next 10 years. Many assumed that if it did, the cause would be nuclear war; it turns out it's more like the '90s movie 12 Monkeys. I would love to be proven wrong, yet there is no international regulation of AI.
aeve890, a day ago (replying to siva7):
> I would love to be proven wrong, yet there is no international regulation of AI.

What are the chances of advancing AI regulation before some monumental fuck-up changes public opinion to "yeah, this thing is really dangerous"? Like a Hiroshima or Chernobyl, but for AI.
cyberpunk, a day ago (replying to aeve890):
Zero. Too much skin in the game all around for the train to stop now.
|
|
|
wongarsu, a day ago (replying to Sivart13):
I'm not sure he's even talking about AGI (which feels unusual for Altman). He might be talking about GPT-5 in agentic workflows, or whatever their next model will be called.
|