| ▲ | gordonhart 5 hours ago |
| Remember when GPT-2 was “too dangerous to release” in 2019? That could still have been the state in 2026 if they hadn’t YOLO’d it and shipped ChatGPT to kick off this whole race.
|
| ▲ | WarmWash 4 hours ago | parent | next [-] |
| I was just thinking earlier today how in an alternate universe, probably not too far removed from our own, Google has a monopoly on transformers and we are all stuck with a single GPT-3.5-level model, while Google keeps a GPT-4o-level model behind the scenes that it is terrified to release (but uses heavily internally).
| ▲ | vineyardmike 2 hours ago | parent | next [-]
| This was almost real. Before ChatGPT was even released, Google had an internal-only chat-tuned LLM. It went "viral" because one of its testers thought it was sentient, which caused a whole media circus. This is partially why Google was so ill-equipped to even start competing: they had fresh wounds from that circus. My pet theory is that this news is what inspired OpenAI to chat-tune GPT-3, which was a pretty cool text-generation model but not a chat model. So it may have been a necessary step to get chat LLMs out of Mountain View and into the real world.
|
| https://www.scientificamerican.com/article/google-engineer-c...
| https://www.theguardian.com/technology/2022/jul/23/google-fi...
| ▲ | brador 2 hours ago | parent | prev | next [-]
| Now think about how often the patent system has stifled, stalled, and delayed advancement, sometimes for decades per innovation. Where would we be if patents never existed?
| ▲ | sarchertech 2 hours ago | parent | next [-]
| Who knows? If we’d never moved on from trade secrets to patents, we might be a hundred years behind.
| ▲ | cma 2 hours ago | parent | prev [-]
| To be fair, Google has a patent on the transformer architecture. Their PageRank patent monopoly probably helped fund the R&D.
| ▲ | nsxwolf 2 hours ago | parent | prev [-]
| It would have been nice to be able to work a few more years and retire.
| ▲ | dimitrios1 2 hours ago | parent [-]
| Will your retirement be enjoyable if everyone else around you is struggling?
|
| ▲ | minimaxir 4 hours ago | parent | prev | next [-] |
| They didn't YOLO ChatGPT. There were more than a few iterations of GPT-3 over a few years, which were actually overmoderated. Then they released a research preview named ChatGPT (barely functional by modern standards) that got traction outside the tech community because it was free, and so the pivot ensued.
|
| ▲ | nikcub 4 hours ago | parent | prev | next [-] |
| I also remember when the PlayStation 2 required an export control license because its 1 GFLOP of compute was considered dangerous. That was also brilliant marketing.
|
| ▲ | gildenFish an hour ago | parent | prev | next [-] |
| In 2019 the technology was new and there was no 'counter' at that time. The average person was not thinking about the presence and prevalence of AI the way we do now. It was kinda like having muskets against indigenous tribes in the 1400s-1500s versus a machine gun against a modern city today. The machine gun is objectively better, but it has not kept pace with the increase in defensive capability of a modern city with a modern police force.
|
| ▲ | jefftk 5 hours ago | parent | prev | next [-] |
| That's rewriting history. What they said at the time:
|
| > Nearly a year ago we wrote in the OpenAI Charter: “we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research,” and we see this current work as potentially representing the early beginnings of such concerns, which we expect may grow over time. This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.
|
| -- https://openai.com/index/better-language-models/
|
| Then over the next few months they released increasingly large models, with the full model public in November 2019 (https://openai.com/index/gpt-2-1-5b-release/), well before ChatGPT.
| ▲ | gordonhart 5 hours ago | parent | next [-]
| > Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.
|
| I wouldn't call it rewriting history to say they initially considered GPT-2 too dangerous to release. If they'd applied this approach to subsequent models rather than making them available via ChatGPT and an API, it's conceivable that LLMs would be 3-5 years behind where they currently are in the development cycle.
| ▲ | IshKebab 5 hours ago | parent | prev [-]
| They said:
|
| > Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code.
|
| "Too dangerous to release" is accurate. There's no rewriting of history.
| ▲ | tecleandor 4 hours ago | parent [-]
| Well, and it's being used to generate deceptive, biased, or abusive language at scale. But they're not concerned anymore.
| ▲ | girvo 2 hours ago | parent [-]
| They've decided that the money they'll make is too important; who cares about externalities... It's quite depressing.
|
| ▲ | ModernMech 2 hours ago | parent | prev [-] |
| Yeah, and Jurassic Park wouldn't have been a movie if they'd decided against breeding the dinosaurs.