rdtsc 10 hours ago

> But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”

A step in the right direction; at least they no longer have to pretend.

It's like Google and "don't be evil". People didn't get upset with Google because they were more evil than others; heck, there's Oracle, defense contractors, and the prison-industrial complex. People were upset with them because they were hypocrites: they pretended to be something they were not.

estearum 5 hours ago

No, it's actually possible for organizations to work safely for long periods of time under complex and conflicting incentives.

We should stop putting the bar on the floor for some of the (allegedly) most brilliant and capable minds in the world.

paganel 5 minutes ago

In a capitalistic society (such as ours) I find what you’re describing close to impossible, at least when it comes to large enough organizations. The profit motive ends up conquering all, and that is by design.

wolvoleo 3 hours ago

I don't really agree. People are plenty upset with Palantir and Broadcom for being evil, for example, and I don't see their mottos promising that they won't be.

tsunamifury 9 hours ago

I worked at Google for 10 years in AI and invented suggestive language from WordNet/bag-of-words.

As much as what you are saying sounds right, I was there when Sundar made the call to bury proto-LLM tech because he felt the world would be damaged by it.

And I don’t even like the guy.

zbentley 4 hours ago

> Sundar made the call to bury proto-LLM tech

Then where did nano banana and friends come from? Did Google reverse course? Or were you referring to something else being buried?

gradys 3 hours ago

This was long before. Google had conversational LLMs before ChatGPT (though they weren't as good, in my recollection), and they declined to productize them. There was a sense at the time that you couldn't productize anything with truly open-ended content generation, because you couldn't guarantee it wouldn't say something problematic.

See Meta's Galactica project for an example of what Google was afraid would happen: https://www.technologyreview.com/2022/11/18/1063487/meta-lar...

tsunamifury 3 hours ago

Meena was running as a fully fledged, arguably Turing-test-passing chatbot in 2019. It was suppressed. Then it was written about in the open literature, OpenAI copied it, and Google was forced to compete.

This is all well known history.