phkahler 3 hours ago

Google has been doing more R&D and internal deployment of AI and less trying to sell it as a product. IMHO that difference in focus makes a huge difference. I used to think their early work on self-driving cars was primarily to support Street View in their maps.

brokencode 2 hours ago | parent | next [-]

There was a point in time when basically every well known AI researcher worked at Google. They have been at the forefront of AI research and investing heavily for longer than anybody.

It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.

But they are in full gear now that there is real competition, and it’ll be cool to see what they release over the next few years.

hosh 2 hours ago | parent | next [-]

I also think the presence of Sergey Brin has been making a difference in this.

refulgentis 2 hours ago | parent | next [-]

Ex-googler: I doubt it, but am curious for the rationale. (I know there was a round of PR re: him "coming back to help with AI," but just between you and me, the word on him internally, over years and multiple projects, was that having him around caused chaos because he was a tourist flitting between teams, just spitting out ideas; now you have unclear direction and multiple teams hearing the same "you should" and doing it.)

pstuart an hour ago | parent | next [-]

That makes sense. A "secret shopper" might be a better way to avoid that but wouldn't give him the strokes of being the god in the room.

LightBug1 an hour ago | parent | prev [-]

Oh ffs, we have an external investor who behaves like that. Literally set us back a year on pet nonsense projects and ideas.

hungryhobbit an hour ago | parent | prev [-]

Please, Google was terrible about using the tech they had long before Sundar, back when Brin was in charge.

Google Reader is a simple example: Google had by far the most popular RSS reader, and they just threw it away. A single intern could have kept the whole thing running, and Google has literal billions, but they couldn't see the value in it.

I mean, it's not like being able to see what a good portion of America is reading every day could have any value for an AI company, right?

Google has always been terrible about turning tech into (viable, maintained) products.

vinkelhake an hour ago | parent | next [-]

Is there an equivalent to Godwin's law wrt threads about Google and Google Reader?

See also: any programming thread and Rust.

scarmig 19 minutes ago | parent [-]

I'm convinced my last groan will be reading a thread about Google paperclipping the world, and someone will be moaning about Google Reader.

burgreblast an hour ago | parent | prev | next [-]

I never get the moaning about killing Reader. It was never about popularity or user experience.

Reader had to be killed because it [was seen as] a suboptimal ad monetization engine. Page views were superior.

Was Google going to support minimizing ads in any way?

DiggyJohnson an hour ago | parent | prev | next [-]

How is this relevant? At best it’s tangentially related and low effort

jamespo an hour ago | parent | prev [-]

Took a while, but I got to the Google Reader post. Self-host tt-rss; it's much better.

smallnix an hour ago | parent | prev [-]

> It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.

I always thought they deliberately tried to contain the genie in the bottle as long as they could

AlfredBarnes 3 hours ago | parent | prev | next [-]

It has always felt to me that the LLM chatbots were a surprise to Google, not LLMs, or machine learning in general.

raphlinus 2 hours ago | parent [-]

Not true at all. I interacted with Meena[1] while I was there, and the publication was almost three years before the release of ChatGPT. It was an unsettling experience, felt very science fiction.

[1]: https://research.google/blog/towards-a-conversational-agent-...

hibikir an hour ago | parent | next [-]

The surprise was not that they existed: there were chatbots at Google way before ChatGPT. What surprised them was the demand, despite all the problems the chatbots have. The big problem with LLMs was not that they could do nothing useful; it was how to turn them into products that made good money. Even people at OpenAI were surprised by what happened.

In many ways, turning tech into products that are useful, good, and don't make life hell is a more interesting issue of our times than the core research itself. We probably want to avoid the value-capturing platform problem, as otherwise we'll end up seeing governments use ham-fisted tools to punish winners in ways that aren't helpful either.

diamondage an hour ago | parent [-]

The uptake forced the bigger companies to act. With image diffusion models too: no corporate lawyer would let a big company release a product that allowed the customer to create any image. But when Stable Diffusion et al. started to grow like they did, there was a specific price for not acting, and it was high enough to change boardroom decisions.

nasretdinov 2 hours ago | parent | prev [-]

Well, I must say ChatGPT felt much more stable than Meena when I first tried it. But, as you said, it was a few years before ChatGPT was publicly announced :)

AbstractH24 2 hours ago | parent | prev [-]

Google and OpenAI are both taking very big gambles with AI, with an eye towards 2036 not 2026. As are many others, but them in particular.

It'll be interesting to see which pays off and which becomes Quibi