daxfohl 4 hours ago

I wonder how the internet would have been different if claws had existed beforehand.

I keep thinking something simpler like Gopher (an early-'90s web protocol) might have been sufficient, maybe even optimal, with little need to evolve into HTML or REST. Agents might be better at navigating step-by-step menus and questionnaires than RPCs designed to support GUIs and apps, especially LLMs with smaller contexts that can't reliably parse a whole API doc. I wonder if things will start heading in that direction as user-side agents become the more common way to interact with services.
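For anyone who never used it: a Gopher menu is just tab-separated lines over a raw TCP socket (RFC 1436), which is about as agent-friendly as a protocol gets. A rough sketch of walking one, with gopher.example.com standing in for a real server:

    import socket

    def gopher_menu(host, selector="", port=70):
        # Send the selector, then read until the server closes the connection.
        with socket.create_connection((host, port)) as s:
            s.sendall(selector.encode() + b"\r\n")
            data = b""
            while chunk := s.recv(4096):
                data += chunk
        items = []
        for line in data.decode(errors="replace").splitlines():
            if not line or line == ".":   # "." marks end of menu
                continue
            # Format: <type char><display>TAB<selector>TAB<host>TAB<port>
            fields = line[1:].split("\t")
            items.append((line[0], *fields[:4]))
        return items

    # Type '1' = submenu, '0' = text file, '7' = a question to answer.
    for item in gopher_menu("gopher.example.com"):
        print(item)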

throwaway13337 3 hours ago | parent | next [-]

This is the future we need to make happen.

I would love to subscribe to and pay for services that are just APIs, then have my agent organize them how I want.

Imagine YouTube, Gmail, Hacker News, Chase Bank, WhatsApp, the electric company all being just APIs.

You can interact how you want. The agent can display the content the way you choose.

Incumbent companies will fight tooth and nail to avoid this future, because it's a future without monopoly power. Users could switch between services far more easily.

Tech would be less profitable but more valuable.

It's the future we can choose right now by making products that compete with this mindset.
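For concreteness, the agent's side of that could be almost boring. Every endpoint below is hypothetical, none of these companies offer anything like it, but the shape is the point:

    import json, urllib.request

    # Hypothetical endpoints: the premise is that every service you pay for
    # exposes plain, token-authed JSON instead of a walled-garden front end.
    SERVICES = {
        "mail": "https://api.mail.example/v1/unread",
        "news": "https://api.news.example/v1/top",
        "bank": "https://api.bank.example/v1/balance",
    }

    def fetch(url, token):
        req = urllib.request.Request(url, headers={"Authorization": "Bearer " + token})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # The agent, not the provider, decides how to lay this out.
    def morning_digest(tokens):
        return {name: fetch(url, tokens[name]) for name, url in SERVICES.items()}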

stephen_cagle an hour ago | parent | next [-]

Biggest question I have: maybe, just maybe, LLMs would have had enough intelligence to handle micropayments. Maybe we wouldn't have gone down the mass-advertising "you are the product" path?

Like, somehow I could tell my agent that I have a $20-a-month budget for entertainment and a $50-a-month budget for news, and it would just figure out how to negotiate with the NYTimes and Netflix and Spotify (or what would have been their equivalents), which is fine. But it would also be able to negotiate with an individual band that wants to sell its music directly, or an indie game that doesn't want to pay the Steam tax.
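The negotiation itself is the hard, hand-wavy part, but the bookkeeping the agent would need is trivial. A toy sketch, with every offer and price invented:

    # Monthly budgets, as told to the agent.
    BUDGETS = {"entertainment": 20.00, "news": 50.00}

    # Offers the agent has gathered this month -- all made up.
    offers = [
        ("netflix-equivalent", "entertainment",  9.00),
        ("indie-band-album",   "entertainment",  8.00),
        ("indie-game",         "entertainment",  6.00),
        ("nytimes-equivalent", "news",          15.00),
    ]

    # Greedily accept the cheapest offers that still fit each budget.
    spent = {category: 0.0 for category in BUDGETS}
    accepted = []
    for name, category, price in sorted(offers, key=lambda o: o[2]):
        if spent[category] + price <= BUDGETS[category]:
            spent[category] += price
            accepted.append(name)

    print(accepted, spent)   # the indie game, the band, and the news sub fit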

I don't know, just a "histories that might have been" thought.

throwaway13337 an hour ago | parent [-]

Maybe we needed to go through this dark age to appreciate that sort of future.

This sort of thing is more attractive now that people know the alternative.

Back then, people didn't want to pay for anything on the internet. Or at least I didn't.

Now we can kill the beasts by underpricing and outcompeting them.

Feels like the 90s.

charcircuit 2 hours ago | parent | prev | next [-]

Why wouldn't there be monopoly power? Popular API providers would still have a lot of power.

SV_BubbleTime 2 hours ago | parent [-]

If I can get videos from YouTube or Rumble or FloxyFlib or your mom's personal server in her closet, and search them all at once, with the front-end interface being my LLM or some personalized interface that excels in its transparency, that would definitely hurt Google's brand.

charcircuit an hour ago | parent [-]

Controlling the ability to be recommended and monetized to billions of people is still powerful.

galkk 42 minutes ago | parent | prev | next [-]

What is in it _for them_?

Where and how do they make money?

daxfohl 2 hours ago | parent | prev [-]

I don't exactly mean APIs (we largely have that with REST). I mean a Gopher-like protocol that's more menu-based and question-response-based than API-based.
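The question-response half already existed in Gopher as item type 7: the client sends the selector plus an answer and gets a menu of results back. A sketch, with a hypothetical host and selector:

    import socket

    # Gopher item type 7 is literally question-response: send the selector
    # plus your answer, tab-separated, and read back a menu of results.
    def gopher_ask(host, selector, answer, port=70):
        with socket.create_connection((host, port)) as s:
            s.sendall(f"{selector}\t{answer}\r\n".encode())
            return s.makefile("rb").read().decode(errors="replace")

    # Hypothetical: an agent stepping through a service's questionnaire.
    print(gopher_ask("gopher.example.com", "/plans/search", "cheapest data plan"))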

mncharity 2 hours ago | parent | prev | next [-]

Yesterday IMG tag history came up, prompting a memory-lane wander. It reminded me that in 1992-ish, before the `www.foo` convention, I'd create DNS pairs, foo-www and foo-http: one for humans, and one to sling sexps.

I remember seeing the CGI (serve a URL from a script) proposal posted and thinking it was so bad (e.g. the 256-ish-character URL limit) that no one would use it, so I didn't need to worry about it. Oops. "Oh, here's a spec. Don't see another one. We'll implement the spec," says everyone. And "no one is serving long URLs, so our browser needn't support them." So no big query URLs during that flexible early period when practices were gelling. Regret.

mejutoco 2 hours ago | parent | prev | next [-]

Any website could in theory provide API access, but websites generally don't want this: remember the Google Search API? Agents will run into similar restrictions in some cases, just as APIs did. It's not a technical problem IMO, but an incentives one.

daxfohl 2 hours ago | parent | next [-]

The rules have changed, though. They blocked API access because it helped competitors more than end users. With claws, end users are going to be the ones demanding it.

I think it means front-end will be a dead end in a year or two.

cobertos 2 hours ago | parent | prev [-]

Can you explain how the Google Search API fits into your point? I don't know enough about it.

fsloth 3 hours ago | parent | prev [-]

> if claws had existed beforehand.

My take is that it's literally not possible. But of course that's just intuition.

The dataset used to train LLMs was scraped from the internet. The data was there mainly because of the user expansion driven by the WWW, and because of the telco infrastructure laid during and after the dot-com boom that let those users access the web in the first place.

The data labeling that underpins the actual training, done by masses of labor on websites, could not have scaled as massively and cheaply without the WWW itself scaling globally on affordable telecom infrastructure.