ricardo81 2 days ago

I wonder if unknown /s powers persuaded us to homogenise things in ways that ultimately suited AI training, making AI viable.

- search engine algorithms used to be the main place of information discovery. Before 200x that meant not using JavaScript for any text you wanted to be readable by a bot

- "best viewed in x browser" which happened in the late 90s and early 00s. If a website looked crap, use the other browser.

- social graph metadata. Have a better image, title, description for people who see a snippet of your page on a social network

Nowadays everything is best viewed in Chrome/Safari; Firefox does have some issues.

Google owns the majority of the search market.

Facebook/Twitter/LinkedIn, at least in the Western world, drive most social traffic.

I would guess the 'taste' of AI has been predetermined by these strong factors on the web.

An alternative could be a DMOZ-like directory with cohorts voting on the value of things, maybe with the help of AI. It does seem like the web has been 'shaped' for the past 15 years or so.

rhetocj23 2 days ago | parent

Lol, you're giving too much credit to certain people.

People have trouble thinking 2 years out, let alone 5, 10, 15, 20 years...

ricardo81 2 days ago | parent

What certain people do you mean?

To me it's undeniable that the web has become more centralised, more homogenised, and certain agents find that very convenient.

Even wiki(pedia|data) is very convenient for large-scale training, and most of their sources are from the 'open' web.