| ▲ | dredmorbius 5 hours ago | parent | next [-] | | Google's entire (initial) claim to fame was "PageRank", referring both to the ranking of pages and to co-founder Larry Page. It strongly prioritised a relevance attribute over the raw keyword matching that then-popular alternatives (Alta Vista, Yahoo, AskJeeves, Lycos, Infoseek, HotBot, etc.) relied on, and over the rather more notorious paid-ranking schemes in which SERP order was effectively sold. When it was first introduced, Google Web Search was absolutely worlds ahead of any competition. I remember this well, having used the alternatives previously and adopted Google quite early (1998/99). Even with PageRank, result prioritisation is highly subject to gaming. Raw keyword search is far more so (keyword stuffing and other shenanigans), and increasingly so as any given search engine becomes popular and catches the attention of publishers. Google now applies additional ordering factors as well, and has of course come to dominate SERPs with paid, advertised listings, which are all but impossible to discern from "organic" search results. (I've not used Google Web Search as my primary tool for well over a decade, and probably run only a few searches per month. DDG is my primary, though I'll occasionally look at a few others, including Kagi and Marginalia.) <https://en.wikipedia.org/wiki/PageRank> "The anatomy of a large-scale hypertextual Web search engine" (1998) <http://infolab.stanford.edu/pub/papers/google.pdf> (PDF) Early (1990s) search engines: <https://en.wikipedia.org/wiki/Search_engine#1990s:_Birth_of_...>. | | |
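[Ed.: PageRank's core idea is compact enough to sketch: a page's score is the probability that a "random surfer" following links lands on it, computed by power iteration. Below is a toy Python version over a hypothetical four-page link graph; a teaching sketch, not Google's production algorithm.]

    # Toy PageRank via power iteration. A link from a high-ranked page is
    # worth more than one from an obscure page, which is what set PageRank
    # apart from raw keyword matching. Hypothetical four-page web.
    links = {
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }

    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new = {p: (1.0 - damping) / n for p in pages}
            for page, outbound in links.items():
                targets = outbound or pages  # dangling page: spread rank evenly
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new[t] += share
            rank = new
        return rank

    for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))  # "c" wins: three pages link to it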
| ▲ | saltysalt 5 hours ago | parent [-] | | PageRank was an innovative idea in the early days of the Internet, when trust was high, but yes, it's absolutely gamed now, and I would be surprised if Google still relies on it. Fair play to them, though: it enabled them to build a massive business. | | |
| ▲ | marginalia_nu 4 hours ago | parent | next [-] | | Anchor text information is arguably a better source for relevance ranking, in my experience. I publish exports of the anchor texts Marginalia is aware of [1] if you want to play with integrating them. [1] https://downloads.marginalia.nu/exports/ (grab 'atags-25-04-20.parquet') | | |
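[Ed.: for anyone who wants to poke at that export, a minimal sketch using pyarrow. The column names below ('dest', 'text') are assumptions, not the documented schema, so print the schema first.]

    # Sketch: inspect the anchor-text parquet export with pyarrow.
    # The column names used below ('dest', 'text') are assumptions;
    # check table.schema for the real ones before relying on them.
    import pyarrow.parquet as pq

    table = pq.read_table("atags-25-04-20.parquet")
    print(table.schema)  # the actual column names live here

    df = table.to_pandas()
    # e.g. the most common anchor phrases pointing at each target
    top = df.groupby(["dest", "text"]).size().sort_values(ascending=False)
    print(top.head(20))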
| ▲ | dredmorbius 3 hours ago | parent | next [-] | | Though I'd think that you'd want to weight unaffiliated sites' anchor text to a given URL much more highly than an affiliated site's. "Affiliation" is a tricky term itself. Content farms were popular in the aughts (though they seem to have largely subsided), e.g., firms such as Claria (formerly Gator). There are chumboxes (Outbrain, Taboola), and of course affiliate links (e.g., to Amazon or other shopping sites). SEO manipulation is its own whole universe. (I'm sure you know far more about this than I do; I'm mostly talking at other readers, and maybe hoping to glean some more wisdom from you ;-) | | |
| ▲ | marginalia_nu 3 hours ago | parent [-] | | Oh yeah, there's definitely room for improvement in that general direction. Indexing anchor texts is much better than PageRank, but in isolation it's not sufficient. I've also seen some benefit from fingerprinting the network traffic websites generate, using a headless browser, to identify which ad networks they load. Very few spam sites have no ads, since there wouldn't be any economy in that. e.g. https://marginalia-search.com/site/www.salon.com?view=traffi... The full data set of DOM samples + recorded network traffic is in an enormous sqlite file (400GB+), and I haven't yet worked out any way of distributing the data. Though it's in the back of my mind as something I'd like to solve. | | |
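[Ed.: a minimal sketch of that fingerprinting idea using Playwright. The ad-network domain list is purely illustrative, and this is not Marginalia's actual pipeline.]

    # Sketch: record which third-party hosts a page contacts while loading,
    # then flag known ad networks. AD_NETWORKS is illustrative, not
    # exhaustive, and this is not Marginalia's actual pipeline.
    # Setup: pip install playwright && playwright install chromium
    from urllib.parse import urlparse
    from playwright.sync_api import sync_playwright

    AD_NETWORKS = {"doubleclick.net", "googlesyndication.com",
                   "taboola.com", "outbrain.com"}

    def ad_fingerprint(url):
        hosts = set()
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.on("request",
                    lambda req: hosts.add(urlparse(req.url).hostname or ""))
            page.goto(url, wait_until="networkidle")
            browser.close()
        # match on domain suffix so e.g. ads.doubleclick.net also counts
        return {h for h in hosts
                if any(h == d or h.endswith("." + d) for d in AD_NETWORKS)}

    print(ad_fingerprint("https://www.salon.com"))  # needs network access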
| ▲ | dredmorbius 2 hours ago | parent [-] | | Oh, that is clever! I'd also suspect that some networks/links are stronger signals of low-value content than others. Off the top of my head: crypto, MLM, known scam/fraud sites, and perhaps share links to certain social networks might be negative indicators. | | |
| ▲ | marginalia_nu an hour ago | parent [-] | | You can actually identify clusters of websites based on the cosine similarity of their outbound links. Pretty useful for identifying content farms spanning multiple websites. Have a lil' data explorer for this: https://explore2.marginalia.nu/ Quite a lot of dead links in the dataset, but it's still useful. |
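[Ed.: with binary link vectors (a site either links to a domain or it doesn't), that cosine similarity reduces to set arithmetic: |A ∩ B| / sqrt(|A| · |B|). A sketch on hypothetical crawl data.]

    # Sketch: cluster sites by cosine similarity of their outbound-link sets.
    # With binary vectors this is len(a & b) / sqrt(len(a) * len(b)).
    # The outlinks data below is hypothetical.
    import math

    outlinks = {
        "spamfarm-a.example": {"casino.example", "pills.example", "mlm.example"},
        "spamfarm-b.example": {"casino.example", "pills.example", "crypto.example"},
        "blog.example":       {"wikipedia.org", "github.com"},
    }

    def cosine(a, b):
        if not a or not b:
            return 0.0
        return len(a & b) / math.sqrt(len(a) * len(b))

    sites = sorted(outlinks)
    for i, s in enumerate(sites):
        for t in sites[i + 1:]:
            print(f"{s} ~ {t}: {cosine(outlinks[s], outlinks[t]):.2f}")
    # the two spam farms score 0.67 against each other, 0.00 against the blog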
| ▲ | saltysalt 4 hours ago | parent | prev [-] | | Very interesting, and it is very kind of you to share your data like that. Will review! |
| ▲ | snowwrestler 3 hours ago | parent | prev [-] | | Google’s biggest search signal now is aggregate behavioral data reported from Chrome. That pervasive behavioral surveillance is the main reason Apple has never allowed a native Chrome app on iOS. It’s also why it is so hard to compete with Google. You guys are talking about techniques for analyzing the corpus of the search index. Google does that and has a direct view into how millions of people interact with it. | | |
| ▲ | saltysalt 2 hours ago | parent | next [-] | | Yes indeed, they have an impossibly deep moat and deeper pockets. I'm certainly not trying to compete with them with my little side project, it's just for fun! | |
| ▲ | xnx 2 hours ago | parent | prev [-] | | > That pervasive behavioral surveillance is the main reason Apple has never allowed a native Chrome app on iOS The Chrome iOS app (which must use Apple's WebKit engine) still knows every URL visited, duration, scroll depth, etc. |
| ▲ | orf 6 hours ago | parent | prev | next [-] | | Sure, but the point is that the results are not relevant at all. It's cool, though, and really fast. | | |
| ▲ | saltysalt 6 hours ago | parent | next [-] | | I'll work on that adjustment; it's fair feedback, thanks! | | |
| ▲ | direwolf20 5 hours ago | parent [-] | | Unfortunately, this is the bulk of search-engine work. Recursive scraping is easy in comparison, even with CAPTCHA bypassing. You either limit the index to only highly relevant sites (as Marginalia does) or you must work very hard to separate the spam from the ham. And spam in one search may be ham in another. | | |
| ▲ | saltysalt 5 hours ago | parent [-] | | I limit it to highly relevant, curated seed sites, and don't allow public submissions. I'd rather have a small, high-quality index. You are absolutely right: it is the hardest part! |
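[Ed.: that curation step can be as simple as an allowlist check on the crawl frontier. A sketch with hypothetical domains, not the site's actual seed list.]

    # Sketch: restrict a crawl frontier to hand-curated seed domains so only
    # pre-vetted sites ever enter the index. Domains here are hypothetical.
    from urllib.parse import urlparse

    SEED_DOMAINS = {"wikipedia.org", "marginalia.nu"}

    def in_scope(url):
        host = urlparse(url).hostname or ""
        # accept the seed domain itself and any subdomain of it
        return any(host == d or host.endswith("." + d) for d in SEED_DOMAINS)

    frontier = ["https://en.wikipedia.org/wiki/PageRank",
                "https://spam.example/buy-now"]
    print([u for u in frontier if in_scope(u)])  # the spam URL is filtered out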
| ▲ | globular-toast 5 hours ago | parent | prev [-] | | What do you mean they're not relevant? The top result you linked contained the word "stackoverflow", didn't it? It's showing you exactly what you searched for. Why would you need a search engine at all if you already know the name of the thing? Just type stackoverflow.com into your address bar. I feel like Google-style "search" has made people really dumb and unable to help themselves. | | |
| ▲ | orf 5 hours ago | parent [-] | | The query is just to highlight that relevance is a complex topic. Few people would consider "perl blog posts from 2016 that have the stack overflow tag" the most relevant result for that query. |
| ▲ | pjc50 4 hours ago | parent | prev [-] | | Confluence search does this for our intranet. As a result, it's barely usable. Indexing is a nice, compact CS problem; not completely simple for huge datasets like the entire internet, but well-formed. Ranking is the thing that makes a search engine valuable, especially when faced with people trying to game it with SEO. |
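[Ed.: to make that contrast concrete, a toy inverted index; building it takes a few lines, while the ranking step (a bare term-frequency count below) is where the hard work hides. The two-document corpus is hypothetical.]

    # Sketch: a toy inverted index. Building it is the easy, "compact" part;
    # the ranking function below (bare term frequency) is the part that is
    # hard to get right at scale. Hypothetical two-document corpus.
    from collections import defaultdict

    docs = {
        1: "perl blog posts tagged stackoverflow",
        2: "stackoverflow is a q&a site for programmers",
    }

    index = defaultdict(set)  # term -> ids of docs containing it
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)

    def search(query):
        terms = query.lower().split()
        if not terms:
            return []
        hits = set.intersection(*(index[t] for t in terms))
        # naive ranking: total term frequency, ties broken by doc id;
        # real engines layer many more signals on top of this step
        return sorted(hits, key=lambda d: (-sum(docs[d].split().count(t)
                                                for t in terms), d))

    print(search("stackoverflow"))  # -> [1, 2]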