stereolambda | 2 days ago
Honestly, the SEO talk sounds like reflexive coping in this discourse. I get that the web has cheapened quality, but we now have the technology to defeat most SEO and other trash tactics on the search-engine side. Text analysis as a task has been cracked open: Google and others could detect dark patterns with LLMs, or even with plain deep learning, and that would probably be more reliable than answering factual queries directly. The problem is that there is no money or fame in using it that way, or at least that's what people think at the moment. But we could return to enforcing some sort of clear, pro-reader writing and bury the 2010s-2020s SEO garbage on page 30.

Not to mention that LLMs randomly lie to you, with fewer secondary hints at trustworthiness (author, website, other articles, design, etc.) than you get in any other medium. And there is the sustainability side: what incentivizes people to publish anything? I really see the devil of convenience as the only argument for LLM summaries here.
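To make the ranking-side idea concrete, here is a toy sketch. The keyword-stuffing heuristic is just a hypothetical stand-in for a real LLM or deep-learning classifier, and the signals, constants, and function names are all made up for illustration:

    import re
    from collections import Counter

    def spam_score(text: str) -> float:
        # Hypothetical stand-in for an LLM or deep-learning classifier:
        # flag keyword stuffing by measuring how much of the text is
        # repeats of its three most common longer words.
        words = re.findall(r"[a-z']+", text.lower())
        if len(words) < 20:  # too short to judge
            return 0.0
        top = Counter(w for w in words if len(w) > 3).most_common(3)
        stuffed = sum(n for _, n in top) / len(words)
        # Subtract a small baseline so ordinary repetition isn't
        # penalized; both constants are made-up tuning knobs.
        return max(0.0, min(1.0, (stuffed - 0.1) * 2.5))

    def rerank(results):
        # results: list of (url, relevance, page_text) tuples.
        # Demote each page in proportion to its spam score; a real
        # engine could instead bury high scorers on "page 30".
        scored = [(url, rel * (1.0 - spam_score(text)))
                  for url, rel, text in results]
        return sorted(scored, key=lambda r: r[1], reverse=True)

    stuffed = "best vpn deal best vpn review best vpn price " * 10
    honest = ("We measured throughput and latency for five providers "
              "under identical network conditions, then compared the "
              "results against each vendor's published claims and "
              "noted where our numbers diverged significantly.")
    print(rerank([("spam.example", 0.9, stuffed),
                  ("blog.example", 0.8, honest)]))

A classifier with actual language understanding could catch manipulation far subtler than stuffing; the point is only that once you have a score, the ranking hook is trivial.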
zahlman | 2 days ago
> But we could return to enforcing some sort of clear, pro-reader writing and bury the 2010s-2020s SEO garbage on page 30.

We could. But it absolutely will not happen unless and until it can be more profitable than Google's current model. What's your plan?

> Not to mention that LLMs randomly lie to you, with fewer secondary hints at trustworthiness (author, website, other articles, design, etc.) than you get in any other medium. And there is the sustainability side: what incentivizes people to publish anything? I really see the devil of convenience as the only argument for LLM summaries here.

Well, yes. That's the problem. Why rely on the same random liars as taste-makers?