rvz 5 hours ago:
Also, having to wait for a ChatGPT "thinking" response to search for information that a Google search returns faster loses them lots of money. I still believe this can work, and I won't claim to be surprised by this failure. But it's a great opportunity to execute this really well if OpenAI and others aren't interested in getting good at it. Perplexity also attempted this, got sued by Amazon, and it appears semi-abandoned. The catch is that it has to be as quick as (or quicker than) a Google search, and compatible with existing checkout flows.
TeMPOraL 4 hours ago (in reply):
> Perplexity also attempted this, got sued by Amazon and it appears semi-abandoned.

Any details on that? I feel the answer is more likely there than in "friction". Hardly any purchase of consequence is so sensitive to friction that the difference between a Google search and an LLM response matters (especially since, in reality, we're talking 20+ manual searches per one LLM response). I.e., I'm not going to use an LLM's advice on some random $0-100 purchase anyway, and losing #$ on a ##$ purchase due to a suboptimal choice is not that big of a deal - but I absolutely am going to consult it (and have it compile tables and verify sources) on a $500+ purchase, and for those I can afford to spend a few more minutes on research (or rather, a few hours less compared to doing it the usual way).