gwern | 3 days ago
> I don't know what the rest of your comment is talking about. Googling "Yirgacheffe" shows it's a real thing. The Safavid coffee/Qajar tea claims seem accurate as well. So you at least learned something from the article.

No, I didn't learn anything from the article (except how susceptible HN has become to even 4o-level LLM outputs, of course). I learned something from your comment, and you learned that from a Google search. Do you see the difference?

> So the actual content of the article is perfectly plausible.

So in other words... it added nothing to even a superficial familiarity with the topic.
pazimzadeh | 2 days ago
> I learned something from your comment, and you learned that from a Google search

But I wouldn't have thought to look into Qajar tea/Safavid coffee if it weren't for the blog post (by the way, I find 4o to be pretty good at history).

What I can't figure out is why you're so confident that the OP didn't verify the LLM output, and/or would have published whatever the model wrote, faulty or not (which, again, in this case it wasn't).

You're clearly allergic to basic LLM style, or at least to LLM text masquerading as human, so I'm curious which you'd consider worse:

1. LLM-generated text reflecting an accurate prompt/input, or

2. genuine human BS that wants to be taken seriously (e.g. The Areas of My Expertise by John Hodgman, if it weren't in jest)?

Personally, I prefer #1, since I can still learn something from it.