| ▲ | maplethorpe 19 hours ago |
| A few sentences in, I was thinking that the article felt AI-generated, so I scrolled to the bottom of the page. There's no author listed, but there is this disclaimer: "AI assists in refining our editorial process, ensuring that every article is engaging, clear and succinct." One thing I hope we'll see in the future on these types of articles is the ability to view the original prompt. If your goal is to be succinct, you can't get much more succinct than that. |
|
| ▲ | sarreph 19 hours ago | parent | next [-] |
The (presumably fully human) author is listed in the byline at the top of the article. What is sadly rather ironic is that the author's first name, "Al", looks like "AI" when stylised in the article's font.
| |
|
| ▲ | Waterluvian 18 hours ago | parent | prev | next [-] |
> view the original prompt
I think this assumes a very limited scope of how AI gets used for these. As if the article is a one-and-done output from a single prompt. I can imagine many iterative prompts combined with some copying and pasting to get an hour's worth of copy in five minutes.
|
| ▲ | autoexec 18 hours ago | parent | prev | next [-] |
> One thing I hope we'll see in the future on these types of articles is the ability to view the original prompt.
Would it matter if the same prompt gives different output? You couldn't verify it.
| |
| ▲ | saghm 16 hours ago | parent [-] |
| The point is not to need to look at the output if the prompt itself has all of the info that someone cares about. |
| ▲ | baby_souffle 14 hours ago | parent [-] |
| If I put a button on the bottom of a web page that says "click here to see the secret sauce" and you click it, I pop up some text. How likely are you to just trust, let alone know for sure, that the text I showed you is actually what I fed to the LLM? |
|
|
|
| ▲ | jonplackett 18 hours ago | parent | prev | next [-] |
That jovial, overly friendly tone is a giveaway. Like it thinks its writing style is HILARIOUSLY clever.
| |
| ▲ | permo-w 17 hours ago | parent [-] |
| The reason LLMs use that jovial, overly friendly tone is that it's so common in journalism and marketing. This article does smell of ChatGPT, but there's absolutely no way to know for sure. People using LLMs annoy me just as much as people who are so certain they can tell the difference. A smart person can make ChatGPT sound completely authentic, and a very boring, middle-of-the-road writer who uses em-dashes can make themselves sound completely inauthentic. It's not like LLMs got their style from nowhere. As far as I'm concerned, as long as the factual information has been curated by a human, I don't give a shit. |
|
|
| ▲ | idkfasayer 15 hours ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | ortusdux 19 hours ago | parent | prev | next [-] |
| Just have an AI summarize it for you /s https://marketoonist.com/2023/03/ai-written-ai-read.html |
| |
| ▲ | elpakal 19 hours ago | parent [-] |
| Or move the disclaimer to the top. Or better yet, have aggregators like HN add a badge if it's likely AI-generated. |
| ▲ | jedberg 18 hours ago | parent [-] |
| > or better yet, have aggregators like HN add a badge if it's likely AI generated
How could you possibly tell? I've been playing around with AI detectors, putting in known all-human samples, known all-AI samples, and mixed samples. The only thing they've gotten right is not marking a human sample as 100% AI (but one marked an AI sample as 100% human). Having such a badge would be a witch-hunt for sure. |
|
|
|
| ▲ | 14 hours ago | parent | prev [-] |
| [deleted] |