| ▲ | TheJoeMan 7 hours ago |
That first image, “Structure Prompts with XML”, just screams AI-written. The bullet lists don’t line up, the numbering starts at (2), random bolding. Why would anyone trust hallucinated documentation for prompting? At least with AI-generated software documentation, the context is the code itself, being regurgitated into bulleted English. But for instructions on using the LLM itself, it seems pretty lazy not to hand-type the preferred usage and human-learned tips.
| ▲ | rafram 7 hours ago | parent | next [-]
No, it’s two screenshots from Anthropic documentation, stitched together: https://platform.claude.com/docs/en/build-with-claude/prompt... The post even links to that page, although there’s a typo in the link.
| ▲ | Calavar 7 hours ago | parent | prev | next [-]
It looks like a screenshot from the Claude desktop app, so I don't think the author is trying to disguise the AI origin of the material.
| ▲ | croes 5 hours ago | parent | prev | next [-]
You just hallucinated that the content is AI-generated.
| ▲ | doctorpangloss 4 hours ago | parent | prev [-]
There must be an OpenClaw YouTube video helping people post to Hacker News, or something, because the front page is overrun with AI slop like this article, which makes no sense anyway. The author literally has no idea what any of this stuff means.