| ▲ | xmcqdpt2 10 hours ago |
| Then pangram isn't very good, because that article is full of Claude-isms. |
|
| ▲ | embedding-shape 10 hours ago | parent | next [-] |
| > because that article is full of Claude-isms
|
| Not sure how I feel yet about the whole "LLMs learned from human texts, so now the people who helped write those texts are accused of plagiarizing LLMs" thing, but so far it seems backwards and like a low-quality criticism. |
| |
| ▲ | snapcaster 9 hours ago | parent | next [-] | | Real talk. You're not just making a good point -- you're questioning the dominant paradigm |
| ▲ | xmcqdpt2 9 hours ago | parent | prev [-] | | I'm sure some human writers would write:
| > The specification forces this question on every path through the IMU mode-switching code. A reviewer examining BADEND would see correct, complete cleanup for every resource BADEND was designed to handle.
| > The specification approaches from the other direction: starting from LGYRO and asking whether any paths fail to clear it.
| > *Tests verify the code as written; a behavioural specification asks what the code is for.*
| However, this is a blog post about using Claude for XYZ, from an AI company whose tagline is "AI-assisted engineering that unlocks your organization's potential". Do you really think they spent the time required to actually write a good article by hand? My guess is that they are unlocking their own organization's potential by having Claude write the posts. |
| ▲ | embedding-shape 8 hours ago | parent | next [-] | | > Do you really think they spent the time required to actually write a good article by hand?
| Given that I've been familiar with Juxt since before LLMs were a thing, have used plenty of their Clojure libraries, and have hung out with people from Juxt, yes, I do think they could have spent the time required to both research and write articles like these. Again, I won't claim to know for sure how they wrote this specific article, but I'm familiar enough with Juxt to feel relatively confident they could write it.
| Juxt is more of a consultancy shop than an "AI company"; not sure where you got that from. I guess their landing page isn't 100% clear about what they actually do, but they're at least prominent in the Clojure ecosystem and have been for a decade, if not more. |
| ▲ | NetMageSCW 8 hours ago | parent | prev [-] | | Your guess is worth what you paid for it. |
|
|
|
| ▲ | DiffTheEnder 10 hours ago | parent | prev | next [-] |
Is it possible for a tool to know with high confidence whether something is AI-written at all? LLMs can be tuned or instructed to write in an infinite number of styles. I don't understand how these tools can exist.
| |
| ▲ | gcr 9 hours ago | parent [-] | | The WikiEDU project has some thoughts on this. They found Pangram good enough to detect LLM usage while teaching editors to make their first Wikipedia edits, at least enough to intervene and nudge the student. They didn't use it punitively or expect authoritative results, however. https://wikiedu.org/blog/2026/01/29/generative-ai-and-wikipe... They found that Pangram suffers from false positives in non-prose contexts like bibliographies, outlines, and formatting. The article does not touch on Pangram's false negatives. I personally think it's an intractable problem, but I do feel Pangram gives some useful signal, albeit not reliably.
|
|
| ▲ | cameronh90 10 hours ago | parent | prev | next [-] |
| It has Claude-isms, but it doesn't feel very Claude-written to me, at least not entirely. What's making it even more difficult to tell now is people who use AI a lot seem to be actively picking up some of its vocab and writing style quirks. |
|
| ▲ | mbo 8 hours ago | parent | prev | next [-] |
| Pangram has a very low false positive rate, but not the best false negative rate: https://www.pangram.com/blog/third-party-pangram-evals |
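The low-false-positive / higher-false-negative trade-off mentioned here can be made concrete with a toy confusion matrix. All numbers below are made up purely for illustration; they are not Pangram's actual figures from the linked evals:

```python
# Hypothetical detector results over a mixed corpus -- illustrative only,
# not real Pangram numbers.
human_texts = 1000       # texts truly written by humans
ai_texts = 1000          # texts truly written by an LLM

false_positives = 5      # human texts wrongly flagged as AI
false_negatives = 300    # AI texts the detector missed

# A low FPR means the detector rarely smears human authors,
# while a high FNR means plenty of AI text still slips through.
false_positive_rate = false_positives / human_texts
false_negative_rate = false_negatives / ai_texts

print(f"FPR: {false_positive_rate:.1%}")  # 0.5%
print(f"FNR: {false_negative_rate:.1%}")  # 30.0%
```

With numbers like these, a positive verdict is fairly trustworthy, but a negative one says little, which is why a "this looks AI-written" flag and a "this looks human" pass are not symmetric pieces of evidence.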
|
| ▲ | NetMageSCW 8 hours ago | parent | prev [-] |
| You sound like a flat earther and a moon landing denier combined. |