Ask HN: Does Claude use 'prior' in a Bayesian sense more than English?
4 points by slake 17 hours ago | 4 comments
Just an observation: when asked to summarize articles or extract insights, I see Claude use the word 'prior' far more often than it appears in ordinary English writing (journalistic prose). And it's clearly using it in a Bayesian sense, because it keeps saying things like 'updating priors' or 'the prior doesn't hold'. Probably something I noticed after reading the 'goblin' and 'gremlin' article.
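For context, the Bayesian sense being described is just Bayes' rule: a "prior" is your belief before seeing evidence, and "updating the prior" means computing the posterior. A minimal sketch (the numbers are illustrative, not from the thread):

```python
# "Updating a prior" in the Bayesian sense: Bayes' rule turns a prior
# belief P(H) into a posterior P(H|E) after observing evidence E.

def update_prior(prior, likelihood, evidence_prob):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical numbers: prior belief 0.30, P(E|H) = 0.80, P(E) = 0.50.
posterior = update_prior(0.30, 0.80, 0.50)
print(posterior)  # 0.48 -- the evidence raised the prior from 0.30
```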
▲ nivertech 5 hours ago
AI talk is turning into Silicon Valley pseudo-math slang: priors, exponentials, latent space. You get lines like "no priors" or "embracing exponentials" that sound smart but mostly signal status. Same move as N. Taleb and "convexity": a real idea turned into a generic intellectual flex.
▲ bjourne 12 hours ago
Probably? Reinforcement learning creates bots with specific styles. For example, ChatGPT is very fond of "typically", "unpack this", and "if you want".
▲ ex-aws-dude 15 hours ago
Once again, a post with literally 3 points and 2 hours old is at the top of /ask. Why is the HN algorithm such ass? Can we talk about that?