mossTechnician 3 days ago
Personally, I find companies with names like "Anthropic" to be inherently icky too. Anthropic means "human," and if a company must remind me it is made of/by/for humans, it always feels less so. E.g., The Browser Company of New York is a group of friendly humans... Second, generative AI is machine-generated; if there's any "making" of the training content, Anthropic didn't do it. Kind of like how OpenAI isn't open: the name doesn't match the product.
FooBarBizBazz 3 days ago
I actually agree with your principle, but don't think it applies to Anthropic, because I interpret the name to mean that they are making machines that are "human-like". More cynically, I would say that AI is about making software that we can anthropomorphize.
derefr 3 days ago
> Anthropic means "human," and if a company must remind me it is made of/by/for humans

Why do you think that's their intended reading? I had assumed the name was implying "we're going to be an AGI company eventually; we want to make AI that acts like a human."

> if there's any "making" of the training content, Anthropic didn't do it

This is incorrect. First-gen LLM base models were built largely from a raw Internet text corpus, but since then the major improvements have come from:

• careful training-data curation, using data-science tools (or LLMs!) to scan the corpus for various kinds of noise or bias and prune them out. This is "making" in the sense of "making a cut of a movie" (first sketch below);

• synthesis of training data using existing LLMs, with careful prompting and non-ML pre/post-processing steps. This is "making" in the sense of "making a song on a synthesizer" (second sketch below);

• Reinforcement Learning from Human Feedback (RLHF). This is "making" in the sense of "noticing when the model is being dumb in practice" (from explicit feedback UX, async sentiment analysis of user responses in chat conversations, etc.) and then converting those observations into weights on existing training data, plus additional synthesized "don't do this" training data (third sketch below).
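
To make the curation bullet concrete, here's a toy sketch in Python. The quality heuristic and the thresholds are invented for illustration; real pipelines use many more signals, but the shape is the same: score, dedupe, prune.

    import hashlib

    def quality_score(doc: str) -> float:
        """Crude quality heuristic: reject very short documents and
        score the rest by how much of the text is alphanumeric
        (i.e., not markup noise). Thresholds are made up."""
        if len(doc) < 200:
            return 0.0
        return sum(c.isalnum() or c.isspace() for c in doc) / len(doc)

    def curate(corpus):
        """Prune exact duplicates and low-quality docs: 'making a
        cut of a movie' out of raw footage."""
        seen = set()
        for doc in corpus:
            digest = hashlib.sha256(doc.encode()).hexdigest()
            if digest in seen:
                continue  # exact-duplicate pruning
            seen.add(digest)
            if quality_score(doc) >= 0.8:
                yield doc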
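
For the synthesis bullet, same idea: prompt an existing model, then apply non-ML post-processing. The generate callable is a stand-in for whatever model API you'd actually use, and both the prompt template and the post-filter here are assumptions, not anyone's real pipeline.

    TEMPLATE = (
        "Write a question a user might ask about {topic}, "
        "followed by a correct, well-explained answer."
    )

    def synthesize(topics, generate, min_len=100):
        """generate: any callable that sends a prompt to an existing
        LLM and returns its text completion."""
        for topic in topics:
            raw = generate(TEMPLATE.format(topic=topic))
            # non-ML post-processing: drop degenerate generations
            if len(raw) >= min_len and "as an AI" not in raw:
                yield {"text": raw, "source": "synthetic", "topic": topic}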
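
And the RLHF bullet, reduced to its data shape: turning thumbs-up/down events into (prompt, chosen, rejected) preference pairs, which is the usual input to reward-model training. The field names are made up; real feedback logs are messier.

    def to_preference_pairs(feedback_log):
        """Pair upvoted and downvoted responses to the same prompt;
        each triple is one reward-model training example."""
        by_prompt = {}
        for event in feedback_log:  # e.g. {"prompt": ..., "response": ..., "vote": +1}
            by_prompt.setdefault(event["prompt"], []).append(event)
        for prompt, events in by_prompt.items():
            good = [e["response"] for e in events if e["vote"] > 0]
            bad = [e["response"] for e in events if e["vote"] < 0]
            for chosen in good:
                for rejected in bad:
                    yield {"prompt": prompt, "chosen": chosen, "rejected": rejected}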