mossTechnician 3 days ago

Personally, I find companies with names like "Anthropic" inherently icky too. "Anthropic" means "relating to humans," and if a company has to remind me it is made of/by/for humans, it always feels less so. For example:

The Browser Company of New York is a group of friendly humans...

Second, generative AI is machine-generated; if there's any "making" of the training content, Anthropic didn't do it. Kind of like how OpenAI isn't open: the name doesn't match the product.

FooBarBizBazz 3 days ago | parent | next [-]

I actually agree with your principle, but don't think it applies to Anthropic, because I interpret the name to mean that they are making machines that are "human-like".

More cynically, I would say that AI is about making software that we can anthropomorphize.

derefr 3 days ago | parent | prev [-]

> Anthropic means "human," and if a company must remind me it is made of/by/for humans

Why do you think that that's their intended reading? I had assumed the name was implying "we're going to be an AGI company eventually; we want to make AI that acts like a human."

> if there's any "making" of the training content, Anthropic didn't do it

This is incorrect. First-generation LLM base models were trained largely on raw Internet text corpora, but since then all the improvements have come from the following (sketched in code after the list):

• careful training data curation, using data-science tools (or LLMs!) to scan the training-data corpus for various kinds of noise or bias, and prune it out — this is "making" in the sense of "making a cut of a movie";

• synthesis of training data using existing LLMs, with careful prompting, and non-ML pre/post-processing steps — this is "making" in the sense of "making a song on a synthesizer";

• Reinforcement Learning from Human Feedback (RLHF) — this is "making" in the sense of "noticing when the model is being dumb in practice" [from explicit feedback UX, async sentiment analysis of user responses in chat conversations, etc.] and then converting those observations into weights on existing training data plus additional synthesized "don't do this" training data.
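
To make those three senses of "making" concrete, here is a minimal Python sketch. It is not anything Anthropic actually runs; every name in it (Example, is_noisy, curate, synthesize, apply_feedback, and the generate callable) is invented for illustration, and the heuristics are deliberately crude.

    # Toy illustration of curation, synthesis, and feedback-driven reweighting.
    # Nothing here reflects any real lab's pipeline; names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Example:
        text: str
        weight: float = 1.0  # training weight; feedback nudges this up or down

    def is_noisy(doc: str) -> bool:
        """Cheap stand-ins for real curation filters (length, boilerplate, mojibake)."""
        too_short = len(doc.split()) < 20
        mostly_boilerplate = doc.lower().count("click here") > 3
        broken_encoding = "\ufffd" in doc
        return too_short or mostly_boilerplate or broken_encoding

    def curate(raw_corpus: list[str]) -> list[Example]:
        """'Making a cut of a movie': keep only documents that pass the filters."""
        return [Example(doc) for doc in raw_corpus if not is_noisy(doc)]

    def synthesize(seed: Example, generate) -> Example:
        """'Making a song on a synthesizer': prompt an existing model (the
        generate callable is hypothetical) and lightly post-process its output."""
        prompt = "Rewrite the following as a question/answer pair:\n\n" + seed.text
        return Example(generate(prompt).strip())

    def apply_feedback(dataset: list[Example], flagged_snippets: set[str]) -> None:
        """RLHF-flavored step, crudely: down-weight examples that resemble
        responses users flagged, so later training sees less of them."""
        for ex in dataset:
            if any(bad in ex.text for bad in flagged_snippets):
                ex.weight *= 0.1

The specific heuristics don't matter; the point is that each step is an editorial choice made by the lab, which is the sense in which the training set is "made" rather than merely scraped.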

mossTechnician a day ago | parent | next [-]

We were both assuming, so I didn't expect to need to back up my reading, but their own website ticks the "for humans" trope checkbox: their "purpose is the responsible development and maintenance of advanced AI for the long-term benefit of humanity."

I acknowledge and appreciate Anthropic's addition to the corpus of scraped data, but that data (both input and output) is still ultimately from others; if it did not exist, there would be no product. This is very different from a video editing tool, which I purchase or lease with the understanding that I will provide my own content, or maybe use licensed footage for B-roll.

ctoth 3 days ago | parent | prev [-]

I read Anthropic as alluding to the Anthropic Principle, as well as the doomsday argument and related memeplex[0], mixed with "human-centric" or "about humans." Lovely naming, IMHO.

[0]: https://www.scottaaronson.com/democritus/lec17.html