▲ dang a day ago
Could you please stop posting this sort of indignant-sensational comment? It's not what this site is for, as you know (or should know).
▲ echelon a day ago | parent
Dang, can you explain how this is indignant or sensational? Anthropic's leadership and researchers continue to this day to post messages saying engineering will be fully automated. I can go find recent messages on X if you'd like. This forum is composed mostly of engineers, who will be the most impacted if their vision of the world pans out.

YC depends on innovation capital to make money. If the means of production are centralized, how does YC make any money at all from engineers? Such a world will be vertically and horizontally integrated, not democratically spread for others to take advantage of.

Now I don't think that's what's going to happen, but that's what the messaging has been and continues to be from Anthropic's leadership, researchers, and ICs. Why should we support companies like this? Shouldn't we advocate for open models, where any market participant can fully utilize and explore the competitive gradients? I don't think I'm saying anything controversial here.

Furthermore, if this pans out the way it seems it will - a set of three or four AI hyperscalers - we'll also be in the same situation we have today with the big tech hyperscalers. Due to a lax regulatory environment, these companies put a ceiling on startup exits by funding internal competition, buying competitors, etc. I don't see how the situation will improve in an AI world.

If you're a capitalist, you want competition to be fierce and fair. You don't want concentration of power. I can see how an Anthropic IC might not like this post, but this should be fairly reasonable for everyone else who would like to see more distribution of power.
| ||||||||||||||||||||||||||