| ▲ | make3 12 hours ago |
| There is no acceptable use of AI for most people in the artistic field. They see it as extreme treason, and I understand. They're under incredible threat. They are conscious of the need to prevent momentum in a bad direction. If they don't fight it hyper hard, a huge fraction of them will be out of a job instantly. |
|
| ▲ | hgoel 12 hours ago | parent | next [-] |
| That's a strange position to take. I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears. Given how much software and other AI/computer vision improvements 3D content often relies on, it's weird to decide that the algorithm itself is unallowable. |
| ▲ | mbgerring 8 hours ago | parent | next [-] | | Do you have any idea how hard it already is to make a living in a creative field? | |
| ▲ | make3 11 hours ago | parent | prev | next [-] | | This is a very first-degree analysis. AI is seen as an oppressor and a threat, and AI providers are seen as oppressors. It's understandable that people don't want to collaborate with their oppressors, either directly or by association. If you were a Jew, would you buy shoes from the Nazis just because you were individually safe from them at that moment? Or would you if you were of a minority they hadn't started exterminating yet? Or if they were not exactly the Nazis killing your people but some affiliated group? This sounds extreme until you realize they are under threat of losing their livelihood for good. They are right to not accept your inevitability point without a fight; this is a human thing that can be fought. Revolutions have happened, and will continue to happen. I don't necessarily agree with this but I do understand it. | |
| ▲ | FireBeyond 11 hours ago | parent | prev [-] | | > I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears. "If you ignore their biggest, their primary, concern, their other concerns seem almost trivial". | | |
| ▲ | hgoel 10 hours ago | parent [-] | | I literally said I understand if the training data sourcing is their primary concern. | | |
| ▲ | make3 8 hours ago | parent | next [-] | | He meant that that's not the primary concern. The sourcing of the data is a red herring; they care about losing their ability to make a living doing the thing that they love, which is so central to their identity. | |
| ▲ | FireBeyond 10 hours ago | parent | prev [-] | | I'm not sure how to parse your statement... I don't think there'd be much care for (or need for) the UX change if it weren't for the whole ideological/valid fear about training AI on creative works? But it has been a long day, so I apologize. | | |
| ▲ | hgoel 10 hours ago | parent [-] | | I've been all over the place with my thoughts, so it's fair for you to be unsure of how to parse what I said. When making my initial post, I was thinking "this is a coding model, it isn't an image/3d model generation model, so why do they care?". I further interpreted make3 as saying that 3d artists were opposed to AI in general because they view any AI use as trending towards taking away their jobs. So, what I meant when I said '... otherwise ...' wasn't trying to dismiss the data sourcing concern, but more like "I understand if the data sourcing is the concern, but you (make3) seem to be saying it's about the use of AI in general (ie even if, hypothetically, an ethically sourced training dataset was used for a model), which feels like a weird restriction to me". That was when I added the edit to my initial post. |
| ▲ | 2001zhaozhao 7 hours ago | parent | prev | next [-] |
| This is the best phrasing of the issue I've seen online anywhere. You can find AI useful and still be against its introduction into your field for entirely understandable reasons. Unfortunately this does create uphill friction for any good-intentioned people trying to use AI to improve art by empowering people to take on more ambitious projects. (This is a general statement and not related to the case of Anthropic. Of course Anthropic here is just trying to sell their product, which is a fair thing to do in isolation, but I also understand the opposition to it on the grounds of its downstream effects.) |
|
| ▲ | simianwords 7 hours ago | parent | prev [-] |
| Completely false and I hate this puritan gatekeeping. Artists who hate AI are the type to put more importance on the craft than the end product itself. Art is a means of communicating something personal. It’s not meant to show off skills in how well you can move a pencil or how many fricking tools you know in adobe. AI removes all these hurdles and directly presents you with the end problem - communication. Artists hate that because most artists don’t have anything to communicate. These people deserve to be automated away. I don’t wanna see more derivative shit. Artists who have something special to communicate won’t feel threatened by AI but feel more freedom. |
| ▲ | javascriptfan69 6 hours ago | parent [-] | | >AI removes all these hurdles and directly presents you with the end problem - communication. Which is why 99.9% of AI art is worthless. There's literally nothing personal or interesting about getting Grok to fart out some picture you thought about while sitting on the toilet in the morning. AI art will never be good without actual artists embracing the medium. |
|