Pyrex, nightsworn alchemist

I keep seeing this discourse.

> "I'm opposed to AI art."
> "No, you're opposed to the effects of AI art under capitalism."

I would like to see more people reply:

> "... yes, and we are living under capitalism."

I think culture ultimately will reincorporate AI art in a way that's not utterly destructive to artists, but I'm sick of hearing this offered as a defense for Tool That Exists Mostly To Scab.

Things I want more of, while I'm at it:

- Public criticism of people who make AI art products.
- Public criticism of products that incorporate AI art, including boycotts.

Things I want less of:

- Dogpiling of randos who create/consume AI art for their own use.
- Copyright maximalism for _literally any reason_.
- Corporate intimidation campaigns of the "look! we can hire a robot to replace you" stripe (especially when it is obvious the robot is doing a terrible job).

The likely trajectory is that in five years people will all have a pretty reasonable idea of what AI art is actually capable of -- which is a lot -- but professional artists will largely not have been replaced, everything the robots are bad at will be worth much more, and a few specific styles will have greatly declined in value because they're easy for the computer to imitate.

But in the short term? Well, lots of anti-worker action taking the form of media campaigns.

I basically think the same of AI coding assistants, for what it's worth!

Those memey "ChatGPT generated a game _from scratch_" videos will disappear -- although people will probably still be using large language models to generate example projects.

Relatively unobtrusive AI assistants like IntelliCode, which is built into Visual Studio 2022, will probably beat "generate a cognitively taxing wall of gibberish"-type products like Copilot. (Yes, I know those are by the same company.)

(Re LLMs in general: I literally don't think they're ready for any kind of knowledge work, but I have no idea how the crash will happen. I think they're unironically better at persuasion than humans, which means it will be pretty hard to kill them off even as their incapacities become better known and the "hallucination problem" gets more press.)