Let's be nice; they've released Stable Diffusion and now SDXL for free, which has completely changed the landscape on what can practically be done by individuals.
Having an open foundation model for image-generation is a service to the world. It just isn't exactly obvious how it could possibly lead to profit.
We really have no idea how the case law is going to fall out on whether or not it is legal to train on copyrighted content. There is precedent in the US that lends itself to the idea that this would be considered fair use, such as the Google Books case (Authors Guild v. Google).
Other countries, like Japan, explicitly allow training on copyrighted content for AI and machine learning - see Article 30-4 of Japan's Copyright Act.
As it stands, there are very few places where the law here is settled, and StabilityAI is, to my knowledge, not in any of those places. So it's probably not reasonable at this point to be so definitive in claiming that copyrighted content was stolen - it may well turn out that the letter of the law explicitly supports this as legal. Or maybe not! We'll see.
I don't think the Google Books case is a good analog here, because image-model training runs counter to a few of the fair-use factors, particularly the one about the effect on the market for the original work.
And even if it does turn out to be legal, you're still a piece of shit if you take an artist's work and train on it without their permission.