Painting by Words
When I started playing around with Stable Diffusion and another text-to-image generator, OpenAI’s DALL-E 2, I racked my brain for what I’d actually like to see. Something ridiculous, like a puppet playing guitar on the moon? No. A beautiful sunset? Nope. But what about a scene I was writing in a script?
Without the resources or time to consult a concept artist for inspiration, I found DALL-E 2 and Stable Diffusion genuinely useful. The generated art helps set the mood, clarify the images in my imagination, and even spark new ideas. And all it takes is a descriptive, carefully crafted prompt to steer the algorithm in the right direction. Here are a few examples, in the (altered) style of Caspar David Friedrich.
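If you want to try this kind of prompting yourself, here is a minimal sketch of how a text prompt drives Stable Diffusion locally, assuming the Hugging Face diffusers library and a CUDA-capable GPU; the model checkpoint and the prompt itself are only illustrative.

```python
# Minimal sketch: generating concept art from a text prompt with Stable Diffusion.
# Assumes the Hugging Face `diffusers` library and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

# The model ID is illustrative; any Stable Diffusion checkpoint on the Hub works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A descriptive, carefully crafted prompt steers the model toward the scene you want:
# subject, mood, artist reference, and medium all pull their weight.
prompt = (
    "a lone figure on a cliff overlooking a foggy sea at dawn, "
    "in the style of Caspar David Friedrich, oil painting, dramatic light"
)
image = pipe(prompt).images[0]
image.save("concept_art.png")
```

In my experience, the wording of the prompt matters more than any sampler setting for this kind of mood-setting concept art; a vague prompt yields a vague picture.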
AI and machine learning have the power to revolutionize nearly every aspect of our digital lives. What are people going to do about it? Will we see man rise against machine, the way the movies told us would happen? Spielberg’s Flesh Fairs, celebrating the violent torture and annihilation of sentient machines while angry mobs scream, “WHAT ABOUT US?”
Indeed, what about us? Artificial intelligence, as far as we know, is neither mature enough nor autonomous enough to be a viable existential threat. If anything, the real threat is internal. As an artist, would you generate an image like the cover photo I made for this post, or would you draw it by hand? If you will be paid for your work either way, and the AI may well follow a brief better than you can, why spend the extra effort?
These are the dilemmas we now face. Some artists will shun artificial intelligence because their pride and principles compel them to; others will embrace it as a tool. In this writer’s opinion, the latter option has a promising future… as long as the AI stays within the boundaries we set for it.