I think this is missing the point and not seeing where the technology is going. For generative art, right now it's basically prompt in -> image out, with no iterative process. But I can tell you that coding, which is also a creative endeavor, is not like that at all. It is highly iterative. You go back and forth shaping the implementation, exploring ideas, veering off into tangents about some obscure encoding question, or this or that. The point is, coding with a GPT is an iterative, creative process where you as the operator bring all of the human elements: vision, purpose, ingenuity. (The truth is, the solution usually has to come from you when working with a GPT, but the GPT is invaluable as an exploratory, prototyping, sounding-board partner.)
There is a famous video of Picasso painting a fish on a backlit screen. If you imagine what is happening in his process, it is iterative: he envisions a shape, and his brain directs his hands to produce that shape; then he looks at the result and envisions another shape to build upon it. This happens over and over. You can see the process at work in the video. It is iterative, there is a feedback loop, there is the human element -- that is creativity.
Now imagine someone with a mind as vibrant as Picasso's, but with a bodily injury such that he cannot use his limbs. Imagine he gets a Neuralink implant that lets him interface his mind with a generative AI able to render on a screen what he envisions. This takes the place of Picasso's brain directing his own hands to put things on the canvas. Instead, the translation of vision into reality happens THROUGH a generative AI.
If you think about what current generative AI is doing, it is already a crude form of that: a human envisions something -> that becomes a prompt -> the generative AI puts something on the screen in response. Now fast forward ten or twenty years, to when the process is fluid and iterative, we all have neural implants, and we can iteratively think representations into being via generative AI, tweaking every aspect of the final result, with the totality of prior visual representation available to us like the tools and palettes in Photoshop.
You just need to think ahead to where these technologies can go. I think the people who are belittling generative AI are simply not thinking far enough ahead. It's like seeing Steve Wozniak's first breadboard and being unable to envision an iPhone with a Retina display in your pocket.
There are very fast text-to-image and image-to-image models that run interactively today. Some generate in well under a second, updating as the next few characters are typed or lines are sketched. That's not really new: I built a somewhat slower version months ago, and I believe I recently saw an image generator running at over 30 frames per second.
Also, Adobe Photoshop has already incorporated image generation.