By Amanda Hicok
Computer-generated technology has transformed digital artwork on the internet, raising questions among students and faculty alike about what constitutes plagiarism, what ownership rights mean, and how art and art-engineering programs will be affected, says George Legrady, director of the Experimental Visualization Lab in the Media Arts & Technology (MAT) graduate program at UC Santa Barbara.
The ease of producing digital artwork, illustrations, and photorealistic images with AI has already altered UCSB’s MAT program, which now grants greater creative liberties when it comes to AI-generated work.
Generative AI programs built on natural language processing (large language models) can create many types of art, and most of these programs can be accessed for free or at low cost. This has given professionals and laypeople alike the ability to produce professional-level graphic art with AI, raising concerns about the copyright and authenticity of the content produced, Legrady says.
Legrady, a former chair of the MAT program, gives us his take on what this and similar AI art production mean for the field of digital media and for UCSB students.
Q: Can you explain what generative AI is, particularly in the context of algorithmic art? How do creative AI programs generate the images?
A: Yes, generative AI is where you extract [digital artwork] from a system by providing either a text prompt or an image prompt, and then the system delivers the image. In MidJourney [an image-text creative AI model], you type in a phrase that determines the content of the image and then set some parameters, like the image width/height ratio, the image quality, and how much you want MidJourney to impose an aesthetic style.
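The parameters Legrady mentions correspond to MidJourney's prompt flags: `--ar` sets the width/height ratio, `--q` the rendering quality, and `--stylize` how strongly the model imposes its own aesthetic. A minimal sketch of how such a prompt string is assembled (the helper function here is hypothetical; only the flags themselves are MidJourney's):

```python
def build_prompt(description: str, aspect_ratio: str = "1:1",
                 quality: int = 1, stylize: int = 100) -> str:
    """Append MidJourney-style parameter flags to a text prompt.

    --ar sets the width/height ratio, --q the rendering quality,
    and --stylize how strongly the model imposes its own aesthetic.
    """
    return f"{description} --ar {aspect_ratio} --q {quality} --stylize {stylize}"

# Example: a landscape-format prompt with default quality and stylization.
print(build_prompt("foggy harbor at dawn, film grain", aspect_ratio="3:2"))
```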
Q: What are some ethical considerations for the use of generative AI both in the classroom and in professional settings?
A: Professional settings are more complicated because there’s the question of intellectual property. Many professionals working in the field of graphic design are very concerned about the fact that their style is appropriated by the system. Graphic designers are dependent on delivering a particular style, and if the AI learns that style and can deliver it, then graphic designers lose some control over their protection. And then their career is going to be challenged in some ways.
As an artist, that problem doesn’t really exist, because in art, usually, at least in my case, every work that I do is a type of new work. It’s not limited by what I’ve done in the past. It’s new work.
Q: How do you ensure that AI-generated artworks maintain originality and distinctiveness? When do you view the artwork as the creation of the AI user, who writes the prompts, and when do you attribute the credit to the programs and their developers?
A: Originality and distinctiveness are big challenges, because any image that an AI system delivers is based on a large collection of images that it has been trained on. Basically, what you’re getting is a remix of what’s been fed to the system. For instance, the Laion-5B database [an image-text dataset used to train such models] contains 5.85 billion images, so the number of images is very large. The re-configuration is quite complex. You cannot trace what the source is. I think MidJourney says that anything that is done through MidJourney, they own the rights to.
The best results are achieved by iteratively repeating and rephrasing the prompt, adding directions both subtle and explicit, such as “reduce darkness, make it look like a super-8 image texture, no straight lines and old cars.”
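The iterative workflow described above, repeating the prompt while appending new directives, can be sketched as a loop that accumulates refinements onto a base prompt. This is a hypothetical illustration of the process, not MidJourney's actual interface:

```python
def refine(base_prompt: str, refinements: list[str]) -> list[str]:
    """Return the sequence of prompts produced by iteratively
    appending each refinement to the running prompt."""
    prompts = [base_prompt]
    for extra in refinements:
        prompts.append(f"{prompts[-1]}, {extra}")
    return prompts

# Each iteration nudges the output, like pulling on the reins.
history = refine(
    "street scene at dusk",
    ["reduce darkness",
     "make it look like a super-8 image texture",
     "no straight lines and old cars"],
)
for prompt in history:
    print(prompt)
```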
Q: Can organizations or individuals use MidJourney or Dall-E [another image generating model] for professional work? If so, what are the considerations when doing so?
A: Organizations and businesses do use AI software to upgrade their range of outputs. For instance, an illustrator may take a week to do a design for a book cover, but with AI software hundreds can be generated in a short time.
Q: What are the distinguishing features of AI generated artwork that you notice when organizations have used it, and what are your feelings about it?
A: As an artist, I consider the AI software a collaborator, providing output variations as a way to expand the range of possibilities and subtleties. I begin with the concept and the aesthetic constraints, and then see what the software can generate. I then try to push the results by adjusting the prompts to guide the software, somewhat like riding a horse, where you continuously pull on the reins to indicate the direction you want to go. Some clients or audiences do not want AI-generated results. One way to notice an AI output is to examine the secondary details beyond the main features: for instance, do the ears or earrings look realistic?
Q: How can students stay relevant while competing with AI?
A: It depends on the field, of course. But in my case, which is the field of creating images…one could consider the AI as a collaborator, or as an assistant, or as a source of inspiration. And one can work with it in such a way that it, kind of, provides thoughts or images or ideas that one would not have considered on one’s own.
Q: What are the risks to incorporating generative AI in the jobs of graphic artists?
A: It’s a challenge, because if you’ve developed a style and that style is appropriated by the system, it produces works for other people using that style…Also, if you as a human spend a week developing an image, the AI machine cranks out 100 in a very short time. Let’s say you’re working with AI and you’re generating images: you should reveal the steps you’ve taken and use them to make the work your own. So, the focus is on human involvement.
Q: How would you suggest the general public gains awareness or understanding of how these tools could be implemented?
A: Experience and knowledge are an outcome of repeated exposure which is a process of education. The culture slowly adapts, but the general understanding is always far behind how such imaging or text generating technologies function.
Even in the case of photography, we are behind in our comprehension of how the technology functions. We still believe in the image, even though it is a construct by a human, or now by imaging technologies shaped by the constraints of the technology. Additionally, our cultural description even today does not differentiate between a thing and a representation of that thing. I can show you a photograph and describe what is represented as either “this is my car” or “this is a photograph of my car.” We still do not precisely differentiate between those two modes of describing.
Q: If we can look ahead 10 years regarding AI artwork, what is the best-case scenario for creative industries?
A: I am not good at guessing the future, but what I have noticed over the past 30 years of working with computing technologies is a process of continuous evolution: the current technology being surpassed by the new one, without much thought to how our notions of representation and our understanding of things are being impacted.
The French cultural theorist Jean Baudrillard described this process of continuous cultural re-imagining through his discussion of two terms, “Simulacra” and “Simulation.” He argues that we begin by representing a thing or experience through a faithful copy that everyone accepts. The next phase is one where the copy or representation is seen as false and therefore problematic. This is followed by a situation where the representation is untrue to the original, but we nonetheless accept it. And the final phase is where the representation no longer has any connection to the original, yet we now consider it the trusted, “the real,” and in fact prefer it, even though false: covering our backyard with artificial grass, for instance, and preferring it to natural grass.
As an artist who evolved out of the tradition of a conceptual approach to photography and transitioned to computational image creation, my overarching goal has been to explore the discrepancy between what is represented and the degree to which optical-computational technologies disrupt or impose a meaning onto the thing represented. I foresee living in a culture where the gap between what an image means and what it represents is further distanced. We believe in the photographic image even though it is a constructed representation that widens the gap between the thing and its representation. With AI this is exponentially increased, pushing us into a world where we will live further in the symbolic, at a greater remove from the real.
Amanda Hicok attended UCSB as a Global Studies major with a minor in Professional Writing. She has also studied Design Management at Parsons Paris School of Art & Design and Advertising at The Art Institute of San Diego, California.