By Karlota Jasinkiewicz Herrador
Artificial intelligence is here to stay, and panicking about it is not the answer, says Fabian Offert, a professor of digital humanities who teaches in UC Santa Barbara’s Germanic and Slavic Studies department. In his course “Critical AI,” he encourages students instead to approach AI from a critical standpoint, putting its risks in context.
Offert formerly worked for German cultural institutions such as the ZKM Center for Art and Media in Karlsruhe, the Goethe-Institut New York, and the Ruhrtriennale Festival of the Arts. While working as a media art curator, he developed an interest in the relationship between computation and art. He then completed his Ph.D. at UCSB in Media Arts and Technology, writing his dissertation on artificial neural networks.
Offert is now teaching the third iteration of “Critical AI,” offered through the Comparative Literature program. The course covers the history and theory of AI from a humanities perspective, exploring the design and construction of machine learning systems and their philosophical and political implications.
Through its small class size of fewer than 30 students and engaging activities such as attempting to “break” ChatGPT, the course allows students to dive into the world of artificial intelligence and understand the discourse surrounding it.
Offert recently sat down for an interview about the perceived “threats” of AI and the importance of a balanced approach towards them.
Q: There is a lot of discussion about the need for AI regulations. What are your thoughts on that?
A: Of course, there are certain aspects of AI that can become dangerous. There's a whole area of research on biases and uses of AI, for instance in policing or face recognition. But what I always tell my students is: Ban guns first. AI is still much less problematic than many other things that are already out there that we need to address right now: guns, economic circumstances, the conditions of production.
Q: What are the most urgent problems you think AI presents?
A: A big problem that comes from both language and image [machine learning] models is that the Internet will become 99.5% spam. People already use ChatGPT for search engine optimization and to produce websites. If you Google stuff in two years’ time, the first 100 matches will be AI-generated stuff, so the overall quality of what you can find on the web will decrease.
Q: What do you think about the impact of these tools within academia?
A: I don't know if I want to be quoted on this, but if you want to cheat, you’ll find a way. At least in the university setting, I don't see it as a big threat. It's maybe a threat to somewhat outdated ways of evaluating people. If all I do to test my students’ knowledge is to ask them to write a five-paragraph essay, then yes, it's a problem. But then I'm not testing what I want to test in the first place; I’m testing people's capabilities to write a five-paragraph essay.
Q: What is the difference between machine learning and AI?
A: We can't get rid of artificial intelligence as a concept. It’s what people have called this kind of research since the 1950s. I usually prefer to say “machine learning,” because AI is connected to issues of consciousness and sentience, while these models are just another layer of computation. People will always call machine learning AI, but I think in terms of critically studying these kinds of machines, it's important to make a distinction between critical AI studies and critical ML studies.
There’s a difference between criticizing a blanket technology and finding the technical element that corresponds to a certain criticism. For instance, think about face recognition. We can say “this is wrong in general, let's get rid of it,” or you can link this general critique to specific properties of the models that are being used, such as training data sets featuring just Western white guys, which make the model fail on the faces of Black people. When a system does a bad thing, it's important to say that it does it, and why it does it.
Q: Your course bridges the gap between technology and humanities. Do you see a larger trend in this direction?
A: I don't see it happening, but I would love to. As a digital humanist, you need to know how to code and read AI papers. You don't need to understand the details, but you need to understand their way of arguing for a particular piece of technology. There's not a big difference between figuring out a complicated piece of literature and a technical computer science paper. For computer science students, there has to be a loop between building stuff and critical reflection, talking about the philosophical implications. And many do. It's not like all engineers are just mindlessly building the next atomic bomb. But a better combination of theory and practice is really important.
Q: Who do you think would benefit from taking this AI course?
A: That's not for me to say, but if you do technical work, it would probably be good. This class draws people from all across campus. One of my students last year, a psychology major, applied a standardized theory of mind test to ChatGPT, which it failed. In this little class exercise, she almost single-handedly disproved a paper that had come out claiming ChatGPT has a theory of mind. This shows that there's still lots of low-hanging fruit—this is not something that you can't catch up to anymore, it's still a field in development. This is why I really like to teach this class, students come up with all kinds of interesting ways to address these topics.
Karlota Jasinkiewicz Herrador is a second-year exchange student at UCSB majoring in Political Science and Law, and minoring in Data Science. They wrote this article for their Digital Journalism course.