The Linderman Library rotunda on Feb. 26, 2024. The English department hosted "From Critical AI & AI Literacy to Design Justice" on Feb. 22, 2024 in Linderman Library. (Holly Fasching / B&W Staff)

Taking an Interdisciplinary Look at Generative AI


Communities around the world are grappling with the rise of artificial intelligence. Different groups are and will continue to be affected by its use and societal implications, and universities like Lehigh are among them.

Lauren Goodlad, an English professor at Rutgers University, co-editor of the journal “Critical AI” and chair of the Critical Artificial Intelligence initiative at Rutgers, delivered a lecture titled “An Interdisciplinary Look at Generative AI” to the Lehigh community on Feb. 22 in Linderman Library.

She shared insight from her research in artificial intelligence to discuss underlying issues with the use of AI. Her lecture also aimed to dispel common misconceptions about AI, while fostering a deeper comprehension of its societal implications at Lehigh.

The event was hosted by Lehigh’s department of English and the Health, Medicine and Society Program. It was open to all community members to attend either in person or virtually.

The lecture emphasized the prevalent confusion surrounding the term “AI,” which is often relegated to the realm of science fiction. Goodlad stressed the disparity between current AI technology and its fictional depictions, highlighting the need to demystify AI’s capabilities.

“The term AI itself is very confusing,” Goodlad said. “It’s often thought of as a theme for science fiction, so that’s one thing that I absolutely want people to get a good sense of.”

Goodlad also explored the societal ramifications of generative AI, urging a nuanced understanding of its role in conjunction with the humanities.

She highlighted the symbiotic relationship between AI and human intelligence, advocating for interdisciplinary collaboration to harness AI’s potential as a tool for augmentation. She also stressed the importance of cultivating critical AI literacy among students, irrespective of their academic disciplines, to navigate the ethical complexities inherent in AI.

Goodlad underscored the imperative for ethical AI design, emphasizing the importance of human perspectives in shaping AI’s trajectory. 

“That is something that AI researchers have not done for themselves, in part because they are not trained to think about the humanities or rather the question of what is human intelligence in the same way that interpretive social scientists and people who teach literature or language or history do,” Goodlad said. “It should be a very strong interdisciplinary enterprise.”

She said she feels strongly that human perspectives should inform decisions about how AI is designed, how intelligence is measured and understood, and what constitutes ethical use of the technology.

As far as using generative AI in her own classes, Goodlad appreciates that the tools are a progressive way to teach students how to write, but she feels it is important that students develop critical AI literacy. 

Whether they’re humanists, computer scientists or business students, she said it is essential to have an understanding of technology.

“My own teaching practice involves teaching students to use generative AI tools, like chatbots or image models for probing, which is actually what many data scientists do,” Goodlad said. “(My students) probe models in order to find out what’s inside them and how they behave.” 

Goodlad said teaching students to understand and perform probing experiments gives them a sense of how to use the model from the standpoint of a researcher rather than a consumer, which she said is the best of both worlds.

Goodlad started the Critical Artificial Intelligence initiative at Rutgers with a few of her colleagues who were interested in doing interdisciplinary work. They were interested in having a humanities journal on the topic of critical AI research and literacy. 

She said the group was not only interested in performing research on the technology, but also in forming a dialogue with technologists involving computer scientists, data scientists and computational linguists.

Interest spiked when ChatGPT was released in November 2022. Goodlad said those who had initially questioned her research quickly started asking her for advice and information.

“I don’t see any reason why it would dramatically change our systems of higher education,” Goodlad said. “Humans are still humans. We shouldn’t make the mistake of thinking that just because a human has a tool.” 

She explained that people are prone to making mistakes with new technology, especially when they’re not trained to scrutinize the assumptions behind it. She said a similar commotion took place in universities when the internet was created, but just as the internet didn’t harm the presence of universities, AI won’t either. 

However, this mentality on AI may shift when it comes to writing-intensive courses. 

AI models cannot cite the sources their output was drawn from. Goodlad said if you ask one for a citation, it will make one up, in what is known as a hallucination. If the system is connected to the internet, it will look for something online that appears to back up what it’s saying.

Goodlad wants everyone to be educated enough about AI technology to understand that it is not a search engine, and that anyone who does use it as one should remember the risk of receiving incorrect information.

Lorenzo Servitje, a professor in the Department of English and Health, Medicine and Society program, has always had a passion and interest in the history of statistics and probability. 

AI emerged from probabilistic and statistical prediction and predictive modeling, Servitje said. Because of this, he’s interested in thinking about how the term “AI” and its predecessors are related. 

Servitje said he admires the path Goodlad has navigated in her humanistic and interdisciplinary understanding of AI, applying her training as both a cultural historian and a literary scholar.

Technologies such as machine learning, large language models and deep learning have been appropriated and represented to the public in particular ways. Servitje said the real question is how people think about them.

“I think it also raises a lot of interesting questions for educators because how do we use it appropriately, generatively and usefully?” Servitje said. 

Servitje often questions the relationship between how humans think about themselves in relation to technology and how we imbue ourselves into technology. He said it’s important to bring an interdisciplinary perspective to these questions. 

Servitje said the emergence of this kind of technology opens up not only new possibilities and problems, but also an already entrenched, complex web of labor, environmental and economic concerns that shapes how AI’s mistakes can be caught and improved upon.

“I think that’s where having people with multiple perspectives, either they be humanists, cultural historians, literary scholars, or people trained in composition and rhetoric … really start to weigh in on the questions on how it gets deployed,” Servitje said. “Then you can bring in sociologists to really see how these things start to impact groups. Anthropology can think about how different cultural factors help understand how (AI) impacts where it’s received.”

Justin Greenlee, the director of the Writing Across the Curriculum program at Lehigh, believes Goodlad’s concepts of critical AI will be of great interest to the Lehigh community. 

He said he is most interested in Goodlad’s comments on the interactions of AI including labor, surveillance, misinformation, biases of all kinds and environmental impacts. 

“After a year of learning about AI, I find myself turning back to issues of academic integrity and university policy,” Greenlee said. “How can we emphasize Lehigh student voices on AI?” 

Greenlee said although ChatGPT launched rather recently in 2022, researchers at Lehigh have been investing in the study of AI for years. 

“We are still within a hype cycle related to AI,” Greenlee said. “If things go well, we could move into a plateau of productivity this year.”

Goodlad said Lehigh can learn a lot by harnessing the current discourse about AI, the work of its community of researchers and her ideas about collaboration.
