Artificial Intelligence (AI) used to be a fantasy that was seen only in sci-fi movies, but recent innovations have made it a staple in nearly all of our lives. There are few places where it is more prevalent than in schools, as the word-processing and information-gathering abilities of programs such as OpenAI’s ChatGPT and Anthropic’s Claude are incredibly well-suited to tackle university assignments.
However, the ability to have a large language model complete assignments and help students study raises practical and ethical concerns. Is work produced by ChatGPT considered plagiarism? Does instant retrieval of information stop students from learning as effectively? On the other hand, can it be used to study more effectively, generate ideas or provide students with insight?
The answers to these questions are anything but clear. AI is rapidly evolving and being widely adopted by students in academic spaces, with new uses and advancements emerging on a seemingly daily basis. With the technology becoming so ubiquitous in such a short time, professors and administrators alike are grappling with how to control it and make it help, rather than harm, the classroom experience.
Onur Bakiner, associate professor of political science and director of the technology and ethics initiative, emphasized that the best approach to controlling AI is through a combination of schoolwide policy and independent decisions by professors.
“Every discipline and every instructor will legitimately develop specific policies around AI use in teaching and research, so the institution’s job is to establish general guidelines to overcome uncertainty,” Bakiner wrote in an email to The Spectator.
How professors act largely depends on their personal perspectives on AI. Some professors see AI as wholly detrimental to students and an acute threat to both academic integrity and the learning process.
Others, such as Professor of Communication and Media Christopher Paul, believe that AI is just another step in the advancement of technology that students will have to get acclimated to.
Dismissing such concerns, Paul said that cheating with AI isn’t very different from the cheating students have done for years in other ways. Students in the past paid graduate students and peers to write their papers, or relied on other students’ reading notes instead of doing the reading themselves, practices not far removed from the ways AI can be used to cheat.
Paul also emphasized positive uses of AI, such as reducing the number of students who come to class completely unprepared to discuss the readings.
“Students don’t always do all the readings for a class, and that makes my classes run worse… if AI can help us get through some of those things, I think that that’s going to be something that is useful for them, both in my class and in life in general,” Paul said.
Not all teachers are as optimistic as Paul, though. Thomas Mann, a visiting assistant teaching professor of political science, strongly opposes AI, believes it should be eliminated from the classroom entirely and has adjusted his curriculum accordingly.
“In my classroom, I have moved to handwritten exams, handwritten midterms and handwritten finals. That’s the only way I can think to have no AI at all,” Mann said, later adding, “The university should go back to the medieval ages. That they should just have an anti-technology, anti-AI policy across the board.”
He believes that students cannot resist the allure of having their assignments done for them, and says he has seen the quality of student work degrade in recent years as generative AI has become more prevalent. Still, he holds sympathy for students.
“AI is one more thing added to this already toxic sludge of lack of attention and social media… it’s like it’s just a degradation of what human beings are capable of. It’s really unfair to your generation, because it’s like, you guys aren’t responsible for this. It’s a system that you’ve been thrown into,” Mann said.
Between Paul’s hopeful outlook and Mann’s cynicism, though, exists a wide range of reactions to AI’s use in the classroom.
Many professors, such as Adjunct Professor of Communication and Media Tyrah Majors, fall between the two extremes: they will continue to teach as they have while acknowledging the need to regulate the potential risks of AI.
“I would say it’s forced me as a professor to regulate more how students are using it. But, at the end of the day, students are going to use what they have access to,” Majors said.
Part of being a professor is a willingness to adapt in order to give students the best possible education, and the advent of AI is no exception. While we wait for time and experience to tell whether AI helps or harms students, professors will regulate it as they see fit, finding the solutions they feel best allow them and their students to succeed.