Non-Human is Not Neutral: The Ethics of ChatGPT


Generated by Craiyon.

In just the few months since ChatGPT was released to the public, it has already produced a lifetime's worth of creative works. Millions of users have tapped its text generation abilities to write Shakespearean sonnets, explanations of complicated topics in layman's terms and heartfelt love songs. Journalists have used it to write articles, and CEOs have used it to draft emails to their employees. While the buzz around ChatGPT continues to build, new ethical questions around artificial intelligence (AI) have begun to surface.

The “GPT” in ChatGPT stands for “Generative Pretrained Transformer,” a descriptive label for what the software is and how it reached its current level of development. The AI draws on patterns in the vast trove of text it was trained on and molds them into a response that satisfies the question the user asks. Models like ChatGPT are further trained through a reinforcement learning process that relies on human oversight to gauge the accuracy and coherence of their responses.
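To see what “generating” text means in practice, consider a minimal sketch of the next-token loop that underlies models like ChatGPT. This is an illustrative toy in Python, not OpenAI's actual implementation: the tiny vocabulary and the scoring function are stand-ins for a trained transformer with billions of learned parameters.

    import math
    import random

    vocab = ["the", "cat", "sat", "on", "a", "mat", "."]

    def toy_scores(context):
        # Stand-in for a trained transformer: a real model computes these
        # scores (logits) from billions of learned parameters.
        random.seed(" ".join(context))
        return [random.uniform(-1.0, 1.0) for _ in vocab]

    def sample_next(context):
        scores = toy_scores(context)
        # Softmax turns raw scores into a probability distribution.
        exps = [math.exp(s) for s in scores]
        probs = [e / sum(exps) for e in exps]
        # Draw one word according to those probabilities.
        return random.choices(vocab, weights=probs, k=1)[0]

    words = ["the", "cat"]
    for _ in range(5):
        words.append(sample_next(words))  # one word chosen at a time
    print(" ".join(words))

Each word is chosen one at a time based on statistical patterns in the context so far, which is why the output can sound fluent without the model understanding or reasoning about what it says.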

Ultimately, an AI only functions as well as the data it was trained on and the human feedback it is given. Bias has already surfaced in other algorithms shaped by human input, such as those that determine mortgage eligibility and those behind facial recognition software.

Adair Dingle, a computer science professor at Seattle University, considers innate bias in algorithms a key ethical concern with programs like ChatGPT that aim to be as humanlike as possible.

“Bias is a really big problem in AI because a lot of AI is trained on existing datasets. If those datasets exclude people, then you have an orientation toward making decisions that don’t address a segment of society and their needs,” Dingle said.

Eric Severson, a religious studies professor and author of the critical article “Shaped by Modern Tools,” also sees technology at its most hazardous when we assume it to be a neutral, unbiased product.

“With modern technology we must be intensely vigilant and humble; we should be most suspicious when a tool claims neutrality. To look with suspicion at modern tools is one way to express antiracism,” Severson wrote to the Spectator.

Racial and gender bias are two important considerations in the creation of an AI, but other barriers, such as proficiency with English, may keep many from using natural language processors like ChatGPT. The software may misread spelling or grammatical mistakes and generate undesired content.

Nate Kremer-Herman, a computer science professor at Seattle U, teaches a class on technological ethics. He considers ChatGPT a tool with equal capacity for positive and negative applications, and one that is far from foolproof.

“ChatGPT doesn’t reason… and that can mean that it creates confidently incorrect answers. So if you’re relying on it to tell you truthful things, you will not always be given a truthful answer,” Kremer-Herman said.

Misinformation produced by the software is one of the ways students using ChatGPT to complete academic assignments have been caught. Such incidents of cheating prompted Seattle Public Schools to ban the use of ChatGPT. The AI can generate highly convincing fake citations for academic papers by cobbling together real authors' names and common buzzwords related to the topic it is writing about. The use of ChatGPT on assignments has led many teachers to re-evaluate, or further solidify, their teaching practices. Professors like Dingle have already been prioritizing critical problem solving in their curricula; professors in fields that require some memorization of material may opt for in-person written or oral exams.

The more ChatGPT and similar AI are explored and tested by the public, the more apparent their limitations become. Pejman Khadivi, a computer science professor at Seattle U, also expressed concern about the application of AI in consumer-facing fields, such as customer service or healthcare.

“A person needs empathy, they need a real understanding from the other side [of the conversation]. I think it’s against human dignity to simulate the depth of that kind of empathy,” Khadivi said.

While ChatGPT cannot offer genuine empathy, its strengths, such as its adaptive conversational structure, could be harnessed for education. Whether we condemn ChatGPT or embrace it, the software is likely here to stay. Microsoft has already invested billions in OpenAI, the developer of ChatGPT.

As new technologies augment the human experience, they change us for better and for worse. Few of us can do long division as easily as we did before we got our hands on calculators. Even fewer can walk outdoors barefoot, the calluses that once made the task easy having been replaced entirely by socks and shoes. Like any tool, ChatGPT alters the user as much as it alters the environment around us. Severson calls this phenomenon “backflow,” the concealed, inevitable effect of using tools. If AI becomes further cemented in our lives, something equally powerful could be lost forever.