Everyone remembers when ChatGPT could only generate less-than-ideal pictures and designs: human hands on a dog, or a background of trees on the moon. It was used for fun, as it produced low-quality images that often deviated from the prompt. That’s not necessarily the case anymore.
AI has been the number one buzzword in media and business. In marketing particularly, it has become an increasingly common tool for improving performance and creativity. According to the Digital Marketing Institute, 50% of people use AI when creating content, and 43% believe AI is essential for successful social media strategies. As generative AI software has become more accessible, its use has become far more common.
In Seattle University’s on-campus organizations’ own marketing materials, the use of generative AI has become visible. The Seattle U Men’s Basketball (SUMBB) team’s Instagram page posted an AI-generated action figure collection of the team. Its caption reads, “Which Redhawk Basketball Action Figure are you copping??”
Sarah Finney, the associate athletic director for strategic communications, is responsible for most of the marketing content posted for the SUMBB team. Finney’s creative team saw a trend of universities using generative AI to market certain events and decided to join in.
“It was something that a member of my creative team saw other schools doing, and just kind of something fun to post, and we’re like ‘Hey, let’s just, you know, put photos in ChatGPT, and see what it pumps out.’ We really just thought it would be a fun post, particularly for men’s basketball right now, we’re out of season, so we don’t have game-related content, and so looking for other ways to just kind of continue to share content about our team,” Finney said.
The post, published April 16, received 252 likes and eight comments. The most-liked comment, with 26 likes, called AI “cringe.”
Finney shared that it’s not common for her marketing team to use AI when creating marketing content, noting concerns about plagiarizing the styles of existing artists.
“I had told my staff we’re not gonna generate something that looks like that [Studio Ghibli], that copies that style. But what we went ahead and did was something pretty generic. It’s just for fun, and I felt more comfortable with the style that we chose, and the way that we leaned into doing something with AI as opposed to making it look just like someone else’s art,” Finney said.
Eli Voigt, creative director of marketing communications at Seattle U, shared how the marketing team has used AI in other ways. Adobe Creative Cloud, software the marketing communications team often uses, has included generative AI features in its applications for a few years. The team has used them to expand and remove elements in Photoshop for seamless edits and image refinements.
With these tools, Voigt and their team involve designers in refining AI-generated content. They use AI only on background elements and for quality enhancements, never on people, so as not to compromise the trust or integrity of the department’s marketing materials.
Voigt noted that AI is primarily used for efficiency purposes and is not intended to inhibit the creativity of student employees.
“We support faculty, staff, and student workers in their design efforts. Through our Canva Enterprise account, they have access to a wide range of branded assets and customizable templates that we’ve created for use. This helps to empower them to create marketing material efficiently and helps ensure consistency with our brand,” Voigt wrote in an email to The Spectator.
Voigt emphasized integrating AI tools into the creative process to enhance rather than replace human work. The aim is to encourage faculty, staff and students to explore emerging technologies with intentionality and thought.
Despite the thoughtful exploration of how AI could enhance content production or be used just for fun, some students are concerned about the ethics of AI and how its use could replace designers and photographers.
Nic Tecson, a fourth-year design major at Seattle U, shared that they feel generative AI does not approach design problems and solutions the way designers do. Instead, it approaches design generically, lacking human depth and thinking.
“I think it’s such a gray area in terms of ethical applications as well as how it pertains to the current job market. I definitely think it can be a useful tool for designers, although I don’t think that it should completely replace a whole field of design just because we have the means,” Tecson said.
Julie Pham, a third-year design major, shared similar thoughts on the uses of AI. She said that even though feeding ideas to generative AI software can be an easy, cheap way to realize them, the results will continue to lack the touch of a human designer.
“Yes, you can get your content and the graphics you want out there. However, I don’t think that what you’re going to be putting out there will have as much of an impact if you ask a real designer to do it for you,” Pham said.
Pham argues that the idea of generative AI taking the place of a human is out of the question. For her, the work and effort she puts into a design are what make it special. She believes human vision and creativity are vital design components.
“As a student, I feel that, my work and my design process will be really valued and people really appreciate it, even if I pull up with the worst looking graphic ever like, people will still appreciate the design process, because that’s what makes good designers is that we have to go that process multiple times several times in order to, really develop our own style, our own work,” Pham said.
AI does not create something out of nothing; it can only recombine what is fed into it, which limits its creative capabilities. Tecson finds this limitation a relief, as it eases the concern of designers being replaced by AI.
Still, the biases of AI alarm some. Gabriella Palmeri, a fourth-year design major, shared her take on how AI inherits bias.
“AI is an algorithm that references already existing material. So it’s technically unbiased, but it uses human-made material. So whatever’s put into this algorithm is what will come out. We as humans are always going to be biased in some way. So it’s like we’re putting in biased material to receive this unbiased output, but it can never really be fully unbiased,” Palmeri said.
In an article from the Washington Post, reporters fed various prompts to AI image generators like Stable Diffusion and DALL-E, asking for images of attractive or productive people and depictions of routine activities. The software produced offensive tropes about race, class, gender, wealth, intelligence, religion and other cultures, revealing how the data AI pulls from, which can be littered with bigotry, can and has reinforced harmful stereotypes.
Palmeri added that much of what AI, especially ChatGPT, produces is incorrect. Overreliance on it could spread misinformation and misconceptions, affecting people negatively even outside the scope of generative AI.
As generative AI grows more common, new concerns about its misuse will follow, whether it is used to replace jobs like design or to spread misinformation. Some still view AI as a relatively harmless creative tool with exciting potential. Yet, time and time again, the human touch, whether through creativity or teaching, has proven impossible to replicate with generative AI like ChatGPT or Adobe Creative Cloud.