Where is the “I” in AI?

 

Words by Anna Varshavsky

Artwork by Midjourney, embellished by Sean M. Smith

AI has become the hot soundbite. Articles about AI have overtaken the media, covering new inventions daily, enthusiastically listing the benefits and cautiously raising concerns. 

Most authors agree that AI, used wisely and with regulatory parameters, is a fantastic tool. Midjourney is invaluable when ideating, enabling us at ACOMPANY to focus on concepting, not  on searching for images. ChatGPT enables us to try out a wide range of syntaxes before writing the final manifesto. These are tools that make us efficient and our work cost-effective. 

Renowned artist David Salle is one of the first traditional artists to embed on the front lines of artificial intelligence. Salle taught an AI to execute a number of sketches in his style. The process was lengthy and involved selecting groupings of AI-generated art that had plausible forms. It became dangerously tempting to read into the generated images and find meaning in what was just code. The experiment eventually produced an algorithm that could generate images close to the artist's style. Yet Salle remains unsure whether what the AI created can be called art, since all of the components were put in a blender and came out a remix, a faint echo of his voice. 

Using AI as an instrument for exploration substantiates its value in the creative process. However, AI is not a substitute for human choice or creativity, yet current trends and hype push it into exactly that role, because AI is cheaper than human talent and it's a new toy. 

I Am Robot

Let’s step back for a second and take a look at the definition of artificial intelligence. In 1956, Stanford professor emeritus John McCarthy defined AI as “the science and engineering of making intelligent machines.” Much early research involved humans programming machines to behave in clever ways, like playing chess.

The 2024 definition has evolved to: computer systems capable of performing complex tasks that historically only a human could do, such as reasoning, making decisions, or solving problems. A humorous op-ed described how AI would make human life dull because there would be nothing left for humans to do, including think or create. It was an interesting sociological hypothesis on human behavior: when humans are not pushed or motivated, they simply turn into couch blobs. 

AI software such as ChatGPT and now Google’s Gemini are complex search engines that decide what you read and see based on their algorithms and partnerships. This concerns me personally, because it takes away invaluable user choice, spoon-feeding information and contributing to confirmation bias. 

ChatGPT-maker OpenAI and the Associated Press struck a deal for the artificial intelligence company to license AP’s archive of news stories. OpenAI already had AP’s data; it simply decided to pay for what it took. And the New York Times, it seems, is suing OpenAI not because it wants to preserve intellectual property, but because it wants a piece of the pie. 

Google has restructured how it serves up search results, prioritizing AI results first, followed by reams of sponsored content. Native search is relegated to the tail end. Much of the time, the AI results are inaccurate or incomplete, pulling information from questionable sources that aren’t referenced. ChatGPT results are biased toward OpenAI-contracted vendors, meaning that smaller companies and local news outlets are left in the dust, compromising the very essence of freedom in this country: the free press. 

Artificial Condition

In my world—the advertising/marketing world—AI has been a cause of much debate. It leads me to question the current zeitgeist: how are big agencies and clients adopting AI, and what does the future hold?

For example, an AI-generated database of the agency’s output is a great idea. It streamlines the RFP process: clients want to see a portfolio of work calibrated to the assignment, and digging through thousands of portfolio pieces is time-consuming for humans but far more cost-effective with AI. 

Strategic research, a large chunk of the groundwork for new business, can be aided by AI. But asking the right questions to identify pain points and opportunities, both functional and emotional, and applying the data to an approach is perhaps best left to humans, because it requires analysis based on empathy and instinct, not just algorithmic logic. 

A very large agency conglomerate is building its own AI platform. Once live, the hope is that the platform will be able to access trillions of data points, billions of personal profiles globally, millions of creative assets, and billions of impression bids daily. It also has a dozen third-party AI service partners, including OpenAI, Adobe, Amazon, and Microsoft; other providers include Hugging Face, Runway ML, Midjourney, Pika, and Bria AI. 

To this I have many questions! If the internal agencies within the conglomerate are connected via this new AI, how are confidential information and conflicts of interest protected? As an agency working with healthcare clients, we are well aware that information given to an agency by a pharma client is proprietary, confidential, and sensitive, and must be secured in an IT vault. 

And exactly how will licensing work when the agency uses platforms such as Midjourney to create final art? Using an AI-generated final image could expose the agency’s clients to lawsuits for copyright infringement, since Midjourney does not provide legal protection. And where is the personal data coming from, and does this big agency have permission to access it? 

The agency says this AI platform will enable it to be more “precise in creating personalized content for micro audiences.” Its key ambition is to ensure that the AI operates without bias, but the whole concept is built on bias: an algorithm tuned specifically to each user profile. Additionally, AI is a reflection of the human condition, and humanity is not exactly bias-free at the moment.

Plus, I’m not certain how something like that is monetized. Agencies bill on an hourly scale; if AI is doing the agency’s work, shouldn’t the client just buy the program and do everything in-house? 

Foundation

Ad and design agencies don’t create widgets; we create ideas by applying reasoning, decision-making, and problem-solving. And when done successfully, empathy and humanity are the core creative instruments. KPIs and ROIs aside, the best campaigns with the best outcomes come from a group of humans who share experiences. Look at the Quaker ad created by Uncommon. Can AI really understand the nuance of a family? In theory, yes; from content scraped off the media, yes. But that’s regurgitation, not a unique idea. Look again at the David Salle experiment mentioned earlier in this blog. 

Magnetic Stories by Siemens Healthineers won big at Cannes for a reason: it is based on a very human insight, that children are scared of an MRI’s disruptive sounds. To minimize discomfort, the company created a series of stories that integrate the sounds into the narrative. The creatives used the machine’s bothersome noise as the story’s soundtrack: the WHOOSH WHOOSH became a spaceship taking off; the KNOCK KNOCK, a robot walking. I want one created for adults that weaves the horrible sounds into something composed by Brian Eno.

Do Androids Dream of Electric RFPs?

Concepts such as Magnetic Stories and the Quaker ad are generated by humans, using emotional experiences and insights. And those ideas start percolating the minute an agency receives an RFP or RFI from a client. We have been the recipients of many RFPs, and what seems to be happening is this: clients are using AI to write RFPs, more and more agencies are relying on AI to write their submissions, and finally, clients are using AI to sift through the submissions and pick contenders based on an algorithm. This makes me wonder: where’s the human in that process, other than sending an email, and sometimes not even that much?

AI-generated text is forced. Granted, with time AI discourse will become more fluid, but even with sophisticated machine learning, there’s an intuitive human “tell” when something is written by AI: it lacks soul and a narrative, a true voice rather than an echo. In a response to an RFP, shouldn’t the unique voice of the agency come through? Otherwise, don’t all submissions sound the same, a template rather than an original? But if AI is also reading the submissions on the client end, why involve humans at all? Why not have AI generate the creative, too? I think that’s where this conversation is going, and it’s unfortunate.

Ghost in the Machine

There’s nothing more human than market research. Yet apparently, a generative AI company is creating AI archetypes for market research, pulling stereotypical profiles off social media and the web based on preset criteria. So if your brand is looking for 50-year-old women going through perimenopause, the AI will create a character for you based on tropes and memes from unverified sources rather than real, live individuals. Then another AI interviews the AI archetype, and a third AI spits out the results. 

Market research, although expensive, leads to insights. A good market research facilitator has a list of questions, but it’s the human-to-human interaction that leads to the best results, and often the questions get thrown out in favor of actual conversation. This is how my team and I found out that doctors weren’t prescribing a treatment correctly, which led to the brand’s decline in perceived efficacy. Sure, AI can read our facial expressions, but it’s not capable of the intuitive, instinctual moves a human makes, like delving further because your gut tells you there’s more to the story. 

A.I. Care

An AI software engineering company had this to say about AI in healthcare: “…Generative AI has the potential to make healthcare more 'accessible, empathetic and fine-tuned' to the needs of patients and HCPs.” Accessible and affordable, yes. But quality, empathetic care? I don’t think so.

A friend’s son recently got sick over a holiday weekend. He woke up with horrible throat pain and a 102º fever. He tested negative for COVID, and his human physician ran a quick test for strep, which came back inconclusive. The son had ALL the symptoms of strep, and his history showed he was susceptible, having had it a few times over the past few years. The doctor was about to write a prescription for an antibiotic but instead tested him for mononucleosis. It was an intuitive decision, a hunch. The mono test came back positive. Had the doctor prescribed antibiotics, my friend’s son would’ve gotten a rash or had a more serious adverse reaction to the improper medication.

Ex Machina

But, getting back on topic, there’s a newfound enthusiasm for AI without regulations or an understanding of the consequences. AI has become a trendy soundbite du jour, like omnichannel was a few years back. More and more, ad agencies and clients seem to be navigating away from human experience and connection, a loss felt not only in the advertising industry but across society as a whole.

Just look at the 18+ AI apps that enable you to create companions. The creators built them to combat “loneliness.” Really? How exactly does that help someone who is in desperate need of a human connection? It reminds me of the movie HER, and not in a good way. And an honorable mention goes to ElliQ, a device that is supposed to make older folks feel less lonely. It can not only remind them to take their meds but also hold conversations. Some seniors have grown attached to it in a way that is preventing them from seeking actual human connection, and a few legislators are raising concerns, because through these conversations the AI is obtaining extremely personal information. 

Tech companies are rewriting their terms and conditions to include a clause allowing AI learning, giving it free rein to access everything and anything.

And, just a couple of days ago (July 8, 2024), TikTok announced that it will allow brands to create ads using AI-generated avatars that look like real people. There has been some pushback from the FCC, which stated that children cannot tell the difference between an AI character and a live human, putting them in danger of falling victim to scams. I know plenty of adults for whom this will also be an issue.

There is no solution or end to this blog. It took me forever to write this litany, because every time I thought I was done, another AI innovation hit the media circuit. 

History shows that humans create inventions with the best intentions. Yet it is the application of an innovation that can turn something with the potential to improve the human experience into a weapon. Nuclear energy is a perfect case study.

All this reminds me of Silicon Valley, a great comedy that profiles a character arc we see today: young, brilliant, idealistic minds create tech based on an unmet need, yet they don’t have the emotional maturity or real-life experience to understand the consequences, nor the mentorship, regulatory experience, or financial backing to conduct rigorous testing. Look at JUUL.

Sunny

I’m currently answering an RFP. Yup, I’m writing one the old-fashioned way, no ChatGPT involved. It’s an arduous process, but I want our client to hear our agency’s unique voice when they read our response (and I’m truly hoping it will not be read by an AI scanning for keywords and phrases). At ACOMPANY, we welcome tech. Sean Smith, our award-winning CCO, was included in Midjourney’s beta testing. We adored working with it and watching the tool progress into something we use today. But we use it as just that: an additional tool. All our work is generated by humans, because when creating advertising, or even branding, no amount of data or algorithms will replace human intuition, instinct, or thought. 

Humans aren’t perfect, but we feel and taste. There are so many things you can’t describe in words or code: the anxiety of eating an ice cream that’s melting, the confusing mix of love and angst toward family members who hold opposing views, the tangle of emotions when you’re diagnosed with a disease. AI can’t do that. It can’t understand love or death, only parodies of them. It doesn’t get “mouth feel” or the luxury of a cold martini after a really bad week. 

Writing is not easy, so I don’t have time for much of anything else. As a result, my house is a mess, my dog is doing the “gotta go to the bathroom” dance around the living room, and I need to think of something for dinner. That’s why I wish OpenAI would create a version of Rosie, the Jetsons’ robot housekeeper. I can do all the writing and designing; I adore what I do as a profession. It would be nice to have AI do the chores, so I have more time to think, create, innovate, and be with other humans…and my dog. 

Resources:

https://www.adweek.com/agencies/high-flying-publicis-groupe-reveals-its-latest-evolution-with-coreai-centralization

https://www.nytimes.com/2024/05/09/technology/meet-my-ai-friends.html

https://www.adweek.com/media/openai-preferred-publisher-program-deck/

https://digiday.com/media/publicis-groupe-debuts-new-coreai-platform-and-e300-million-ai-investment/
https://www.fiercepharma.com/marketing/siemens-healthineers-cannes-grand-prix-winning-campaign-weaves-scary-mri-sounds-whimsical

https://www.nytimes.com/interactive/2023/09/22/arts/design/david-salle-ai.html?searchResultPosition=19

https://www.nytimes.com/2024/06/17/style/tiktok-ads-ai-influencers.html

 
anna varshvasky