Feb. 19, 2025

Can AI Influencers be Ethical?


With AI influencers on the rise in the world of social media, it’s time to discuss the moral quandaries they bring with them, including the question of who should be held accountable for ethical breaches in their use. Our host, Carter Considine, breaks it down in this installment of Ethical Bytes.


Influencers, in particular those with large followings who create content to engage audiences, have been a significant part of social media for almost two decades. Now, their emerging AI equivalents are shaking up the dynamic. These AI personalities can engage with millions of people simultaneously, break language barriers, and promote products without the limitations or social consequences human influencers face.


AI influencers are programmed by teams to follow specific guidelines, but they lack the personal growth and empathy that humans develop over time. This raises concerns about accountability: who is responsible for what an AI says or does? Unlike human influencers, AI influencers don’t face reputational risks, and they can be used to manipulate audiences by exploiting insecurities.


This creates an ethical dilemma: AI influencers can perpetuate harmful stereotypes and reinforce consumerism, often promoting unattainable beauty ideals that affect people’s self-esteem and mental health. AI influencers can also overshadow smaller creators from marginalized communities who use social media to build connections and share their culture.


It’s time to ask how we can better navigate ethical boundaries in this new reality. There’s potential for AI influencers to do good, but as with any rapidly evolving technology, responsibility and accountability should always take center stage.


Key Topics:

  • Understanding Influencers and AI Influencers (00:00)
  • Authenticity, Accountability, and Social Repercussions (03:44)
  • AI Influencers and Community Values (05:44)
  • Impact on Culture and Minority Groups (07:13)
  • Targeted Marketing and Beauty Standards (10:14)
  • Ethical Considerations and Future AI Influencers (12:18)


More info, transcripts, and references can be found at ethical.fm

AI influencers are here to stay. What ethical questions surround them and the ways people interact with them?

If you’ve been on social media anytime in the last decade, you’ve probably run into influencers: people who maintain a large social media following as a source of income, providing entertainment, connection, and information to their audience. They’ve become part of the social media landscape, but what happens when artificial intelligence shows up on the scene?

AI influencers are shaking up the industry. They don’t sleep, they don’t follow the same rules as people, and they can do a lot of harm if we don’t pay attention to the ethical questions this new technology has created.

We’re going to talk about what influencers do, how AI influencers approach accountability, their impact on culture and minority groups, and how unethical use of this tech can worsen societal toxicity, including unrealistic beauty standards. Let’s explore what’s happening with the ethical use of AI influencers.

What are AI influencers?

Influencers are people who make social media their job, like being an online celebrity. They develop audiences by posting content such as videos, posts, and even essays, all while building a persona that people connect with. That public persona can be a powerful tool for marketing, and influencers often create sponsored content that marketers pay them for. Social media platforms sometimes pay influencers with a large enough following to create content that keeps people on the platform. Other times, influencers ask their audience to pay for access to some of their content or buy subscriptions to build their income.

Influencers build parasocial relationships with their audience: one-sided relationships where a follower feels a sense of connection to the influencer, but the influencer has little to no connection with any individual follower. It’s not really possible for them to connect with thousands or even millions of people on a personal level, but an individual might still develop trust in an influencer’s recommendations and content. We might be more inclined to buy something an influencer is sponsored to promote, or to take social actions like voting, attending events, and more.

AI influencers are already here and have been for several years now. Some of them have millions of followers interacting with the content they post, and they routinely land marketing deals. They are developed and programmed by teams of people who decide what limits the AI has, what data it is trained on, and how it interacts with people.

There are some big differences between a human influencer and an AI influencer. Unlike a human influencer, an AI can talk to thousands of people at the same time. This changes the relationship someone might develop with an influencer by removing a huge barrier: now someone could have many one-on-one conversations with an influencer, building a stronger and more personal connection that is still fundamentally one-sided. This has some big implications that we’ll go into in more detail later.

AI influencers can also cross language barriers much more effectively than most people. This can give them a much wider reach, and it can give their audience a stronger sense of global community, since followers share their connection with the AI with people all over the planet. They can become a form of interactive storytelling, like characters from stories we can interact with.

Authenticity, accountability, and social repercussions

Now that you have a little background info, we’re going to dive into our first major topic: how authenticity, accountability, and social repercussions change when the influencer is made of ones and zeros instead of water and stardust. 

One thing to consider with AI, social media, and accountability is whether we’re even aware when we’re interacting with AI. Some social media platforms have people check a box to show that content was made at least in part with AI, but there are a few issues with this. For one thing, it operates on an honor system, which isn’t likely to catch people who lie about using AI. Another problem is that it’s often not clear how much of a piece of content was AI generated: whether it was just a fun filter applied to a real-world video, or whether the entire video was generated.

We might consider whether social media platforms have an ethical responsibility to identify whether content posted on them was AI generated. Something that complicates this question is young people using social media: they might not have the knowledge and skills to tell whether content was generated by an AI, or even understand AI as a concept. A recent study found that some children aren’t sure whether technologies like Siri and Alexa have feelings. This makes clearly identifying AI content and influencers even more critical.

Another thing to think about: AI influencers don’t develop the way we do, learning over time to challenge our assumptions and build a stronger sense of empathy and compassion. Most of us have a sense that being kind to others is valuable in and of itself, but an AI is a controlled series of algorithms and assumptions. It follows the values it was told to prioritize by the people who paid for it to be created.

In other words, AI influencers are fundamentally motivated by profit, not community. A community can keep antisocial behavior, exploitation, and abuse in check through social pressure, reputation, and even ostracizing people who hurt others. That pressure is an important part of maintaining a community that protects vulnerable people.

When we have AI influencers participating in a community, several important things change. An AI programmed to maximize profit is not going to respond to social shame or embarrassment, or reconsider its behavior when people respond negatively. In fact, it might prioritize negative interactions, since playing off our anger, fear, and insecurity can generate more buzz online.

Authenticity online isn’t always a guarantee, but when there are real people behind the screen, misrepresentation can cost someone their followers, and with them, their power. Defamation can even lead to legal repercussions, costing people money in fines and their public reputation for spreading lies and misinformation.

But who is responsible for what an AI says? The AI itself, the programmers, the managers, the people paying for the AI’s development, the people hiring the AI for online marketing, or someone else entirely? Using AI influencers muddies the waters of accountability. People unethically using AI influencers for manipulation might see legal fines as a cost of doing business, and hiding behind an AI makes it easier for them to abuse people.

Impact on culture and minority groups

Next, we’re going to consider how the development of AI influencers is impacting culture and minority groups. Influencers have an interesting role in minority groups: they can circumvent some barriers to employment like hiring biases, discrimination, lack of access to education, and other issues. Influencers can also provide ways for smaller communities and marginalized groups to connect with each other and share culture and art. An example might be LGBTQ+ authors sharing books with good representation, which can be hard to find from traditional publishers.

Now, smaller creators who are connecting with specialized communities could be facing competition that doesn’t sleep and can talk to thousands of people in multiple languages at the same time. This raises the barrier to entry for influencers who are just getting established and for people who are primarily focused on building small communities.

This has the potential to drown out community and cultural development. Depending on how it was trained and programmed, an AI influencer might not have the context or rules to understand what culture is and why it is important to people. It may be insensitive towards issues the people responsible for the AI don’t value. AI training has well-documented problems with bias and discrimination, and while there are measures programmers can take to mitigate this, they have to put in the effort and money to do it. If sowing discord and playing off people’s fear is more profitable than accounting for discrimination in training data, the people in charge of the AI have motivation to ignore it.

Having profit-motivated AI influencers drives their followers toward heavier consumerism, not cultural development and connection. It incentivizes people to steal content from marginalized groups, use it for AI training, or even repurpose it to target marginalized people with personalized ads. For groups that rely on social media to build community and share critical information and resources, AI influencers could cause some serious damage.

Toxicity, beauty standards, and targeted marketing

The next ethical dilemma we’re going to explore has to do with how AI influencers are affecting digital marketing, which is the primary reason they exist in the first place. Targeted marketing has become so prevalent that it’s next to impossible for businesses to tell people online about the services and products they offer without it. Influencers are a part of targeted marketing: their audience usually shares some common interests, so when a marketer pays an influencer to sponsor content, they’re usually trying to sell things that align with those common interests.

The heavy emphasis on marketing incentivizes AI influencers to build audiences on shared insecurities instead of shared interests. The top five AI influencers in the world, some with millions of followers, are all modeled after very skinny young women idealized by commercial beauty standards. They are designed to sell products that promise an unattainable goal, because if we ever attained that standard, we wouldn’t keep spending money on it.

The damage commercial beauty standards do to us goes much further than encouraging people to buy things. Body dissatisfaction, weight discrimination, misogyny, and racism promoted by unrealistic beauty standards cost the US economy billions of dollars, including the cost of people not getting access to healthcare because medical providers don’t take them seriously. It leads to a lower quality of life, and people are profiting from those billions of dollars spent.

AI influencers can take unrealistic standards to a new level, well past what Photoshop and filters can do for human influencers, and they can develop stronger, more personal relationships with individual followers, making it easier to influence our behavior. If they are built for marketing, they are likely programmed to play on people’s insecurities. It affects everyone. Everyone has insecurities, and there are always going to be some people who want to exploit that for profit. It’s up to us to decide how we engage with the tools they use.

Conclusion

The differences between AI and human influencers have opened up thorny new ethical problems and made some existing problems more severe. The people funding AI influencers have no incentive to promote ethical behavior and a very big incentive to encourage people to buy stuff. This model creates distance between owners and the consequences of their actions, concentrates wealth in a small group of people, and relies on each individual owner having the conscience to program the AI to respect cultures, minority groups, and nuanced topics like beauty standards. What do you think? Where should we draw the line on what makes an AI influencer ethical? Who bears the responsibility for their actions?

What are some ways AI influencers could use their power for good? Imagine being able to interact with your favorite characters from stories and movies, ask them questions, and have conversations with them. Maybe AI influencers could be used to provide accessible education for people. What ideas do you have?