Deepfakes in Advertising: The Identity Crisis Reshaping Influencer Marketing

The advertising industry has always pushed boundaries. From airbrushed magazine covers to CGI-enhanced product shots, we have a long and storied history of bending reality to sell things. But deepfakes? That’s a different conversation entirely.

Over the past few years, AI-generated video and image technology has advanced at a pace that few of us in the industry were prepared for. What started as a niche, slightly unsettling corner of the internet has migrated, quietly and quickly, into the marketing mainstream. Brands are now using AI to replicate the faces, voices, and likenesses of real people to sell products. Sometimes with permission. Often without. And in a world where influencer trust is the entire currency of social media marketing, this should be keeping all of us up at night.

What Actually Is a Deepfake?

The term “deepfake” is a combination of “deep learning” and “fake.” At its core, it refers to AI-generated synthetic media in which a person’s likeness – their face, their voice, their mannerisms – is digitally manipulated or entirely fabricated.

Deepfakes first surfaced in mainstream public consciousness around 2017, initially through Reddit communities where the technology was used – almost exclusively for harmful purposes – to superimpose the faces of celebrities onto other people’s bodies. The conversation at the time was rightly dominated by concerns around non-consensual content and misinformation. What nobody fully predicted was how quickly the same technology would be repurposed, polished up, and handed to the marketing industry.

By 2022 and 2023, we were seeing early commercial applications: AI-generated spokespeople, brand avatars, and the first rumblings of influencer deepfakes appearing in paid social. By 2024, it had become a genuine industry conversation. Now, in 2026, it is a crisis hiding in plain sight.

The Trust Economy and Why Deepfakes Threaten It

Let’s be honest about what influencer marketing actually sells. It doesn’t sell products; it sells trust.

When a consumer follows an influencer – whether that’s a 28-year-old fitness creator on TikTok or a beauty editor with 800K Instagram followers – they are buying into a relationship. They believe that person’s recommendations are genuine, that their opinions are their own, and that when they hold up a product and say, “I actually use this,” they mean it.

That trust isn’t just an organic phenomenon. From a paid social perspective, it is the single most powerful performance signal available to a brand. When we amplify a genuine creator partnership through Meta or TikTok Ads, we’re not just buying impressions. We’re borrowing the creator’s credibility and scaling it. The click-through rates, the cost-per-acquisition, the return on ad spend – all of it is underpinned by the audience’s belief that what they are watching is real. Deepfake advertising takes that belief and exploits it. Systematically.

Imagine this. A teenager has followed a creator for three years. They’ve watched that person talk about their skincare routine, their diet, their mental health. They trust them. Now imagine that same creator appearing in a paid TikTok ad promoting a supplement brand they have never heard of, speaking words they never said, endorsing a product they have never touched. The teenager has no idea. The brand gets the conversion. The creator is none the wiser. And the advertiser has just weaponised the creator’s audience relationship to hit a performance target.

That is not clever media buying. It is a betrayal of the consumer relationship — and “exploitation” is the only accurate word for it. Particularly when we are talking about younger audiences aged 13 to 18, who are statistically more susceptible to influencer-driven purchase decisions and less equipped to critically interrogate what they are seeing. The Advertising Standards Authority (ASA) has long mandated clear disclosure on paid partnerships. Deepfake advertising blows straight past that framework. There is no disclosure for content a creator never agreed to make.

When Did This Become a Brand Strategy?

The escalation here is not accidental. It follows the money.

Influencer fees have risen dramatically over the past five years. Mid-tier creators who were charging £2,000 per post in 2020 are now quoting £10,000 for the same deliverable. Macro influencers have become almost inaccessible to brands without six-figure budgets. The market corrected sharply for early undervaluation, and rightly so. But the knock-on effect has been that smaller and start-up brands have been priced almost entirely out of the influencer market.

The appeal of deepfake technology to a brand with a £5,000 marketing budget is, I will admit, understandable. If AI can simulate a credible creator endorsement for a fraction of the cost of a real partnership, the ROI conversation becomes genuinely compelling. For a bootstrapped DTC brand trying to compete in a saturated market, the temptation is real.

But the question is never just “can we do this?” It’s “what does doing this cost us in the long run?”

The Brand Trust Problem Nobody Is Talking About

Here’s where I think the industry is making a serious strategic error – and I say this as someone who spends a significant portion of their working week inside Meta Ads Manager and TikTok Ads.

Brand trust, once broken, is extraordinarily expensive to rebuild. The moment consumers become aware that a brand used a fake version of someone they trust to sell them something, the reputational damage is not contained to that one campaign. It contaminates everything. It reframes every previous piece of content through the lens of “what else was manufactured?”

The performance implications of that are severe. Paid social amplification of influencer content works because the creative carries authentic social proof. Running paid ads in collaboration with an influencer, or using influencer-generated content (IGC) in an ad, delivers stronger engagement and lower CPMs precisely because the audience trusts the source. The moment that source is revealed to be fabricated, the entire paid strategy built on top of it collapses. You cannot buy your way out of an authenticity crisis with a higher daily budget.

We have already seen early versions of this backlash. Several brands have faced significant social media blowback after being caught using AI-generated spokesperson content without adequate disclosure. The comments sections tell the story: audiences feel deceived, and they say so loudly. And a comments section full of accusations of deception is not an environment any media buyer wants their paid spend amplifying.

For larger brands working with celebrity-level talent, the risk is even greater. A deepfake of a well-known celebrity promoting a product without their consent is not just an ethical failure. It is a legal one. Right of publicity laws in the US, and passing off and image rights protections in the UK, are increasingly being tested in exactly this context, and brands are beginning to find themselves on the wrong side of these cases.

The Platform Problem: Hidden in the Small Print

Here is where it gets more complicated, and more sinister.

Several major social media platforms have buried clauses within their terms of service that grant them broad rights to use uploaded content, including user likeness, for AI training and commercial applications. TikTok, Instagram, and Snapchat have all faced scrutiny over the scope of these permissions. For everyday users, the practical implication is that content they upload — their face, their voice, their creative output — can theoretically be used to train the very AI systems that could one day replicate them.

For influencers with large followings, the implications are significant. Years of content. Thousands of videos. A completely mapped digital identity. All of it potentially available to AI systems operating within the terms of a 47-page document nobody actually read when they created their account at 16.

This is not hypothetical. It is the infrastructure that makes large-scale influencer deepfakes possible, and platforms have been notably slow to close the loopholes that permit it.

The question of AI ownership adds another layer of complexity. If an AI system generates a video of an influencer using training data derived from that influencer’s own publicly uploaded content, who owns that output? The creator? The platform? The brand that commissioned it? The legal framework has not kept pace with the technology, and that gap is being exploited every single day.

The Dilution of Influencer Marketing as a Discipline

From where I sit as a Digital Account Director, there is another consequence of this trend that the industry has been reluctant to discuss openly: if AI can replicate the identity and perceived authority of an influencer, what does that do to the value of authentic influencer partnerships?

Harriet Poole, Influencer Marketing Director, puts it plainly: “The entire value of an influencer partnership is the genuine relationship between a creator and their audience. That relationship has been built over years of consistent, honest content. The moment you introduce a deepfake into that equation, you aren’t just deceiving the consumer – you’re devaluing every legitimate partnership in the market. Why would a brand invest in building a real creator relationship when they believe they can manufacture the same result? It is a race to the bottom, and the creators, the agencies, and ultimately the consumers all lose.”

She’s right, and the performance data backs it up. When we amplify creator content via Meta or TikTok campaigns, the engagement rates we see from warm, creator-loyal audiences consistently outperform cold audience targeting. The algorithm rewards content that generates authentic interaction. Deepfake content, once the audience cottons on, does the opposite – it triggers the kind of negative engagement signals that actively suppress paid distribution.

Influencer marketing agencies have spent a decade building the case that genuine creator relationships drive better outcomes than traditional advertising. Engagement rates, trust metrics, conversion data – the evidence stacks up. But that argument depends entirely on authenticity being something that cannot be faked.

If a brand can generate a credible deepfake of a creator for £500, the economic logic of a £50,000 partnership starts to wobble. Not immediately, and not entirely – but the trajectory is clear. Deepfakes do not just risk the ethics of individual campaigns. They risk the structural integrity of the influencer marketing industry itself.

For agencies, this is an existential conversation we need to start having seriously. The value we provide is rooted in relationships, cultural credibility, and genuine audience connection. The moment those things can be synthesised by a machine, the brief changes. How we adapt to that brief will define the next chapter of this industry.

Are There Scenarios Where Deepfakes Are Acceptable?

Yes. And I think intellectual honesty demands we acknowledge them.

There are a handful of contexts in which AI-generated likenesses in advertising sit on reasonably solid ethical ground. First, where explicit, informed consent has been given. Several creators have begun licensing their AI likenesses commercially – essentially creating a digital version of themselves that brands can use within agreed parameters, for agreed fees, with full creative oversight. This is a legitimate commercial arrangement, no different in principle to a brand licensing a celebrity’s image rights.

Second, in the creation of entirely fictional AI personas that make no claim to be a real person. AI-generated brand mascots, digital avatars, and synthetic spokespeople built from scratch rather than modelled on a real individual do not carry the same ethical weight. The deception is not present because there is no real person being impersonated.

Third, in post-production contexts where talent has consented – for instance, a creator who films a campaign but uses AI to dub the content into multiple languages, or to de-age their appearance for a specific creative concept. These are tools in service of a creative vision, with the subject’s full knowledge and agreement.

The common thread in every acceptable scenario is consent. Without it, the ethical framework collapses.

The Future of Deepfakes in Marketing

The technology isn’t going away; that much is certain.

What will change – and is already beginning to change – is the regulatory environment around it. The EU AI Act, which came into force in 2024, includes provisions around synthetic media disclosure. The UK’s approach is still evolving, but there is growing cross-party appetite for clearer rules on AI-generated advertising content. The ASA has begun consulting on updated guidelines. Platforms, under pressure from regulators and creators alike, are starting to introduce content labelling for AI-generated material. Meta has already begun requiring disclosure on AI-generated ads running across Facebook and Instagram. TikTok’s synthetic media policy is tightening. The direction of travel is clear.

For brands and agencies, the smart play is to establish internal ethical frameworks now, ahead of the regulation. What counts as acceptable AI use in creative production? Where does the line sit between tool and deception? What disclosure obligations do you hold yourself to, regardless of whether the law currently requires them?

From a pure performance standpoint, the answer is also clear. The highest-converting paid social creative we run is almost always genuine creator content, amplified. This content reaches audiences who already have a relationship with that creator. The trust is pre-built. The creative does not need to work as hard. The cost efficiencies follow. Deepfake content cannot replicate that. It can replicate a face and a voice, but it cannot replicate three years of audience trust.

Consumers are getting smarter. The same generation of 16-to-24-year-olds who grew up online and became native to influencer culture are also becoming increasingly attuned to inauthenticity. They will clock it. And when they do, no amount of paid spend will buy back the goodwill you burned.

The brands that will win in this environment are the ones that treat authenticity as a performance asset. That means investing in real creator relationships, building genuine community, and using AI as a production tool rather than a deception mechanism. Run paid behind content that is genuinely worth amplifying, and the numbers will follow.

The influencer market is expensive. The reputational cost of getting this wrong is more so.

Ready to build influencer partnerships that actually perform?

At Brandnation, we’ve spent over two decades building genuine creator relationships that drive real business impact. Our influencer marketing studio, Sphere™, is built on exactly the principles this piece defends: authentic connections, transparent partnerships, and paid amplification that works because the creative deserves to.
If you’re a brand navigating the influencer landscape and want to do it right, we’d love to talk. Get in touch here.

About the author

Simarin Tandon | Senior Digital Marketing Manager

Having worked with brands across the Beauty & Wellness, FMCG, FinTech, and Home & Lifestyle sectors, Simarin focuses on driving acquisition and growth, whilst managing the Digital team at brandnation.

A curious marketer, Simarin’s finger is always on the pulse when it comes to performance and digital updates across both paid and organic platforms.
