The People Know It’s AI. Here’s How to Build Trust.

The One Thing

AI generates the words.
You determine if they earn trust.
The brands winning B2B in 2026 add the layer AI can't.


Generic output lacks specificity, customer language, and the accountability signals B2B buyers use to verify every claim. The fix isn't using less AI — it's ensuring every draft passes three tests before it reaches a buyer: the brand voice test, the VOC test, and the citation test.

Key Takeaways

  • 50% of consumers prefer brands that skip generative AI in consumer-facing content (Gartner, 1,539 respondents, October 2025)

  • AI influencer research isolates the psychological mechanism — the same one operating when B2B buyers read your copy

  • Three rules close the gap: brand voice editing, VOC language, and source-first citations

  • Content that passes all three is also more likely to earn AI citations in ChatGPT, Perplexity, and Google AI Overviews

Read this twice: half of all consumers would rather buy from a brand that doesn't use AI in its marketing at all.

I’m not making that up. That's from Gartner — 1,539 people surveyed last October.

If you're thinking "okay, but that's consumers, not MY B2B buyers" — your B2B buyers are in that number.

Ninety-four percent of them now use AI tools when they research vendors. They know what AI copy sounds like. More than half say they're less likely to engage with it when they spot it.

New research on AI influencers explains exactly why trust takes the hit when buyers detect generated content. The mechanism is the same whether you're talking about a virtual spokesperson or a service page your team drafted in Claude on a Tuesday afternoon.

Three rules protect you from the same erosion — let’s talk about it.

Definition

GEO Concept · bradleebartlett.com

AI Influencer

also: virtual influencer, CGI influencer, synthetic creator


A computer-generated digital persona deployed on social media to endorse brands, products, or ideas. AI influencers have no lived experience, no real opinions, and no authentic relationship with what they promote. They are entirely scripted — which is exactly what makes them the clearest case study for how generated content erodes trust.

The same trust erosion documented in AI influencer research operates when B2B buyers read copy that sounds like it came from an averaging machine. The source is different. The mechanism is identical.

What Is an AI Influencer?

An AI influencer is a computer-generated social media persona with no real person behind it — scripted content, fabricated experiences, and zero authentic relationship with the products it promotes.

Before we dig into B2B copy and AI, let’s talk about AI influencers in general.

An AI influencer is exactly what it sounds like: a social media persona that doesn't exist.

There’s someone behind the account, but it’s not the person you’re seeing. What you’re viewing and reading doesn’t exist outside the poster’s imagination.

Even when a post is based on real events, there are no genuine experiences behind it, no actual opinions, no authentic relationship with the brands being promoted.

The photos, the captions, the endorsements — all generated or scripted. Some are photorealistic enough that followers engage for months without realizing it.

Real Example @emilypellegrini  ↗
Screenshot of the @emilypellegrini Instagram grid — an AI-generated influencer account with over 200,000 followers

What you're looking at

@emilypellegrini is a fully AI-generated Instagram persona that built over 200,000 followers before publicly disclosing it wasn't a real person. Several posts now carry an explicit "this is AI" label — which, per the JMSR research, made perceived trust worse, not better. No lived experience, no real opinions, no authentic relationship with the audience. This is the mechanism the research is measuring.

Look at Emily Pellegrini — one of the first accounts you’ll find if you search “AI influencer.” Her content looks real. The person isn’t (at least, not the person pushing “publish”).

That's why the research on them matters here. The trust signals buyers use to evaluate an influencer or brand — specificity, accountability, a point of view that feels earned — are the same signals they use to evaluate your copy and other online brand collateral.

When those signals are missing, the response is the same whether the content is a sponsored post or a service page. Is this real? Is this a joke? Do they think we’re dumb?

What Does AI Influencer Research Tell Us About Consumer Trust?

AI-generated influencers significantly reduce perceived authenticity and brand trust — and the same psychological mechanism operates when B2B buyers read your copy.

Let’s get data-nerdy for a moment. A study published in the Journal of Marketing and Strategy Research conducted a quantitative experiment with 320 participants to test whether AI influencers damaged brand trust compared to human influencers.

And as you might guess, they do. And quite a bit. (F = 119, p < .001, to be precise).

And when brands explicitly disclosed that the influencer was AI? Trust dropped even further (F = 50.61, p < .001).

A second study in the International Journal of Multidisciplinary Research and Publications asked the same question of 790 people. When participants identified content as AI-generated, perceived authenticity fell hard (β = -0.48, p < .001).

The researchers called it "a major driver of trust erosion."

Both studies point to the same mechanism: Source Credibility Theory. Authenticity drives persuasion. When it drops, so does everything downstream — trust, engagement, purchase intent.

Real Example @emilypellegrini  ↗
@emilypellegrini Instagram post showing AI-generated content disclosure alongside a lead generation CTA

What's happening here

1

The disclosure is buried after the CTA. The caption leads with a lead generation hook — "Comment GUIDE and I'll send you the full breakdown" — and only then adds "This content is AI-generated." The trust signal comes after the ask, which is exactly backwards.

2

Followers are responding to someone who doesn't exist. The comments engage with her appearance and personality directly — parasocial attachment fully intact, disclosure or not. This is what the JMSR research documented: explicit disclosure made brand trust worse, not better.

3

The account is now monetizing the deception playbook. The guide being sold teaches others how to build and grow their own AI creator. The product is the inauthenticity itself — which tells you everything about where this category is headed and why buyers are getting more skeptical by the month.

Lil Miquela and the Brands That Learned the Hard Way

Here’s an example I love to talk about. Lil Miquela has 2.5 million Instagram followers and brand deals with Calvin Klein, Prada, Samsung, and PacSun. She's also entirely computer-generated.

The Calvin Klein campaign is the most documented case of this going sideways. They ran Miquela alongside Bella Hadid — real model, real person — in a way that implied intimacy between them.

The backlash was immediate — not because people couldn't tell Miquela was AI, but because the moment felt engineered, and to many, offensive.

Calvin Klein was attempting to manufacture a moment of intimacy between a real person and something that doesn't exist, and asking audiences to feel something about it.

Real Example @lilmiquela  ↗
Lil Miquela Instagram post claiming to have met Nancy Pelosi at Outside Lands — comments immediately call out the inauthenticity

The comments section did the work

d: "I just want to know how that interaction went — 'Hey can we get a picture with you? I play a famous AI influencer. We impose her face onto my body and then we'll post it.'"

m: "'Capture the gen z audience' ahh post."

b: "girl be fr"

The audience isn't confused. They're not impressed. They're running their own authenticity test in real time — and the post failed it. This is the same test your buyers run on your copy.

The comments section tells the story better than any case study could. Miquela posted a "selfie" at Outside Lands music festival, appearing to pose with Nancy Pelosi, and the internet was not moved, other than to mock.

Those replies are the audience telling you exactly what they think. They're not confused, and they aren’t impressed. They may not be done with the brand (people will follow ANYONE, especially someone attractive), but the act can quickly tip over into cringe — which is brand reputation damage.

And that's the mechanism this post is about. The scale is different when we're talking about your website copy. The stakes are different.

But the psychological response is identical — a buyer who senses the words in front of them weren't written by anyone with a real point of view, real experience, or real accountability. Trust doesn't erode loudly. It just quietly decides to look elsewhere.

So why does all of this matter for your brand, AI, and how you present yourself online?

How Big Is the AI Copy Trust Problem in B2B?

Over half of B2B buyers are now less likely to engage with content they suspect is AI-generated — even as 94% of them use AI tools to research vendors.

B2B brands are pumping out AI-generated content to stay competitive. The volume is up, but the trust is down. And buyers are doing something specific in response: verifying everything.

15%

of consumers highly trust products endorsed by virtual or AI influencers.

Source: Influencer Marketing Factory survey, 2025

Researchers call it a Verification Spiral — skeptical buyers manually checking every claim before they'll engage, which extends deal cycles and erodes confidence before a single conversation happens.

The data:

Forrester put it plainly in its 2026 predictions: trust is now "the ultimate currency for B2B buyers."

Buyers don't want solution-speak or vague claims of success. They want real, verifiable proof. The brands with specific, verifiable, human-edited content will win the deals.

68%

of consumers frequently wonder whether the content they see online is even real.

Source: Gartner survey, 1,539 U.S. consumers, October 2025

What Are the 3 Rules for Preserving Authenticity in AI-Assisted B2B Copy?

Authenticity in AI-assisted copy comes down to three tests. Generic output fails all three. Content that passes them earns trust and earns AI citations.

Hear me out — this isn't about using less AI. I’m not even against AI influencers on Instagram (as long as they're presented as what they are and everyone's in on the fun).

It's about what happens between the draft and the publish button. There are three tests you should run, and if your copy passes them, it builds trust.

Rule 1 — Strategic Human Editing (The Brand Voice Test)

AI drafts + you shape.

Pull your logo off the page — would anyone know it was your brand? If not, that's the problem.

B2B brands with long sales cycles especially need a consistent, trustworthy brand voice. AI without strategic human editing actively works against that.

Watch for these in every draft: corporate language that avoids anything bold, predictable three-point structures, hedge words ("might," "could," "arguably"), passive voice that hides who's responsible for anything.

Three questions before you publish:

  1. Does this sound like us, or does it sound like everyone?

  2. Would our best customer recognize this voice?

  3. Does every sentence make a specific, ownable claim?

No to any of those — the draft isn't ready.

Rule 2 — Surface Unique Customer Language (The VOC Test)

Your customers already wrote your best copy. Voice-of-customer language converts because it uses the exact words buyers use to describe their own problems — language AI cannot generate from your specific customer base.

Where to find it: sales call transcripts, customer interviews (ask "What would you need to see to feel confident buying?"), G2 and Capterra reviews, support ticket language.

Copy Comparison — Generic AI vs. VOC-Informed

Generic AI

"Our platform leverages cutting-edge AI capabilities to streamline your workflows and drive measurable results."

No metric · No attribution · No voice
VOC-Informed

"Before [Product], our clients' ops teams spent 14 hours a week reconciling reports. Now it's under 2 hours — and three clients told us it's the first tool change their team actually asked to keep."

Specific metric · Customer attribution · Human detail

The VOC version works because of three things: a specific metric, attribution to real customers, and detail AI would never generate unprompted. Those are also E-E-A-T signals. The same edit that builds authenticity builds GEO performance.

Rule 3 — Require Source-First Citations (The Verification Test)

Every specific claim in your copy should trace somewhere real. A customer outcome, a named study, a first-hand case. Uncited claims are a liability — 90% of B2B buyers who encounter AI search results click through to cited sources to verify claims.

Seventy-one percent avoid suppliers without transparent information. Forrester's 2026 predictions include a Fortune 500 company suing a B2B provider over AI-generated misrepresentation. That's what unchecked, uncited content looks like when it scales.

Content with structured citations sees 30–40% higher visibility in AI responses across 10,000 real-world queries. Citation authority is now the primary ranking factor for Perplexity and Google AI Overviews.

The test:

Before any page goes live — can every specific claim link to a real source? If not, it's both a trust problem and a GEO problem.

Generic vs. Authentic — Copy Comparison

These five elements separate authentic, citation-worthy copy from generic AI output. Use them as a diagnostic when you're reviewing AI-assisted drafts before publication.

Element | Generic AI Version | Authentic Version
Claim specificity | "measurable results" | "14 hours → under 2 hours/week"
Attribution | none | "three clients told us"
Voice | passive, generic | active, specific, ownable
Verifiability | zero | customer-sourced, traceable
AI citation eligibility | low | high

Every weakness in that generic column is also a GEO problem. AI systems don't cite vague claims. They cite specific, attributable, source-backed statements. The edit that builds trust builds AI visibility at the same time.

So here's where that leaves you.

Every piece of AI-assisted copy your team publishes is either building trust or eroding it. There's no neutral.

Buyers are running authenticity checks on everything — your homepage, your case studies, your LinkedIn posts, your email sequences. They're doing it faster than ever, with better tools than ever, and they're making decisions based on what they find.

Forrester 2026 Prediction

A Fortune 500 company will sue a B2B provider over AI-generated misrepresentation.

The three rules in this post are the new editing layer. Voice → VOC → citations.

You can apply all three to a single page in an afternoon and walk away with copy that reads as trustworthy to a skeptical buyer and citable to an AI search engine pulling sources for someone who just asked about your category.

That's the opportunity most B2B brands are leaving on the table right now — not because the work is hard, but because no one told them the bar had moved.

Now you know.

Work with Brad

Your buyers are already verifying every claim they read. Is your copy ready?

The brands that pass the trust check aren't doing it by accident. I work with B2B and SaaS teams to audit existing copy, build brand voice systems, and create content that earns citations in the AI tools your buyers use to research you. If that's the gap you need to close — let's talk.

Let's Work Together → Fiverr Pro vetted  ·  4.9 stars  ·  1,600+ client reviews

Frequently Asked Questions

  • Does using AI to write your copy hurt trust? It depends on what you do between the draft and the publish button. AI output that goes live without strategic human editing, customer language, and source citations erodes trust.

    Gartner's October 2025 survey of 1,539 consumers found 50% would rather buy from brands that skip generative AI in their marketing altogether.

    B2B brands with long sales cycles get the most exposure here — trust is the primary driver of purchase, and buyers can sense generic copy.

  • What is voice-of-customer (VOC) copywriting? VOC copywriting uses the exact language real customers use to describe their own problems and outcomes. It matters for AI-assisted content because AI cannot generate your specific customers' words.

    That customer-specific language is the primary authenticity signal that separates your copy from generic output — and it also mirrors the conversational phrasing AI search engines use to match buyer queries, so it's doing double duty.

  • What makes content citable by AI search engines? AI engines favor content with clear H2/H3 structure where each section directly answers one question, named sources with sample sizes and dates cited inline, specific statistics and original data, author bylines with credentials, and concise claims that can be extracted in one to two sentences.

    Generic or uncited claims don't get pulled. The same edits that make copy more trustworthy to buyers make it more visible to AI search.

  • What's the difference between SEO and GEO? SEO optimizes for Google ranking signals: search intent match, keyword placement, backlinks, page experience. GEO (Generative Engine Optimization) optimizes for AI citation:

    • structured answer modules

    • source-first claims

    • named statistics

    • author authority signals.

    SEO drives clicks from traditional search; GEO earns citation in the AI tools that 94% of B2B buyers now use during purchase research.

  • What are the telltale signs of generic AI copy? The most common:

    • corporate language that avoids anything bold or memorable

    • predictable three-point structures

    • jargon your competitors also use

    • hedge words like "might," "could," or "potentially"

    • passive voice that hides accountability

    AI doesn't create bad content — it creates average content.

    Average is the enemy of a brand voice that buyers actually remember.

  • Does disclosing AI use help or hurt trust? The JMSR study (320 participants) found that explicit AI disclosure exacerbated the negative effects on brand trust (F = 50.61, p < .001).

    Once buyers know the source is non-human, they apply greater scrutiny to every authenticity signal in the content.

    Strategic human editing for voice is what turns disclosure from a trust liability into a neutral fact.

Brad Bartlett — Copywriter and Content Strategist based in Kansas City

Written by

Brad Bartlett

Brad is a copywriter and content strategist who helps creators, brands, and organizations build content that's actually worth reading — and built to be found. He specializes in conversion-focused copy, brand voice, and SEO and AI search optimization, with a straightforward philosophy: great content has to be authentic before it can perform. He works comfortably across the AI content space, helping clients use the tools without losing the voice. Fiverr Pro vetted, 4.9 stars out of 5 across 1,600+ clients.

Next

Mainstream Press Just Discovered GEO. But Don’t Panic About AI Search Just Yet.