News & Resources

When do you need to disclose AI in your advertising? New IAB U.S. framework

Written by IAB US | Mar 31, 2026

What does the new IAB U.S. AI Transparency and Disclosure Framework mean for your business? If you're using AI to create advertising, and most of us are, there's now a helpful industry framework that tells you when you need to disclose it to consumers, and when you don't.

 

IAB U.S. released its AI Transparency and Disclosure Framework in Q1 2026. It's the first of its kind globally, and it's worth understanding - here's what it means.

 

The problem it's solving

AI is now embedded in most advertising workflows - generating product images, writing copy, cloning voices, creating video. But disclosure practices are all over the place. Some brands label everything. Others label nothing. The result is consumer confusion, and research that should concern anyone in the industry.

 

IAB U.S.'s own study found that 82% of advertising executives believe younger consumers feel positively about AI-generated ads. The reality? Only 45% of Gen Z and Millennials actually do. That gap has widened since 2024. Gen Z consumers are nearly twice as likely as Millennials to feel negatively toward AI ads, with the majority describing AI-using brands as inauthentic, disconnected, or unethical.

 

The good news: 73% of Gen Z and Millennials say clear disclosure would either increase their likelihood to purchase or make no difference. Transparency is a trust builder, not a deterrent.

 

Disclose when it matters, but not for everything

The Framework takes a sensible, risk-based approach. You don't need to label every piece of content that touched an AI tool. The question to ask is: does the AI involvement create a real risk that a consumer will be misled about what they're seeing, hearing, or interacting with?

 

If the answer is yes, you disclose. If the answer is no, you don't.

 

When disclosure is required

You need to tell consumers when AI has been used for:

  • Images or videos generated entirely from a prompt (text-to-image, image-to-video)
  • AI-generated voices of deceased people delivering statements they never made
  • AI-generated voices of living people making statements about events or circumstances that never happened
  • Photorealistic synthetic humans in primary roles
  • Digital twins of deceased individuals in any capacity
  • Digital twins of living people shown in scenarios that never occurred
  • AI chatbots or conversational agents in ads that simulate human interaction

 

When disclosure is not required

You do not need to label AI use for:

  • Routine post-production editing - colour correction, retouching, background cleanup
  • Standard copy and headlines drafted with AI assistance
  • AI-generated background music or sound design
  • Translations or localisations of approved content
  • Authorised synthetic voice of a living person for scripted commercial endorsements
  • Obviously stylised, animated, or fantastical characters

 

The test is always consumer impact - not how many AI tools were used in production.
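
If your team manages creative workflows in code, the risk test above can be encoded directly. Below is a minimal illustrative sketch in Python, assuming a simple asset-tagging workflow; the category names and the needs_disclosure function are our own shorthand for the lists above, not part of the Framework or any official API.

```python
# Illustrative only: our own shorthand encoding of the Framework's risk-based
# test. The category names are this article's labels, not official terminology.

REQUIRES_DISCLOSURE = {
    "fully_generated_image_or_video",     # generated entirely from a prompt
    "synthetic_voice_deceased",           # statements the person never made
    "synthetic_voice_living_fabricated",  # events that never happened
    "photorealistic_synthetic_human",     # in a primary role
    "digital_twin_deceased",              # in any capacity
    "digital_twin_living_fabricated",     # scenarios that never occurred
    "chatbot_simulating_human",           # conversational agents in ads
}

NO_DISCLOSURE_NEEDED = {
    "routine_post_production",            # colour correction, retouching, cleanup
    "ai_assisted_copy",                   # drafted headlines and body copy
    "background_music_or_sound",
    "translation_or_localisation",
    "authorised_voice_scripted_endorsement",
    "obviously_stylised_character",
}

def needs_disclosure(ai_uses: set[str]) -> bool:
    """True if any AI use in the asset falls in a disclosure-required category."""
    return bool(ai_uses & REQUIRES_DISCLOSURE)

# An AI voiceover of a living spokesperson reading an approved script, plus
# AI background music, needs no label under this sketch...
assert not needs_disclosure({"authorised_voice_scripted_endorsement",
                             "background_music_or_sound"})
# ...but a hero video generated entirely from a prompt does.
assert needs_disclosure({"fully_generated_image_or_video"})
```

In this sketch, a single disclosure-required use anywhere in an asset triggers the label, however much routine AI assistance sits alongside it, which mirrors the consumer-impact test rather than a tool count.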

 

What disclosure should look like

When disclosure is required, keep it simple and visible. Plain text labels are the standard: "AI-generated image", "AI-generated video", "AI-generated person". The label should appear near the content, remain visible throughout, and meet basic accessibility standards.

 

For video and synthetic humans, the label should appear from the first frame and stay visible. For audio, a verbal disclosure is required. Platforms like Meta, YouTube, and TikTok are increasingly applying their own AI labels automatically - these can complement your disclosure but don't replace your responsibility as the advertiser.

 

Who is responsible?

The advertiser - or the agency creating content on behalf of a brand - carries ultimate responsibility. Platforms enforce within their own ecosystems but don't take on the advertiser's accountability. This sits with you.

 

What to do now

The Framework recommends these minimum viable actions, achievable within 60 days:

  1. Nominate someone in your team to own AI disclosure decisions - this doesn't need to be a new role, just a clear responsibility
  2. Add a four-question checklist to your pre-launch process: Was AI used? What tools? Does this require disclosure under the framework? Is the label present and visible? (A simple sketch of this checklist follows the list below.)
  3. Use plain language labels when disclosure is required - "AI-generated image" not "synthetically produced asset"
  4. Document AI use across campaigns so you have an audit trail if needed
  5. Update creative briefs and agency contracts to include AI disclosure requirements
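
For teams that track campaign assets in code, here's one way step 2's checklist could look as a small Python record. This is a sketch under our own assumptions; the field names and the passes logic are illustrative, not Framework terminology.

```python
from dataclasses import dataclass

@dataclass
class PreLaunchAICheck:
    """Illustrative sketch of the four-question pre-launch checklist.
    Field names are our own, not Framework terminology."""
    ai_was_used: bool                # 1. Was AI used anywhere in this asset?
    tools_used: list[str]            # 2. Which tools? (doubles as the audit trail)
    disclosure_required: bool        # 3. Does the Framework require disclosure?
    label_present_and_visible: bool  # 4. Is the label present and visible?

    def passes(self) -> bool:
        """Ready to launch: no AI, no disclosure needed, or the label is in place."""
        if not self.ai_was_used:
            return True
        return (not self.disclosure_required) or self.label_present_and_visible

# Example: an AI-generated hero image carrying a visible
# "AI-generated image" label passes the check.
assert PreLaunchAICheck(
    ai_was_used=True,
    tools_used=["text-to-image model"],
    disclosure_required=True,
    label_present_and_visible=True,
).passes()
```

Keeping the completed records per campaign also gives you the audit trail from step 4 with no extra work.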

 

The regulatory POV

This framework is voluntary, but the regulatory environment is moving quickly. The EU AI Act requires disclosure for AI-generated content from 2027. US states including California and New York are already legislating. In the APAC region, South Korea has mandated blanket labelling of all AI-generated photos and videos in advertising, effective early 2026. New Zealand doesn't yet have advertising-specific AI disclosure law, but existing consumer protection rules around deceptive advertising apply regardless of whether AI was involved.

 

Getting ahead of this now, before it becomes a compliance issue, is the best approach.

 

If you're an IAB New Zealand Member, you can download this resource here.

For further information or to become an IAB New Zealand Member, please get in touch.