What does the new IAB U.S. AI Transparency and Disclosure Framework mean for your business? If you're using AI to create advertising - and most of us are - there's now a helpful industry framework that tells you when you need to disclose it to consumers, and when you don't.
IAB U.S. released its AI Transparency and Disclosure Framework in Q1 2026. It's the first of its kind globally, and it's worth understanding - here's what it means.
The problem it's solving
AI is now embedded in most advertising workflows - generating product images, writing copy, cloning voices, creating video. But disclosure practices are all over the place. Some brands label everything. Others label nothing. The result is consumer confusion, and research that should concern anyone in the industry.
IAB U.S.'s own study found that 82% of advertising executives believe younger consumers feel positively about AI-generated ads. The reality? Only 45% of Gen Z and Millennials actually do. That gap has grown since 2024. Gen Z consumers are nearly twice as likely as Millennials to feel negatively toward AI ads, with the majority describing AI-using brands as inauthentic, disconnected, or unethical.
The good news: 73% of Gen Z and Millennials say clear disclosure would either increase their likelihood of purchasing or make no difference. Transparency is a trust builder, not a deterrent.
Disclose when it matters, but not for everything
The Framework takes a sensible, risk-based approach. You don't need to label every piece of content that touched an AI tool. The question to ask is: does the AI involvement create a real risk that a consumer will be misled about what they're seeing, hearing, or interacting with?
If the answer is yes, you disclose. If the answer is no, you don't.
When disclosure is required
You need to tell consumers when AI has been used for:
When disclosure is not required
You do not need to label AI use for:
The test is always consumer impact - not how many AI tools were used in production.
What disclosure should look like
When disclosure is required, keep it simple and visible. Plain text labels are the standard: "AI-generated image", "AI-generated video", "AI-generated person". The label should appear near the content, remain visible throughout, and meet basic accessibility standards.
For video and synthetic humans, the label should appear from the first frame and stay visible. For audio, a verbal disclosure is required. Platforms like Meta, YouTube, and TikTok are increasingly applying their own AI labels automatically - these can complement your disclosure but don't replace your responsibility as the advertiser.
Who is responsible?
The advertiser - or the agency creating content on behalf of a brand - carries ultimate responsibility. Platforms enforce within their own ecosystems but don't take on the advertiser's accountability. This sits with you.
What to do now
The Framework recommends these minimum viable actions, achievable within 60 days:
The regulatory POV
This framework is voluntary, but the regulatory environment is moving quickly. The EU AI Act requires disclosure for AI-generated content from 2027. U.S. states including California and New York are already legislating. In the APAC region, South Korea has mandated blanket labelling of all AI-generated photos and videos in advertising effective early 2026. New Zealand doesn't yet have advertising-specific AI disclosure law, but existing consumer protection rules around deceptive advertising already apply regardless of whether AI was involved.
Getting ahead of this now, before it becomes a compliance issue, is the best approach.
If you’re an IAB New Zealand Member, you can download this resource here.
For further information or to become an IAB New Zealand Member, please get in touch.