Pulling the Curtain Back Too Far? Implications of Meta’s “Made with AI”

Do you know whether teams are using GenAI tools for your brand’s organic content? You should.

As announced in February and updated in April, Meta recently rolled out a content disclaimer across its platforms and placements that alerts users when content has been created or manipulated with AI.

Any content developed with tools like Meta AI comes with watermarks that say “Imagined with AI,” while any third-party content uploaded to the platforms with “industry-shared signals” (e.g., C2PA or IPTC metadata generated by Adobe, Google, Shutterstock, etc.) receives a “Made with AI” label from the platform, no matter the extent to which GenAI created or revised the asset.

The label is applied to all organic posts with any AI creative elements but will only appear in paid placements if the content is flagged for social issues or politics, or if the ad is a boosted organic post. Assets with the Made with AI label are not directly penalized by the algorithm. Still, some creators and marketers now worry that, in the platform’s attempt at radical transparency, their organic content will suffer decreased native platform engagement because of what the label implies.

What’s happening

The proliferation of GenAI solutions has triggered an equivalent outpouring of discourse on the possibility of manipulative content, particularly during “the ultimate election year,” in which citizens of 64 countries will head to the polls. (Nick Clegg, president of global affairs for Meta, says that politically manipulative information, though present, represents a “manageable amount” for platform moderation teams.) But the possibility of mis- or disinformation underscores why the platform stresses the importance of transparently identifying AI-derived content, while ensuring that platform intervention doesn’t go so far as to limit free speech.

Meta announced the Made with AI label in an update to platform policy in April. The approach currently in use, called “labels with context,” automatically labels any organic content in which industry-shared signals (such as C2PA Content Credentials or IPTC metadata) are detected, or for which creators proactively disclose that GenAI enabled asset creation. In paid placements, the label will appear only on content flagged for social issues or politics, or on boosted organic content. Meta says “the label is designed to share this AI context neutrally and is intended to provide information about how content was created.” There’s no algorithmic impact on organic reach from the label alone. (In contrast, Clegg says that Meta will soon apply penalties to accounts that fail to disclose the use of AI in asset creation or revision; YouTube and TikTok have similar guidelines in place.)

How it works

What happens in creative workflows: Content production workflows from Adobe, Microsoft, Google, Shutterstock, and other companies embed industry-standard AI signals, such as C2PA Content Credentials or IPTC metadata, when GenAI elements are used.
How Meta interprets the asset: Whenever the platform identifies these signals within an asset, it triggers the Made with AI label. The determination is based on the metadata, not the composition, so the label doesn’t reflect the number of revisions or the amount of original, non-AI content. (A minimal inspection sketch follows below.)

What happens in creative workflows: Not all production workflows will trigger the Made with AI label, so marketers and their creative design teams should understand which toolsets and processes could trigger it.
How Meta interprets the asset: Platform, legal, and regulatory guidelines are rapidly evolving. We do not recommend attempting to obfuscate the use of GenAI tools in asset creation; transparency is key.

What happens in creative workflows: This is especially important for organic content production workflows, and for brands whose paid social creatives are typically classified as discussing social issues or politics.
How Meta interprets the asset: All organic posts with AI elements will be labeled. In paid placements, only social issues/political content, or AI-containing organic posts running as boosted ads, will be marked.
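
For teams auditing their own production pipelines, these signals are inspectable in the asset files themselves: C2PA Content Credentials travel as JUMBF boxes labeled “c2pa,” and the IPTC signal is the DigitalSourceType term “trainedAlgorithmicMedia” embedded in XMP metadata. The Python sketch below scans a file for both markers as a rough byte-level heuristic; it is not Meta’s detection logic, full C2PA validation requires a conformant SDK, and the filename is hypothetical.

```python
# Heuristic scan for the two "industry-shared signals" described above.
# A minimal sketch for auditing your own exported assets; not Meta's
# actual detection pipeline and not a substitute for a C2PA validator.

from pathlib import Path

# IPTC DigitalSourceType term for AI-generated media, embedded in XMP
IPTC_AI_MARKER = b"trainedAlgorithmicMedia"
# Label carried by C2PA JUMBF superboxes (e.g., in JPEG APP11 segments)
C2PA_MARKER = b"c2pa"


def provenance_signals(image_path: str) -> dict:
    """Report whether the raw bytes contain either provenance marker."""
    data = Path(image_path).read_bytes()
    return {
        "iptc_trained_algorithmic_media": IPTC_AI_MARKER in data,
        "c2pa_manifest_marker": C2PA_MARKER in data,
    }


if __name__ == "__main__":
    # "campaign_hero.jpg" is a hypothetical asset from a creative workflow
    print(provenance_signals("campaign_hero.jpg"))
```

A byte scan like this can produce false positives (the four-byte sequence “c2pa” can occur by chance), so treat a hit as a prompt for proper metadata inspection, not a verdict on whether the label will be applied.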

Limitations and additional context

Standards like C2PA, along with other provenance approaches, are good-faith attempts to provide transparency and brand safety around asset generation and revision. (Full disclosure: Publicis Groupe, along with Adobe, Meta, Google, and others, is on the steering committee of C2PA.) But there are several ways that bad actors can strip or circumvent these signing, watermarking, and authentication efforts.

There are also outspoken critics, especially photographers and content creators whom Meta has historically tried to woo from platform competitors, who are concerned that the label is inaccurate and misleading. There are extensive threads on Instagram, as well as explainer videos on YouTube, from photographers and creators trying to understand the policy and how to remove the label from work they believe has been mislabeled. To them, the label fails because it doesn’t clarify the extent to which AI tools were used, or in which use cases. And there are plenty of questions about the distinction the label seems to draw between Photoshopping an asset with generative fill and, well, just Photoshopping an asset.

Meanwhile, plenty of AI-fueled pages and accounts are polluting Meta’s recommendation engines with “zombie” content from less ethical contributors goosing engagement. Critics of the Made with AI label may see it as a blunt instrument that could end up discouraging creators from using time-saving, ethical, and transparent features within industry-standard content creation and production tools. And with low-quality, patently obvious AI content flooding the platforms, often without triggering automatic labels because those creators don’t disclose, the uneven application of Made with AI labels could miss the real issues of content authenticity and quality on platform.

Implications for brands

We continue to see the potential for GenAI to enable better, faster, and never-been-done-before creative content and campaign executions.

But the rapid evolution of possibilities, coupled with lagging platform policies and legal and regulatory guidance, requires that marketers lead with a lens of brand safety and governance.

All signs point to platforms’ continuing to expand programs to identify GenAI outputs (and to penalize any attempts to skirt them). Some brands—especially those with attributes like innovation, technology, or creativity at heart—may want to lean in and intentionally own AI labels as an extension of the brand and proof of those brand attributes in action.

At the same time, marketers need to consider unintentional or undesired cases where the label may be applied to their content. Does the organization have content production processes that create areas of risk for “false positives”? And how will your brand’s priority audiences react to seeing such a label? Just because there is no direct algorithmic penalty does not mean that audiences will look favorably on content with such labels, or that patterns of engagement with the brand won’t change.

Update: This article was written in June 2024. As of July, Meta has made a minor linguistic change to its disclaimers, replacing “Made with AI” with “AI Info.” It is still too early to tell whether this will address the pushback and concerns that matter to brands and digital creators alike. Other than the label verbiage, the platform has not changed the process described in this POV for identifying GenAI creative components.
