Meta (META.O), the owner of Facebook, is barring political campaigns and advertisers in other regulated industries from using its new generative AI advertising products, a company spokesperson said on Monday, denying access to tools that lawmakers have warned could turbocharge the spread of election misinformation.
Meta publicly disclosed the decision on Monday night, after this piece was first published, via updates to its help center. The company's advertising standards contain no rules specific to AI, but they bar ads with content that its fact-checking partners have debunked.
“As we continue to test new Generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Employment or Credit, or Social Issues, Elections, or Politics, or related to Health, Pharmaceuticals, or Financial Services aren’t currently permitted to use these Generative AI features,” the company said in a note appended to several pages explaining how the tools work.
“We believe this approach will allow us to understand potential risks better and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries,” it said.
The policy change comes a month after Meta, the world's second-largest digital ad marketplace, announced it would begin expanding advertisers' access to AI-powered advertising tools, which can instantly create backgrounds, adjust images, and generate variations of ad copy in response to simple text prompts.
The tools were initially available only to a small group of advertisers starting in the spring. The company has said it plans to roll them out to all advertisers worldwide by next year.
Meta and other tech companies have raced to launch generative AI ad products and virtual assistants since the buzzy debut last year of OpenAI's ChatGPT, a chatbot that can produce human-like written responses to questions and prompts.
The companies have so far disclosed little about the safety guardrails they intend to impose on those systems, which makes Meta's decision on political ads one of the industry's most significant AI policy choices to date.
Alphabet's (GOOGL.O) Google, the largest digital advertising provider, unveiled similar image-customizing generative AI ad tools last week. A Google spokesperson told Reuters the company plans to keep politics out of its products by blocking a list of "political keywords" from being used as prompts.
Google has also scheduled a policy update for mid-November that will require election-related ads to carry a disclosure if they contain "synthetic content that inauthentically depicts real or realistic-looking people or events."
TikTok and Snapchat owner Snap (SNAP.N) prohibit political ads outright, while X, formerly known as Twitter, has not yet released any generative AI advertising tools.
Last month, Nick Clegg, the chief policy executive at Meta, declared that generative AI use in political advertising was “clearly an area where we need to update our rules.”
Ahead of the recent AI safety summit in the UK, he urged governments and tech companies to prepare for the possibility that the technology could be used to interfere with the 2024 elections, and called for special attention to election-related content that "moves from one platform to the other."
Clegg had earlier said that Meta was blocking its user-facing Meta AI virtual assistant from creating photo-realistic images of public figures. Over the summer, Meta pledged to develop a system to "watermark" AI-generated content.
Meta bars misleading AI-generated video in all content, including organic, unpaid posts, with exceptions for parody and satire.
The company's independent Oversight Board said last month that it would examine the wisdom of that approach, taking up a case involving a doctored video of US President Joe Biden, which Meta said it had left up because it was not AI-generated.