Facebook will add ‘more human review’ of ad-targeting options

After a manual review, Facebook has reinstated roughly 5,000 ad-targeting terms that were disabled last week.

On Wednesday, Facebook COO Sheryl Sandberg outlined steps the company is taking to address its latest ad-targeting controversy.

Last week, ProPublica revealed that Facebook’s ad-targeting options included the ability to target ads to people who had listed “Jew hater” as their field of study and “NaziParty” as their employer. In response, Facebook removed four ad-targeting fields populated by an algorithm based on information people entered into their Facebook profiles. Education, employment, field of study and job title targeting were disabled for new campaigns.

Now, the company is instituting “more manual review of new ad targeting options to help prevent offensive terms from appearing,” according to a Facebook post by Sandberg (reproduced in full below). The company has also re-enabled some of the ad-targeting options that are based on self-reported user data.

“After manually reviewing existing targeting options, we are reinstating the roughly 5,000 most commonly used targeting terms – such as ‘nurse,’ ‘teacher’ or ‘dentistry.’ We have made sure these meet our Community Standards,” said Sandberg, adding, “From now on we will have more manual review of new ad targeting options to help prevent offensive terms from appearing.”

The company also plans to create a program for people to “report potential abuses of our ads system to us directly,” wrote Sandberg.

Additionally, Facebook said it is “clarifying our advertising policies and tightening our enforcement processes to ensure that content that goes against our community standards cannot be used to target ads,” according to Sandberg.

Facebook will step up existing enforcement against targeting “that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity, or disabilities or diseases.”

It’s unclear what specific steps Facebook is taking to that end, aside from the ones laid out in Sandberg’s post. A Facebook spokesperson said the company will release an update to its policies that outlines those steps sometime in the future.

Sandberg’s post in full:

Last week we temporarily disabled some of our ads tools following news reports that slurs or other offensive language could be used as targeting criteria for advertising. If someone self-identified as a ‘Jew-hater’ or said they studied ‘how to burn Jews’ in their profile, those terms showed up as potential targeting options for advertisers.

Seeing those words made me disgusted and disappointed – disgusted by these sentiments and disappointed that our systems allowed this. Hate has no place on Facebook – and as a Jew, as a mother, and as a human being, I know the damage that can come from hate. The fact that hateful terms were even offered as options was totally inappropriate and a fail on our part. We removed them and when that was not totally effective, we disabled that targeting section in our ad systems.

Targeted advertising is how Facebook has helped millions of businesses grow, find customers, and hire people. Our systems match organizations with potential customers who may be interested in their products or services. The systems have been particularly powerful for small businesses, who can use tools that previously were only available to advertisers with large budgets or sophisticated marketing teams. A local restaurant can shoot video of their food prep with just a phone and have an ad up and running within minutes and pay only the amount needed to show it to real potential customers. Most of our targeting is based on categories we provide. In order to allow businesses – especially small ones – to find customers who might be interested in their specific products or services, we offered them the ability to target profile field categories like education and employer. People wrote these deeply offensive terms into the education and employer write-in fields and because these terms were used so infrequently, we did not discover this until ProPublica brought it to our attention. We never intended or anticipated this functionality being used this way – and that is on us. And we did not find it ourselves – and that is also on us.

Today, we are announcing that we are strengthening our ads targeting policies and tools.

First, we’re clarifying our advertising policies and tightening our enforcement processes to ensure that content that goes against our community standards cannot be used to target ads. This includes anything that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity, or disabilities or diseases. Such targeting has always been in violation of our policies and we are taking more steps to enforce that now.

Second, we’re adding more human review and oversight to our automated processes. After manually reviewing existing targeting options, we are reinstating the roughly 5,000 most commonly used targeting terms – such as ‘nurse,’ ‘teacher’ or ‘dentistry.’ We have made sure these meet our Community Standards. From now on we will have more manual review of new ad targeting options to help prevent offensive terms from appearing.

And third, we are working to create a program to encourage people on Facebook to report potential abuses of our ads system to us directly. We have had success with such programs for our technical systems and we believe we can do something similar with ads.

We hope these changes will prevent abuses like this going forward. If we discover unintended consequences in the future, we will be unrelenting in identifying and fixing them as quickly as possible. We have long had a firm policy against hate on Facebook. Our community deserves to have us enforce this policy with deep caution and care.

About the author

Tim Peterson
Contributor
Tim Peterson, Third Door Media's Social Media Reporter, has been covering the digital marketing industry since 2011. He has reported for Advertising Age, Adweek and Direct Marketing News. A born-and-raised Angeleno who graduated from New York University, he currently lives in Los Angeles. He has broken stories on Snapchat's ad plans, Hulu founding CEO Jason Kilar's attempt to take on YouTube and the assemblage of Amazon's ad-tech stack; analyzed YouTube's programming strategy, Facebook's ad-tech ambitions and ad blocking's rise; and documented digital video's biggest annual event VidCon, BuzzFeed's branded video production process and Snapchat Discover's ad load six months after launch. He has also developed tools to monitor brands' early adoption of live-streaming apps, compare Yahoo's and Google's search designs and examine the NFL's YouTube and Facebook video strategies.