As we discussed during our class on Facebook, the social media giant is known for controversy—whether it’s over a new feature, privacy, or the use of its platform for shady activities. This week, the company is bolstering one of its most important divisions, advertising, to combat what some refer to as “questionable advertising,” specifically in election campaigns.
Facebook’s advertising is a critical revenue stream for the company, but the advertising system has recently come under fire. Just this morning, Facebook sent “evidence of 3,000 Russia-linked advertisements to congressional investigators, following weeks of pressure from Congress to reveal details about its advertising system.” A Russian entity known as the “Internet Research Agency” allegedly paid for thousands of ads with the intention of “fueling political discord” and “exacerbating divisiveness” during the most recent presidential election. It is not clear at this point which ads were posted by this Russian entity or whom the ads targeted.
Facebook is not alone in the fight against the spread of Russian propaganda. Twitter announced last week that it had discovered and terminated nearly 200 accounts that were used to spread divisiveness across the United States. These accounts reportedly promoted tweets such as the “text to vote” scam (do take note: this is a scam! You cannot cast a vote by text message), which, based on Twitter’s advertising algorithms, was primarily targeted at potential Clinton voters. Several of these terminated accounts were matched to Facebook accounts that had been disabled as part of Facebook’s investigation.
Facebook’s ad platform will also allow users to see all of the advertisements run by a particular page, not just those targeted at them. This could allow users to spot shady advertisements and report them to Facebook’s ad reviewers for closer inspection. However, Facebook hopes that its new internal measures will catch these situations before they ever reach public view.
To combat these problems, Facebook announced this afternoon that it will hire an additional 1,000 people to fortify its advertising review teams globally. This is just a first step; the company is also investing in machine learning technologies to automatically flag suspicious ads. Another major step is that Facebook will begin requiring more substantial documentation from anyone attempting to purchase an ad relating to a US federal election. This documentation can be as simple as proof that the purchaser does indeed work for a legitimate organization. Facebook will also attempt to block ads that promote “subtle expressions of violence” to help crack down on ads that intentionally inflame social tensions.
Facebook founder and CEO Mark Zuckerberg said this morning, “It is a new challenge for internet communities to deal with nation-states attempting to subvert elections. But if that’s what we must do, we are committed to rising to the occasion.” Facebook, Twitter, and other social media companies are beginning to work with government and other industry players to develop better standards and share information about advertising trends, so that they can prevent problematic ad campaigns in the future.
As Jon Fingas explains in his article, he believes Facebook’s move is “part of an all too familiar pattern at Facebook: the company implements broad changes (usually including staff hires) after failing to anticipate the social consequences of a feature.” This pattern includes Facebook’s reaction to violence on Facebook Live, which allows users to livestream whatever they want to their network. Back in May of 2017, Facebook announced it would hire over 3,000 moderators for its global community operations team, nearly doubling the size of the group responsible for keeping objectionable content off Facebook. The goal is to enable quick response: making sure someone is available to act on a report, take the post down, or call emergency services for help. As with the advertising response, Facebook pledged to increase investment in artificial intelligence, not only to combat potential hate speech and child exploitation, but also to identify potentially suicidal people and deliver suicide prevention tools to them.
The key failure in this pattern is that the measures Facebook has taken are reactive, not proactive. It was not until a feature was abused that the company introduced what it believed to be sufficient moderation of that feature. This is a particularly troubling pattern considering that companies like Facebook are increasingly experimenting with artificial intelligence and machine learning. Will the company have proper controls in place once it rolls these features out to more people? How will social media companies come together to make sure their features are actually safe for users and for the integrity of our country’s electoral processes?
Facebook needs to consider potential problems with its new features more thoughtfully before releasing them to the public, instead of waiting for bad situations to come to light.
What do you think about Facebook’s response to advertising criticism? Leave your thoughts in the comment section below.
For reference and more information on the ongoing situation with Facebook’s advertising systems, please visit: