Facebook’s Questionable Advertisements

As we discussed during our class on Facebook, the social media giant is known for controversy, whether over a new feature, privacy, or the use of its platform for shady activities. This week, the company is bolstering one of its most important divisions, advertising, to combat what some refer to as “questionable advertising,” specifically in election campaigns.

Facebook’s advertising is a critical revenue stream for the company, but the advertising system has recently come under fire. Just this morning, Facebook sent “evidence of 3,000 Russia-linked advertisements to congressional investigators, following weeks of pressure from Congress to reveal details about its advertising system.” A Russian entity known as the “Internet Research Agency” allegedly paid for thousands of ads with the intention of “fueling political discord” and “exacerbating divisiveness” during the most recent presidential election. It is not clear at this point which ads were posted by this Russian entity or whom the ads targeted.


Facebook is not alone in the fight against the spread of Russian propaganda. Twitter announced last week that it had discovered and terminated nearly 200 accounts used to spread divisiveness across the United States. These accounts reportedly promoted tweets such as the “text to vote” scam (do take note: this is a scam, and you cannot cast a vote by text message), which was targeted primarily at potential Clinton voters through Twitter’s advertising algorithms. Several of the terminated accounts were matched to Facebook accounts that had already been disabled as part of Facebook’s investigation.


Facebook’s ad platform will soon allow users to see all of the advertisements run by a particular page, not just those targeted at them. This could allow users to spot shady advertisements and report them to Facebook’s ad reviewers for closer inspection. Facebook hopes, however, that its new internal measures will catch these situations before they ever reach public view.

To combat these problems, Facebook announced this afternoon that it will hire an additional 1,000 people to fortify its advertising review teams globally. This is just a first step: the company is also investing in machine learning technologies to automatically flag suspicious ads. Another major step is that Facebook will begin requiring more significant documentation from anyone attempting to purchase an ad relating to a US federal election; this documentation can be as simple as proof that the purchaser does indeed work for a legitimate organization. Facebook will also attempt to block ads that promote “subtle expressions of violence,” to help crack down on ads that intentionally inflame social tensions.

Facebook founder and CEO Mark Zuckerberg said this morning, “It is a new challenge for internet communities to deal with nation-states attempting to subvert elections. But if that’s what we must do, we are committed to rising to the occasion.” Facebook, Twitter, and other social media companies are beginning to work with governments and industry players to develop better standards and share information about advertising trends, so that they can prevent problematic ad campaigns in the future.


As Jon Fingas argues in his article, Facebook’s move is “part of an all too familiar pattern at Facebook: the company implements broad changes (usually including staff hires) after failing to anticipate the social consequences of a feature.” This pattern includes Facebook’s reaction to violence on Facebook Live, which allows users to livestream whatever they want to their network. Back in May of 2017, Facebook announced it would hire over 3,000 moderators for its global community operations team, nearly doubling the size of the group responsible for keeping objectionable content off Facebook. The goal is quick response: making sure someone is available to act on a report, take a post down, or call emergency services for help. As with the advertising response, Facebook pledged to increase investment in artificial intelligence, not only to combat hate speech and child exploitation, but also to identify potentially suicidal people and deliver suicide prevention tools to them.

The key failure in this pattern is that the measures Facebook has taken are reactive, not proactive. Not until a feature was abused did the company introduce what it believed to be sufficient moderation of that feature. This is particularly troubling given that companies like Facebook are increasingly experimenting with artificial intelligence and machine learning. Will the company have proper controls in place once it rolls these features out to more people? How will social media companies come together to make sure their features are actually safe for users and for the integrity of our country’s electoral processes?

Facebook needs to consider potential problems with its new features more thoughtfully before releasing them to the public, instead of waiting for bad situations to come to light.

What do you think about Facebook’s response to advertising criticism? Leave your thoughts in the comment section below.

For reference and more information on the ongoing situation with Facebook’s advertising systems, please visit:

Facebook is hiring 1,000 people to fight shady ads

Facebook hands over Russia-linked ads to Congress

Twitter finds links to hundreds of Russian-backed bot accounts

Facebook will hire 3,000 moderators to tackle livestreamed violence


  1. britt_hopkins4 · ·

    I feel like this is a really hard topic, and you did a great job. The issue with Facebook is that pretty much anyone can post almost anything. If someone posts something that another person does not like, then Facebook can take it down, but that person will then claim that they have freedom of speech and that Facebook has no right to take it down. I feel like this argument goes beyond Facebook and is much larger, but I applaud Facebook for hiring the extra staff in an attempt to review advertisements. The hurdles they are going to have to overcome, however, are vast. The guidelines are going to have to be really specific to avoid controversy.

  2. Yvette Zhou · ·

    Very interesting topic and deep thoughts on it! The negative sides of social media include the bad effects brought by fake news, shady ads and info. exposure. Facebook has been under the pressure of those things for a long time and the topics around that never stop. Maybe with the digital industry development, more and more regulations will be created to control the quality of each post on social media, bringing us more valuable info. through internet.

  3. sherricheng5 · ·

    Awesome post! Facebook, Twitter, and other social media websites definitely need to figure out a way to be proactive, not reactive. I read an article in the WSJ today about fake news being posted on Facebook and Twitter regarding the horrific incident in Las Vegas. Although Facebook’s algorithm quickly deleted these posts, it is another example of Facebook being reactive, not proactive. I think Facebook definitely has followed a pattern in dealing with their mistakes. They “implement broad changes” such as hiring new staff instead of making sure that mistakes do not happen in the first place. I wonder what changes Facebook and other companies can implement to reduce fake news and to prevent future advertisement blunders.

  4. juliabrodigan · ·

    Great post! You did a good job explaining what happened with the Russian involvement via Facebook with the election. Social media is starting to play a huge role in politics and current events. It is hard to monitor what goes out there because anyone can post almost anything on social media. It is hard to monitor every single post or advertisement because there are just too many. It will be interesting to see how this all unfolds and what comes out of the investigation. It is comforting to know that Facebook is going to hire 1,000 new employees to try to combat this issue. This shows that they truly care about their brand and realize how serious of a problem this is.

  5. Really nice post! I think the problem with being proactive rather than reactive is a) they are being proactive, you just don’t know it when they are. b) I don’t think even FB foresees all of the possible applications of the platform. I think the key problem is that they didn’t respond fast enough.

  6. As everyone above has mentioned, it’s hard for Facebook themselves to be proactive. Rather, I believe that the community should be proactive about this. Since the rise of the internet, trolls have been found everywhere and it was usually up to the community’s due diligence to contain the trolls and inform the community of controversial or obscene posts. I think Facebook should provide an incentive for its users to be more proactive like a bounty for cracking down fake information. For example, if you see content that seems fake or outlandish, flag it. Facebook reviews contents with red flags and if it is truly fake news/false advertisement, they will give you a penny each time or something.

  7. This is really interesting – I didn’t realize the extent of the issues around things like texting to vote. One of the things your post raises for me, paired with some conversations on Twitter, is around the responsibilities of these platforms. Twitter didn’t confirm my identity when I made my second account for class. If they’re just a conduit, what level of objectionable content should they even be blocking?
