Over the past ten years I have become an avid bird watcher. Baldwin was a big reason I chose BC over HBS. All joking aside, I was very excited to learn that Twitter just announced it is testing a new feature called “Birdwatch.” Birdwatch is currently in beta on a separate site from Twitter. The end goal is for designated members of the community to be able to add notes, directly on Twitter, to posts believed to contain misinformation. The objective is to have users from across the political spectrum flag tweets, as opposed to Twitter being the one to flag users’ tweets. Twitter’s survey data suggests that people are more receptive to notes added by other community members than they are to tweets being blocked by Twitter itself. As the article puts it, “People valued notes being in the community’s voice rather than that of Twitter or a central authority.” The design is similar to Reddit in that users will be able to vote notes up or down depending on whether they find them helpful.
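To make the voting mechanic concrete, here is a minimal sketch of how crowd rating of notes could work. This is purely illustrative: the class names, the visibility thresholds (`min_ratings=5`, `min_ratio=0.8`), and the simple helpful-vs-unhelpful tally are my own assumptions, not Twitter's actual Birdwatch algorithm.

```python
# Hypothetical sketch of community note rating -- NOT Twitter's real algorithm.
from dataclasses import dataclass


@dataclass
class Note:
    """A community note attached to a tweet, with helpful/unhelpful votes."""
    note_id: str
    text: str
    helpful: int = 0
    unhelpful: int = 0

    def rate(self, is_helpful: bool) -> None:
        # Reddit-style up/down vote on the note itself.
        if is_helpful:
            self.helpful += 1
        else:
            self.unhelpful += 1

    def helpfulness_ratio(self) -> float:
        total = self.helpful + self.unhelpful
        return self.helpful / total if total else 0.0


def visible_notes(notes, min_ratings=5, min_ratio=0.8):
    """Surface only notes with enough ratings and a high helpfulness ratio.

    Both thresholds are arbitrary placeholders for illustration.
    """
    return [
        n for n in notes
        if (n.helpful + n.unhelpful) >= min_ratings
        and n.helpfulness_ratio() >= min_ratio
    ]
```

Under this kind of scheme, a note only appears next to a tweet once enough raters agree it is helpful, which is the "community's voice" idea the article describes.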

            Disclaimer: The following paragraphs are not designed to push a political agenda; I am not attempting to endorse or debate the politics of the situation. I am simply communicating a concern from a Data Transformation perspective as it relates to Twitter and Birdwatch.

I think this is a promising feature, but there is one major flaw that I do not believe is being accounted for: the power of influence. Regardless of what notes or flags are attached to tweets, certain users who want to spread misinformation will still be able to use the platform to do so. Over the last five years, people have become more and more vocal in their skepticism of the media, and I believe some users will see these notes and attack the people who posted them. We saw evidence of this earlier this year when Twitter banned former President Trump, in large part because Twitter believed many of his tweets contained misinformation and that he was abusing his influence. The goal here is not to get into a political discussion; politics aside, the former President is very polarizing and, with over 80 million followers on the platform, had a great ability to influence people. I am sure he is not the last polarizing figure we will see. This is where I see a flaw with Birdwatch: I do not think people who loyally subscribe to a person’s views will care if there is a note or disclaimer saying the information is misleading or false. Birdwatch assumes that everyone plays by the same rules and that people will not try to take advantage of their influence. If notes were tied to certain posts, subscribers of that viewpoint could spin them as evidence that the other side does not want them to be heard, which in some cases could build even more momentum for the cause. For these reasons, I do not think Birdwatch will be successful in containing misinformation when it comes to very influential people.

The above flaw is an extreme case, and I am hopeful that we are moving away from polarization. Where I do see this idea gaining traction is in sheer scale: each day there are over 11.5 million tweets on the platform.

I know I lack originality, as this image has already been posted in other classes; however, it makes a valuable point. With 481,000 tweets generated per minute, there is a LOT of content for the Twitter algorithm to keep up with and decipher. I like that Twitter is actively trying to tackle this difficult problem, and I do not envy whoever has to decide what gets censored and what does not. I really like the idea of placing notes next to tweets explaining why a tweet is believed to be misleading, as opposed to blocking the tweet entirely. I am hopeful that, if properly implemented, this will allow users who follow people with suspect ideas to be objective about the content and form their own opinions. I also like that users can build a reputation as a credible source. I think this model will encourage users to be much more thoughtful with their content and to objectively question potentially misleading information.

In summary, I think Birdwatch has a lot of work to do, but I believe it has a lot of potential. Did you read this post and think, “I really want to be a part of Birdwatch”? You are in luck: the article linked to this post has an application if you are interested.



  1. As more people’s digital platforms have gained such large followings and influence online, people have been able to get news faster than ever before. This also calls for companies like Twitter and Facebook to take more action to prevent the spread of fake or misleading news. I think this is less about censorship and more about holding people and social platforms accountable for stopping the spread of fake news. I agree that it’s a good start to add a note or flag when certain posts haven’t been fact-checked or are misleading; this way people can know the full story and do additional research on their own.

  2. conoreiremba · ·

    Really interesting piece Sam, and I think there is a lot to be said for self-regulation by the wisdom of the crowd on Twitter. As you pointed out though, some misinformation will always gain support due to factors of influence, and it will be difficult to separate opinion from harmful misinformation. It could develop into very public cases of “he said, she said,” where both sides of an argument get upvotes and downvotes regardless of the truth. I also wonder what sanctions can be imposed on those who spread harmful information, particularly hate speech. We have seen this countless times recently in Europe, where professional athletes are still subject to vile racist abuse online because social media continues to mask the identity of the culprits. Like Jie’s comment above, I wonder whether the deliberate spread of misinformation will always remain until social media companies begin to sanction the behavior in some way.

  3. We’ll discuss these issues in greater depth in a few weeks. Clearly, the social media platforms know that content needs to be regulated to a certain degree, but they just don’t want to be the ones doing it (it’s a no-win situation for them). It will be interesting to see which ones can get to a workable solution, if any.

  4. Scott Siegler · ·

    I noticed that Facebook is going to try the approach of surfacing political content less frequently than other types of content in general. This will definitely slow the spread of misinformation but it is also slowing the spread of all information, and I’m not sure that is a win. It’s such a tricky problem for these platforms to solve. I do like the approach of allowing the crowd to monitor itself though, I think this might be the best idea I’ve seen so far.

  5. williammooremba · ·

    Really great post. One aspect that I think complicates this whole thing is how ubiquitous Twitter and other social media platforms have become. As you pointed out with the sheer volume of tweets every day, the scale of engagement something like Twitter gets is, at least for me, hard to fully wrap my head around. Usually when a major public figure is banned from Twitter, it is quite a notable news event. While these moderation decisions are not likely to make everyone happy, the sheer number of people they impact really complicates things. I do not envy Twitter’s position, and I will be watching to see Birdwatch’s impact.

  6. marissaspletter · ·

    Very insightful piece and interesting graphics. Birdwatch has strong potential with a set of users flagging the credibility of other tweets, instead of internal decisions. I am sure the PR team at Twitter is pushing to build back their reputation of being an open platform for public conversation.

  7. courtneymba · ·

    Very interesting post about how to regulate content and misinformation on social media. I think you are correct that people who strongly support the individuals spreading false information may not care that a post has been marked as such. I’m from Arkansas, and for much of my family, these “false” indicators would be seen by that side as a badge of honor or an act of bravery for posting material that the media doesn’t want shared. This is the same family that distrusts all of Facebook’s fact-checking efforts. Unless they saw a high-ranking official from that party debunking the statements, it wouldn’t matter to them whether the debunking came from Twitter directly, a user they don’t know on Twitter, or a fact-checking source outside of Twitter (but funded by Twitter). But for less extreme audiences, I suspect a Reddit-style crowd-policing model would be more effective than enforcement coming directly from Twitter.

  8. changliu0601 · ·

    Interesting post. I read an article about Birdwatch. The article said that after analyzing more than 2,600 notes and reviewing 8,200 ratings, the author found blatant misinformation receiving “not misleading” notes. I’m skeptical of the fact-checking system, and I don’t believe an automated system can judge the truth.
