Rule Of Thumb: Facebook’s Perspective On Privacy And Free Expression

SOPA Images

As social media has become core to our everyday lives, we have been inundated with platforms that connect us with friends, keep us updated on current events, and help us express our opinions online. Of these platforms, Facebook remains the most widely used globally, with over 2.7 billion monthly active users, many of whom are Millennials and Gen Zers. 77% of Millennials say that they are active on the platform every day.

The Social Network has created a space to share opinions easily. Users have access to an enormous amount of information, both factual and inaccurate, and what we choose to interact with online reveals a great deal about our interests and data. What implications do these aspects of Facebook have for our society?

Since its creation within Mark Zuckerberg's Harvard University dorm room in 2004, Facebook has evolved from a budding social media site designed to connect Ivy-League classmates to a social media behemoth, generating roughly $86 billion in revenue and $29 billion in net income in 2020.

This financial performance is driven by the immense role social media plays in influencing its user base's consumer spending habits: 67% of Millennials responded that they are influenced to purchase by someone they follow on Facebook. Facebook is free for users; its core business relies on advertising revenue to support the costs associated with running the platform, such as server facilities, data center equipment, property expenses, and salaries. These ads are sold both "pay-per-click," in which Facebook is compensated when users click on advertisements, and "pay-per-impression," in which Facebook is paid simply for displaying the ad to users.

Facebook can be a free platform because its core product is its users, whose data is a valuable contribution to the site. As you may have already noticed, the ads on Facebook directly relate to your monitored interests and buying habits. That is not an uncanny coincidence, but rather a systematic effort by Facebook to monetize user data to improve the precision and accuracy of ads on the platform. Facebook provides advertisers with a treasure trove of data they can use to target specific users, including geography (down to the street), age, sex, political affiliation, interests, and conversations. This level of precision allows advertisers to be more targeted with their marketing budgets, improving efficiency and lowering advertising costs by focusing on the users most likely to buy their products or services.

Facebook representatives stated in a memo that while ads target people who fit a specific profile, the company keeps users' identities private from advertisers. However, companies can still target particular individuals through their email addresses, which are shared between Facebook and the company's website. For example, if you created a login to an online clothing store with the same email linked to your Facebook profile, you may see ads for that brand on your News Feed. While this is oddly specific and frankly creepy at times, most users still prefer to see ads relevant to them.

When do these ads become too personalized? What happens when advertisements are directed at your emotional state or health concerns? There is less regulation for advertisers aiming to sell products that feed on consumers' emotions and insecurities. Is it right to advertise diet pills to someone insecure about their weight who was recently searching for weight loss diets and workout plans? Is it right to promote medications to the person who spends time on WebMD searching their symptoms?

A few years after Facebook started, the company faced user outrage over privacy when it debuted the News Feed, created to be the central destination for users to see new posts and social interactions between friends without visiting each other's profiles. Many users felt the feature, which showed everything their friends did on the platform, was too intrusive. The News Feed evolved to become more personalized, offering users the stories that Facebook's AI system decides they would care most about.

The News Feed is "filtered based on that person's past activity, including posts liked, commented on or shared." The information users see is more relevant to their interests but less likely to be breaking news. For the many people who use social media as a news outlet, this could mean missing significant events as they happen.

One example of the News Feed's impact on the world was the ALS Ice Bucket Challenge fundraiser, which went viral thanks to Facebook's News Feed. Seventeen million Ice Bucket Challenge videos were shared across the platform, raising tens of millions of dollars to support research into Lou Gehrig's disease. Yet as this cause was being promoted, Facebook's News Feed failed to highlight the protests in Ferguson, Missouri, over the shooting of Michael Brown to the same extent.

Free speech is another aspect of Facebook that receives criticism. For years, many have alleged that Facebook's content removal and censorship policies were biased and that the company was not doing its part to police incorrect and misleading content. However, Zuckerberg continues to defend the idea that the platform is an example of free expression. During the 2020 U.S. election, Facebook announced that it would "not moderate politicians' speech or fact-check their political ads." Zuckerberg defended the idea that this type of information was still of interest to the public and newsworthy even if false.

Facebook has also historically taken a laissez-faire approach to regulating political content on the platform. Politicians have frequently been allowed to spread misinformation, leaving the public to make their own decisions based on false information.

One example of Facebook's hands-off approach to incorrect political ads involved advertisements made as part of Trump's election campaign. When Trump's campaign aired a political ad that falsely declared Biden committed corruption in Ukraine, Facebook refused to take down the video when asked by Biden's campaign. Zuckerberg claimed that "political ads are an important part of voice" and give all candidates an equal opportunity for media attention.

Facebook's lack of content policing has had a more dramatic and shocking impact than simply altering U.S. political discourse. In India, more than 20 individuals were killed in 2018 by angry mobs due to misinformation, arguably spread on Facebook. In Myanmar, the military deliberately spread misinformation and propaganda on Facebook as part of a broader campaign against the country's predominantly Muslim Rohingya minority, in what United Nations officials described as a "textbook example of ethnic cleansing." Myanmar military personnel posed as fans of pop stars and national heroes to post hate speech about the minority.

While Facebook removed the official accounts of senior Myanmar military leaders for violating its policy against militarized social movements and violence-inducing conspiracy networks, many associated accounts remained active, disguised by fake names. These Myanmar military-linked accounts continued to spread anti-Rohingya propaganda, contributing to more than 700,000 Rohingya fleeing the country.

Facebook's solution to policing hateful, abusive, and inappropriate posts on the site is human moderators, contracted through third-party vendors located worldwide. Moderators view hundreds of posts a day, including traumatizing content. Because the position requires moderators to sign NDAs, they are sworn to secrecy and, without any debriefing, left feeling anxious and isolated.

Moderators make errors, as in the case of removing posts made by the Myanmar military. Policing content is a highly challenging job, given the volume of posts to be judged and the difficulty of applying Facebook's rules consistently and accurately. The rules change on a near-daily basis and lack clarification. To maintain accuracy, moderators must determine whether a post violates guidelines and then explain why it violates community standards. However, those standards constantly shift with breaking news and user engagement on posts. Moderators have a very slim margin of error; just a few mistakes can cost them their job.

As the platform inevitably grows, there is demand for more moderators to regulate potentially harmful posts. Outsourced moderators paid just over minimum wage are very cost-effective for Facebook: the median Facebook employee earns $240,000 annually in salary, compared to a contracted moderator, who earns around $28,800 per year. Is it fair for this vulnerable population to be paid so little for their work by a multibillion-dollar company? Is the company obligated to put more time and resources into hiring well-trained moderators and providing them with better working conditions?

As Facebook remains an integral platform for communication and news in our society, how will these challenges be resolved? Issues regarding user privacy and free expression must be addressed to ensure the safety of those on and off the social network. Time and time again, it has been proven that content spread on social media has detrimental, real-life consequences. How much responsibility does Facebook bear to maintain privacy and free expression while monitoring and regulating what users post?
