Facebook has a misinformation problem, and is blocking access to data about how much there is and who is affected
Leaked internal documents suggest that Facebook - which has recently renamed itself Meta - is doing far worse than it claims at minimizing COVID-19 vaccine misinformation on the Facebook social media platform.
Online misinformation about the virus and vaccines is a major concern. In one study, survey respondents who got some or all of their news from Facebook were significantly more likely to resist the COVID-19 vaccine than those who got their news from mainstream media sources.
As a researcher who studies social and civic media, I believe it is critically important to understand how misinformation spreads online. But this is easier said than done. Counting instances of misinformation found on a social media platform leaves two key questions unanswered: How likely are users to encounter the misinformation, and are certain users especially likely to be affected by it? These questions are the denominator problem and the distribution problem.
The COVID-19 misinformation study “Facebook's Algorithm: A Major Threat to Public Health”, published by the public interest advocacy group Avaaz in August 2020, reported that sources that frequently shared health misinformation - 82 websites and 42 Facebook pages - had an estimated total reach of 3.8 billion views in a year.
At first glance, that's a stunningly large number. But it is important to remember that this is the numerator. To understand what 3.8 billion views in a year means, you also have to work out the denominator. The numerator is the part of a fraction above the line, which is divided by the part of the fraction below the line, the denominator.
Getting some perspective
One possible denominator is Facebook's 2.9 billion monthly active users, in which case, on average, every Facebook user has been exposed to at least one piece of information from these health misinformation sources. But these are 3.8 billion content views, not discrete users. How many pieces of information does the average Facebook user encounter in a year? Facebook will not disclose that information.
Market researchers estimate that Facebook users spend from 19 minutes to 38 minutes a day on the platform. If Facebook's 1.93 billion daily active users see an average of 10 posts in their daily sessions - a very conservative estimate - the denominator for those 3.8 billion pieces of information per year is 7.044 trillion (1.93 billion daily users times 10 daily posts times 365 days in a year). This means roughly 0.05% of the content on Facebook consists of posts from these suspect Facebook pages.
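Spelled out, that back-of-the-envelope calculation looks like this (a minimal sketch in Python; the 10-posts-per-day figure is the conservative assumption above, not a measured value):

    # Back-of-the-envelope estimate of the denominator, using the figures above.
    daily_active_users = 1.93e9   # Facebook daily active users
    posts_seen_per_day = 10       # conservative assumption, not a measured value
    days_per_year = 365

    numerator = 3.8e9             # Avaaz's estimated yearly views of the suspect sources
    denominator = daily_active_users * posts_seen_per_day * days_per_year

    print(f"Denominator: {denominator:,.0f} views per year")     # about 7.044 trillion
    print(f"Share of all views: {numerator / denominator:.4%}")  # about 0.05%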
And the 3.8 billion views figure includes all content published on those pages, including innocuous health content, so the proportion of Facebook posts that are health misinformation is smaller still - less than one-twentieth of one percent.
Is it worrying that there is enough misinformation on Facebook that nearly everyone has likely encountered at least one instance? Or is it reassuring that 99.95% of what is shared on Facebook does not come from the sites Avaaz warns about? Neither.
Misinformation distribution
In addition to estimating a denominator, it is also important to consider the distribution of this information. Is everyone on Facebook equally likely to encounter health misinformation? Or are people who identify as anti-vaccine, or who seek out “alternative health” information, more likely to encounter this type of misinformation?
Another social media study, this one focused on extremist content on YouTube, offers a method for understanding the distribution of misinformation. Using browser data from 915 web users, an Anti-Defamation League team recruited a large, demographically diverse sample of U.S. web users and oversampled two groups: heavy users of YouTube, and individuals who showed strong negative racial or gender biases in a set of questions asked by the researchers. Oversampling means surveying a small subset of a population at a higher rate than its share of the population, in order to better record data about that subset.
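As an illustration of the technique, here is a minimal sketch of oversampling and design-weight reweighting, with made-up group names and sampling rates (this is not the ADL team's actual procedure):

    import random

    random.seed(0)

    # Hypothetical population: 5% belong to a small subgroup of interest.
    population = ["subgroup"] * 500 + ["general"] * 9500

    # Oversample: include subgroup members at a much higher rate than their
    # 5% population share, so there are enough of them to analyze.
    rates = {"subgroup": 0.80, "general": 0.05}
    sample = [p for p in population if random.random() < rates[p]]

    n_sub = sample.count("subgroup")
    n_gen = sample.count("general")
    print(f"Sample: {n_sub} subgroup members, {n_gen} general respondents")

    # Design weights (inverse sampling probability) recover population totals.
    estimate = n_sub / rates["subgroup"] + n_gen / rates["general"]
    print(f"Weighted population estimate: {estimate:.0f} (true size: {len(population)})")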
The researchers found that 9.2% of participants viewed at least one video from an extremist channel, and 22.1% viewed at least one video from an “alternative” channel, during the months covered by the study. An important piece of context: a small group of people accounted for most of the views of these videos, and more than 90% of the views of extremist or “alternative” videos came from people who reported a high level of racial or gender resentment on the pre-study survey.
While roughly 1 in 10 people encountered extremist content on YouTube and 2 in 10 encountered content from right-wing provocateurs, most people who came across such content “bounced off” it and went elsewhere. The group that found extremist content and sought out more of it consisted of people with a presumed interest: those who expressed strong racist and sexist attitudes.
The authors concluded that “consumption of this potentially harmful content is instead concentrated among Americans who are already high in racial resentment,” and that YouTube's algorithms may reinforce this pattern. In other words, knowing the fraction of users who encounter extreme content does not tell you how many people are consuming it. For that, you need to know the distribution as well.
Superspreaders or whack-a-mole?
A widely publicized study from the anti-hate-speech advocacy group Center for Countering Digital Hate, titled “Pandemic Profiteers”, showed that of 30 anti-vaccine Facebook groups examined, 12 anti-vaccine celebrities were responsible for 70% of the content circulated in these groups, and the three most prominent were responsible for nearly half. But again, it is critical to ask about denominators: How many anti-vaccine groups are hosted on Facebook? And what percentage of Facebook users encounter the sort of information shared in these groups?
Without information about denominators and distribution, the study reveals something interesting about these 30 anti-vaccine Facebook groups, but nothing about medical misinformation on Facebook as a whole.
These types of studies raise the question, “If researchers can find this content, why can't the social media platforms identify and remove it?” The Pandemic Profiteers study, which implies that Facebook could solve 70% of the medical misinformation problem by deleting just a dozen accounts, explicitly advocates for deplatforming these dealers of disinformation. However, I found that 10 of the 12 anti-vaccine influencers featured in the study have already been removed by Facebook.
Consider Del Bigtree, one of the three most prominent spreaders of vaccine disinformation on Facebook. The problem is not that Bigtree is recruiting new anti-vaccine followers on Facebook; it's that Facebook users follow Bigtree on other websites and bring his content into their Facebook communities. It's not 12 individuals and groups posting misinformation online - more likely, it is thousands of individual Facebook users sharing misinformation found elsewhere on the web that features these dozen people. And it's much harder to ban thousands of Facebook users than it is to ban 12 anti-vaccine celebrities.
This is why questions of denominator and distribution are critical to understanding misinformation online. Denominator and distribution allow researchers to ask how common or rare behaviors are online, and who engages in those behaviors. If millions of users each encounter occasional bits of medical misinformation, warning labels might be an effective intervention. But if medical misinformation is consumed mostly by a smaller group that actively seeks out and shares this content, those warning labels are most likely useless.
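To make the distinction concrete, here is a minimal sketch of how a researcher with access to view logs could compute both quantities; the data is invented, and this kind of log is exactly what platforms do not share:

    from collections import Counter

    # Invented view log: (user, post) pairs where the post was flagged as
    # misinformation, plus an assumed total for all content views.
    flagged_views = [("u1", "p1"), ("u2", "p1"), ("u2", "p2"), ("u2", "p3"),
                     ("u2", "p4"), ("u3", "p2"), ("u2", "p5"), ("u2", "p6")]
    total_views = 1_000   # assumed denominator: all content views on the platform

    # Denominator question: how prevalent is flagged content overall?
    print(f"Flagged share of all views: {len(flagged_views) / total_views:.1%}")

    # Distribution question: spread thinly across users, or concentrated?
    per_user = Counter(user for user, _ in flagged_views)
    top_user, top_count = per_user.most_common(1)[0]
    print(f"Top user ({top_user}) accounts for "
          f"{top_count / len(flagged_views):.0%} of flagged views")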
Getting the right data
Trying to understand misinformation by counting it, without considering denominators or distribution, is what happens when good intentions collide with poor tools. No social media platform makes it possible for researchers to accurately calculate how prominent a particular piece of content is across its platform.
Facebook restricts most researchers to its CrowdTangle tool, which shares information about content engagement, but this is not the same as content views. Twitter explicitly prohibits researchers from calculating a denominator, whether the number of Twitter users or the number of tweets shared in a day. And YouTube makes it so difficult to find out how many videos are hosted on its service that Google routinely asks interview candidates to estimate the number of YouTube videos as a way to assess their quantitative skills.
The leaders of social media platforms have argued that their tools, despite their problems, are good for society, but this argument would be more convincing if researchers could verify that claim independently.
As the societal effects of social media become more apparent, pressure on the big technology platforms to release more data about their users and their content is likely to increase. Should these companies respond by increasing the amount of information researchers can access, look very closely: Will they let researchers study the denominator and the distribution of content online? And if not, are they afraid of what researchers will find?
Article by Ethan Zuckerman, Associate Professor of Public Policy, Communication, and Information, University of Massachusetts Amherst
This article is republished from The Conversation under a Creative Commons license. Read the original article.