Facebook to warn users who engaged with 'harmful' misinformation
Facebook will begin showing notifications to users who have interacted with posts containing “harmful” Covid-19 misinformation, the company announced on Thursday, an aggressive new move to curb the spread of false information about the virus.
The new policy applies only to misinformation that Facebook considers likely to contribute to “imminent physical harm”, such as false claims about “cures” or statements that physical distancing is not effective. Facebook’s policy has been to remove those posts from the platform.
Under the new policy, which will be rolled out in the coming weeks, users who liked, shared, commented or reacted with an emoji to such posts before they were deleted will see a message in their news feed directing them to a “myth busters” page maintained by the World Health Organization (WHO).
“We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources in case they see or hear these claims again off of Facebook,” said Guy Rosen, Facebook’s vice-president of integrity, in a blog post.
Facebook does not take down other misinformation about Covid-19, such as conspiracy theories about the virus’s origins, but instead relies on its third-party fact-checking system. If a fact checker rates a claim false, Facebook adds a notice to the post, reduces its spread, alerts anyone who shared it, and discourages users from sharing it further.
The announcement coincides with the release of a new report by the online activist group Avaaz that highlights Facebook’s shortcomings in counteracting the Covid-19 “infodemic”. The study found examples of Covid-19 misinformation remaining on the platform even after third-party fact checks had been completed, and cited delays in Facebook applying fact-checking labels to posts.
Facebook challenged the methodology of Avaaz’s report but said it “appreciated their partnership in developing the notifications we’ll now be showing people”.
Avaaz celebrated Facebook’s decision to show notifications to users exposed to misinformation.
“We’ve been calling on Facebook to take this step for three years now,” said Fadi Quran, Avaaz’s campaign director. “It’s a courageous step by Facebook. At the same time, it’s not enough.”
The group wants Facebook’s notifications to be more explicit about the misinformation that the user was exposed to, and it wants the notification shown to any user who saw the misinformation in their news feed, regardless of whether they interacted with the post.
“We think that correcting the record retroactively … will make people more resilient to misinformation in the future, and it will disincentivize malicious users,” said Quran.
But Facebook cited concerns that more explicit messages could do more harm than good.
Misinformation researchers at First Draft News, for example, have long counseled that repeating a false claim, even to debunk it, can help to reinforce it in a person’s mind.
Claire Wardle, the US director of First Draft News, said that she generally shared Facebook’s concern that repeating the misinformation in a notification could have a negative effect. Wardle usually advises debunkers of misinformation to “lead with the fact”, but she noted that such a rule is difficult to follow in relation to the pandemic, where scientific understanding of the virus is continuously evolving.
Wardle welcomed the move by Facebook as a signal that the company was trying to be “innovative” and “braver”, while warning that there could be “unintended consequences”. Among the potential pitfalls she flagged is the fact that people in different countries have different levels of trust in the WHO.
“I like this and want to support it, but also want to recognize that we know so little that this could go horribly wrong,” she said. “What I hope is that they are testing this with some independent academics.”