Rep. Adam Schiff (D-CA) sent a letter today to Google, YouTube, and Twitter urging the platforms to explicitly notify users when they’ve engaged with misinformation about the coronavirus.
Schiff wrote to Google CEO Sundar Pichai, YouTube CEO Susan Wojcicki, and Twitter CEO Jack Dorsey, saying it's not enough to remove or downgrade harmful or misleading content about the pandemic; it's also critical to ensure that users who already saw that content have access to accurate information.
“Though the best protection is removing or downgrading harmful content before users engage with it, that is not always possible,” Schiff wrote in his letter to Pichai and Wojcicki. “As you are likely aware, Facebook recently announced plans to display messages to any users who have engaged with harmful coronavirus-related misinformation that has since been removed from the platform and connect them with resources from the World Health Organization.”
Guy Rosen, Facebook’s vice president of integrity, outlined its plans for handling coronavirus misinformation posts in an April 16th blog post. “We’re going to start showing messages in News Feed to people who have liked, reacted or commented on harmful misinformation about COVID-19 that we have since removed,” Rosen wrote. “These messages will connect people to COVID-19 myths debunked by the WHO including ones we’ve removed from our platform for leading to imminent physical harm.”
As The Verge’s Casey Newton noted, however, the notification process isn’t as simple as telling duped users that they’ve been duped. While Facebook is putting the correct information into people’s News Feeds, it’s labeling the information with a message that reads “Help friends and family avoid false information about COVID-19,” without necessarily reminding the user they were duped by said false information:
It then invites them to share a link to the WHO’s myth-busting site, as well as a button that will take the user to the site directly.
The goal of this type of approach is to make people less defensive about the fact that they may have been wrong, and try to smuggle some good information into their brains without making them feel dumb about it.
How such notifications would work in practice on YouTube or Twitter isn't totally clear, but both companies have taken steps toward actively moderating coronavirus content on their platforms. Just this week, YouTube announced it would add fact-check information panels to videos in the US, expanding a program it launched in India and Brazil last year.
“Since early February, we’ve removed thousands of videos violating our COVID-19 misinformation policies — such as content that disputes the existence or transmission of COVID-19 as described by local health authorities, or that promotes medically unsubstantiated methods to prevent or cure COVID-19 in place of seeking medical treatment,” a YouTube spokesperson said in an email to The Verge, “and have seen over 20 billion impressions on our information panels for COVID-19 related videos and searches.”
Twitter introduced its COVID-19 content policies earlier this month, requiring users to remove tweets containing misinformation about coronavirus treatments or misleading content designed to look like it comes from authorities. The policy was recently updated to cover tweets that may "incite people to action and cause widespread panic, social unrest or large-scale disorder," such as calls to burn 5G towers.
YouTube and Twitter both potentially have the framework to build a notification process similar to Facebook's, but neither has announced plans to do so thus far. A Twitter spokesperson said in an email to The Verge that the company had received Schiff's letter and was in regular contact with the congressman and his staff "on these and a number of issues."
But the practical challenges of notifying users about misinformation are further complicated when the president is the one spreading it, as The New York Times points out.
After President Trump suggested at a briefing last week that bleach and ultraviolet light might be used to treat people with the virus, Facebook, Twitter, and YouTube threaded the needle, saying that since the president didn't direct people to try the (very dangerous) treatment options, they wouldn't remove posts that included his statements.
Schiff said in his letters to the companies that he recognized the complexities that misinformation presents to online platforms. “As we all grapple with this unprecedented health situation, I hope you will consider this suggestion for keeping users better informed,” he wrote.