Signage of Sina Weibo, the social media site examined in the study, in Beijing in April 2014. Photo: Wang Zhao / AFP (Getty Images)
One potential reason for the explosion of hoaxes, political disinformation, and fabricated clickbait on social media in recent years is the goddamn blue checkmarks, according to new peer-reviewed research accepted for publication in the Journal of Management Information Systems.
According to the study, which is titled “Cure or Poison? Identity Verification and the Posting of Fake News on Social Media,” verifying the identities of social media users appears to generally lower their proclivity towards sensationalist or made-up crap. But when a verified user is awarded a public-facing badge indicating their status, researchers found that any improvement in the quality of information that a user spreads was “significantly weakened.”
The paper was co-authored by researchers from the City University of New York, Temple University, and the University of Houston. They first sought to determine whether identity verification, in general, affects a user’s propensity to post dubious content. Along the way, they found that when a user is given a special badge indicating they’ve been verified (something like Twitter’s blue checkmark), the results change markedly.
“It’s a huge problem, it’s one of the biggest problems that we’re dealing with right now,” Min-Seok Pang, a Temple University associate professor of management information systems and the Milton F. Stauffer Research Fellow of its Fox School of Business, said in a press release. “Fake news is becoming a ‘life-and-death’ matter and eroding trust and respect with each other, which is a backbone of any civilized society.”
The researchers relied on a data set from Weibo, a Chinese social media site that they wrote was ideal for this analytical purpose because it explicitly designates different kinds of rule-breaking content and “actively investigates the posts of fake news reported by other users, along with the evidence provided by the accusers and the author, and discloses the results, which gives us an official way to label fake news.” Additionally, in October 2011, Weibo mandated that all users verify their real-life identities, unlike other social media sites that only verify certain types of users (or in the case of Twitter, only a relative handful of celebrities, politicians, and media personalities).
The authors argued that users posting to social media weigh several categories of benefit (psychological gain like attention from peers, political gain as in the case of propaganda, or financial gain such as driving web traffic) against the potential negative consequences. In the U.S., those consequences might mean facing a defamation lawsuit; in China, where the internet is heavily monitored by state censors, authorities can charge people who spread what is deemed disinformation with a crime punishable by up to seven years in prison.
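To make that calculus concrete, here’s a minimal sketch in Python. All of the numbers (the benefit, the detection probability, the penalty) are hypothetical values of my own, not figures from the paper; the point is only the shape of the trade-off.

```python
# A toy expected-utility model of the trade-off the authors describe.
# Every number here is invented purely for illustration.

def expected_payoff(benefit, detection_prob, penalty):
    """Expected payoff of posting fake news: gain minus expected cost."""
    return benefit - detection_prob * penalty

# An unverified user is hard to tie to an offline identity, so the odds
# of real-world consequences are low; identity verification raises them.
unverified = expected_payoff(benefit=10, detection_prob=0.05, penalty=100)
verified = expected_payoff(benefit=10, detection_prob=0.60, penalty=100)

print(unverified)  # 5.0   -> posting can look "worth it"
print(verified)    # -50.0 -> verification flips the calculus
```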
The authors relied on a sample of about 5,000 Weibo users whom the platform had found to have posted what it considers fake news, restricted to users who registered before the October 2011 change and excluding celebrities and opinion leaders such as politicians. The team then narrowed the sample to 1,335 users for whom they were able to collect more extensive post histories. To control for the potential impact of commercial or political bias by Weibo’s moderators (after all, Weibo is constantly monitored by Chinese authorities for what they consider misinformation but what might otherwise be viewed as political dissent), the researchers randomly selected 200 posts flagged as fake to verify that the determinations were fair. They also wrote that they “excluded all fake news relevant to the government or politics from our sample” to control for the “sensitivity of fake political news.”
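As a rough sketch of that audit step (my illustration with stand-in data, not the authors’ actual pipeline), drawing 200 flagged posts at random for manual review might look like this:

```python
import random

# Stand-in data: posts that Weibo's moderators flagged as fake news.
flagged_posts = [{"post_id": i, "label": "fake"} for i in range(5000)]

random.seed(42)  # make the audit draw reproducible
audit_sample = random.sample(flagged_posts, 200)

# Each sampled post would then be re-checked by hand against Weibo's
# definition of fake news before trusting the platform's labels.
print(len(audit_sample))  # 200
```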
They found that, outside of a few instances in which the misinformation in question concerned unverifiable personal information, the posts did in fact contain “fake news” as defined by Weibo: false claims, fabricated details, discrepancies from the source material, exaggerations, or information that was outdated, incomplete, or taken out of context. The researchers also conducted a follow-up survey of Weibo and Twitter users in the U.S. to check some of their prior assumptions, such as users’ views on different verification methods, or whether they thought verification badges added credibility to posts. The results largely supported their assumptions and hypotheses. For example, 66% of users said they would be willing to post suspicious news if they had a verification badge that would help them get more engagement, compared with 43% who said they would do so without one.
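For a sense of how large that 66-to-43-point gap is, here’s a back-of-the-envelope two-proportion z-test. This is my illustration, not the paper’s analysis, and the group size of 200 respondents per condition is an assumption:

```python
from math import sqrt

n1 = n2 = 200        # assumed respondents per condition (not from the paper)
p1, p2 = 0.66, 0.43  # reported willingness with / without a badge

# Standard two-proportion z-test using the pooled proportion.
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(f"z = {z:.2f}")  # about 4.6: a gap very unlikely to be noise at this n
```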
So, while this data set might come from a social and political context that is very different from that of sites based in the U.S., the authors argue it’s a decent baseline to make some observations on user behavior that are “not unique to Chinese users.”
While it may “intuitively” make sense that identity verification without a badge would have little impact on a user’s willingness to post false content, the researchers wrote, it actually has a “negative and [statistically] significant” impact on the amount of false information they share, because those users know their online identity can be tied to their offline one. But when a visible verification badge is thrown into the mix, the researchers wrote, it wipes out any sign of those gains:
Our study found that, while identity verification without a verification badge reduces users’ propensity to post fake news, identity verification with a verification badge does not. These mixed effects suggest that the presence of a verification badge for verified accounts significantly weakens the impact of identity verification. More interestingly, we found that when users volunteer to pass identity verification with a verification badge, they post more fake news after verification.
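To see how such an interaction can surface in a regression, here’s a simulation-based sketch. The data is simulated and the specification is a simplification of my own; the paper’s actual econometric model isn’t reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate users: some pass verification, and some verified users also
# display a badge. Effect sizes are invented to mirror the reported
# pattern: verification lowers fake posting, a badge offsets the drop.
rng = np.random.default_rng(0)
n = 4000
verified = rng.integers(0, 2, n)
badge = verified * rng.integers(0, 2, n)  # only verified users hold a badge
fake_posts = 2.0 - 0.8 * verified + 0.9 * badge + rng.normal(0.0, 1.0, n)

df = pd.DataFrame({"fake_posts": fake_posts,
                   "verified": verified,
                   "badge": badge})

# Because holding a badge implies being verified, the badge coefficient
# captures its extra effect on top of verification itself.
model = smf.ols("fake_posts ~ verified + badge", data=df).fit()
print(model.params)  # expect roughly: verified -0.8, badge +0.9
```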
The researchers offered several explanations. For example, users may believe that the boost in perceived credibility a verification badge confers lowers the risk of reputational harm or platform consequences for posting fake content. The authors wrote:
One potential explanation is that when users realize that their verified status will be disclosed by a badge after verification, the increased benefit of posting fake news (given by the badge as a signal of credibility) may incentivize some users to actively verify their identities and take more risks to post fake news.
The paper refers to this as a “perverse incentive to post fake news”—the verification system effectively giving a blank check to users intent on gaming the system for personal or financial gain.
Anyone who hasn’t fallen through an internet rabbit hole and out of observable reality should recognize the relevance of these findings: see the proliferation of verified Twitter bullshit artists stateside. The German Marshall Fund of the United States, for example, released a study in February 2021 finding that shares of content from producers and manipulators of falsified information on verified Twitter accounts had tripled and doubled, respectively, since 2018. The far-right site Gateway Pundit had received more shares from verified Twitter users than the Washington Post. That doesn’t necessarily mean that everyone sharing a link was endorsing its content, but that’s a whole other conversation.
The study’s findings also align with previous research generally indicating that the spread of hoaxes and falsified content on social media sites often isn’t an organic phenomenon, but the work of a relatively small and extremely determined group of individuals pushing a specific viewpoint (such as coronavirus vaccine disinformation or voter fraud conspiracy theories). Earlier this year, the Oxford Internet Institute released a report finding that at least 81 countries were using “social media to spread computational propaganda and disinformation about politics,” with evidence that political or government actors in at least 48 of those countries had enlisted private companies to act as “cyber troops.” Perpetrators included both sides of the Libyan civil war, a MAGA group called Turning Point USA dedicated to convincing older Donald Trump supporters that GOP policies are popular with college students, and the failed presidential campaign of billionaire Michael Bloomberg. Tactics ranged from managing armies of bots and professionally producing manipulated media to data-driven targeted advertising of misinformation and the harassment or doxxing of activists and journalists.
“When you’re verified, your posts carry more weight, and it’s more damaging when you share fake news,” Pang, the Temple University professor, said in the news release. “While we did not investigate this specifically, it seems like some individuals are using the verification process to game the system. [Social media platforms] have to enforce it more strongly. They have to be open to the possibility that a user has gamed the system and they have to prevent workarounds. We think that is one of the strongest takeaways from this research.”