In secret chats, trolls struggle to get Twitter disinformation campaigns off the ground
In a private “strategy chat” with more than 40 far-right trolls, one user who tried to create a new Twitter account to spread disinformation ahead of Tuesday’s midterm elections described how he had hit an immediate roadblock: Twitter banned him for deliberately giving out the wrong election date.
“Were they really banning people for saying [vote on] November 7? Lol, whoops,” the user, whose name was a racist joke about Native Americans, wrote. “Maybe that’s what got me shadowbanned.”
The remark, seen by NBC News in the closed chat room known as 4chan, which is used for planning and executing misinformation efforts, suggests that the changes Twitter has made in the past two years to avoid a repeat of the 2016 U.S. election may be working. Two years ago, the company did little to police misinformation, allowing a Russian influence campaign and politically motivated trolls to thrive.
But the trolls are also learning from their mistakes and developing new strategies to sidestep Twitter’s rules — sometimes with new technology available on other apps — highlighting the arms race between these groups and social media companies that are developing systems to stop them.
While much of its focus has been on foreign operations, Twitter has ramped up preventative measures against domestic troll networks that organize in private chats to push coordinated disinformation on the platform. On Friday, Twitter revealed it had taken down 10,000 accounts that discouraged voting, most of them posing as Democrats.
A spokesperson for Twitter pointed NBC News to a series of company blog posts from last month describing updates about rules surrounding fake accounts on Twitter, including plans to ban the “use of stock or stolen avatar photos” and the “use of intentionally misleading profile information.”
“As platform manipulation tactics continue to evolve, we are updating and expanding our rules to better reflect how we identify fake accounts, and what types of inauthentic activity violate our guidelines,” Del Harvey, vice president for trust and safety at Twitter, and Yoel Roth, head of site integrity, wrote in a blog post. “We now may remove fake accounts engaged in a variety of emergent, malicious behaviors.”
Nina Jankowicz, a global fellow who specializes in disinformation at the Wilson Center's Kennan Institute, a research center that studies Russia, called Twitter’s automated troll enforcement “the type of proactive behavior we need to see more of” from social media companies.
Both Twitter and Facebook have set up dedicated “war rooms” to fight election-related disinformation and false conspiracy theories in the run-up to the 2018 midterms. Facebook has publicly struggled with the problem in the past week, as multiple reports have surfaced of viral troll accounts spreading conspiracy theories and false or racist memes.
Many of those memes are created or weaponized in small groups, then pushed to Facebook pages or public Twitter accounts to achieve maximum virality.
“That’s where disinformation is thriving now, where there’s no content moderation and no ability to search or see what’s trending,” Jankowicz said. “It’s a real problem.”