regarding the Kremlin’s election interference efforts on social media to all of “state-controlled” media:
https://blog.twitter.com/en_us/topics/company/2019/advertising_policies_on_state_media.html
See the Aug. 29, 2019 BuzzFeed investigation, “This Is How Russian Propaganda Actually Works in the 21st
Century”: https://www.buzzfeednews.com/article/holgerroonemaa/russia-propaganda-baltics-baltnews
See for instance DFRLab’s “In Depth: Iranian Propaganda Network Goes Down,” March 26, 2019,
https://medium.com/dfrlab/takedown-details-of-the-iranian-propaganda-network-d1fad32fdf30
For an examination of how manipulative actors use “pseudoanonymity” to “impersonate marginalized,
underrepresented, and vulnerable groups to either malign, disrupt or exaggerate their cause,” see Friedberg and
Donovan’s piece in the MIT JODS: https://jods.mitpress.mit.edu/pub/2gnso48a
Clint Watts, “Advanced Persistent Manipulators,” Feb. 12, 2019: https://securingdemocracy.gmfus.org/advanced-
persistent-manipulators-part-one-the-threat-to-the-social-media-industry/
For a global inventory of actors organized for social media manipulation, see: Bradshaw, Samantha, and Philip
Howard. “Troops, trolls and troublemakers: A global inventory of organized social media manipulation.” (2017).
See also “False Leaks: A Look at Recent Information Operations Designed To Disseminate Hacked Material,” Camille
François, CYBERWARCON 2018. Video: https://www.youtube.com/watch?v=P8iXN8j4gMk
APT here refers to Advanced Persistent Threat, a term commonly used in the threat intelligence industry to describe
state-sponsored and state-affiliated groups engaged in hacking operations. See:
https://en.wikipedia.org/wiki/Advanced_persistent_threat
Clint Watts, “Advanced Persistent Manipulators,” Feb. 12, 2019: https://securingdemocracy.gmfus.org/advanced-
persistent-manipulators-part-one-the-threat-to-the-social-media-industry/
Guccifer 2.0 is a social media persona that claimed to be the lone hacker behind the 2016 breach of the Democratic
National Committee, and that used this deceptive identity to engage WikiLeaks and the media. The account was in
reality operated by Russian military intelligence: https://en.wikipedia.org/wiki/Guccifer_2.0
See for instance the update from Google’s Kent Walker on action taken against IRIB and broader state-sponsored
activity on Google’s products: https://blog.google/technology/safety-security/update-state-sponsored-activity/
See Facebook’s December 2018 announcement of a takedown in Bangladesh:
https://newsroom.fb.com/news/2018/12/take-down-in-bangladesh/
See for instance reporting by the Associated Press, “Facebook blocks 115 accounts ahead of US midterm elections,”
Nov. 6, 2018, https://www.apnews.com/19aabf8ba7b6466b859f4d0afd9e59be. The AP reports: “Facebook acted after
being tipped off Sunday by U.S. law enforcement officials. Authorities notified the company about recently discovered
online activity ‘they believe may be linked to foreign entities.’”
See Ellen Nakashima’s reporting in the Washington Post, “U.S. Cyber Command operation disrupted Internet access
of Russian troll factory on day of 2018 midterms,” Feb. 26, 2019, https://www.washingtonpost.com/world/national-
security/us-cyber-command-operation-disrupted-internet-access-of-russian-troll-factory-on-day-of-2018-
midterms/2019/02/26/1827fc9e-36d6-11e9-af5b-b51b7ff322e9_story.html
I am borrowing here from a definition my colleagues and I have used to frame detection techniques. See Francois,
Barash, Kelly: https://osf.io/aj9yz/
“How Google Fights Disinformation,” Feb. 2019, available at: https://storage.googleapis.com/gweb-uniblog-publish-
prod/documents/How_Google_Fights_Disinformation.pdf
See “Coordinated Inauthentic Behavior Explained,” https://newsroom.fb.com/news/2018/12/inside-feed-
coordinated-inauthentic-behavior/
A blog post entitled “Removing Bad Actors on Facebook,” from July 2018, seems to be the first public reference to
“coordinated and inauthentic behavior”: https://newsroom.fb.com/news/2018/07/removing-bad-actors-on-facebook/
An example of a product change directly motivated by a platform’s need to tackle distortive behaviors on its products
can be found in the January 2019 YouTube announcement: “To that end, we’ll begin reducing recommendations of
borderline content and content that could misinform users in harmful ways”:
https://youtube.googleblog.com/2019/01/continuing-our-work-to-improve.html
See California’s S.B. 1001 (2018), which restricts the use of undisclosed automated “bot” accounts:
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1001
See for instance former YouTube engineer Guillaume Chaslot’s project regarding algorithmic reinforcement of fringe
and harmful views on YouTube: https://algotransparency.org/methodology.html
For an in-depth discussion of the various issues plaguing the content moderation industry, see Roberts, Sarah T.
Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press, 2019; or Gillespie, Tarleton.
Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press, 2018.
This is a good place for a quick reminder of the differences between misinformation and disinformation.
Dictionary.com, which made “misinformation” the word of the year in 2018, defines it as “false information that is
spread, regardless of whether there is intent to mislead.” It describes disinformation as “deliberately misleading or biased
information; manipulated narrative or facts; propaganda.”
See Pinterest’s help center article on health misinformation: https://help.pinterest.com/en/article/health-misinformation