Tyranny and totalitarianism begin with the control of information. How does this control occur in the modern world?
This investigation explores “content moderation,” better known as shadow-banning. We examine how shadow-banning occurs, expose precisely who develops these techniques, and reveal the economic motives behind them.
Shadow-banning originates from certain left-leaning think tanks that refer to themselves as “Counter Violent Extremists” (CVEs). Many CVE groups have developed content moderation techniques, but this article will focus on Jigsaw, a special CVE owned by Google. With the data advantage of the Google search engine, preferential treatment on the YouTube platform, and support from much of the CVE community, Jigsaw is a formidable foe to freedom of information, privacy, and social networking. Jigsaw also produces unscientific reports of hate crimes, far-right extremism, and violent white supremacy in order to market their programs.
It is crucial to point out that the tyranny we face in the modern world (of which this report catalogs only the initial stages) relies on certain justifications that are untrue, such as claims of systemic racism in the West, of rampant hate, and of far-right extremism and terrorism. I have analyzed every relevant dataset on these topics and published the results. It is critical that citizens understand the truth of these topics and are capable of conveying that knowledge.
- For a review of Jigsaw’s (unscientific) disinformation campaign, click here.
- For a proper debunking of hate crime claims, click here.
- For a proper debunking of far-right extremism/terrorism claims, click here.
Now for the origin of shadow-banning and a peculiar relationship between Google and the New York Times.
Shadow-Banning from the Source
The 1st Program
Jigsaw partnered with the New York Times on “content moderation” for the newspaper’s social media pages and to test Jigsaw’s first shadow-ban program, called Perspective API. At that time, the program simply deleted or hid comments that clients (such as the New York Times) found disagreeable. A client could use the text-based AI to highlight certain comments for convenient review, rather than browse through all comments to find “toxic” ones. Alternatively, the client could allow the program to delete or hide comments automatically, with no human involvement at all. In that case, many comments would never be viewable to other users, and some users might never realize that their comments had been hidden. This provides obscurity to the client and reduces the likelihood of user scrutiny. As this machine-learning program (text-based and speech-based AI) improved, it expanded beyond comments to include images and video. A group that wants to implement content moderation on its platform or page can use Perspective API to sift through comments before they are published on the site, or have the program listen to video to determine whether it needs to be taken down.
To summarize, Perspective API allowed platforms or specific pages to hide or delete comments/content, and even do so before those comments are published online (such as a YouTube comment section). If content is hidden, it would not acquire more likes or comments or any other form of feedback, because other users could not see the hidden content. This program works without informing the users whose content is hidden or deleted.
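The flow described above can be sketched in code. The request and response shapes below follow Perspective API’s published v1alpha1 interface; the threshold value, the helper names, and the publish/hide split are illustrative assumptions for this sketch, not Jigsaw’s actual pipeline.

```python
# Perspective's documented scoring endpoint (an API key would be appended).
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(comment_text):
    """Build the JSON body Perspective expects: the comment text plus the
    attributes (here, TOXICITY) the client wants scored."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response):
    """Pull the summary TOXICITY score (0.0 to 1.0) from a Perspective
    response body."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def moderate(scored_comments, threshold=0.8):
    """Split (text, score) pairs into comments to publish and comments to
    hide. Hidden comments are simply never shown to other users, and the
    commenter is not notified -- the 'shadow' aspect described above."""
    published, hidden = [], []
    for text, score in scored_comments:
        (hidden if score >= threshold else published).append(text)
    return published, hidden
```

A client would POST the body from `build_analyze_request` to `ANALYZE_URL`, feed the scores returned by `extract_toxicity` into `moderate`, and display only the first list, leaving the hidden commenters none the wiser.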
Surprisingly, marketing videos of Perspective API exist, though some have been deleted recently. Here is a link that no longer works. I found it as a “case study” justifying the use of Perspective API, with the New York Times as the client. This is not the first time that I have followed a suspicious New York Times story only to find that the news agency had deleted articles or pages. For a link to a marketing video that works (for now), click here.
The 2nd Program
Jigsaw has another tool called Moderator, which was also open-sourced (at least in its earlier days), so denying its existence would have that hurdle to overcome. Oddly enough, Jigsaw also developed Moderator in partnership with the New York Times. This is the second documented case of the New York Times’ social media pages serving as testing grounds for shadow-ban software.
Moderator is a text-based AI “that leverages Perspective to prioritize comments.” Not only is the CVE deleting comments with Perspective API; it is also prioritizing certain comments with Moderator. It remains unclear whether Moderator bypasses a platform’s base algorithms to prioritize comments, or whether machine-generated likes are added to chosen comments to boost them to the top of a page or comment section.
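One plausible reading of “prioritize” is a simple reordering of a client’s own comment list by score. The sketch below illustrates that reading; the function name and the use of a Perspective-style toxicity score are assumptions for illustration, not Moderator’s confirmed mechanism.

```python
def prioritize(scored_comments):
    """Order (text, toxicity_score) pairs so the lowest-scoring comments
    surface first. This reorders only the client's own comment list; no
    platform ranking algorithm is bypassed and no artificial likes are
    added -- it is just one plausible mechanism for the preferential
    treatment described above."""
    return [text for text, score in
            sorted(scored_comments, key=lambda pair: pair[1])]
```

Under this reading, the effect on readers is the same either way: chosen comments rise to the top while disfavored ones sink out of view.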
Technically, some components of content moderation (shadow-banning) are a form of censorship: Perspective API, for instance, suppresses opinions outright. Moderator is more difficult to classify, since it works by giving preferential treatment to some opinions rather than deleting opposing views. Nonetheless, the two together produce an environment in which free speech cannot exist.
The deletion and suppression of comments, along with the preferential treatment that other comments receive, is irrefutably a result of Jigsaw and its programs. But how do we know whether Jigsaw uses these programs for the greater good? Is the CVE truly so far-left that it suppresses innocuous opinions?
In other words, are people being suppressed for having incorrect opinions?
Jigsaw’s Justifications for Shadow-Banning
Jigsaw’s disinformation projects use “data” from the Atlantic Council’s Digital Forensic Research Lab (DFRLab). The DFRLab collects data based on definitions of disinformation from several groups, including Facebook. Contrary to the Jigsaw narrative on far-right extremism in North America, the DFRLab ranks Russia and Iran as the top sources of disinformation campaigns. The lab’s findings do not necessarily support the specific campaigns that Jigsaw wages, but as I found with dozens of Jigsaw’s citations, many were misinterpreted or misused entirely.
The DFRLab released two articles on the Capitol riots in which far-right extremism was characterized as highly networked and reaching millions of “sympathizers” – a claim I have thoroughly debunked. This claim sets the stage neatly for the CVE community to tackle the problem with its software.
The DFRLab articles examined institutional outlets like Twitter, which were implied to be safer than the radical and dangerous “Parler, Gab, MeWe, Zello, and Telegram.” How convenient for Google, the owner of Jigsaw, which wants total network control. As with its narrative about white supremacy, Jigsaw’s narrative about disinformation leads the CVE to the conclusion that the world needs more network control from Google and other dominant platforms, and more authoritarian methods from the CVE community. These false narratives neatly justify dominant tech platforms expanding their networks and implementing information-control methods.
“The migration reiterates that the challenge of online extremism is not limited to any one platform but rather an entire, largely unregulated ecosystem with very few barriers to engage or disseminate content.” This quote from the DFRLab highlights my claim about Jigsaw’s and the CVE’s conclusions; note their use of the term “unregulated.”
Furthermore, the DFRLab relies on the appeal-to-authority fallacy without releasing data: “The team at the Atlantic Council’s Digital Forensic Research Lab has conducted exhaustive research.” This statement is the only proof the lab offers for its claims, and it would appear that a single appeal to authority is all Jigsaw requires to implement shadow-ban methods on millions. YouTube commits the same fallacy when it defends its news bar by characterizing those pages as “authoritative” sources. It is important to point out that users cannot choose which pages appear in the YouTube news bar. The sources are chosen for them and implied to be credible by logical fallacy alone.
If it is not obvious already, the CVE community is using a small minority of sick, solitary individuals (a few thousand annually for hate crimes in the US, and about a hundred or fewer annually for terrorism globally) to characterize everyone who dissents. Jigsaw uses a mere 35 individuals to justify the CVE’s methods, and no empirical evidence is produced or cited. It is carefully planned wording to speak of “millions of sympathizers” when discussing these issues. The CVEs must rely on storytelling, because data, quantitative analysis, and statistics only disprove their claims about far-right extremism and white supremacy.
Google and Jigsaw are not talking about protecting the world from extremists. They are attempting to justify extreme control.
The ARKA Journal. “A Hidden War On Free Speech: Google’s Jigsaw.” https://advocate-for-rights-and-knowledge-of-americans-arka.ghost.io/grand-manipulation/
The ARKA Journal. “Progressive Movements: How Unscientific and Harmful Are They?” https://advocate-for-rights-and-knowledge-of-americans-arka.ghost.io/progressive-movements/
FBI. https://ucr.fbi.gov/hate-crime/2019/topic-pages/offenders
FBI. https://ucr.fbi.gov/crime-in-the-u.s/2019/crime-in-the-u.s.-2019/topic-pages/violent-crime
Statistical Atlas. https://statisticalatlas.com/United-States/Race-and-Ethnicity
The ARKA Journal. “The False Agenda.” https://advocate-for-rights-and-knowledge-of-americans-arka.ghost.io/grand-manipulation-myths-of-the-software-developers/
Institute for Economics and Peace. “Global Terrorism Index 2019: Measuring the Impact of Terrorism.” https://www.visionofhumanity.org/wp-content/uploads/2020/11/GTI-2019-web.pdf
Jigsaw partnership with New York Times. https://www.youtube.com/jigsaw
Jigsaw. “Perspective API.” https://www.perspectiveapi.com/how-it-works/
The ARKA Journal. “A Cyber Breach Reveals the NYT’s True Allegiance.” https://advocate-for-rights-and-knowledge-of-americans-arka.ghost.io/frightening-unions-and/
GitHub. “Moderator.” https://github.com/conversationai/conversationai-moderator
Jigsaw. “Toxicity: Case Studies.” https://jigsaw.google.com/the-current/toxicity/case-studies/
Jigsaw. “Disinformation.” https://jigsaw.google.com/the-current/disinformation/dataviz/
DFRLab. “Dichotomies of Disinformation.” GitHub. https://github.com/DFRLab/Dichotomies-of-Disinformation
Atlantic Council’s Digital Forensic Research Lab. “What’s next for the insurrectionists.” https://www.atlanticcouncil.org/content-series/fastthinking/fast-thinking-whats-next-for-the-insurrectionists/
Atlantic Council’s Digital Forensic Research Lab. “How the Capitol Riot was Coordinated Online.” https://www.atlanticcouncil.org/content-series/fastthinking/fast-thinking-how-the-capitol-riot-was-coordinated-online/
YouTube. “Greater Transparency for Users Around.” https://blog.youtube/news-and-events/greater-transparency-for-users-around/