The National Science Foundation’s Directorate for Technology, Innovation, and Partnerships (TIP) is helping tech developers build artificial intelligence programs that suppress digital speech by starving online companies of ad revenue and isolating them from the financial system.
As part of the Small Business Innovation Research (SBIR) program’s second phase, the Massachusetts-based Automated Controversy Detection, Inc. (AuCoDe) received just over $940,000 for a project titled “A Controversy Detection Signal for Finance.” The company received $225,000 during the first phase of the program for the same project, for a total of just under $1.2 million. AuCoDe received this money over a span of four years, from 2018 to 2022.
According to LinkedIn, AuCoDe is an “NSF backed company that aims to make online communication more productive and less dangerous.” Its now-defunct website states that AuCoDe “use[d] state-of-the-art machine learning algorithms to stop the spread of misinformation online.”
The company developed artificial intelligence programs to identify “opposing sentiment,” “misinformation and disinformation,” “fairness and bias issues,” and “bot activity and its correlation with disinformation campaigns.” It used similar methods to “gain insight into sentiment and beliefs.”
Along these lines, the NSF-funded project’s goal was to develop technology that can “automatically detect controversy and disinformation, providing a means for financial institutions to reduce risk exposure” amid the increase of “public attention and political concern” being paid to disinformation.
Second-phase SBIR grant money funded the “development of novel algorithms that automatically detect controversy in social media, news, and other outlets.” AuCoDe used this money to attempt the creation of “artificial intelligence and machine learning” programs that combat “the growing noise of controversy, mis- and dis-information, and toxic speech.”
According to the grant’s project outcomes report, AuCoDe developed several such “technologies.” The company created “Squint,” a controversy detection dashboard, and “Squabble,” a proprietary controversy detection model.
Squint and Squabble “enable users to learn the controversy and toxicity levels of social media content, together with the stance score of an individual or company.” AuCoDe also created a free Chrome extension called “DETOXIFY” that enables users to blacklist and blur topics from their social media feeds.
Squint and Squabble are unavailable for public use.
AuCoDe also used this grant money to launch a YouTube channel where company members discuss “current controversies.” The channel boasts three total subscribers, and the most recent of its nine videos was uploaded eight months ago.
A paper produced as a result of this grant, co-authored by AuCoDe staff members Shiri Dori-Hacohen, Keen Sung, Jengyu Chou, and Julian Lustig-Gonzalez, detailed how “detecting information disorders and deploying novel, real-world content moderation tools is crucial in promoting empathy in social networks” like Parler and Reddit.
A supplemental video provided by the authors discussed the “cost of disinformation” both before and after Covid — partially AI-generated results “conservatively” estimated to be upward of $230 billion — and relied upon a report from the Global Disinformation Index to substantiate that brands like Amazon, Petco, and UPS “inadvertently funded disinformation stories leading up to the 2020 election.”
The Global Disinformation Index, of course, is a formerly State Department-backed British organization that provided advertising companies with blacklists designed to starve companies accused of spreading disinformation of ad revenue.
AuCoDe’s research was aimed at helping the federal government further this goal through the algorithmic curation of digital speech.
In January 2021, using research gathered from these grants, the company published a piece titled “Misinformation drives calls for action on Parler: preliminary insights into 672k comments from 291k Parler users.” The company said it was “investigating the nature of accounts on alt-tech networks, with an eye toward who is spreading misinformation” and suggested that the platform’s very nature enabled users to circulate and engage with “mis- and dis-information.”
“In conclusion, our first look at our collection of Parler data finds a plethora of misinformation driving a desire for action,” the company wrote. “We also discovered that in addition to highly permissive content moderation, there is a lack of moderation around bots, leaving enormous potential for disinformation campaigns to be carried out on these networks — something we will be keenly exploring in the coming weeks.”
The reality is that AuCoDe interfered with Americans’ right to free speech because that speech didn’t align with the left-wing consensus, and it used federal tax dollars to run cover for Big Tech oligarchs. If the company were genuinely concerned with “misinformation,” it would have gone after Facebook, which played a much larger role in hosting Jan. 6 discourse.
A source close to the company told The Federalist that AuCoDe closed in May 2023. More than $1 million in taxpayer money went to a government-backed start-up specifically focused on attacking the First Amendment rights of Americans and sabotaging businesses that deviate from left-wing orthodoxy.