‘Twitter Files’ Show More Groups Used Hamilton 68’s Bogus Methodology To Sell The Russia Hoax

The government pushed for censorship based on bogus analyses from outsiders claiming accounts were pushing ‘foreign disinformation.’

The federal government and the think tanks it funds asked Twitter to censor tens of thousands of users based on bogus analyses that pegged the accounts as pushing “foreign disinformation,” the latest installment of the “Twitter Files” reveals. 

On Thursday, independent journalist Matt Taibbi revealed that in addition to the fraudulent Hamilton 68 dashboard, at least three other groups pushed faulty “foreign disinformation” models, including the State Department’s Global Engagement Center, the Atlantic Council’s Digital Forensic Research Lab, and the now-disbanded, scandal-plagued group New Knowledge, raising new questions about the breadth of the tangled Censorship Complex web.

The Global Engagement Center

The Global Engagement Center is an interagency center, housed in the State Department, that was tasked in 2016 with leading the federal government’s efforts to counter what it called “disinformation.” While the Global Engagement Center previously made several appearances in the “Twitter Files,” last week’s release of internal Twitter communications exposed a bigger scandal: The Global Engagement Center’s “ecosystem” approach to detecting supposed foreign disinformation was a “laughable” “crock,” according to Twitter’s experts.

To identify supposedly foreign disinformation accounts, the Global Engagement Center looked at hashtags and connections. “If you retweet a news source linked to Russia, you become Russia-linked,” one Twitter executive scoffed, adding: “Does not exactly resonate as a sound research approach.” 

Other emails showed Twitter’s team bemused by the ridiculous rationales the Global Engagement Center offered to support censorship requests. For instance, it equated a high volume of tweets with being a bot, connected involvement in the “yellow vest” movement in France with being a Russia-aligned account, and concluded that pro-China accounts were Russia-linked, even as anti-China sentiments expressed in Italy were also deemed Russia-connected. And the Global Engagement Center saw “a surge in accounts that agreed with Moscow-aligned narratives” as meaning “Moscow controlled.” 

The Global Engagement Center’s accusations of foreign disinformation are “unverified” and “can’t be replicated by either external academics or by Twitter — so they aren’t operating with the greatest of credibility when they make pronouncements about accounts/widespread disinfo,” read another internal Twitter email. 

“Anyone can make unsubstantiated allegations, and if they release their” data, “they will likely get laughed out of the room by real researchers,” is how the Twitter team saw the “evidence” of foreign disinformation peddled by the Global Engagement Center.

Did It Follow Hamilton 68’s Methods?

One person working as a contractor for the Global Engagement Center until June 2017 was J.M. Berger, who helped develop the Hamilton 68 dashboard that purported to track Russian disinformation. Berger stressed to Taibbi, however, that at no point “did GEC have any input or involvement whatsoever with the Hamilton dashboard.” 

Even if the Global Engagement Center wasn’t involved with the Hamilton 68 dashboard, that’s only half the equation. What about Berger’s work with the Global Engagement Center? 

Given that in creating Hamilton 68, the Alliance for Securing Democracy “employed social network analytical techniques largely developed by J.M. Berger and Jonathon Morgan,” and that the dashboard has been outed as a sham, Berger’s work with the State Department raises serious questions. Did Berger share the same flawed methodologies used for the Hamilton 68 project with the Global Engagement Center?

New Knowledge Is the New Hamilton 68

Berger’s work with Morgan on the faulty Hamilton 68 dashboard triggers further questions about a disinformation report prepared by “New Knowledge,” the cybersecurity company Morgan founded in 2015 that also made an appearance in Thursday’s “Twitter Files.”

The New Knowledge team, led by supposed “disinformation” experts — Renee DiResta, the then-director of research at New Knowledge, and Jonathan Albright, an academic out of Columbia University’s Tow Center for Digital Journalism — prepared a 101-page report for the U.S. Senate Intelligence Committee that purported to reveal “the scope of Russia’s efforts during the 2016 election to cause discord among American citizens and sway the election.” 

Morgan claimed New Knowledge “deployed” its “technology in collaboration with Clint Watts, J.M. Berger, and Andrew Weisburd for the Securing Democracy Project’s Hamilton68 dashboard.” This raises the obvious question: Was New Knowledge’s report to the Senate Intelligence Committee another Hamilton 68 disinformation sham?

Twitter’s internal communications published last week also reveal the tech giant questioning the expertise of these players. When asked by a New York Times reporter why Twitter hadn’t hired “independent researchers” like DiResta, Albright, and Morgan, one executive retorted, “The word ‘researcher’ has taken on a very broad meaning.” 

The Atlantic Council’s Digital Forensic Research Lab

Another player in what Twitter executives called the misinformation “cottage industry” was the Atlantic Council. While the Atlantic Council’s involvement in the disinformation business was previously known, the most recent release of the “Twitter Files” now reveals that the group’s Digital Forensic Research Lab, or DFRLab, pushed censorship requests based on bogus research.

In June of 2021, an analyst from the DFRLab wrote to Twitter claiming its researchers suspected “around 40k twitter accounts” of “engaging in inauthentic behavior … and Hindu nationalism more broadly.” It also provided Twitter with a spreadsheet of those accounts. Those 40,000 accounts, the DFRLab said, were suspected of being “paid employees or possibly volunteers” of India’s Bharatiya Janata Party.

“But the list was full of ordinary Americans, many with no connection to India,” Taibbi stressed. And an email response from Twitter’s then-head of trust and safety, Yoel Roth, confirmed he had “spot-checked a number of these accounts, and virtually all appear to be real people.”

While the Atlantic Council now claims it didn’t publish the “former researcher’s” report on the 40,000 accounts, “because we lacked confidence in its findings,” the fact that the Atlantic Council’s lab sent Twitter faulty research raises yet more questions given the group’s relationship with two other players in the Censorship Complex: the Election Integrity Partnership and the Global Disinformation Index.

During the 2020 election, the Election Integrity Partnership, of which the Atlantic Council was one of four members, fed censorship requests of supposed disinformation to Twitter. Did the Atlantic Council provide the Election Integrity Partnership with similarly unsupported claims of foreign disinformation related to the 2020 election? Did the Election Integrity Partnership then pass those censorship requests on to Twitter? And did Twitter respond by blocking the accounts?

The Atlantic Council’s DFRLab also shares a connection to the Global Disinformation Index, which was recently outed for publishing a blacklist for advertisers that targeted conservative outlets. Ben Nimmo, the lab’s founder and a former senior fellow at the Atlantic Council who is now the global lead at Meta, served as an advisory panel member for the Global Disinformation Index. In conducting its “research,” did the Global Disinformation Index incorporate any of the flawed methodologies or techniques used by the DFRLab?

Likewise, the Atlantic Council connects back to the State Department and the Global Engagement Center, with the latter funding the DFRLab. The director at the Atlantic Council, Graham Brookie, however, denied its lab “uses tax money to track Americans, saying its GEC grants have ‘an exclusively international focus.’” 

Of course, cash is fungible, but beyond the government funding of the Atlantic Council’s DFRLab, Americans should be concerned by the breadth of disinformation being spread about disinformation. Now a total of four players in the Censorship Complex — the Atlantic Council’s DFRLab, the Global Engagement Center, New Knowledge, and the Alliance for Securing Democracy’s Hamilton 68 dashboard — are shown to have spread false charges of foreign disinformation. And that number is likely much higher, as Roth made clear in an email released last week: it is not “technologically possible to accurately identify potential Russian fingerprints on Twitter accounts through our public-facing” system. 

Yet the government, directly and through government-funded think tanks, pushed for censorship based on bogus analyses from outsiders claiming accounts were pushing “foreign disinformation.” And so, a cottage industry was launched based on disinformation about disinformation.

