Spotify’s platform “revolutionized” content streaming, as Spotify likes to remind us. “We are the world’s most popular streaming subscription service with more than 615 million users,” the media service provider boasts.
Now Sweden-based Spotify is positioning itself as a gatekeeper of election information, rolling out a massive censoring machine targeting what the streaming giant deems as “disinformation” — just in time for a U.S. presidential election that has big-monied leftists uniting to keep former President Donald Trump from returning to the White House.
Sound familiar?
‘Attempts to Manipulate or Interfere’
On May 31, Spotify announced its plan, informing the world “How Spotify is Protecting Election Integrity in 2024.” The plan sounds a lot like what Big Tech — driven by Constitution-devouring federal agencies — did in the 2020 election. Social networks and internet providers shut down content claimed to contain disinformation, misinformation, and the Orwellian-sounding “malinformation,” suppressing content, primarily from conservatives, that criticized and, in many cases, debunked official narratives. As we now know, much of the censored content wasn’t wrong; it was inconvenient truth to the swamp powers bent on interfering in the election.
Spotify appears to be heading down the same path.
Dustee Jenkins, Spotify’s chief public affairs officer, outlined the worldwide scope of the company’s plan to block speech.
“With billions of people from over 50 countries heading to the polls to cast their vote, 2024 is shaping up to be the largest election year in history. Safeguarding our platform during critically important global events is a top priority for our teams, and we’ve spent years developing and refining our approach,” Jenkins says. “Knowing that election safety is top of mind for many of our creators and users, we wanted to provide further insights into our approach at Spotify.”
That approach includes Platform Rules prohibiting content that “attempts to manipulate or interfere with election-related processes.” The rules of engagement, of course, vary from country to country, so Jenkins says Spotify won’t be offering a one-size-fits-all content suppression effort. Teams will conduct individual risk assessments considering various “indicators,” including the service’s “availability and specific product offerings in a country, historical precedents for online and offline harm during voting periods, and emerging geopolitical factors that may increase on-platform risks.”
The key is constant monitoring.
“We monitor these factors on an ongoing basis and use our learnings to inform policy and enforcement guidelines, customize in-product interventions, and determine where we may benefit from additional resourcing and/or third-party inputs,” Jenkins tells us.
Spotify brought a third party into its corporate fold in 2022 when it acquired Kinzen, an Ireland-based content moderation company. The acquisition followed a partnership between the two tech entities that began in 2020. Founded three years earlier, Kinzen sees itself as a public protector from “dangerous misinformation, hateful content, violent content, violent extremism and dangerous organisations,” according to its website.
How does Kinzen determine just what is “dangerous misinformation”? Proprietary analytical tools.
Sound familiar?
“In essence, using their networks and analytical tools, Kinzen would be an outsourced third party for the state and tech platforms to thin the herd on anything they deem unacceptable online,” the Burkean, an online publication “founded and run by university students in Ireland that seeks to promote free speech and fresh ideas,” reported in 2021.
Learning from Twitter?
Kinzen is the “love child” of former RTÉ (Irish public broadcaster) and Twitter executive Mark Little “and a coterie of politically well connected journalists,” reports the Burkean. Little briefly served as director of Twitter’s Ireland operations until 2016, when he left amid a company shakeup. He notes on his Meta Threads account that he “Worked @Twitter in better days. Journalist in olden days.”
Seems he learned a thing or two from his days at the speech-squashing Twitter.
On said Meta Threads account, Little recently reposted a Washington Post article headlined, “After Jan. 6, Twitter banned 70,000 right-wing accounts. Lies Plummeted.”
“The study … suggests that if social media companies want to reduce misinformation, banning habitual spreaders may be more effective than trying to suppress individual posts,” reports the big government-backing Post, quoted by the former Twitter exec.
Little also quotes from the study, published in the journal Nature. Apparently, he sees himself as a freedom fighter in his speech-suppression work. “In today’s polarized political climate, researchers who combat mistruths have come under attack and been labelled as unelected arbiters of truth. But the fight against misinformation is valid, warranted and urgently required,” the study asserts.
Coincidentally, the journal Nature Medicine on March 17, 2020, published a paper that concluded that “SARS-CoV-2 [the virus that causes COVID-19] is not a laboratory construct or a purposefully manipulated virus.” The conclusion was endorsed by Anthony Fauci, then-director of the National Institute of Allergy and Infectious Diseases, as well as then-National Institutes of Health Director Francis Collins, and trumpeted by the accomplice media even though top virologists were dubious. Fauci and his band of complicit scientists were aware of gain-of-function experiments on the virus in labs in Wuhan, China.
“It is clear that the authors of the now infamous … paper — published by Nature Medicine in 2020 — had significant conflicts of interest and that the paper was written to vilify and discredit discussion about the lab leak theory,” the House Select Subcommittee on the Coronavirus Pandemic wrote in April.
The folks at Nature hasten to note that the publication is “editorially independent of Nature Medicine, and Nature’s news team is independent of its journals team,” although both publications are owned by the same company.
‘Grossly Overstepped Its Bounds’
Kinzen took a fair amount of heat for helping Ireland’s Department of Health detect “disinformation” about Covid. It also took plenty of euros for the short-lived effort. Kinzen eventually lost its contract with the government agency amid scrutiny over the services it provided. As investigative news outlet Gript reported in October 2021, Minister for Health Stephen Donnelly confirmed the anti-disinformation firm had been sacked and that the “partnership with Kinzen” was procured “outside of normal tendering processing” because of the “extreme urgency” of the pandemic.
“Whilst we do not know why the partnership ended, we do know it ended on Friday the 8th of October, four days after Gript published a story showing that the [Health and Safety Executive] misinformation programme, which used data provided by Kinzen, had grossly overstepped its bounds, either deliberately or negligently,” the news outlet reported.
The article goes on to note the problems that companies like Kinzen present.
“These companies are being given increasing amounts of influence over public discourse, but they have no obligation to explain to the public what they are doing or to act in a transparent manner, as can be seen in Kinzen’s repeated failures to answer any questions we have asked them about their work,” wrote Gary Kavanagh, a reporter for the publication.
There’s a lot of that going around.
The Federalist, The Daily Wire, and the state of Texas are plaintiffs in a lawsuit alleging that the U.S. State Department is violating the U.S. Constitution through its funding of technology that silences Americans who question government claims. My Federalist colleague Joy Pullmann reports:
Through grants and product development assistance to private entities including the Global Disinformation Index (GDI) and NewsGuard, the lawsuit alleges, the State Department “is actively intervening in the news-media market to render disfavored press outlets unprofitable by funding the infrastructure, development, and marketing and promotion of censorship technology and private censorship enterprises to covertly suppress speech of a segment of the American press.”
The lawsuit aims to stop the government from using its counterterrorism centers in “one of the most audacious, manipulative, secretive, and gravest abuses of power and infringements of First Amendment rights by the federal government in American history.”
The song remains the same at Spotify, which plans to use its Kinzen arm to “reduce risk.”
If the past is any indicator, the risk is coming from the so-called defenders against “disinformation.”