The National Science Foundation’s Division of Social and Economic Sciences awarded $196,890 to Mississippi State University to fund the creation of an AI-based database analyzing how people used social media to discuss Covid-19.
The program, titled “RAPID: Analyses of Emotions Expressed in Social Media and Forums During the COVID-19 Pandemic,” included funding from the 2020 Coronavirus Aid, Relief, and Economic Security Act. It was tasked with establishing a “database” containing a “location- and time-linked record of emotions that may be associated with increased vulnerability to virtual threats” and with “answer[ing] fundamental research questions about the linkages of negative emotions experienced during the pandemic with regional variation and socioeconomic status.”
“The data will permit analyses of risks such as sharing of more personal information online, misinformation initiation and spread, relaxed security preferences, and insider threat,” reads the award’s abstract.
The program is meant to “advance science by informing community response and policymaking during pandemics through an analysis and understanding of how emotions are linked to local and regional social and geographic” factors.
“Artificial intelligence and data science techniques” were used to process and analyze the nearly 15 million collected social media posts.
The researchers collected data from “10-15 social media and web-based forums” from December 2019 to December 2020, starting with discourse about the Chinese government’s early handling of Covid.
Posts were gathered from mainstream, albeit biased and predominantly leftwing, platforms like YouTube, Reddit, Tumblr, pre-Musk Twitter, and the visual media hosting site Flickr; the conservative social media sites Parler (prior to its post-Jan. 6 throttling) and Gab; 8kun and 4chan, where fringe debates of any ideology can be held freely; and the leftist safe haven of Mastodon.
According to the grant’s project outcomes report, researchers “found that emotions were associated with events in the pandemic and varied according to associations with social institutions” like churches, hospitals, and schools and were heavily influenced by the “timeline of the pandemic.” Researchers were able to discern when and where negative emotions “cluster[ed].”
Researchers were able to further “identif[y] the influence that misinformation … had on perceptions of vaccinations” and the Covid-19 virus in general, and claimed to have found the “least amount of misinformation on platforms where there was more regulation, like Reddit.” Reddit is notorious for being a platform where leftist groupthink and radicalization are fostered through perverse incentives.
The project outcomes report goes on to claim this research “heightened the importance of examining aspects of social media platforms that may be involved in shaping both the accuracy of information sharing, storytelling, and the types of misinformation and counter-misinformation” available on social media.
“The importance of examining such factors include aspects connected with social institutions that formulate collective attitudes on topics like COVID-19 health policies,” like religious communities and political groups, the report continued.
Researchers also leveraged data from the U.S. Census and analyzed “county-level 2020 presidential voting, public health intervention data,” and “Facebook’s Social Connectedness Index which gauges how connected communities are based on Facebook Friendships.”
By using artificial intelligence to scrutinize publicly available census data, county-level voting patterns, and personal friendships, these researchers were able to build a “comprehensive database that collects, stores, and analyzes content related to fear, anxiety, sadness, and anger” about Covid-19 that could be used by the broader “research community.”
What exactly is the government’s interest in studying how people discussed Covid? More specifically, what does it stand to gain by analyzing how religious, politically involved, and other specific digital communities view the disease?
Mississippi State University did not respond to a request for comment by publication time.
Starting in 2019, the National Science Foundation also provided millions of dollars to the Oakland, California-based nonprofit YR Media to teach “underrepresented” and “underserved” youth how to integrate critical theory with artificial intelligence technologies.