Wyoming Republican congressional candidate Harriet Hageman blasted university researchers who said they are monitoring her Twitter account for “toxic language” as part of a project they claimed would be published in the Washington Post.
“I’ll tell you what’s ‘toxic’ — trying to freeze free speech with ominous warnings that ‘we’re watching you’ from pointy-headed college professors and the leftist corporate media,” Hageman told Fox News Digital.
Hageman received an email from the NDSU Center for the Study of Digital Society, which was obtained by Fox News Digital.
“Dear Harriet Hageman, We are two independent researchers at North Dakota State University. We are not affiliated with any partisan group in any way. We are writing to let you know we are conducting research on the use of toxic language on Twitter by candidates, specifically how use of such language affects election outcomes. We are monitoring your Twitter account [@HagemanforWY] and will compile your tweets that use toxic language,” the email stated.
“Just before the election, we will write a post on the Monkey Cage blog of the Washington Post that discusses our findings regarding patterns in the use of toxic language,” the letter continued.
It was signed by Dr. Daniel Pemstein, the co-director of the Center for the Study of Digital Society, and Dr. Yunus Orhan. It is unclear which other candidates received the email, though the researchers' message indicated they would be monitoring multiple people.
Hageman didn’t appreciate the warning that her speech is being closely monitored.
“University faculty should encourage vigorous discussion of competing ideas, not try to shut them down before ideas are even expressed. Here you have the unholy triumvirate of academia, the mainstream press, and Big Tech coming together to squelch free speech,” Hageman told Fox News Digital. “This is exactly why people are fighting back against the establishment — they’re sick of being told what they can and can’t think.”
Hageman is expected to easily win the race for Wyoming’s sole House seat in November. She soundly defeated Rep. Liz Cheney, R-Wyo., in their Republican primary race earlier this year. Cheney’s prominent role on the January 6 committee and her vocal opposition to former President Trump doomed her chances.
The Monkey Cage is an independently run blog published by the Post that aims to "connect political scientists and the political conversation by creating a compelling forum, developing publicly focused scholars, and building an informed audience," according to its website.
The Monkey Cage’s Henry Farrell did not commit to running the findings when reached by Fox News Digital.
“A group of scholars from North Dakota State University pitched the Monkey Cage on an article on toxic language based on their research. The Monkey Cage editors indicated provisional interest, but have yet to see or evaluate a formal submission from them — that would involve further review of the validity of the data and methods. The pitch was unsolicited, and the authors are not affiliated with the Monkey Cage or The Washington Post,” Farrell said.
Fox News Digital sent the NDSU Center for the Study of Digital Society a series of questions, including who determines what language is toxic and whether Democrats and Republicans would be equally monitored.
Pemstein said the researchers are “submitting a post to the Monkey Cage like any other researchers” who publish there. He said the researchers used Google’s Perspective API to classify tweets.
“That is, we are using machine learning algorithms that have been trained to predict whether a typical person would find a social media post toxic to score post toxicity. The classifiers were developed and trained by computer scientists as general purpose tools for detecting toxicity in online speech and do not reflect any personal researcher judgment. Essentially, they reflect the average judgment of the regular people who coded a large corpus of text as toxic or not. They provide a probabilistic measure of the toxicity of a given snippet of text. So you feed the API a bit of text and it returns a probability that a typical person would think the text was toxic. In short, a computer algorithm trained on a huge collection of everyday online speech determines what language is toxic,” Pemstein said.
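Pemstein's description matches how Google's publicly documented Perspective API works: a client sends a snippet of text to the `comments:analyze` endpoint and receives back a probability that a typical reader would find it toxic. As an illustration only (the researchers' actual pipeline is not public, and a live call requires a Google API key), a request and its response parsing might be sketched like this:

```python
import json

# Perspective API endpoint; a live request needs ?key=YOUR_API_KEY appended.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def build_request(text: str) -> dict:
    """Build the JSON body for a Perspective API TOXICITY request."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,  # ask the service not to retain the text
    }


def parse_toxicity(response: dict) -> float:
    """Extract the overall toxicity probability (0.0-1.0) from a response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


# A truncated example of the response shape the API returns; the 0.12
# score here is an illustrative value, not real data from the study.
sample_response = {
    "attributeScores": {
        "TOXICITY": {
            "summaryScore": {"value": 0.12, "type": "PROBABILITY"}
        }
    }
}

if __name__ == "__main__":
    body = build_request("Thanks to everyone who came out today!")
    print(json.dumps(body, indent=2))
    print(parse_toxicity(sample_response))
```

The returned value is the probability Pemstein describes: a score near 1.0 means the model predicts most readers would call the text toxic, so no human researcher labels individual tweets.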
He also said Democratic candidates use Twitter “more” than Republicans.
“We have attempted to equally monitor candidates across parties, but, because Democratic party candidates use Twitter more than Republicans, our sample includes more Democrats than Republicans,” Pemstein said.