The day after the attempted assassination of Donald Trump, the internet saw a huge spike in calls for violence, and in particular an increase in calls for a modern-day civil war — a chilling reflection of a small group of users who create and amplify messages glorifying mass shooters and perpetrators of targeted violence.
The spike was documented by Moonshot, a company that monitors domestic violent extremism, or DVE, spaces online. A team of six researchers documented 1,599 calls for civil war — a 633% increase from a normal day — across a range of online platforms including 4chan and Reddit, more mainstream platforms like YouTube, and newer sites for far-right discussion geared toward angry and disillusioned young men.
“The uptick in online calls is fairly typical of online discourse in spaces that glorify violence,” Elizabeth Neumann, chief strategy officer for Moonshot, told CBS News. “The fact is, there is an online ecosystem out there working day in, day out to encourage violence of all kinds, from political civil war to mindless school shootings,” she said.
The alarming findings follow a longstanding pattern. Every mass shooting or instance of targeted violence in the last decade has been followed by increased calls for violence online. In most cases, the perpetrators posted about violence prior to carrying out the act in the real world.
In the case of Thomas Matthew Crooks, the 20-year-old who opened fire at the Trump rally in Butler, Pennsylvania, last month, the FBI is still working to uncover his full online footprint, but CBS News has learned that investigators believe he posted troubling content online in the past.
Officials have found “a social media account which is believed to be associated with the shooter, in about the 2019, 2020 timeframe,” FBI Deputy Director Paul Abbate told a joint hearing of the Senate Judiciary and Homeland Security committees last week. “There were over 700 comments,” he said, that “appear to reflect antisemitic and anti-immigration themes, to espouse political violence, and are described as extreme in nature.”
The gunman grazed Trump’s ear, killed volunteer firefighter and father of two Corey Comperatore, and injured two more rally-goers. A Secret Service sniper shot and killed Crooks within seconds of him opening fire.
In the day following the shooting, Moonshot also found 2,051 specific threats or encouragements to violence online — more than double the regular volume of daily threats the group documents as part of its ongoing monitoring of extremist spaces.
Everytown for Gun Safety, an advocacy group combating gun violence, partnered with Moonshot on a new report out today that tracked interest and engagement in online discussions of mass shootings and targeted violence from January through June of last year.
Researchers found that the glorification of mass shootings and targeted violence, and the valorization of the perpetrators, was common in online discussions devoted to such content. They also found that Google searches were the gateway to other platforms that hosted the troubling chats. The report found that actual calls for carrying out violence in the real world came from a smaller subset of individuals online.
“In the aftermath of mass shootings, we often learn that the shooter was radicalized with help from vile content he found on sites like YouTube — and yet the leaders of these platforms consistently refuse to crack down on users who violate their own policies,” said John Feinblatt, president of Everytown for Gun Safety.
More research needs to be done on the connection between violent online rhetoric and violent attacks in the real world, according to Everytown, but for more than a decade, both have been on an upward trend.
Since the dawn of the internet, a small subset of chat rooms has harbored hateful content like Nazi glorification. But over the last 10 years, as the number of mass shootings — particularly school shootings — has increased, the online glorification of school shooters has ballooned.
“We call on these companies to put public safety ahead of traffic numbers, and proactively moderate spaces that are breeding grounds for hate and violence,” Feinblatt said.
Mainstream platforms like Facebook, Instagram and YouTube have devoted significant resources to clearing out such content, with some success, but YouTube in particular has struggled in a game of whack-a-mole to stamp out harmful material.
A spokesperson for YouTube said the company has a policy that explicitly prohibits content that glorifies or promotes violent tragedies, such as school shootings, and said in the first quarter of 2024 the company removed more than 2.1 million videos for violating its policies against harmful or dangerous content.
“YouTube’s Community Guidelines prohibit hate speech, graphic violence and content promoting or glorifying violent acts, and we strictly enforce these policies,” said Javier Hernandez, the YouTube spokesperson.
Fringe extremist platforms that make no attempt to monitor extremist content have been cropping up, including a website devoted to the discussion and glorification of mass shootings.
The perpetrators of the Columbine High School shooting in 1999 are celebrated the most in the online discussions, according to the report by Everytown and Moonshot.
“When I survived the shooting 25 years ago… I could never have imagined social media, let alone what these sites would become,” said Salli Garrigan, a Moms Demand Action volunteer and senior fellow with the Everytown Survivor Network. The shooting claimed the lives of 12 of her classmates and her teacher.
“As a mother now, it’s terrifying to know how easy it is to access violent content, especially when it’s content glorifying one of the worst days of my life,” Garrigan said.
A spokesperson for Reddit, who was not shown the report prior to publication, said the platform’s content policy “strictly prohibits content that encourages, glorifies, incites, or calls for violence or physical harm against an individual or a group of people,” and that this includes “mass killer manifestos and related media, as well as any support or cheering for these attacks.”
The Reddit spokesperson said the platform has dedicated “safety teams” that “rapidly monitor and remove violating content” following “significant external events.”
A spokesperson for 4chan did not respond to CBS News’ request for comment.