What Can LEOs Do When Attacked By Bots?
“The question isn't whether law enforcement agencies will face bot army attacks, it's when.”
While Blake Lively and Justin Baldoni battle over alleged bot armies in Hollywood, law enforcement leaders should be paying close attention. It might seem strange that a Hollywood dispute would have anything to do with law enforcement, but the same digital weapons now targeting celebrities will inevitably turn toward police departments, and most agencies are nowhere near ready for it.
In my last article for the FPI, I took a look at the public battle between Lively and Baldoni as a way to show how digital bots were being utilized as part of a smear campaign to sully the actors' respective reputations. The genesis of the article wasn't to keep LEO-minded readers up to date on the latest celebrity gossip but to shine a light on an issue that will become a real problem for policing agencies everywhere.
The question isn't whether law enforcement agencies will face bot army attacks, it's when. Departments that wait until they're under digital assault to develop response capabilities will find themselves fighting a two-front war: investigating actual crimes while battling an invisible enemy designed to undermine their every move.
I searched for examples of U.S. law enforcement agencies being systematically targeted by bot armies and found very few documented cases, though coordinated digital misinformation campaigns against police have begun to emerge.
Among the most recent examples is a Police One report on coordinated anti-ICE social media campaigns that could lead to an increase in attacks on officers. During the COVID-19 pandemic, social media posts circulated claiming that police were selectively enforcing lockdown measures or using them for racial profiling. Those false narratives spread quickly, further eroding trust in law enforcement among marginalized communities.
These digital threats, while cutting edge, are becoming more pressing by the day. Recently, FPI Fellow Dr. Joseph Lestrange and President Jim Bueermann co-authored an article, also for Police One, looking at how AI-driven misinformation is creating new challenges for law enforcement and why its leaders need to be ready to respond.
The threat is real and growing, and the fact that most departments haven't experienced it yet makes preparation even more critical.
Combating bot armies and intentional digital disinformation will be complicated, but it is also something that forward-thinking agencies will build deeply into how they operate. Let's break the steps departments can take into two parts: the first I will tackle here, and the second in my next submission.
Crisis Management Response Guide: When Bots Attack
Recall the scenario from my previous article: a missing person's case in which a local woman vanished after leaving work, the community rallied for information, and police launched their investigation. Pretty normal stuff, the sort of work LEOs do regularly.
But in this case, the investigation quickly went off the rails. Within a few hours, a coordinated army of bots flooded social media with lies and conspiracy theories that tried to link a police officer with the missing woman. Before long, the deceit threatened the community as bots urged violence and unrest, essentially turning a search for truth into a battle for the department's reputation.
The First Hour: Detection and Response
Bot armies create recognizable patterns: unusual spikes in social media mentions, coordinated messaging with nearly identical timestamps, clusters of newly created accounts posting inflammatory content, and rapid sentiment shifts that don't match genuine community responses.
To catch an attack early, departments should deploy automated monitoring systems that alert designated personnel when these patterns emerge. But remember, human verification remains critical to distinguish between bot campaigns and legitimate community outrage.
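To make those patterns concrete, here is a minimal sketch of the kind of check an automated monitor might run. It assumes you can export recent mentions from whatever monitoring tool you use as simple records (account name, account creation date, time posted); the field names and alert thresholds below are illustrative placeholders, not recommendations, and any flag the script raises still needs analyst review.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical input: a list of dicts exported from a monitoring tool, e.g.
# {"account": "user123", "account_created": datetime(...), "posted_at": datetime(...)}

def flag_bot_patterns(posts, baseline_hourly_mentions=20):
    """Return simple warning flags; a human analyst must still verify each one."""
    flags = []
    now = datetime.utcnow()
    last_hour = [p for p in posts if now - p["posted_at"] <= timedelta(hours=1)]

    # 1. Unusual spike: compare the last hour's volume to a normal baseline.
    if len(last_hour) > 5 * baseline_hourly_mentions:
        flags.append(f"Mention spike: {len(last_hour)} posts in the last hour")

    # 2. Coordinated timing: many posts landing within the same minute.
    minutes = Counter(p["posted_at"].strftime("%Y-%m-%d %H:%M") for p in last_hour)
    for minute, count in minutes.items():
        if count >= 25:
            flags.append(f"{count} posts within the same minute ({minute})")

    # 3. New-account clusters: most posting accounts are less than a week old.
    accounts = {p["account"] for p in last_hour}
    new_accounts = {p["account"] for p in last_hour
                    if now - p["account_created"] <= timedelta(days=7)}
    if accounts and len(new_accounts) / len(accounts) > 0.5:
        flags.append("Majority of posting accounts are less than a week old")

    return flags
```

Commercial monitoring platforms perform versions of these checks automatically; the point of the sketch is simply that each warning sign can be expressed as a measurable threshold, which is what makes automated alerting possible.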
Immediate Actions:
Activate incident commander (PIO, deputy chief, or digital response leader)
Notify key personnel: chief, legal counsel, IT department, patrol supervisors
Stop all non-essential department social media posting
Begin evidence preservation with screenshots and URL documentation (see the sketch after this list)
Issue holding statement acknowledging "coordinated online activity"
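For the URL documentation piece, even a small script can give you timestamped, hash-verified captures to back up analysts' screenshots. The sketch below is illustrative only: it assumes the third-party requests library and a hypothetical bot_evidence folder, and a plain web request will not capture posts that require a login or render through JavaScript, so dedicated forensic capture tools and platform preservation requests should still be your primary record.

```python
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

import requests  # third-party; pip install requests

EVIDENCE_DIR = Path("bot_evidence")          # hypothetical folder name
LOG_FILE = EVIDENCE_DIR / "capture_log.csv"  # hypothetical log file

def preserve_url(url: str, note: str = "") -> None:
    """Fetch a URL, save the raw response, and log its hash and UTC timestamp."""
    EVIDENCE_DIR.mkdir(exist_ok=True)
    captured_at = datetime.now(timezone.utc).isoformat()

    response = requests.get(url, timeout=30)
    content = response.content
    sha256 = hashlib.sha256(content).hexdigest()

    # Save the raw page alongside the log so the hash can be re-verified later.
    (EVIDENCE_DIR / f"{sha256}.html").write_bytes(content)

    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["captured_at_utc", "url", "http_status", "sha256", "note"])
        writer.writerow([captured_at, url, response.status_code, sha256, note])

# Example with a placeholder URL:
# preserve_url("https://example.com/suspect-post", note="Case 2024-0117, bot campaign")
```

The saved copy plus the logged SHA-256 hash lets the department later demonstrate that the captured content has not been altered since collection, which is the point of preserving evidence this early.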
Communication Strategy
Speed matters, but accuracy matters more. Issue quick holding statements while verifying facts, but don't chase every false narrative because that plays into the bot strategy of forcing constant defense.
Deploy authentic voices: Video statements from command staff provide human faces that counter artificial bot communication. Community partners with pre-established relationships become force multipliers, amplifying accurate information through networks bots cannot penetrate.
Control information flow: Release factual corrections with verifiable information, but focus on what you can prove rather than what you can't disprove. Use multiple channels, such as traditional media, email lists, and text alerts, to bypass social media manipulation.
Platform and Legal Response
Report bot accounts immediately using law enforcement channels on major platforms. Coordinate mass reporting with allied accounts and request expedited review of threatening content. Don't expect immediate results, but consistent pressure lays the groundwork for account removals.
Document everything for potential legal action. While bot campaigns often operate in gray areas, some activities become criminal violations, such as direct threats, harassment, fraud, or conspiracy. Work with cybercrime units to assess whether charges are appropriate.
Resource Management and Protection
Bot campaigns create multiple simultaneous demands: increased monitoring, additional patrol presence if threats suggest physical risk, and expanded media relations capacity, all while the investigation of the original case continues. Pre-plan resource allocation strategies that can quickly reassign personnel without abandoning core public safety functions.
Personal attacks on officers and their families require employee assistance programs and safety protocols. Balance transparency with protection by providing information to counter false narratives while safeguarding personnel who become individual targets.
Most critically, maintain investigation integrity. Don't let artificial pressure compromise the very investigations bot campaigns claim to be concerned about.
Surviving the Storm
While some bot campaigns are brief bursts of activity, others can sustain pressure for days or weeks. Departments that maintain consistent, factual communication while avoiding reactive responses often find that community members begin recognizing and rejecting the artificial narratives.
The goal isn't winning every online argument. It's preserving your department's credibility and effectiveness while the digital storm passes. When the bots move on, your response becomes part of your permanent record. Make it one that demonstrates competence, transparency, and genuine commitment to serving your community, even when that community appears to be turning against you online.
In my next article, we'll explore the other half of the equation: how departments can build bot-proof defenses through technology investments, training programs, and community relationships that make them resilient targets before the attacks ever begin.
About the author
George Watson is a communications consultant, freelance writer and an adjunct professor of journalism at the University of Redlands. He spent 18 years as a reporter and editor, with focuses on law enforcement and government administration. His work has taken him all across the country and to war-torn places like Kabul, Afghanistan. To read his full bio click here.