It was 3:43 p.m. when Dispatch flagged the call: armed subject, male, late 30s, possibly experiencing a psychotic episode, refusing to come out of an abandoned community center on 81st and Mercer. Officer Herman Marquez’s cruiser illuminated the street in neon-blue pulses as it pulled up silently. His partner, Officer Don Briggs, muttered, “Let me guess — we’re gonna send the bot in first?”

Marquez didn’t reply. He was already scanning the Overwatch Video Device (OVD) inside his contact lenses. It showed a topographical map of the building, overlaid with live thermal signatures. The subject was pacing in a cluttered room toward the back, muttering, one hand waving what looked like a knife — maybe a machete.

The department had rolled out the NeuroCalm system only three months earlier — a triage drone suite designed for mental health crisis interventions. Protocol 9. Marquez had spent weeks training with it. Briggs hadn’t.

“Used to be, you talked them down,” Briggs muttered, watching the spider-legged drone unfold from the back of the cruiser. Its matte surface shimmered as it adjusted camouflage tones to match the concrete around it. “Now it’s all gadgets and AI psych evals.”

“We’re not trained for half the stuff these people are going through,” Marquez said, his voice low. “This isn’t about us. It’s about not killing him.”

Briggs scoffed. “You can’t fix crazy with a robot.”

But the drone was already in motion. Inside, the NeuroCalm’s onboard AI analyzed speech patterns, heart rate (inferred from heat and motion sensors), and micro-expressions through the broken windows. It determined, with 87 percent confidence, that the subject was experiencing a paranoid delusion, possibly schizoaffective in nature. He was talking to “the walls,” pleading for them to “stop showing teeth.”

The drone began its calming sequence: soft, slow bioluminescent pulses projected along the walls, accompanied by a synthesized voice modeled after Dr. Evelyn Chu, a psychologist whose tonal patterns had tested best for de-escalation. The AI’s voice was soothing, measured: “You’re safe. No one here is going to hurt you.”

Outside, Briggs paced, his arms folded tightly. “If this goes sideways, it’s our names on the report, not that machine’s.”

Marquez kept watching his OVD. “It doesn’t escalate unless we do.”

The drone, now inside, emitted a localized pheromone mix — oxytocin and lavender analogues — into the air, a non-invasive neurochemical sedative tested in hundreds of simulated cases. It was barely perceptible, but it worked. The man’s pacing slowed. His grip on the machete loosened. The voice in the walls faded — or at least, he was listening more to the drone than to them now.

Outside, a small crowd had gathered. A woman was filming with her glasses. “Are they using that new mood-tech thing?” she asked no one in particular. “My cousin says it violates consent.”

From the side, a protestor barked, “This is medicalizing policing! Send a social worker, not Skynet!”

Back inside, the drone extended a non-lethal retrieval tether — a soft, ballooning loop that tightened around the man’s wrist and gently pulled the weapon away. No shocks. No guns. Just tension and patience.

By 4:07 p.m., the man was sitting on the curb, wrapped in a smart blanket that monitored his vitals. The drone had tagged him for mandatory evaluation at New Chicago General’s Neuro-Behavioral Triage Unit. Marquez uploaded the footage and neurodata logs to Internal Affairs and Civilian Oversight.

Briggs shook his head. “And just like that, we’re babysitters for a robot shrink.”

Marquez didn’t answer. He just watched as the drone folded back into the cruiser. For now, it had done what decades of policing hadn’t — it had ended a dangerous call without anyone dead or cuffed.

But as he looked over at the crowd — some cheering, others furious — Marquez knew the story wasn’t over. The technology worked, but whether the world was ready for it… that was another matter.

Reflective Questions

  • What are the ethical implications of using neurochemical interventions (like scent-based sedation) without explicit consent in policing situations?

  • Does reliance on AI systems to make mental health assessments risk dehumanizing or over-simplifying complex human experiences?

  • How might older or more traditional officers’ resistance to new technologies impact the effectiveness or adoption of future policing tools?

  • In what ways does the use of advanced tech like NeuroCalm shift public perception of law enforcement — toward trust or fear?

  • Should responsibility and liability for outcomes during tech-assisted interventions lie with the human officers, the developers, or the technology itself?

