By Rick Smith, Founder & CEO, Axon
Axon paused facial recognition efforts in 2019 because the technology was not ready for responsible use. As we resume evaluation, our core commitment remains unchanged: to put ethics before deployment.
In 2019, Axon took an unusual step for a technology company: we said no. Facial recognition was at an inflection point. Agencies wanted it. But concerns over bias, accuracy, and oversight compelled us to halt deployment on body-worn cameras. We were the first in our industry to say: not yet.
We took that step because a tool of this magnitude must meet a higher bar. A technology that can identify people must be accurate, accountable, transparent, and guided by clear oversight. We paused to learn, to listen, and to ask hard questions. But we never stopped working.
Over the past five years, we have continued controlled, lab-based research, consulted experts, engaged communities, and explored how facial recognition might one day be used responsibly in public safety.
Facial recognition holds meaningful potential to strengthen public safety when deployed responsibly. The technology has become significantly more accurate, oversight frameworks are clearer, and law enforcement continues to express a need for tools that help solve crimes efficiently and safely. Many police leaders describe facial recognition as a “game-changer” for investigations—with future potential to aid in locating missing persons, identifying individuals on curated lists of dangerous fugitives, supporting greater officer safety, and improving case outcomes when used with proper oversight (Police1).
Public sentiment generally supports these uses when clear guardrails are in place. In the United States, a majority of adults believe facial recognition could help find missing people (78%) and solve crimes more quickly (74%) (Pew Research Center). In the United Kingdom, recent polling found that 84% of respondents believe police already use facial recognition and two-thirds believe it helps locate missing persons (Information Commissioner’s Office). In Canada, public-perspective research similarly shows strong awareness of the technology’s potential but emphasizes the need for transparent governance and safeguards to ensure equity and accountability (Office of the Privacy Commissioner of Canada).
Taken together, the research across these markets suggests growing understanding and acceptance of facial recognition’s public-safety value—paired with a shared expectation that it be implemented responsibly and transparently.
The reality is that facial recognition is already here. It unlocks our phones, organizes our photos, and scans for threats in airports and stadiums. The question is not whether public safety will encounter the technology—it is how to ensure it delivers better community safety while minimizing mistakes that could undermine trust and avoiding overuse that unnecessarily encroaches on privacy. For Axon, utility and responsibility must move in lockstep: solutions must be accurate enough to meaningfully help public safety, and constrained enough to avoid misuse.
At Axon, we believe it’s time to evaluate facial recognition in the field responsibly and transparently. That’s why we’re taking the next step in our research and development process by beginning a limited evaluation with the Edmonton Police Service in Alberta, Canada. This is not a launch. It’s early-stage field research focused on understanding real-world performance, operational considerations, and the safeguards needed for responsible use.
We chose Edmonton for a reason. The Edmonton Police Service is a long-standing Axon partner with direct experience using facial recognition in its own operations. That familiarity with the technology, combined with a thoughtful approach to testing and public engagement, makes the service well suited to help us learn what works, what doesn’t, and what responsible deployment should require.
We recognize that facial recognition is being adopted in varying degrees around the world. Our approach has been intentional—starting our evaluations in places where use is already expanding, and where there is momentum to explore responsible, transparent implementation. By testing in real-world conditions outside the U.S., we can gather independent insights, strengthen oversight frameworks, and apply those learnings to future evaluations, including within the United States.
We are proceeding carefully, with insights from our Ethics & Equity Advisory Council and Community Impact teams. Success at this stage is not a product—it is proving that the technology can provide real benefits to community safety with safeguards that deliver very low rates of harmful error. Our guiding test is simple: the benefits must outweigh the costs and risks. Our evaluation must demonstrate that accuracy, human review, and transparency can be built in from the start, setting the standard for the governance and trust-building this technology requires.
Our approach will be shaped by these guiding principles. At this stage in the process, these principles represent our foundation—not the finish line. As we learn in the field, we will refine and expand them to reflect real-world insights, always with a focus on equity, ethics, and earning public trust.
Reduce danger and defend privacy: The technology should help identify dangerous individuals and locate missing persons, especially children and others with diminished capacity, while protecting the privacy of the broader public. Each agency will set its own predetermined list of individuals relevant to safety or investigative needs, using clear and transparent processes for adding and retaining entries. Any detection that does not indicate a meaningful resemblance to someone on that list is discarded immediately.
Very low tolerance for misidentification: We tune the system so the chance of identifying the wrong person is extremely low, even if that means it may occasionally miss someone on the agency’s predetermined list. Public safety deserves technology that can be relied on, built to strict accuracy expectations that keep mistakes tightly controlled and avoid harmful outcomes.
Human in the loop: Every resemblance notification will require human review and verification—no automated decisions. Whether from remote experts or trained officers in the field, human oversight will remain central.
Continuous humility in training and use: Regardless of confidence scores, results will always be presented as fallible and subject to error, reminding officers that these resemblance notifications are simply leads for further investigation—not definitive identifications.
Higher bar for immediacy: The closer the technology comes to real-time use in the field, the stricter the accuracy, oversight, and review standards.
Transparency and disclosure: We will ensure that each stage of progress is communicated with clarity and appropriate visibility. Our commitment to transparency means working closely with agencies to determine how best to share information about the evaluation and any future testing in a way that supports community understanding while protecting sensitive data and respecting operational requirements.
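The “very low tolerance for misidentification” principle above comes down to where a similarity threshold is set: raising it suppresses false matches at the cost of occasionally missing a true one. The sketch below is purely illustrative—the function, scores, and target rate are invented for this example and do not describe Axon’s actual system:

```python
# Illustrative sketch (hypothetical): pick the lowest match threshold
# that keeps the false positive rate at or below a target, preferring
# missed matches over wrong identifications.

def pick_threshold(impostor_scores, target_fpr):
    """Return the lowest threshold at which the fraction of impostor
    (known non-match) scores that would still trigger a notification
    stays at or below target_fpr."""
    scores = sorted(impostor_scores, reverse=True)
    n = len(scores)
    # Start just above the top impostor score: zero false positives.
    threshold = scores[0] + 1e-9
    # Walk candidate thresholds downward while the rate stays in bounds.
    for i, s in enumerate(scores):
        fpr = (i + 1) / n  # impostors scoring at or above this candidate
        if fpr > target_fpr:
            break
        threshold = s
    return threshold

# Hypothetical similarity scores for known non-matches:
impostors = [0.91, 0.84, 0.80, 0.76, 0.71, 0.66, 0.60, 0.55, 0.40, 0.30]
print(pick_threshold(impostors, target_fpr=0.10))  # 0.91
```

Tightening the target (say, to 0.05) pushes the threshold above even the highest impostor score, so some genuine matches on the list would go unnotified—the deliberate tradeoff the principle describes.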
Facial recognition is already in the field—used by agencies globally. But if anyone should be exploring it, it should be Axon—with deep roots in public safety, strong ethics governance, and both a reputation and practice of pausing when appropriate. We’re not just participating in the technology’s evolution—we’re working to set the bar.
Some will say this approach is too cautious. Others will say it is too fast. We believe the right standard is one where facial recognition proves both useful and responsible—helping solve urgent cases and keeping officers safe while reducing, not amplifying, risk. Deployment will only proceed once we can demonstrate that balance through performance, oversight, and transparency.
We still believe what we said in 2019: facial recognition must be held to a higher standard. Today, we are taking the next step toward defining what that standard should be—through rigorous research and transparent evaluation, with public trust at the center.