The Hamilton Spectator

The risk of using more AI in policing

PATRICK G. WATSON AND CHRISTOPHER J. SCHNEIDER

The Toronto Police Services Board (TPSB) recently invited the public to provide feedback on how Toronto Police will eventually use artificial intelligence (AI) technology, despite numerous unresolved issues, notably privacy concerns. The TPSB, like other police agencies, has touted the "promise of improving effectiveness of policing" that AI will allegedly bring.

Canadians should take notice, as police services across the country sometimes model their policies and procedures on those of the Toronto Police (think body-worn cameras). The promises of AI in policing are seductive, but they are often based on beliefs and assumptions rather than evidence. The use of AI will not correct or improve policing; in fact, it may do the opposite.

A large body of scholarly research describes the growing phenomenon of “deskilling” — or how the introduction of technological innovations into workplaces fundamentally changes the relationship between the worker and work.

In the context of policing, we have seen deskilling occur with the introduction of, for example, in-car automated licence plate readers and mobile data terminals, which create perverse incentives for officers to stop and search individuals with prior offence records even when those individuals are doing nothing illegal that would justify a stop. Not only is this fundamentally unjust (arguably a breach of the Charter of Rights and Freedoms), but it also erodes the public-interest skill of policing, in which officers use their training and judgment to detect and intercept criminal activity as it actually occurs.

The TPSB has not seriously considered the risks of these new AI technologies and needs to rethink the rollout of any technology that could deskill police work.

Consider speech-to-text transcription software used to transcribe audio recorded on body cameras, a use of AI that has been reported as "low risk." The issue here is not strictly the error rate of the software, or whether police officers would be trained to understand the significance of that error rate, but rather that transcripts can create a misleading sense of confidence about what is contained in digitally recorded interactions.

This was an issue in the 2016 criminal trial of Michael Slager, a police officer in South Carolina who was accused of murder in the shooting of Walter Scott. Slager's counsel attempted to introduce expert opinion evidence in the form of a transcription of audio from the incident, produced by an audio engineer after manipulating the distorted recording. The judge in the case ruled that the evidence was inadmissible.

How can the public trust police and investigators not to treat such transcripts with undue certainty when they are applied to the often poor-quality recordings from body cameras?

Virtually any researcher who has spent time observing police officers on duty will tell you that policing is a detailed craft in which, as a matter of occupational skill and competence, officers learn to exercise discretion about when to intervene, balancing that judgment against the public interest in peace, order and freedom from state intrusion into individual affairs.

Any effort to automate the core competencies of policing, whether AI detection of motor vehicle infractions or automated scrutiny of video evidence to support criminal investigations, will further alienate police officers from their work and, more importantly, from the public they serve.

PATRICK G. WATSON IS ASSISTANT PROFESSOR OF CRIMINOLOGY AT WILFRID LAURIER UNIVERSITY FOCUSING ON POLICING AND CIVILIAN OVERSIGHT. CHRISTOPHER J. SCHNEIDER IS PROFESSOR OF SOCIOLOGY AT BRANDON UNIVERSITY AND AUTHOR OF “POLICING AND SOCIAL MEDIA: SOCIAL CONTROL IN AN ERA OF NEW MEDIA.”
