North America

London Ontario police adopt AI governance framework

Local board sets rules without a provincial standard; annual risk report may be partly withheld

Police in London, Ontario, have approved a formal framework governing how artificial intelligence can be used by the city’s police service, a move that underlines how quickly AI is spreading through law enforcement without a matching provincial rulebook.

Global News reports that the London Police Services Board adopted the policy at a Thursday meeting. Board chair Ryan Gauss said AI tools are becoming “increasingly embedded in policing,” while warning that the same tools carry risks around privacy, bias and public confidence. The framework requires “meaningful human oversight” and says any use must be justified and proportionate.

The policy is a local answer to a higher-level gap. According to Global News, there is no province-wide framework in Ontario, leaving police boards to set governance expectations themselves. London’s approach draws on measures adopted by boards in York, Peel and Toronto, suggesting a slow standardisation by imitation rather than by legislation.

The rules also build in reporting, but with an escape hatch. The framework calls for an annual “AI Technology Compliance and Risk Report” to be presented to the board. Parts of that report may be withheld from the public because of “operational, legal or security sensitivities,” with only a public-facing summary likely to be released.

That structure—broad permission, internal oversight, partial public disclosure—mirrors how policing technology often enters service: first as an efficiency tool, then as an accountability problem. AI systems can automate administrative work, help triage information, or support investigations, but they also encourage wider collection and faster use of data because the marginal cost of analysis falls.

For residents, the practical question becomes less whether AI is “allowed” than what kinds of decisions it influences: which incidents get prioritised, which people get flagged, and what data is fed into those models. A policy that insists on human oversight still leaves open how that oversight is documented and audited.

London police will now file an annual AI risk report, and then decide how much of it the public is permitted to read.