For years, Stacey Wales tried to brainstorm what she would say at the sentencing of her brother's killer.
"I wanted to yell," Wales told the Star in an interview on Friday. "I would have these thoughts bubble up, while I was driving or in the shower, often of anger or frustration, and just read them into my phone."
In 2021, Wales' brother, Christopher Pelkey, was fatally shot while at a red light in Chandler, Arizona. His killer, Gabriel Horcasitas, first faced a jury in 2023, but the case ended in a mistrial. After a retrial in March, he was found guilty of manslaughter.
When it came time for Wales to put pen to paper, all she could hear was Pelkey's voice. So, she began to write in his words. It worked.
Then, with the help of her husband, who has experience using generative artificial intelligence, Wales set off to create a video of her brother's likeness, reading the statement in his own voice.
The video was the last of 10 statements read out at the May 1 sentencing hearing.
"To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances," Pelkey's facsimile, donning a grey baseball cap, said in court. "In another life, we probably could have been friends."
"I believe in forgiveness, and a God who forgives. I always have and I still do."
Photo caption: A man killed in a road rage incident in 2021 directly addressed his killer in an AI-generated video statement during a sentencing hearing in an Arizona county court earlier this month. Credit: Submitted by Stacey Wales
It wasn't a perfect likeness. The recreation of Pelkey jolts unnaturally throughout the nearly four-minute video. But it seemed to leave a favourable impression on Maricopa County Superior Court Judge Todd Lang, who described it as "genuine."
"I loved that AI," Lang said. "Thank you for that. And as angry as you are, and justifiably angry as the family is, I heard the forgiveness."
Horcasitas received just over 10.5 years' jail time.
The case joins a growing list of U.S. court proceedings in which parties have reached for generative artificial intelligence.
In a high-profile example from 2023, Michael Cohen, a former lawyer for President Trump, claimed he'd unwittingly passed his attorney fake case citations generated by an AI chatbot. More recently, a plaintiff in a New York court tried to employ an AI-generated avatar to argue his case, an attempt that was quickly swatted down by the judge.
For Ryan Fritsch, policy counsel of the Law Commission of Ontario, the rise in use "speaks to the interest and enthusiasm out there for new forms of efficiencies in the criminal justice system."
"There are some considerable promises," Fritsch told the Star on Friday. "But at the same time, concerns should arise if there are not sufficient rules, guardrails or governance models in place."
How is AI used in the Canadian criminal justice system?
As it stands, the use of AI in the criminal justice system is more commonly found in policing, often controversially, with services across the country employing technology such as facial recognition systems and automatic licence plate readers.
In Canadian courts, AI has been less prevalent, though Fritsch says he's starting to see upticks in its use. Just this week, the conduct of an Ontario lawyer was called into question after a judge suspected ChatGPT had been used to craft a factum submitted in civil proceedings. She has since been ordered to attend a hearing with the judge to explain the discrepancies.
Where it's becoming most common, he says, is in cases where people are self-represented.
"Right now, what we're mostly seeing is an increasing number of self- and un-represented people relying on generalist AI tools like ChatGPT to make their case for them," he said. "And the consequence is that they're actually spending more time disavowing the errors than reaping any benefits."
Are there laws on the use of AI in Canadian courts?
There are currently no laws specific to the use of artificial intelligence in the Canadian justice system. In the absence of that framework, whether AI-generated material is permitted into a legal case often falls on the individual judge or justice.
As a result, some individual courts, police services and legal associations have started to come up with policies. Toronto police, for example, were the first service in Canada to introduce a policy governing the use of AI technology, in 2022.
A patchwork of policies, however, can open the court up to unnecessary litigation, says Fritsch, and worsen backlogs and delays.
"Without a framework, there's going to be a lot of struggle for courts, cops and Crowns to interpret how our existing laws, and our civil rights, are going to apply to the use of AI," Fritsch said. "And there's going to be a lot of varying opinions on that."
Amending laws to regulate AI will take time, plus there's the "long lag" problem that court cases come months or years after new technology develops, Fritsch said. "There could be years of misuse in the meantime," he added.
What are the risks?
One of the most significant concerns for Fritsch is whether AI technologies can effectively understand and uphold Canadian standards of law.
"We know that AI is prone to bias," Fritsch said. "So if it's going to be used, we really need to make sure we're interpreting its use through the lens of the Charter of Rights and Freedoms and procedural fairness."
For example, in the U.S., algorithms have long been used to assess risk in bail and release decisions, but Fritsch says they鈥檝e been known to miss the mark.
"What we've seen from a couple of cases in the U.S. is some really, really harsh recommendations about people who are facing first offences, or who are doing time for minor offences."
As a result, the need for human oversight remains, whether through the due diligence of staff or the discretion of a judge.
Are there any potential benefits?
For most, the criminal justice system is unfamiliar, and navigating its nuances can be a daunting task. For older citizens or otherwise vulnerable populations, AI, if used properly and transparently, "could actually increase access and justice for a lot of people," Fritsch said.
The most common case for the use of AI in the public sector is efficiency, something the courts are not known for, says Shion Guha, an assistant professor at the University of Toronto's Faculty of Information.
"A lot of public sector agencies are basically looking towards generative AI as a way to reduce administrative overhead," Guha told the Star Friday. "The idea is that this will increase human efficiency and reduce costs."
Those promises, he says, have not been properly vetted, though.
"There hasn't been any formal, finished research on whether or not this evaluative statement is true."
Could generative AI be allowed to craft victim impact statements in Canadian courts?
In the absence of laws governing AI use, it's hard to say; it would come down to the presiding judge or justice, says Fritsch.
In the Arizona case, he said, the judge likely admitted the video on the basis it served as an expression of the family鈥檚 feelings, not as a statement from Pelkey.
"I think the court, in their generosity, likely admitted it as almost a courtesy, and it might not be given a whole lot of weight."
While Wales wrote the script for her brother's video, Fritsch pointed out that AI could also be used to generate the statements read out by a person's likeness, further complicating the issue.
"AI can be trained on the sum total of all the comments a person may have made on social media or in emails or texts over years, and then used to simulate the person," Fritsch said.
"There's no doubt it would not be admitted for the truth of its contents, because it's all made up, but might it be allowed for, say, compassionate reasons only, and with no bearing on the sentencing?" he asked. "Who knows?"