Standard of proof: How emerging technologies affect court evidence

Digital evidence ought to be backed up by other forms of evidence so as to fill the grey areas.


Artificial intelligence (AI) and other emerging technologies will shape the manner in which courts globally receive evidence. Increased dependence on e-commerce means that it is inevitable that digital evidence will find its way into courtrooms.

In 2020, Kenya passed a law recognising digital signatures. In the same year, another law was passed allowing for virtual service of court process, either through email or social media channels.

These developments took into account the practical reality of the time: business transactions increasingly occur digitally, and the law needed to change to accommodate this new reality.

An emerging issue in litigation and other court processes is how to handle digitally generated evidence.

The Evidence Act and other Kenyan laws already have provisions recognising digital evidence, provided that a set of criteria is met before such evidence is admitted.

The general rule is that digital evidence must be authentic, complete, reliable and believable before it can be admitted.

While the Evidence Act allows the admissibility of digital evidence, there is a real challenge regarding the credibility and quality of such evidence, given the emergence of new technologies capable of altering it.

How can a litigant or a court prove that the digital evidence produced is authentic and reliable? Increased use of technologies such as AI, and the growing reliance on digital evidence, raise important questions about the standard of proof, a fundamental concept in the administration of justice.

The standard of proof in criminal cases, such as murder, is "beyond reasonable doubt," while in civil cases, it is "on a balance of probabilities." This means there should be no reasonable doubt that the evidence relied upon is accurate.

When it comes to the admissibility of digital evidence, the challenge lies in ensuring that the evidence meets these standards, given its susceptibility to tampering.
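One safeguard commonly used in digital forensics is a cryptographic hash: a digest recorded when the evidence is collected can later show that the file has not been altered. The following is a minimal sketch in Python using only the standard library; the file name and recorded hash value are hypothetical, for illustration only.

import hashlib

def sha256_of_file(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical value: the digest recorded when the evidence was collected.
RECORDED_HASH = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

current_hash = sha256_of_file("exhibit_video.mp4")  # hypothetical exhibit file
if current_hash == RECORDED_HASH:
    print("Integrity check passed: file matches the digest recorded at collection.")
else:
    print("Integrity check FAILED: file differs from the version collected.")

A matching hash only shows that the file is unchanged since the digest was recorded; it does not by itself prove when, where or how the recording was made, which is why corroboration and proper chain-of-custody records remain essential.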

Take the example where a party relies on video evidence that is in fact a deepfake. How is a court to tell the difference between a real video and a deepfake when it lacks the technical capacity to make such a distinction?

There are technologies that can help detect deepfakes, and courts could turn to these. There are also forensic experts who can establish the authenticity of digital evidence.

These new technologies mean that a trial relying largely on digital evidence may be very expensive. In the event that the authenticity of such evidence is challenged, litigants would have to spend a lot of money proving that their evidence is admissible.

When it comes to AI, the reliability of such evidence depends on the reliability of the data used to train the machine learning system.

Where the underlying data is flawed, the resulting evidence will be flawed and inaccurate. An example is facial recognition evidence: if the training data is biased or incomplete, the identifications it produces will be unreliable.

In a fair trial, evidence ought to be produced by a witness who can then be cross-examined. Issues of admissibility arise with AI-generated evidence.

The law requires the maker of a document to personally produce it and be cross-examined on it. With AI-generated material, it becomes difficult to cross-examine a witness who is not the maker of the document.

The conclusion is that digital evidence ought to be backed up by other forms of evidence so as to fill the grey areas.

Ms Mputhia is the founder of C Mputhia Advocates. Email: [email protected]
