On Jan. 21, the Ministry of Justice announced a call for evidence in relation to an expert review of the use of evidence generated by computer software in criminal proceedings.
Here, we will consider the hugely important area of computer evidence in criminal trials.
The review was triggered by issues with computer evidence in the Post Office Horizon IT scandal. At present, there is a rebuttable common law presumption that a computer producing evidential records was working correctly and that any evidence generated by software is accurate; in other words, evidence produced by a computer is treated as reliable unless there is evidence to the contrary.
In announcing its call for evidence, the government stated that this presumption "proved flawed during the Horizon scandal." Here, hundreds of innocent subpostmasters were wrongly convicted on the basis of evidence from a computer system that was, in fact, not operating correctly.
In response, the government is conducting an expert review regarding removing or changing this presumption. The review is open until April 15 and seeks submissions from those with experience of the criminal justice system, and/or computers and software, as to how computer evidence should be defined and what should fall into the scope of any change to the law.
The question of how to reform the current law to avoid miscarriages of justice such as those seen in the Post Office scandal presents a plethora of problems and places lawmakers in a catch-22 scenario. As we demonstrate, computer evidence is far from infallible and can result in dramatic distortions of evidence.
Although the current law presumes that computer evidence is correct unless proven otherwise, the situation was previously reversed. Under Section 69 of the Police and Criminal Evidence Act 1984, it was necessary for the prosecution to show that a computer was operating properly and not being used improperly before any statement in a document produced by a computer could be admitted in evidence.
The Law Commission recommended in a 1995 consultation paper that Section 69 be repealed without replacement, a recommendation implemented by Section 60 of the Youth Justice and Criminal Evidence Act 1999, which came into force in 2000.
Plainly, it is not practical for the law to revert to the pre-2000 position. Section 69 was repealed on account of the burden it imposed on the prosecution becoming unmanageable in light of increases in the volume of computer evidence, a situation that has only heightened since.
The volume and definition of computer evidence have expanded substantially since Section 69 was introduced in 1984, and its reinstatement today would encompass a much wider range of material, potentially including everything from social media posts and text messages to email chains and accounting software used by commercial banks.
According to the Ministry of Justice, a requirement to prove the accuracy and correct operation of all computer evidence would not "ensure people are better protected from miscarriages of justice," as hoped. Rather, it would create further miscarriages, in the sense that justice delayed is justice denied. A reinstatement of the requirement under Section 69 would remove the possibility of criminal cases progressing efficiently and expeditiously, and place additional strain on already scarce court resources.
The law cannot remain as it stands either. Justice Minister Sarah Sackman KC suggested that the current presumption provides a blanket approach that has had devastating effects on individuals' lives, and that any change to the law must recognize and evolve from these mistakes.
The presumption is inherently flawed, as it is based on the premise that it is possible to assume a computer is working correctly, an assumption that is generally not accepted by those with computer science and software engineering expertise. Computers and software have the propensity to fail, can be inherently defective and are never guaranteed to be error-free — a fact that is readily accepted in other areas of law, such as in software contracts.
The Law Commission took the view that most computer errors are either immediately detectable or result from an error in the data entered into the machine, and it appears to have relied on this in recommending the repeal of Section 69 of the Police and Criminal Evidence Act 1984.
The accuracy of that view has since been challenged, but the weight of its impact remains, as, according to a 2024 article by Roger Bickerstaff at Bird & Bird LLP, the current presumption allows courts to accept computer evidence "without assessing … whether the software includes errors that could have an impact on the evidentiary quality."
The fact that the presumption is rebuttable does not offer adequate protection without legal guidance as to how particular flaws in the evidence may be identified and challenged.
Defendants are required to specifically identify the issue to which a disclosure request is relevant. For those seeking to challenge computer evidence, this is not always possible without the input of an expert where errors or bugs are not apparent or detectable.
The chickens have already come home to roost on this issue. The Ministry of Justice stated:
The miscarriages of justice that occurred in the Post Office cases were due to deliberate failures to properly interrogate and disclose evidence, which prevented postmasters and others from effectively challenging the reliability of the Horizon computer system evidence.
Practical Implications
The call for evidence sets out that the most important issues for consideration in determining this new approach will be the effective functioning of the criminal justice system, clarity on how computer evidence will be defined, and ensuring that any changes can withstand the ongoing changes in technology and increasing use of artificial intelligence.
The call for evidence demonstrates that the government wants to distinguish between wider digital evidence, such as texts and social media posts, and evidence that has been specifically generated by a computer system or software. It specifically confines any changes to the current law to only include "that evidence which is generated by software, including AI and algorithms," citing examples such as accounting programs and automated fraud or plagiarism-detection software. Evidence that is merely captured or recorded by a device, such as digital photographs and videos, mobile phone extraction records, and digital communications such as texts and social media posts, should be excluded.
This appears to propose an approach where the accuracy of evidence generated by computers and software must be proved, but the accuracy of evidence captured or recorded by a device or computer will be assumed. Where and how these battle lines will be drawn remains unclear.
It is perhaps overly simplistic to think that such clear lines in the sand can be drawn. Do social media posts created by bots using AI fall under evidence generated by software or digital communications recorded by a device?
The practical implications of any change in the current law are wide-ranging. There are countless examples of where it may not be possible to draw clear definitions and distinctions between evidence recorded and evidence generated, especially given that in our increasingly digitized world, even the simplest devices are highly sophisticated and will often be capable of providing both.
Clear and strongly defined parameters are certainly desirable on that basis: industrious counsel, permitted even an inch of nuance, will surely take a mile, and the number of arguments conceivable may frustrate the government's desire for any change in the law to aid the expeditious and effective functioning of the criminal justice system.
On the other hand, even where evidence generated by a computer can be easily defined, it is difficult to see how this will work in practice. The call for evidence specifically cites AI as evidence to which any change in the presumption will apply.
However, large language models can produce output that is wildly inaccurate, and their systems can be so complex that even their creators cannot explain how a model reached a particular conclusion. A change in the law requiring proof of the accuracy of evidence generated through AI systems risks rendering increasing amounts of evidence inadmissible: in some scenarios, proof will simply not be possible or practicable, as AI develops at a pace that outstrips human comprehension.
This article was first written for and published by Law360 on 17 February 2025.