In a significant ruling, the High Court of England and Wales has warned that lawyers could face criminal prosecution for relying on fabricated information generated by AI tools, after notable instances in which such material was introduced in court.
High Court of England Warns Lawyers Against Use of AI-Generated Evidence

British judges urge legal professionals to avoid presenting false evidence produced by artificial intelligence or face serious repercussions.
The warning was issued on Friday at the Royal Courts of Justice in central London, where the court stressed the need for urgent regulatory measures following recent misuse of AI in legal arguments. Victoria Sharp, president of the King's Bench Division, sitting with Justice Jeremy Johnson, cited two cases in which fictitious content was used in legal proceedings, prompting this decisive intervention.
In one case involving a lawsuit against two banks, both the claimant and their lawyer conceded that the AI had produced “inaccurate and fictitious” content, leading to the dismissal of the case last month. In a separate incident, a lawyer failed to clarify the sources of dubious cases referenced in her client’s suit against a local council, resulting in concerns over the validity of her arguments.
Judge Sharp invoked the court's authority to underscore its duty to safeguard procedural integrity and to ensure that lawyers uphold their ethical responsibilities. She voiced profound concern over the damage to the justice system and to public trust if artificial intelligence continues to be misapplied. The judges signaled that legal practitioners who submit misleading AI-generated material could face criminal charges or disqualification from practice, marking a critical moment in the intersection of technology and the law.