Category: Cyber Security | Published: 2025-07-17
Proposed Law Aims to Outlaw Future Crime Predictions
At the centre of the debate is New Clause 30 (NC30), an amendment tabled by Green MP Siân Berry and backed by at least eight others, including Labour’s Clive Lewis and Zarah Sultana. If passed, the clause would explicitly prohibit UK police from using artificial intelligence (AI), automated decision-making (ADM), or profiling techniques to predict whether an individual or group is likely to commit a future offence.
Berry told the House of Commons that such systems are _“inherently flawed”_ and represent _“a fundamental threat to basic rights,”_ including the presumption of innocence. _“Predictive policing, however cleverly sold, always relies on historic police and public data that is itself biased,”_ she argued. _“It reinforces patterns of over-policing and turns communities into suspects, not citizens.”_
What Is Predictive Policing?
Predictive policing refers to the use of data analytics, AI and algorithms to identify patterns that suggest where crimes are likely to occur or which individuals may be at greater risk of offending. It takes two broad forms: place-based systems, which forecast crime in particular geographic locations, and person-based systems, which claim to assess the risk posed by individuals.
Already Piloted or Deployed
These systems have already been piloted or deployed by more than 30 UK police forces. According to a 2025 Amnesty International report, 32 forces were using location-focused tools, while 11 had tested or deployed systems to forecast individual behaviour. The aim, according to police, is to deploy resources more efficiently and prevent crime before it happens.
However, critics argue that the data used to train these systems, such as arrest records, stop-and-search data, and local crime statistics, is historically biased. This, they say, leads to feedback loops where marginalised and heavily policed communities are disproportionately targeted by future interventions.
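The feedback-loop argument can be made concrete with a toy simulation (all numbers are hypothetical and purely illustrative, not drawn from any real force's data or any actual predictive system): if a tool allocates patrols in proportion to recorded crime, and recording rates in turn scale with patrol presence, an initial skew in the records persists indefinitely even when the underlying offending is identical in both areas.

```python
# Toy model of the feedback loop critics describe. Entirely hypothetical
# numbers; this sketches the critics' argument, not any deployed system.

true_crime = [10.0, 10.0]   # two areas with identical underlying offending
recorded = [12.0, 8.0]      # historic records already skewed toward area 0
PATROLS = 10.0              # total patrol capacity to allocate each year

for year in range(5):
    # The "predictive" step: patrols follow recorded crime.
    share0 = recorded[0] / sum(recorded)
    patrols = [PATROLS * share0, PATROLS * (1 - share0)]
    # Recorded crime scales with patrol presence, not true crime alone.
    recorded = [true_crime[i] * 0.1 * patrols[i] for i in range(2)]
    print(f"year {year}: share of patrols in area 0 = {share0:.2f}")
```

Run year after year, area 0 keeps receiving 60% of patrols and keeps generating 50% more recorded crime than area 1, despite both areas having the same true crime rate: the historic bias is locked in rather than corrected.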
Why MPs Are Taking a Stand Now
The renewed push for a legislative ban follows a string of revelations over the past 18 months about the growing use of algorithmic policing in the UK, often without public consultation or oversight. One of the most contentious examples, uncovered by Statewatch in 2025, was the Ministry of Justice’s so-called _“Homicide Prediction Project”_, a system under development to identify individuals at risk of committing murder using sensitive data, including health and domestic abuse records, even in cases where no criminal conviction exists.
Statewatch researcher Sofia Lyall called the initiative _“chilling and dystopian,”_ warning that _“using predictive tools built on data about addiction, mental health and disability amounts to highly intrusive profiling”_ and risks _“coding bias directly into policing practice.”_
The amendment to the Crime and Policing Bill comes as the government continues to expand data-driven law enforcement under new legislation. The Data Use and Access Act (passed earlier this year) permits certain forms of automated decision-making that were previously restricted under the Data Protection Act 2018. More than 30 civil society groups, including Big Brother Watch, Open Rights Group, Inquest and Amnesty, have signed a joint letter condemning the changes and calling for a ban on predictive policing to be included in the new bill.
Bias, Surveillance and Lack of Transparency
At the heart of the pushback is the view that predictive systems do not eliminate human bias, but instead replicate and scale it. As Open Rights Group’s Sara Chitseko explained in a May blog post, _“historical crime data reflects decades of discriminatory policing, particularly targeting poor neighbourhoods and racialised communities.”_
The concern is not just over potential inaccuracies, but the broader impact on civil liberties. Campaigners warn that predictive tools undermine the right to privacy and fuel what they call a _“pre-crime surveillance state,”_ in which individuals can be subjected to policing actions without having committed any crime.
This can include being flagged for increased surveillance, added to risk registers, or subjected to stop-and-search, all based on algorithmic assessments that may be impossible to scrutinise. Data from these tools is often shared across public bodies, meaning individuals can be affected in housing, education, or welfare decisions as a result of hidden profiling.
55 Automated Tools Identified
Researchers at the Public Law Project, which runs the Tracking Automated Government (TAG) register, have documented over 55 automated decision-making tools used across UK government departments, including policing. Many operate without publicly available data protection or equality assessments. Legal Director Ariane Adam said, _“People deserve to know if a decision about their lives is being made by an opaque algorithm - and have a way to challenge it if it’s wrong.”_
How the Crime and Policing Bill Fits In
The Crime and Policing Bill is part of a broader effort by the UK government to modernise policing powers and criminal justice processes. While not specifically focused on predictive technologies, the bill’s scope includes provisions for police data access, surveillance capabilities and crime prevention strategies.
Critics argue that without clear prohibitions, the bill risks giving predictive systems greater legitimacy. _“Predictive policing isn’t just a technical tool - it’s a fundamental shift in the presumption of innocence,”_ said Berry. _“We need the law to say clearly: you cannot be punished for something you haven’t done, just because a computer says you might.”_
A second proposed amendment from Berry seeks to provide safeguards where automated decisions are used in policing. This would include a legal right to request human review, improved transparency over the use of algorithms, and meaningful routes for redress.
What the Police and Government Are Saying
Police forces and government departments have largely defended their use of predictive technologies, arguing that they allow for more proactive policing. For example, the Home Office has supported initiatives such as GRIP, a place-based prediction system used by 20 forces since 2021 to identify high-crime areas.
Proponents claim these tools help reduce violence and make best use of limited resources. However, recent assessments suggest the benefits may be overstated. Amnesty found _“no conclusive evidence”_ that GRIP had reduced crime, while also warning it had _“reinforced racial profiling”_ in the communities it targeted.
The government has not yet formally responded to the proposed amendments. However, officials have previously argued that AI and ADM can be used responsibly with the right oversight. The Department for Science, Innovation and Technology’s 2023 White Paper on AI governance promoted voluntary transparency standards but fell short of recommending statutory controls.
Businesses and Civil Society
If the amendment banning predictive policing passes, it could reshape how AI and automation are used across public services, not just policing. For civil society and legal groups, it would mark a significant win for rights-based governance of AI.
For businesses working in AI, data analytics and security tech, the implications are mixed. Suppliers of predictive systems to police forces may lose a key customer base, while developers of ethical or human-in-the-loop systems could find new demand for tools that meet stricter legal standards.
More broadly, companies operating in sectors such as insurance, HR tech, or public procurement may face growing scrutiny.