As technology advances in public protection, data analysis, and artificial intelligence, the United States has entered a new age of security characterized by a constant (and often controversial) balancing act between policing tactics and privacy. Few issues have drawn as much scrutiny and debate as the implications of predictive policing measures and surveillance programs. Predictive policing uses advanced algorithms and other technology to analyze current and historical data to predict and prevent crime. At first glance, many would view these advances as evidence of technology being leveraged for justice. However, there are serious concerns about the effects predictive policing has had, and will have, on the future of security and privacy in America.
The principal arguments for implementing predictive policing measures are its potential to prevent crimes and to allocate law enforcement resources more efficiently. These potential advantages carry secondary benefits: rather than spreading police forces thinly across large areas, departments can concentrate personnel in high-crime areas identified through predictive data analytics, saving both time and money. Concentrating forces in this way can also reduce response times and deter crime where it is most likely to occur. Finally, methods that draw on historical crime data and criminal records may allow police to identify repeat offenders and recurring crime locations, potentially leading to better crime prevention and higher conviction rates.
The claimed advantages of predictive policing are highly controversial, raising questions of bias, constitutionality, and historical discrimination by police forces in many American cities. Opponents of predictive policing measures argue that the historical data used to analyze crime rates and locations may be tainted by bias against individuals living in disadvantaged areas. Allocating additional police resources to those areas produces more arrests there, which in turn feeds back into the data, reinforcing the original bias in a feedback loop. A related issue is the opacity of the underlying algorithms, which are typically developed by private companies under contract with local governments. These black-box algorithms may contain built-in biases of their own, independent of the historical data, conceivably leading to unequal deployment of police forces. Beyond the data and the algorithms lies a more fundamental concern: the loss of privacy inherent in predictive policing. Because predictive techniques seek to prevent crimes rather than respond to them after the fact, massive amounts of data must be pulled from archives and actively collected from surveillance footage, internet activity, and other forms of monitoring. Such collection naturally raises concerns about privacy and the Fourth Amendment, which protects against "unreasonable searches and seizures" and requires "probable cause" before warrants may be issued and arrests made. Recent history, including the National Security Agency's warrantless "Stellar Wind" surveillance program, has demonstrated the government's willingness to pursue domestic surveillance at great scale. Conduct of this kind threatens individuals' safety and privacy and may chill their behavior out of fear of excessive government surveillance.
Some American cities, including Los Angeles, New York, and Chicago, have implemented predictive policing initiatives with varying degrees of success; many have been terminated altogether over concerns about bias and legality. Research on the efficacy of predictive data analysis has generally found that predictive policing produces only modest reductions in crime rates, with the caveats that legal, social, and ethical concerns must be addressed and that law enforcement departments risk overreliance on these algorithms. New York City has also begun piloting an autonomous police robot in its Times Square subway station. This more overt application of artificial intelligence to public safety has drawn additional concerns over facial recognition technology and its potential use in the city's predictive policing tactics.
If predictive policing is to continue, at least in some major United States population centers, strict guidelines must govern its implementation and supervision. Policymakers and senior police authorities should craft legislation along the lines of the European Union's Artificial Intelligence Act, which bans or tightly restricts many applications of facial recognition software and predictive policing tactics. Such policies, backed by adequate oversight and regular audits, can help police and security agencies maintain a balance between public safety and personal privacy. As the federal government begins taking steps toward regulating artificial intelligence, significant consideration should be given to promoting its legal, ethical, and constitutional use in predictive policing and public security.
Patrick Darcy, Fellow | Pdarcy@econsultsolutions.com
Patrick Darcy is a fellow supporting ESI's thought leadership initiative, the ESI Center for the Future of Cities. He is a recent graduate of Temple University, where he earned a BBA in Economics, and is currently working toward an MS in Financial Analysis with an anticipated graduation date of 2024.
Samriddhi Khare, Fellow | skhare@econsultsolutions.com
Samriddhi Khare is a fellow supporting ESI's thought leadership initiative, the ESI Center for the Future of Cities. She currently attends the University of Pennsylvania, where she will receive her Master's in City Planning, with concentrations in smart cities and technology, in 2024.