Pitt Task Force calls for transparency and public participation when government agencies use algorithms

According to the Pittsburgh Task Force on Public Algorithms, government agencies that use algorithms should be more transparent about how these systems work and should give the public a say in how they are used.

Over the past two years, a task force composed of academic experts from the University of Pittsburgh and government advisers from Pittsburgh and Allegheny County studied how agencies in the region are using automated decision-making tools. The Pittsburgh Task Force on Public Algorithms released its first report on Wednesday. The group also made recommendations to address biases in these systems.

“There is a growing realization that algorithms can – and in some cases have – extraordinary power to shape our lives, both in the private and public sectors. We believe this is a critical time for governments in our region to act,” reads the report.

Proponents of public algorithmic systems argue that they make data processing faster and more efficient by removing the human variable. But the task force found that algorithms can also reflect existing human biases and even amplify them.

The study examined systems used by Allegheny County's child welfare office and courts, as well as a pilot project by the Pittsburgh police. These algorithms were used to decide whether allegations of child abuse should be investigated, whether defendants should be detained before trial, and where police should patrol.

The task force found that transparency about how these systems were developed and how they are used varied greatly from agency to agency. A system used to screen allegations of child abuse in Allegheny County was found to have had more public input and to be evaluated more frequently than a program used by the Pittsburgh Police Department to predict crime hot spots.

“Such opacity appears to be far too common across the country, particularly in law enforcement applications of algorithmic systems,” the report argues.

The task force studied a pilot program launched by the Pittsburgh police and Carnegie Mellon University in 2017, designed to predict where crime might occur. Police would then target patrols in these so-called “hot spots.”

The system relied on crime reports and 911 calls from 2011 to 2016 to make predictions, which could reflect longstanding racial and economic disparities in Pittsburgh, the report said.

“We can actually exacerbate inequality and that’s the problem,” said David Hickton, founding director of Pitt’s Institute for Cyber Law, Policy and Security.

The predictive policing pilot was suspended in 2020 amid concerns about racial bias inherent in the system. The task force raised similar concerns about facial recognition software, which has been found to work less effectively on people with darker skin; the technology’s use led to the wrongful arrest of a Black man in Michigan in 2020.

“If we’re trying to fix over-surveillance, using these algorithms would defeat that,” Hickton said.

The Pittsburgh City Council passed a measure in 2020 that would guide how facial recognition could be used by police in the future. But the task force recommends against using the technology at all for the foreseeable future due to the deep racial and gender disparities associated with biometric systems.

The task force also raised privacy concerns about facial recognition software drawing on footage from Allegheny County’s network of surveillance cameras.

“Even if the accuracy issues eventually improve, [the systems] could lead to invasive surveillance that would infringe on privacy,” the report noted.

The task force makes seven recommendations and suggests several best practices for agencies using algorithms to make decisions. Among the recommendations is more public commentary and education on how algorithms are used to deliver government services and make criminal justice decisions.

A spokesperson for the Allegheny County Department of Social Services said Wednesday that developing algorithms in the public eye is a priority for the department.

“DHS Director Erin Dalton is pleased to see the task force highlight the need for transparency and community engagement in these processes,” the spokesperson said. “We’ve long recognized that the public needs to understand how their government makes critical decisions and have a proven track record to ensure we develop these tools in public view.”

While the report recommends best practices and improvements, the task force acknowledges that they are no magic bullet for systemic biases.

“We should not expect perfection from our government algorithms,” the report asserts. “But we should expect agencies to be able to demonstrate that algorithmic systems produce equal or better results than human processes, and there must be a way for the public to question and challenge these systems.”

The report calls for external reviews of systems used in high-stakes settings such as predictive policing and sentencing; for agencies to release information about their algorithmic systems; for public notice of new contracts that could introduce algorithmic systems; and for public involvement when systems are developed or modified.

“We don’t have to accept the false choice between technological progress and civil and constitutional rights,” Hickton said. “People of good will can find ways to balance freedom and security in the digital age, leveraging technological innovation in fair and transparent ways.”

The Pittsburgh Task Force on Public Algorithms is hosting community meetings to discuss the report’s findings. More information about the meetings and the full report is available from the task force.

Ashley C. Reynolds