Public decision-making is being handed over to machines at a rapid pace. In light of recent advances in AI technology, public sectors around the world are adopting AI to become more efficient and to make quicker, more consistent, and fairer decisions. Who gets parole, who receives welfare, and who may cross a border is increasingly decided by an AI system.
Yet the race toward AI adoption has not been systematically studied, even though it carries substantial risks. Early findings suggest that algorithms do not make better predictions than conventional regression models, that companies have falsely claimed to use AI in order to win public contracts, and that mistakes made by AI-based fraud-detection systems have already forced governments to resign. Automated decision-making may also be incompatible with the special normative demands of legitimate authority that characterize public decision-making in liberal democracies.
The aim of this six-year project is to build an interdisciplinary research environment that analyzes the proliferation of AI in the public sector, its impact on the decisions being made, and its effects on democracy. Drawing on a wide range of theoretical paradigms, empirical methodologies, and normative analyses, six researchers will, first, provide an account of how public decision-makers use AI in decision-making and of the conditions under which this use is democratically legitimate, and, second, provide scientific support for policies on the proper use of AI in public decision-making.