In various countries, AI is being explored to improve public welfare, healthcare, and security, and to support green initiatives.

Yet these applications remain largely experimental. In Denmark, for example, political perspectives on these AI experiments are divided. One side stresses that Denmark must seize the opportunities presented by artificial intelligence and advocates proactive engagement. The other takes a more cautious stance, grounded in ethical considerations, and questions whether deploying AI experiments within the welfare system is timely or appropriate.

Significant problems have arisen when automated case processing is accelerated. One notable example is the scandal in the Netherlands, where questionable data and political pressure led to a controversial implementation. The Dutch childcare benefits scandal, known as the 'toeslagenaffaire', involved wrongful accusations of fraud by the Dutch Tax and Customs Administration. From 2005 to 2019, about 26,000 parents were wrongly accused of fraudulent benefit claims and ordered to repay the allowances they had received. This caused severe financial distress, with some families owing tens of thousands of euros. The scandal, uncovered in September 2018, revealed discriminatory practices against parents with a minority background and systemic biases within the administration. After a parliamentary inquiry in January 2021 found violations of fundamental principles of the rule of law, the third Rutte cabinet resigned, marking significant political fallout.

Similarly, in Denmark, the use of AI in welfare services has sparked debates about privacy, fairness, and the balance between aims such as fraud prevention and citizens' rights. With a high level of benefits expenditure, the government has become increasingly focused on detecting and preventing fraud. However, concerns have been raised about the scale of data collection, potential privacy violations, and the fairness of algorithmic decision-making.

Critics argue that Denmark's approach to AI and welfare amounts to systematic surveillance and disproportionately targets welfare recipients. The expansion of data collection and algorithmic profiling has raised questions about human rights and the potential for discrimination. Despite claims of effectiveness, challenges remain in ensuring transparency, accountability, and respect for citizens' rights in the use of AI in welfare services.

Sources:
How Denmark's Welfare State Became a Surveillance Nightmare
The Dutch childcare benefit scandal, institutional racism and algorithms