The European Union Agency for Fundamental Rights (FRA) published a report on Thursday (8 December) dissecting how biases develop in algorithms applied to predictive policing and content moderation.
The research concludes by calling on EU policymakers to ensure that these AI applications are tested for biases that could lead to discrimination.
The study comes as the proposal for the Artificial Intelligence Act makes its way through the legislative process, with the European Parliament in particular considering the introduction of a fundamental rights impact assessment for AI systems at high risk of causing harm.
“Well-developed and tested algorithms can bring a lot of improvements. But without appropriate checks, developers and users run a high risk of negatively impacting people’s lives,” said FRA’s director Michael O’Flaherty. (...)