Wednesday, 14 December 2022

EU fundamental rights agency warns against biased algorithms


The European Union Agency for Fundamental Rights (FRA) published a report on Thursday (8 December) dissecting how biases develop in algorithms applied to predictive policing and content moderation.

The research concludes by calling on EU policymakers to ensure that these AI applications are tested for biases that could lead to discrimination.

This study comes as the proposal on the Artificial Intelligence Act makes its way through the legislative process, with the European Parliament particularly considering the introduction of a fundamental rights impact assessment for AI systems at high risk of causing harm.

“Well-developed and tested algorithms can bring a lot of improvements. But without appropriate checks, developers and users run a high risk of negatively impacting people’s lives,” said FRA’s director Michael O’Flaherty. (...)

