While the draft EU AI Act prohibits harmful ‘subliminal techniques’, it doesn’t define the term – we suggest a broader definition that captures problematic manipulation cases without overburdening regulators or companies, write Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding and Rafael A. Calvo.
Juan Pablo Bermúdez is a Research Associate at Imperial College London; Rune Nyrup is an Associate Professor at Aarhus University; Sebastian Deterding is a Chair in Design Engineering at Imperial College London; Rafael A. Calvo is a Chair in Engineering Design at Imperial College London.
If you have ever worried that organisations use AI systems to manipulate you, you are not alone. Many fear that social media feeds, search engines, recommendation systems, or chatbots can affect our emotions, beliefs, or behaviours without our conscious awareness.
The EU’s draft AI Act articulates this concern, mentioning “subliminal techniques” that impair autonomous choice “in ways that people are not consciously aware of, or even if aware not able to control or resist” (Recital 16, EU Council version). Article 5 prohibits systems using subliminal techniques that modify people’s decisions or actions in ways likely to cause significant harm. (...)