Algorithmic state surveillance: Challenging the notion of agency in human rights

Eleni Kosta First published: 07 July 2020

https://doi.org/10.1111/rego.12331

Abstract

This paper explores the extent to which current interpretations of the notion of agency, as traditionally perceived under human rights law, pose challenges to human rights protection in light of algorithmic surveillance. After examining the notion of agency under the European Convention on Human Rights as a criterion for the admissibility of applications, the paper looks into the safeguards of notification and of redress – crucial safeguards developed by the Court in secret surveillance cases – which serve as examples illustrating their insufficiency in light of algorithmic surveillance. The use of algorithms creates new surveillance methods and challenges fundamental presuppositions about the notion of agency in human rights protection. Focusing on the victim status does not provide a viable solution to problems arising from the use of Artificial Intelligence in state surveillance. The paper thus raises questions for further research, concluding that a new way of thinking about agency for the protection of human rights in the context of algorithmic surveillance is needed in order to offer effective protection to individuals.

6 Conclusions

This paper explored the extent to which current interpretations of the notion of agency, as traditionally perceived under human rights law, pose challenges to human rights protection in light of algorithmic surveillance. It examined the notion of agency under the European Convention on Human Rights as a criterion for the admissibility of applications, elucidating the two admissibility criteria: the entities that can file an application and the victim status. The interpretation of the victim status in secret surveillance cases has been expanded in the case law of the ECtHR in order to accept in abstracto claims under specific criteria.

The safeguards of notification and of redress – crucial safeguards developed by the Court for the protection of human rights, and in particular the right to privacy, in secret surveillance cases – were used as examples to illustrate their insufficiency in light of algorithmic surveillance. The use of algorithms for state surveillance creates new surveillance methods and challenges fundamental presuppositions about the notion of agency in human rights protection. A close analysis of the Court’s approach to the admissibility criteria of Article 34 ECHR in recent ECtHR case law on secret surveillance showed that the focus of the Court’s analysis lies on the victim status. The existence of effective remedies is crucial for granting applicants victim status, whether or not they must demonstrate that they are potentially at risk.

This approach, however, does not provide a viable solution to problems arising from the use of AI in state surveillance, as it assumes that individuals can suspect that they are potentially at risk and argue this before the Court. Even when applicants do not need to show that they are potentially at risk, there would need to be some indication that the secret surveillance measure resulting from the use of AI could be relevant to them. The problems relating to group profiles or dynamic groups discussed in the paper show that in some cases it is impossible to know who is, or could be, a victim of the surveillance measure. Further research is therefore needed to develop an entirely new way of thinking about agency for the protection of human rights in the context of algorithmic surveillance, so as to offer effective protection to individuals.

The full paper is available at: https://onlinelibrary.wiley.com/doi/full/10.1111/rego.12331