Negligence and AI’s Human Users

100 Boston University Law Review 1315 (2020)

UCLA School of Law, Public Law Research Paper No. 20-01

Posted: 28 Jan 2020, Last revised: 2 Oct 2020

Andrew D. Selbst

UCLA School of Law

Date Written: March 11, 2019

Abstract

Negligence law is often asked to adapt to new technologies. So it is with artificial intelligence (AI). But AI is different. Drawing on examples in medicine, financial advice, data security, and driving in semi-autonomous vehicles, this Article argues that AI poses serious challenges for negligence law. By inserting a layer of inscrutable, unintuitive, and statistically derived code between a human decisionmaker and the consequences of that decision, AI disrupts our typical understanding of responsibility for choices gone wrong. The Article argues that AI's unique nature introduces four complications into negligence: (1) the unforeseeability of specific errors that AI will make; (2) capacity limitations when humans interact with AI; (3) the introduction of AI-specific software vulnerabilities into decisions not previously mediated by software; and (4) distributional concerns based on AI's statistical nature and potential for bias.

Tort scholars have mostly overlooked these challenges. This is understandable, as they have focused on autonomous robots, especially autonomous vehicles, which can easily kill, maim, or injure people.

Read more at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3350508