BLACK BOX AS A JUSTIFICATION FOR STRICT LIABILITY FOR AI-RELATED DAMAGE
Abstract
Strict liability is increasingly recognised as an appropriate framework for governing high-risk artificial intelligence (AI) systems, particularly those with ‘black-box’ characteristics, where internal operations are opaque and difficult to interpret. The inherent complexity of AI, including strong black-box features and unpredictability post-deployment, challenges the applicability of traditional tort law, which relies on establishing fault or negligence. Strict liability provides a means to hold entities accountable, addressing the difficulties in attributing fault in AI contexts. This work evaluates the merits and drawbacks of strict liability, explores its implications within the general liability regime, and provides concrete examples of AI-related harms that support this approach. The principle of AI neutrality and the persistence of fault-based elements within ostensibly strict liability frameworks like the Product Liability Directive are also examined, underscoring the complexities in regulating AI. Serbian legal doctrines regarding dangerous objects and activities provide courts with flexibility to adjudicate AI-related damages. Judges must comprehend the nuances of AI, including distinctions between traditional deterministic software and AI exhibiting emergent behaviour. While strict liability is beneficial for victim compensation and risk management, it can also stifle innovation and impose burdens on small enterprises. A balanced approach is essential to manage AI-related risks while promoting innovation.
References
Arkoudas, K., & Bringsjord, S. (2014). Philosophical foundations. In W. M. Ramsey & K. Frankish (Eds.), The Cambridge handbook of artificial intelligence (pp. 34–63). Cambridge: Cambridge University Press. doi: 10.1017/CBO9781139046855.004
Arsenijević, B. (2023). Odgovornost za štetu od veštačke inteligencije [Liability for Damage Caused by Artificial Intelligence]. In Z. Petrović, V. Čolović, & D. Obradović (Eds.), XXVI International Scientific Conference – Causation of Damage, Damage Compensation and Insurance (pp. 135–155). Beograd, Valjevo: The Institute of Comparative Law, The Association for Tort Law and Judicial Academy. doi: 10.56461/ZR_23.ONS.08
Barbosa, F., & Valadares, L. (2023). Artificial intelligence: A claim for strict liability for human rights violation. Revista de Direito Internacional, 20(2), 149–158. doi: 10.5102/rdi.v20i2.9119
Bathaee, Y. (2018). The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology (Harvard JOLT), 31(2), 889–938.
Cvetković, M. (2020). Causal Uncertainty: Alternative Causation in Tort Law. Teme-Časopis Za Društvene Nauke, 44(1), 33–47. doi: 10.22190/TEME191115007C
Duffourc, M., & Gerke, S. (2023). Decoding U.S. Tort Liability in Healthcare’s Black-Box AI Era: Lessons from the European Union. Stanford Technology Law Review, 27(1), 1–70.
Frankish, K., & Ramsey, W. M. (Eds.). (2014). The Cambridge handbook of artificial intelligence. Cambridge: Cambridge University Press. doi: 10.1017/CBO9781139046855
Hacker, P. (2023). The European AI liability directives – Critique of a half-hearted approach and lessons for the future. Computer Law & Security Review, 51, 1–42. doi: 10.1016/j.clsr.2023.105871
Heiss, S. (2020). Towards Optimal Liability for Artificial Intelligence: Lessons from the European Union’s Proposals of 2020. Hastings Science and Technology Law Journal, 12, 186–224.
Howells, G., & Twigg-Flesner, C. (2022). Interconnectivity and Liability: AI and the Internet of Things. In C. Poncibò, L. A. DiMatteo, & M. Cannarsa (Eds.), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics (pp. 179–199). Cambridge: Cambridge University Press. doi: 10.1017/9781009072168.019
Karanikić Mirić, M. (2017). General Clause on Strict Liability in Comparative Perspective. In B. Milisavljević, T. Petrović Jevremović, & M. Živković (Eds.), Law and Transition: Collection of Papers (pp. 345–356). Belgrade.
Karanikić Mirić, M. (2024). Obligaciono pravo (2nd ed.) [Law of Obligations]. Beograd: Službeni glasnik. https://plus.cobiss.net/cobiss/sr/sr_latn/bib/147456265
Knetsch, J. (2022). Are Existing Tort Theories Ready for AI?: A Continental European Perspective. In C. Poncibò, L. A. DiMatteo, & M. Cannarsa (Eds.), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics (pp. 99–115). Cambridge: Cambridge University Press. doi: 10.1017/9781009072168.013
Monot-Fouletier, M. (2022). Liability for Autonomous Vehicle Accidents. In C. Poncibò, L. A. DiMatteo, & M. Cannarsa (Eds.), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics (pp. 163–178). Cambridge: Cambridge University Press. doi: 10.1017/9781009072168.018
Noto La Diega, G., & Bezerra, L. C. (2024). Can there be responsible AI without AI liability? Incentivizing generative AI safety through ex-post tort liability under the EU AI liability directive. International Journal of Law and Information Technology, 32(1), 1–21. doi: 10.1093/ijlit/eaae021
Pavlekovic, B., & Petrovic, J. (2021). Civil Law Aspects of Artificial Intelligence in Medicine. Pravni letopis, 1, 103–124.
Proposal for a Directive of the European Parliament and of the Council on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence (AI Liability Directive) (2022).
Proposal for a Directive of the European Parliament and of the Council on Liability for Defective Products (2022).
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (2024).
Soyer, B., & Tettenborn, A. (2022). Artificial intelligence and civil liability—Do we need a new regime? International Journal of Law and Information Technology, 30(4), 385–397. doi: 10.1093/ijlit/eaad001
Tai, E. T. T. (2022). Liability for AI Decision-Making. In C. Poncibò, L. A. DiMatteo, & M. Cannarsa (Eds.), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics (pp. 116–131). Cambridge: Cambridge University Press. doi: 10.1017/9781009072168.014
Tasić, A. (2018). Терет доказивања у антидискриминационим парницама на примеру одлуке Врховног касационог суда [Burden of Proof in Anti-Discrimination Lawsuits: An Example from a Supreme Court of Cassation Decision]. Зборник Радова Правног Факултета у Нишу, 57(78), 325–336. doi: 10.5937/zrpfni1878323T
Wendehorst, C. (2020). Strict Liability for AI and other Emerging Technologies. Journal of European Tort Law, 11(2), 150–180. doi: 10.1515/jetl-2020-0140
Wendehorst, C. (2022). Liability for Artificial Intelligence: The Need to Address Both Safety Risks and Fundamental Rights Risks. In O. Mueller, P. Kellmeyer, S. Voeneky, & W. Burgard (Eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives (pp. 187–209). Cambridge: Cambridge University Press. doi: 10.1017/9781009207898.016
Zakon o obligacionim odnosima [Act on Obligations], Sl. list SFRJ, br. 29/78, 39/85, 45/89 – odluka USJ, 57/89, Sl. list SRJ, br. 31/93, Sl. list SCG, br. 1/2003 – Ustavna povelja, Sl. glasnik RS, br. 18 (2020)
Zakon o zabrani diskriminacije [Act on the Prohibition of Discrimination], Sl. glasnik RS, br. 22 (2009), 52 (2021)
DOI: https://doi.org/10.22190/TEME241108019C