Legal and moral responsibility in the conditions of development of autonomous artificial intelligence systems

Authors

  • Oleksandr Klykov, Graduate Student of the Department of Philosophy, Yaroslav Mudryi National Law University

DOI:

https://doi.org/10.37772/2518-1718-2025-3(51)-11

Keywords:

moral status of artificial intelligence, artificial intelligence and law, responsibility of autonomous systems, quantification of degrees of responsibility, sociotechnical approach to responsibility, legal responsibility in a technological context, responsibility of lethal autonomous weapons systems

Abstract

Problem setting. In the context of the development of a digital society, increasingly relevant questions have arisen regarding the development and use of artificial intelligence in relation to morality. The implementation of artificial intelligence systems is expanding across all spheres of society, which means that questions of responsibility arise in a variety of contexts. It is therefore necessary to investigate whether the use of autonomous systems leads to responsibility-related problems and to what extent it makes sense to assign responsibility to artificial autonomous systems.

Analysis of recent research and publications. The study of legal and moral responsibility in a technological context is rather complex, because neither the academic literature nor legislation offers a unified view on a number of contentious questions that arise in examining this topic. Certain aspects of legal responsibility have been explored in the works of well-known Ukrainian legal scholars, including studies of legal responsibility as a systemic phenomenon (R. Maidanyk), analyses of doctrinal approaches to understanding the essence of legal responsibility (V. Zhornokui), and examinations of legal responsibility as a form of state coercion (V. Nadeon, R. Shyshka), among others. Although there are seven main concepts for understanding legal responsibility as a legal phenomenon, none of them answers the questions raised by the development and use of artificial autonomous intelligent systems. Certain legal aspects of technology have been addressed in the works of Ukrainian scholars such as D. Adamiuk, H. Androshchuk, O. Davydiuk, B. Paduchak, M. Shvets, and others. In Ukrainian legal scholarship, the issue of moral responsibility in the technological context has been scarcely explored. The analysis of the legal and moral responsibility of artificial autonomous systems therefore requires new methodological frameworks.
Purpose of the research. The aim of the study is to identify approaches to the issue of responsibility in the context of the development and use of artificial autonomous systems, which contributes to ensuring their ethical and reliable behavior.

Article’s main body. The article argues that society’s growing dependence on artificial-intelligence autonomous systems sharply raises questions of responsibility. It is noted that there are a number of ethical dilemmas associated with the development and use of artificial intelligence. The article emphasizes that discussions and debates continue within the academic community regarding the issue of responsibility, demonstrating significant differences in possible approaches to this topic. It is shown that responsibility in the context of AI systems requires an interdisciplinary approach. It is established that applying a philosophical approach to identifying responsibility-related problems in autonomous systems contributes to ensuring the reliability of such systems. Some of the most common general approaches to studying the concept of responsibility are characterized. Attention is focused on the relevance of the sociotechnical approach to the issue of responsibility. It is stressed that there is no single doctrinal approach to understanding legal responsibility; rather, there are at least seven main concepts interpreting legal responsibility as a legal phenomenon. It is identified that different dimensions of responsibility are connected to interdisciplinary challenges in the design, development, and use of reliable autonomous systems and can help address them. Therefore, ensuring the reliability of autonomous systems is an important interdisciplinary task. It is assumed that if autonomous systems focus exclusively on maximizing efficiency indicators in their automated decision-making processes, they are likely to ignore responsibility-related issues.
It is concluded that, in order to minimize harmful consequences and ensure ethical and reliable behavior of AI-based autonomous systems, they should, in the future, be allowed to reason about potential responsibility-related problems.

Conclusions and prospects for development. As it is becoming increasingly clear that it is impossible to expect all components of an autonomous system to behave properly, there is a need to develop and implement general methods to ensure reliability and resilience. The concept of responsibility can serve as a basis for conceptualizing reliability and resilience, which creates the need to develop a set of methods capable of analyzing the trade-off between efficiency and resilience in sociotechnical systems, as well as designing comprehensive models of sustainable collaboration between humans and artificial intelligence. Ensuring the reliability of autonomous systems and artificial intelligence is an important interdisciplinary task. A philosophical approach can be applied to identify issues of responsibility in autonomous systems; a sociotechnical approach can be used to quantitatively assess degrees of responsibility and coordinate tasks on that basis; and a legal approach can be used to define the concept of legal responsibility in a technological context. If, in the future, artificial-intelligence-based autonomous systems are allowed to reason about potential responsibility-related problems, harmful consequences can be minimized, and ethical and reliable behavior can be ensured. If autonomous systems focus exclusively on maximizing efficiency indicators in their automated decision-making processes, they are likely to ignore issues of responsibility (from accountability to legal liability).
Therefore, a comprehensive study of this issue in the technological context is currently relevant in order to determine who, and to what extent, is responsible for the long-term behavior of an artificial intelligence autonomous system or for a particular decision it makes.
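The sociotechnical idea of quantifying degrees of responsibility can be made concrete with a toy model. The sketch below is illustrative only and is not the article's own method: it distributes one unit of responsibility for a harmful outcome across the agents of a sociotechnical system using Shapley-value attribution, where a hypothetical predicate states which coalitions of agents suffice to bring the harm about. The agent names ("operator", "developer", "system") and the causal rule are assumptions for the example.

```python
from itertools import permutations

def shapley_responsibility(agents, causes_harm):
    """Distribute one unit of responsibility across agents via Shapley values.

    `causes_harm(coalition)` returns True if the coalition's joint actions
    suffice to bring about the harmful outcome (a hypothetical toy model).
    """
    shares = {a: 0.0 for a in agents}
    orderings = list(permutations(agents))
    for order in orderings:
        coalition = set()
        for agent in order:
            before = causes_harm(frozenset(coalition))
            coalition.add(agent)
            after = causes_harm(frozenset(coalition))
            # The agent whose arrival tips the coalition into causing
            # the harm receives the marginal credit for this ordering.
            shares[agent] += (after - before) / len(orderings)
    return shares

def harmful(coalition):
    # Hypothetical causal rule: harm occurs only if the operator acts
    # together with either the developer or the autonomous system.
    return "operator" in coalition and bool({"developer", "system"} & coalition)

print(shapley_responsibility(["operator", "developer", "system"], harmful))
# operator ≈ 2/3; developer ≈ 1/6; system ≈ 1/6
```

Because the operator is necessary for the harm in every sufficient coalition, the model assigns the operator the largest share, while the developer and the system split the remainder symmetrically; the shares always sum to one, which is the kind of quantitative trade-off analysis the sociotechnical approach calls for.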

References

1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance). EUR-Lex Access to European Union law. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng [in English].

2. Yazdanpanah, V., Gerding, E. H., Stein, S. et al. (2023). Reasoning about responsibility in autonomous systems: challenges and opportunities. AI & Society, 38, 1453–1464. https://doi.org/10.1007/s00146-022-01607-8 [in English].

3. Andrushchenko, O. P., Danylian, O. H., Dzoban, O. P. et al. (2025). Suspilstvo, liudyna, pravo: suchasnyi sotsiokulturnyi kontekst [Society, the human being, law: the contemporary sociocultural context]: monograph / ed. O. H. Danylian; Yaroslav Mudryi National Law University; Research Institute of State Building and Local Self-Government of the NALS of Ukraine. Kharkiv: Pravo. 328 p. [in Ukrainian].

4. Lethal autonomous weapons systems: report of the Secretary-General. (2024). UN Digital Library. https://digitallibrary.un.org/record/4059475?ln=en&v=pdf [in English].

5. Pedro, A., & Martínez, M. (2023). Los Sistemas de Armas Autónomos Letales y el Derecho Internacional Humanitario en la Guerra de Ucrania [Lethal autonomous weapons systems and international humanitarian law in the war in Ukraine]. Relaciones Internacionales, 53, 71–90. https://doi.org/10.15366/relacionesinternacionales2023.53.004 [in Spanish].

6. Korotkyi, V. (2024). Zbroia i shtuchnyi intelekt: u sviti zanepokoieni zberezhenniam liudskoho kontroliu [Weapons and artificial intelligence: the world is concerned about preserving human control]. Ukrinform. 06.05.2024. https://www.ukrinform.ua/rubric-world/3860523-zbroa-i-stucnij-intelekt-u-sviti-zanepokoenizberezennam-ludskogo-kontrolu.html [in Ukrainian].

7. Kremen, V. H. (Ed.). (2008). Entsyklopediia osvity [Encyclopedia of education]. Academy of Pedagogical Sciences of Ukraine. Kyiv: Yurinkom Inter. 1040 p. [in Ukrainian].

8. Fairchild, H. P. (1960). Dictionary of Sociology and Related Sciences. Littlefield, Adams. 342 p. [in English].

9. Hornby, A. S. (1998). Oxford Advanced Learner’s Dictionary of Current English. Oxford: Oxford University Press. 1476 p. [in English].

10. Marchenko, M. (2010). Problemy yurydychnoi ta sotsialno-politychnoi vidpovidalnosti biznesu [Problems of the legal and socio-political responsibility of business]. Pro ukrainske pravo, 5, 62–69. [in Ukrainian].

11. Vitkovska, I. M., & Yevdokymova, I. A. (2020). Stanovlennia poniattia vidpovidalnosti u filosofii ta sotsiolohii [The formation of the concept of responsibility in philosophy and sociology]. Aktualni problemy filosofii ta sotsiolohii, 77–81. https://doi.org/10.32837/apfs.v0i27.925 [in Ukrainian].

12. Stepanov, O. M. (Comp.). (2006). Psykholohichna entsyklopediia [Psychological encyclopedia]. Kyiv: Akademvydav. 424 p. [in Ukrainian].

13. Yonas, H. (2001). Pryntsyp vidpovidalnosti. U poshukakh etyky dlia tekhnolohichnoi tsyvilizatsii [The imperative of responsibility: in search of an ethics for the technological civilization]. Kyiv: Libra. 400 p. [in Ukrainian].

14. Zhornokui, V. H. (2024). Yurydychna vidpovidalnist: sim doktrynalnykh pidkhodiv do rozuminnia sutnosti [Legal liability: seven doctrinal approaches to understanding its essence]. Pravo i bezpeka, 1(92), 90–100. https://doi.org/10.32631/pb.2024.1.08 [in Ukrainian].

Published

2025-11-10