Principles of artificial intelligence in law
DOI: https://doi.org/10.37772/2518-1718-2026-1(53)-11
Keywords: law and technology, principles of artificial intelligence, ethics of artificial intelligence, legal regulation of artificial intelligence, artificial sociality
Abstract
Problem setting. Recent technological advances have allowed autonomous AI systems to become increasingly complex. This offers great benefits for both individual users and society. However, it is important that such systems do not violate social norms. As autonomous systems are no longer viewed as mere tools but are gradually taking on functions that previously could be performed only by humans, even assuming the roles of caregivers and interactive agents, the appropriateness of their actions and choices necessarily involves normative considerations. This implies normative sensitivity and a certain level of normative decision-making, the consequences of which are far-reaching.
Analysis of recent research and publications. A wide range of ethical principles and values to be relied upon in the development and deployment of autonomous robots has been presented by various authoritative organizations. In particular, these principles were formulated by the European Commission in 2019 in the Ethics Guidelines for Trustworthy AI; in the Montreal Declaration for a Responsible Development of AI, developed under the auspices of the University of Montreal following the Forum on the Socially Responsible Development of AI in November 2017; and in the Asilomar AI Principles, developed under the auspices of the Future of Life Institute in cooperation with participants of the high-level Asilomar Conference in January 2017. In Ukraine, the regulation of AI is at an early stage. In 2020, the Concept for the Development of AI in Ukraine was approved, aiming to propose an approach to regulating artificial intelligence technologies.
In December 2024, the Voluntary Code of Conduct on the Ethical and Responsible Use of AI was adopted, guiding companies toward the implementation of ethical principles in their internal processes, providing for risk assessment and adaptive implementation of measures, as well as the establishment of a self-regulatory body to support cooperation and exchange of experience. The first version of the AI Development Strategy of Ukraine until 2030 was presented in November 2025 at the WINWIN Summit. It outlines a step-by-step plan for how technology can improve public administration, education, defense, healthcare, and other areas of life in Ukraine. Ukrainian scholars primarily focus their research on the analysis of current legal issues related to the development and implementation of AI (S. Vozniuk, V. Hryshko, N. Rudenko, Zh. Udovenko, etc.), the determination of the limits of technology use in various fields (D. Bielov, M. Bielova, Ya. Bernaziuk, etc.), and the protection of personal data collected by AI systems, including the establishment of rules for their storage and use (I. Bochkova, K. Vrublevska-Misiuna, etc.). At the same time, insufficient attention is paid in domestic academic sources to the analysis of AI principles in law, according to which highly autonomous AI systems should be designed so that their goals and behavior remain reliably aligned with human values throughout their operation and compatible with the ideals of human dignity, rights, freedoms, and cultural diversity, while also being trustworthy. The purpose of this research is to consider the basic principles of artificial intelligence, which can be reduced to working rules that an autonomous agent will follow and which will remain compliant with modern legal requirements.
Article's main body.
The article substantiates that, in order to prevent harm, especially as artificial intelligence systems at various levels become increasingly integrated into society and directly coexist with users, the need to embed ethical and legal norms into autonomous agents is becoming ever more pronounced. It is emphasized that since autonomous systems are no longer regarded merely as tools but are gradually assuming functions that previously could be performed only by humans, even taking on the roles of caregivers and interactive agents, the appropriateness of their actions and choices necessarily involves normative considerations. This presupposes normative sensitivity and a certain level of normative decision-making, the consequences of which are far-reaching. It is noted that ensuring the safe and reliable functioning of autonomous artificial intelligence systems requires careful attention to the legal, ethical, and cultural context in which they operate. The article identifies a broad range of ethical principles and values, presented by various authoritative organizations, that should serve as a foundation for the development and deployment of autonomous robots. Legal principles are also outlined, including the right to privacy, respect for human dignity, transparency and the right to due process, the right to information, the right to self-determination and non-discrimination, as well as socio-economic rights and the rights to security and social protection.
Conclusions and prospects for development. The rapid progress in the development of autonomous agents may lead to the emergence of applications that can significantly enhance well-being while at the same time having the potential to cause harm. To ensure their safe and reliable functioning, it is essential to pay close attention to the legal, ethical, and cultural context in which they operate.
Since the legal regulation of this technology is still at an early stage, regulatory norms should, in order to ensure further progress, be guided by ethical principles rather than prohibitions and remain aligned with contemporary legal requirements. To prevent harm, especially as these technologies become ever more closely integrated with their users, it is necessary not only to generalize the fundamental principles of artificial intelligence, which can be reduced to operational rules to be followed by an autonomous agent, but also to develop methods for embedding social norms into autonomous agents. These norms are often expressed as abstract high-level principles that are not easily translated into operational rules an autonomous system must follow. One method of implementing these high-level normative principles as operational rules of behavior for an autonomous system, in a way that can be trusted and satisfy users, lies in combining mandatory requirements and voluntary standards.
References
1. Driver, J. (2007). Normative Ethics. In Frank Jackson & Michael Smith, The Oxford Handbook of Contemporary Philosophy. New York: Oxford University Press UK [in English].
2. Bicchieri, C., Ryan, M., Sontuoso, A. (2023). Social Norms. The Stanford Encyclopedia of Philosophy. Zalta, E. N., Nodelman, U. (eds.). URL: https://plato.stanford.edu/archives/win2023/entries/social-norms/ [in English].
3. Zhang, F., Cully, A., & Demiris, Y. (2019). Probabilistic real-time user posture tracking for personalized robot-assisted dressing. IEEE Transactions on Robotics, 35(4), 873–888 [in English].
4. Coşar, S., Fernandez-Carmona, M., Agrigoroaie, R., et al. (2020). Enrichment: Perception and interaction of an Assistive Robot for the Elderly at Home. International Journal of Social Robotics, 12(3), 779–805 [in English].
5. Latour, B., Venn, C. (2002). Morality and technology. Theory, Culture & Society, 19(5–6), 247–260 [in English].
6. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and machines, 28(4), 689-707. https://doi.org/10.1007/s11023-018-9482-5 [in English].
7. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Lütge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., Vayena, E. (2021). An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. https://doi.org/10.1007/978-3-030-81907-1_3 [in English].
8. Umbrello, S., & Van de Poel, I. (2021). Mapping value sensitive design onto AI for social good principles. AI and Ethics, 1(3), 283-296 [in English].
9. European Commission: Directorate-General for Communications Networks, Content and Technology, and the High-Level Expert Group on Artificial Intelligence (2019). Ethics guidelines for trustworthy AI. Publications Office. URL: https://data.europa.eu/doi/10.2759/346720 [in English].
10. Pro skhvalennia Kontseptsii rozvytku shtuchnoho intelektu v Ukraini: Rozporiadzhennia Kabinetu Ministriv Ukrainy vid 2 hrudnia 2020 r. № 1556-r. URL: https://zakon.rada.gov.ua/laws/show/1556-2020-%D1%80#Text [in Ukrainian].
11. Hryshko, V. I., Vozniuk, S. S. (2024). Problemni aspekty vprovadzhennia shtuchnoho intelektu u sferi yurysprudentsii. Elektronne naukove vydannia «Analitychno-porivnialne pravoznavstvo» – Electronic scientific publication “Analytical and Comparative Jurisprudence”, 2, 29–34. URL: http://journal-app.uzhnu.edu.ua/article/view/302954/294963 [in Ukrainian].
12. Udovenko, Zh. V., Rudenko, N. V. (2023). Perevahy ta nedoliky vprovadzhennia systemy shtuchnoho intelektu u pravosuddia Ukrainy. Aktualni pytannia u suchasnii nautsi – Current issues in modern science, 4(10), 252–262. https://doi.org/10.52058/2786-6300-2023-4(10)-252-262 [in Ukrainian].
13. Bielov, D. M., Bielova, M. V. (2023). Shtuchnyi intelekt v sudochynstvi ta sudovykh rishenniakh, potentsial ta ryzyky. Naukovyi visnyk Uzhhorodskoho natsionalnoho universytetu. Seriia: Pravo – Scientific Bulletin of Uzhhorod National University. Series: Law, 78(2), 315–320. https://doi.org/10.24144/2307-3322.2023.78.2.50 [in Ukrainian].
14. Bernaziuk, Ya. (2025). Shtuchnyi intelekt u pravosuddi: konstytutsiino-pravovyi aspekt. Elektronne naukove vydannia «Analitychno-porivnialne pravoznavstvo» – Electronic scientific publication “Analytical and Comparative Jurisprudence”, 2, 130–136. https://doi.org/10.24144/2788-6018.2025.02.16 [in Ukrainian].
15. Bochkova, I. I., Vrublevska-Misiuna, K. M. (2025). Dostup do danykh ta konfidentsiinist v aspekti shtuchnoho intelektu. Naukovyi visnyk Uzhhorodskoho natsionalnoho universytetu. Seriia: Pravo – Scientific Bulletin of Uzhhorod National University. Series: Law, 90(3), 139–146. https://doi.org/10.24144/2307-3322.2025.90.3.19 [in Ukrainian].
16. Kemp, S. (2025). Digital 2026: more than 1 billion people use AI. DataReportal. Oct. 15, 2025. URL: https://datareportal.com/reports/digital-2026-one-billion-people-using-ai [in English].
17. McKinsey (2025). The state of AI in 2025: Agents, innovation, and transformation. QuantumBlack. Nov. 5, 2025. URL: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai [in English].
18. How AI is Revolutionizing Anti-Money Laundering and Compliance (2026). The Sumsuber. Nov. 10, 2025. URL: https://sumsub.com/blog/ai-in-anti-money-laundering-and-compliance/ [in English].
19. Ross, W., Stratton-Lake, P. (2002). The right and the good. Canadian Journal of Philosophy, 25, 571–594 [in English].
20. Olderbak, S., Sassenrath, C., Keller, J., Wilhelm, O. (2014). An emotion-differentiated perspective on empathy with the emotion specific empathy questionnaire. Frontiers in psychology, 5, 1-14. https://doi.org/10.3389/fpsyg.2014.00653 [in English].