AI GOVERNANCE AND ALGORITHMIC ACCOUNTABILITY: RETHINKING LEGAL STANDARDS IN THE AGE OF AUTONOMOUS DECISION-MAKING

Authors

  • Loso Judijanto, IPOSS Jakarta, Indonesia
  • Rustiyana Rustiyana, Universitas Bale Bandung, Indonesia
  • Rusdi Rusdi, Universitas Muhammadiyah Sorong, Indonesia

Keywords:

AI governance, algorithmic accountability, autonomous decision-making, legal standards

Abstract

The rapid development of artificial intelligence has driven the adoption of autonomous decision-making systems across sectors including public administration, finance, healthcare, and law enforcement. While AI promises greater efficiency and objectivity, its use also raises serious challenges for algorithmic governance and accountability, particularly when the resulting decisions significantly affect individual rights and obligations. This study critically examines existing legal frameworks and regulatory standards addressing AI-based autonomous decision-making and evaluates the extent to which the principles of accountability, transparency, fairness, and legal responsibility can be applied to algorithmic systems. The method employed is a literature review covering academic sources, international regulations, public policies, and reports from global institutions relevant to AI governance and algorithmic accountability. The results show that traditional legal standards still struggle to accommodate the complex, adaptive, and often opaque (black-box) characteristics of AI. The study therefore emphasizes the need for a regulatory approach that is adaptive, risk-based, and ethically oriented, so that the use of AI remains aligned with human rights protection, legal certainty, and public trust in the era of autonomous decision-making.

Published

22-01-2026