| Article title | Artificial Intelligence and Personal Data: On the Search for a Balance of Interests |
|---|---|
| Authors | Oleh Posykaliuk: Candidate of Law, Associate Professor; Head of the Sector for the Organization of State Power and Local Self-Government of the Department for Legal Policy, Organization of Public Power of the Research Service of the Verkhovna Rada of Ukraine; First Deputy Editor-in-Chief of the Legal Journal “Law of Ukraine” (Kyiv, Ukraine). ORCID iD: https://orcid.org/0000-0002-8841-8481. Email: oleg.posykaliuk@gmail.com |
| Journal name | Legal journal “Law of Ukraine” (Ukrainian version) |
| Issue | 7 / 2025 |
| Pages | 122–137 |
| Annotation | The article explores the fundamental conflict between the needs of modern artificial intelligence (AI) systems, particularly large language models (LLMs), for vast amounts of data (“data hunger”) and personal data protection standards. The relevance of the research is driven by the rapid adoption of generative AI (ChatGPT, Gemini, Copilot), which creates systemic risks to privacy, including algorithmic bias, deanonymization, and the opacity of “black boxes”. Technological innovation is significantly outpacing the development of adequate legal regulation. The purpose of the article is to examine the legal, technical, and organizational measures being taken to find a balance between the interests of AI developers and the rights of data subjects. The article analyzes: 1) the incompatibility of key principles of the General Data Protection Regulation (data minimization, purpose limitation) with the architecture of LLMs, and the complementary role of the EU AI Act; 2) the effectiveness of privacy-enhancing technologies (PETs), such as synthetic data, trusted execution environments, and federated learning; 3) the organizational approaches and privacy policies of OpenAI, Google, and Microsoft. The conclusions emphasize that companies have implemented a dichotomy in protection guarantees: robust contractual terms for corporate clients and an “opt-out” model for mass consumers, which shifts the burden of privacy protection onto the user. The implementation of technical measures remains partial due to their resource-intensive nature. It is concluded that existing measures are insufficient to ensure a real balance of interests for individual users. |
| Keywords | artificial intelligence; personal data; General Data Protection Regulation (GDPR); EU AI Act; large language models (LLMs); privacy policy; ChatGPT; Gemini; Copilot |
| References | Bibliography. Journal articles: 1. Almufarreh A, Ahmad A, Arshad M, Choo Wou O, Elechi R, ‘Ethical implications of ChatGPT and other large language models in academia’ [2025] 8 Frontiers in Artificial Intelligence doi: 10.3389/frai.2025.1615761. 2. Davies T, ‘Data Hunger: The Deep Connection Between the AI Chatbot and the Human’ [2025] 44(1) IEEE Technology and Society Magazine 43–50. 3. Mühlhoff R, Ruschemeier H, ‘Regulating AI with Purpose Limitation for Models’ [2024] 1 Journal of AI Law and Regulation 24–39. 4. Parsons J, Lukyanenko R, Greenwood B, Cooper C, ‘Understanding and Improving Data Repurposing’ [2025] MIS Quarterly 1–50 https://doi.org/10.48550/arXiv.2506.09073. 5. Rajasekharan A, Zeng Y, Padalkar P, Gupta G, ‘Reliable Natural Language Understanding with Large Language Models and Answer Set Programming’ [2023] 385 Electronic Proceedings in Theoretical Computer Science 274–287. 6. Ruschemeier H, ‘Generative AI and data protection’ [2025] 1 Cambridge Forum on AI: Law and Governance doi: 10.1017/cfl.2024.2. 7. Wolff J, Lehr W, Yoo C S, ‘Lessons from GDPR for AI Policymaking’ [2024] 27(4) Virginia Journal of Law & Technology 20, 22. 8. Yu P, Xu H, Hu X, Deng C, ‘Leveraging Generative AI and Large Language Models: A Comprehensive Roadmap for Healthcare Integration’ [2023] 11(20) Healthcare 2776. 9. Bazalytskyi V, ‘Vrehuliuvannia pytannia obrobky personalnykh danykh shtuchnym intelektom u Zahalnomu rehlamenti iz zakhystu personalnykh danykh (GDPR)’ [2024] 6(24) Aktualni pytannia u suchasnii nautsi. Seriia “Pravo” 406–419 (in Ukrainian). 10. Bielova M, Bielov D, ‘Vyklyky ta zahrozy zakhystu personalnykh danykh u roboti zi shtuchnym intelektom’ [2023] 79(2) Naukovyi visnyk Uzhhorodskoho natsionalnoho universytetu. Seriia: Pravo 17–22 (in Ukrainian). 11. Braichevskyi S, ‘Problema personalnykh danykh v systemakh Internetu rechei z elementamy shtuchnoho intelektu’ [2019] 3 Informatsiia i pravo 61–67 (in Ukrainian).
12. Hachkevych A, ‘Nahliad natsionalnykh orhaniv YeS iz zakhystu danykh za obrobkoiu personalnykh danykh systemamy shtuchnoho intelektu (na prykladi ChatGPT)’ [2025] 2(4) Analitychno-porivnialne pravoznavstvo 154–160 (in Ukrainian). 13. Kolesnikov A, Chapelskyi Ya, Budnyk V, Kozhenovskyi Yu, ‘Transformatsiia systemy zakhystu personalnykh danykh pid vplyvom rozvytku tekhnolohii shtuchnoho intelektu’ [2025] 2 Ekonomika. Finansy. Pravo 106–109 (in Ukrainian). 14. Nekrutenko V, ‘Do pytannia systematyzatsii ryzykiv, sprychynenykh obroblenniam personalnykh danykh iz vykorystanniam tekhnolohii shtuchnoho intelektu’ [2021] 4(119) Visnyk Kyivskoho natsionalnoho universytetu imeni Tarasa Shevchenka (iurydychni nauky) 53–58 (in Ukrainian). 15. Punda O, Arziantseva D, ‘Zabezpechennia zakhystu personalnykh danykh fizychnykh osib v umovakh rozvytku shtuchnoho intelektu’ [2024] 2(30) Nauka i tekhnika sohodni. Seriia “Pravo” 132–142 (in Ukrainian). 16. Rezvorovych K, Bereda M, ‘Vplyv shtuchnoho intelektu na pravovu systemu ta zakhyst personalnykh danykh u tsyfrovu epokhu’ [2024] 4(4) Uspikhy i dosiahnennia u nautsi. Seriia “Pravo” 241–248 (in Ukrainian). 17. Zaiarnyi O, Derkachenko Yu, ‘Deiaki osoblyvosti obrobky personalnykh danykh pry vykorystanni chat-botiv zi shtuchnym intelektom na prykladi ChatGPT’ [2023] 29 Yurydychnyi biuleten 55–62 (in Ukrainian). Conference papers: 18. Sap M, Shwartz V, Bosselut A, Choi Y, Roth D, ‘Commonsense Reasoning for Natural Language Processing’, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts (Association for Computational Linguistics 2020) 27–33. Websites: 19. Afonja G, Sim R, Lin Z, Inan H A, Yekhanin S, ‘The Crossroads of Innovation and Privacy: Private Synthetic Data for Generative AI’ (29 May 2024) (accessed 27.07.2025). 20. AI Principles (accessed 27.07.2025).
21. Artificial Intelligence – Model Personal Data Protection Framework. Office of the Privacy Commissioner for Personal Data, Hong Kong 2024 (accessed 27.07.2025). 22. Business data privacy, security, and compliance (accessed 27.07.2025). 23. Confidential Computing (accessed 27.07.2025). 24. Generative AI in Google Workspace Privacy Hub (accessed 27.07.2025); Enterprise data protection in Microsoft 365 Copilot and Microsoft 365 Copilot Chat (accessed 27.07.2025). 25. Improving Image Generation with Better Captions (OpenAI) (accessed 27.07.2025). 26. Microsoft Copilot privacy controls (accessed 27.07.2025). 27. Microsoft Responsible AI: Principles and approach (accessed 27.07.2025). 28. Numoto T, ‘Microsoft Trustworthy AI: Unlocking human potential starts with trust’ (24 September 2024) (accessed 27.07.2025). 29. OECD, ‘Sharing trustworthy AI models with privacy-enhancing technologies’ [2025] OECD Artificial Intelligence Papers, No. 38, OECD Publishing, Paris, https://doi.org/10.1787/a266160b-en. 30. OpenAI Privacy Portal (12 January 2024) (accessed 27.07.2025). 31. Privacy policy (27 June 2025) (accessed 27.07.2025). 32. Schwabe C, ‘AI and data protection in practice - between innovation and regulation’ (7 April 2025, Robin Data GmbH) (accessed 27.07.2025). |