AI, Deepfake and the Undermining of Once Incontestable Evidence

Introduction

The article discusses how artificial intelligence and deepfake technology challenge the reliability of visual and audio evidence that was traditionally considered definitive in legal and social contexts. It outlines the risks posed by increasingly sophisticated AI-generated content and the legislative responses at European and national levels.

Content

Deepfake technology uses advanced AI to generate highly realistic fake photos, audio and video that misrepresent individuals and events, undermining trust in media as evidence. Such tools can fabricate witness statements, falsify documents or place people in situations that never occurred, threatening the credibility of evidence in legal proceedings and public life. The article highlights ongoing EU efforts, including proposed legislation and the AI Act, to regulate harmful deepfake content and protect individuals from non-consensual or malicious uses. It also notes that Romania has initiated a draft law addressing deepfakes, though legal frameworks remain under development while the technology continues to evolve rapidly.

Author

We deliver specialized legal services focused on economic offences, corporate criminal law, investigations and high-stakes litigation. Our approach emphasizes clarity, discipline and senior-level involvement.


© 2025 Bradulex. All rights reserved.