Deepfakes are digitally altered photos, videos or audio that appear incredibly realistic. They’re designed to make it seem as though someone is saying or doing something that they never actually said or did.
The underlying technology uses AI to create realistic digital content, such as videos, images and audio recordings, that can deceive viewers. In some cases, however, deepfakes are harmless, for example when used in satire or film production.
Since deepfakes are so realistic, they’re often very difficult to detect. And the increasing availability of deepfake technology is making it easier for individuals with harmful intentions to create convincing fake content.
As a result, deepfakes pose a serious threat to society, since it becomes harder to distinguish between what is real and what is fake. This raises concerns about their potential impact on our trust in the media, politics and society.
Currently, there are no universal standards or guidelines for the creation and dissemination of deepfakes. This lack of regulation makes it difficult to establish clear rules for how deepfakes should be used and shared, and regulatory bodies around the world are struggling to introduce effective legislation.
Additionally, the internet allows deepfakes to be shared and disseminated across borders. Therefore, it’s challenging for national regulators to enforce deepfake laws on a global scale.
The European Union (EU) regulates deepfakes through the AI Act. The Act requires transparency from creators and disseminators of deepfakes, mandating that they disclose the artificial origin of the content and provide information about the techniques used. In addition, some EU member states have adopted their own laws.
France, for example, has adopted the SREN law, which regulates the digital environment, protects children from online pornography and combats online fraud.
Furthermore, the Danish government plans to clamp down on the creation and dissemination of AI-generated deepfakes by amending copyright law to ensure that everyone has the right to their own body, facial features and voice.
The United Kingdom has no specific law regulating deepfakes, but some existing laws may apply to disputes concerning them.
Beyond Europe, other jurisdictions are expanding deepfake regulation.
In the United States, there is no comprehensive federal law regulating deepfakes.
Since each US state has its own laws, there is significant variation in what constitutes an offence, when the use of deepfakes is prohibited, and the penalties for non-compliance. For example, Texas and California have laws prohibiting the creation and distribution of explicit deepfake videos without the subject's consent.
China’s regulations require all AI-generated content to be clearly labeled, both visibly and in metadata. The use of AI to generate news content is also restricted, with unlicensed providers prohibited from publishing AI-generated news.
Australia has not passed any laws specifically about deepfakes.
The Republic of North Macedonia has not yet passed a special law regulating deepfakes. However, given the rapid evolution of the technology on the one hand and the potentially harmful effects of deepfakes on the other, it is desirable that a law regulating AI and deepfakes be adopted as soon as possible.
We can conclude that global trends in the regulation of deepfakes underscore clear priorities: transparency, consent and rapid content-takedown requirements. Businesses operating internationally must prepare for overlapping but distinct regulatory frameworks in this area.