The Copyrighted Face: Denmark, Deepfakes, and the Struggle for Data Sovereignty
When your face becomes intellectual property, the battle for data sovereignty stops being abstract: it becomes personal, visible, and for sale.
SOURCE: Denmark Leads EU Push to Copyright Faces in Fight Against Deepfakes
I. The Ghost in the Portrait: A Historical Prelude
In 1890, Samuel Warren and Louis Brandeis penned what would become the most influential law review article in American jurisprudence: “The Right to Privacy.” Their motivation was visceral and immediate. Warren’s wife had been subjected to invasive society page coverage; her private moments had been rendered public spectacle by the emerging technologies of photography and tabloid journalism. The solution they proposed was revolutionary: a legal right “to be let alone,” a barrier between the intimate self and the consuming gaze of publicity.
What made their argument radical was not merely its recognition of privacy as a right, but its acknowledgment that technology had fundamentally altered the nature of exposure. The photograph, they understood, was not simply a recording device; it was a weapon of replication, capable of divorcing image from context, presence from consent. “Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life,” they wrote, with a prescience that echoes across 135 years of accelerating technological intrusion.
Today, we face the same invasion, only magnified by orders of magnitude. The intruder is no longer a photographer with a glass plate camera but a neural network trained on billions of scraped images. The violation is not a single unauthorized portrait in a newspaper but the perpetual possibility of your face performing actions, speaking words, inhabiting contexts you never consented to, all rendered with such fidelity that authentication itself becomes impossible.
In June 2025, Denmark proposed a legislative response as bold as Warren and Brandeis’s original provocation: make every citizen the copyright holder of their own likeness. It is an audacious attempt to graft the mechanics of intellectual property law onto the terrain of personal identity; to treat the face not as an extension of the self, inviolable and dignified, but as intellectual property, ownable and controllable through the same legal mechanisms that govern Mickey Mouse and the Beatles’ catalog.
Yet beneath this idealism lies a profound and troubling tension. When we copyright the face, do we protect it from exploitation, or do we complete its transformation into a commodity? Can this asset be bought, sold, licensed, and ultimately transferred to another person? And if the face becomes property, who ultimately owns it: the individual, or the platforms and corporations with the legal sophistication and economic leverage to extract that ownership through contract?
II. From Dignity to Data: The Commodification Paradox
To understand what Denmark is attempting, we must first understand the fundamental distinction between personality rights and intellectual property, a distinction that this proposal threatens to collapse.
The Architecture of Personality Rights
Personality rights, as they developed across European and American jurisprudence, were conceived as an extension of human dignity. They protect what copyright cannot: the integrity of one’s image, voice, and identity as expressions of personhood itself. Crucially, these rights were typically inalienable; they could not be sold or transferred like property because they were not understood as property in the first place. They were attributes of being human, as inherent and non-transferable as consciousness itself.
The German concept of “Persönlichkeitsrecht” (right of personality) perhaps expresses this most clearly: it treats the image not as an asset but as an emanation of the self, deserving protection precisely because it cannot be separated from the person without violence to their dignity. When German law permits public figures to use their likeness for limited commercial purposes, it does so with the understanding that such use cannot compromise the core of personal identity, the “Menschenwürde” or human dignity that forms the foundation of the constitutional order.
The Logic of Intellectual Property
Copyright law, by contrast, was built for commerce from its inception. The Statute of Anne (1710), the first modern copyright law, was explicitly designed to regulate the book trade, to create artificial scarcity in reproducible goods, and thereby generate markets. Copyright treats creative works as property that can be owned, traded, inherited, subdivided, and exploited. Its purpose is not to protect dignity but to incentivize creation and regulate distribution.
Critically, copyright is alienable; it can be transferred from the creator to a corporation, from an individual to a platform. Indeed, the entire economic engine of the creative industries depends on this transferability. Musicians sign away their rights to record labels. Authors grant publishers the right to publish. Actors transfer performance rights to studios. The creator retains authorship, but ownership, and with it control, migrates elsewhere.
The Alchemical Transformation
Denmark’s proposal performs a kind of legal alchemy, transmuting the base metal of personality rights into the gold standard of intellectual property. By making likenesses copyrightable, it extends an economic framework into the moral domain. It says, in effect: your face is not merely an aspect of your dignity; it is an asset in your portfolio, governable by the same rules that govern software code and pop songs.
As IP lawyer Luca Schirru warns: “Copyright can be a trap; it can turn our bodies into consumer goods.” The trap is subtle but deadly. Once the face becomes property, all the apparatus of property law comes into play: contracts, licenses, transfers, assignments, and sales. In the asymmetric power dynamics of the digital economy, the party with superior bargaining power almost always emerges victorious.
Consider the mechanics: a platform could simply update its Terms of Service to require, as a condition of access, that users grant an irrevocable, perpetual, worldwide license to use their copyrighted likeness for “platform enhancement purposes.” Users would face a Hobson’s choice: grant the license or be excluded from digital social life. The copyright, intended as a shield, becomes a sword pointed at the user’s own throat.
This is not hypothetical. It is precisely how platforms have already extracted value from user-generated content under existing copyright regimes. The pattern is well established: users create content, platforms claim sweeping rights to that content through contractual terms, and users retain nominal ownership but lose practical control. Extending this model to personal identity is not a protection; it is a capitulation dressed in the language of empowerment.
III. The Consent Catastrophe: When Permission Becomes Transaction
At the heart of Denmark’s experiment lies a deeper problem: the complete inadequacy of consent as currently conceptualized in digital systems. The crisis is not merely that platforms fail to enforce consent boundaries, though they do, consistently and egregiously. The crisis is that consent itself has been perverted from a continuous, contextual right into a one-time transaction.
The Illusion of Control
Under existing frameworks, such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA), platforms are already required to obtain consent, honor removal requests, and provide transparency regarding data use. Yet enforcement is glacial, opaque, and systematically biased in favor of platform interests. When a deepfake video surfaces, victims are instructed to submit removal requests, only to wait days or weeks while the content goes viral, accumulating millions of views and becoming indelibly embedded in the internet’s distributed memory.
The platforms are slow to act because speed works against their economic interests. Their business models are built on virality, the exponential spread of content through network effects. A viral deepfake generates engagement, which in turn generates ad impressions, ultimately leading to revenue. Every hour that harmful content remains online is profitable. The incentive structure ensures that harm will always propagate faster than remedy.
But the problem runs deeper than platform incentives. The entire conceptual framework of “consent” in digital systems treats permission as a binary switch: granted or withheld, on or off. This model is rendered obsolete by the emergence of generative AI and neural replication technologies. When my face is used to train a model, when does consent expire? When the model generates a synthetic image of me, who is responsible: the model creator, the model user, the platform hosting the output, or all of them simultaneously?
Consent as Continuous Governance
The actor Scott Jacqmein’s case illustrates this collapse perfectly. He licensed his likeness for specific uses, but those licenses were interpreted by recipients as permission for AI replication and unlimited synthetic reproduction. He gave consent once, in a specific context, for a defined purpose. What he got was perpetual, context-free exploitation of his digital ghost.
This is the consent catastrophe: the moment permission is granted, it metastasizes beyond the boundaries of the original agreement. In a world of neural networks and generative models, there is no such thing as limited use. Every image becomes training data. Every licensed face becomes a template for an infinite number of variations. Consent, conceived as a transaction, becomes a trap.
What we need is consent as a continuous governance mechanism, not a one-time permission, but an ongoing relationship of accountability and revocability. This requires a fundamental reconceptualization: consent is not merely the moment of agreement, but rather the entire lifespan of data use. It must be auditable, trackable, and terminable at any point.
IV. Data Sovereignty and the Trust Ledger: Beyond Ownership to Governance
This brings us to the critical concept that Denmark’s proposal gestures toward but does not fully embrace: Data Sovereignty. Sovereignty is not merely the right to delete; it is the right to decide how, where, and for what purpose one’s data exists. It is not possession, but governance; not ownership, but authority.
The Inadequacy of the Ownership Model
The ownership model of data protection, whether through copyright, as Denmark proposes, or through property rights more broadly, founders on a simple reality: data is not like physical property. It can be copied infinitely at zero marginal cost. It can exist simultaneously in multiple locations. It can be transformed, recombined, and synthesized with other data to create emergent properties that no single data point possesses.
When we say “I own my data,” we are making a category error. Data is not something that can be possessed, but a relationship that must be governed. My face in isolation is not particularly valuable. But my face combined with my voice, behavioral patterns, location history, social network, and ten thousand other data points becomes a high-fidelity model of my identity that can be used to predict, manipulate, and simulate me.
Ownership cannot capture this relational complexity. What we need is a framework that recognizes data as existing within ecosystems of creation, transformation, and use, ecosystems that require not property rights but governance rights.
Trust Value Management: A Framework for Digital Sovereignty
This is where the emerging paradigm of Trust Value Management (TVM) offers a radically different approach. TVM treats data not as static artifacts to be owned but as living signals within a dynamic ecosystem of trust relationships. The core insight is that every piece of data carries with it a provenance: a traceable history of consent, transformation, and use.
Imagine a Trust Ledger that records not ownership transactions but trust events: every instance of consent granted, every modification made, every context in which the data is used. This ledger is not about assigning monetary value but about sustaining epistemic integrity, the traceable chain of who did what, when, and under whose permission.
In such a system, your face is not a copyrighted asset but a trust signal. When you upload a photo, you create a trust event: this image exists, with this provenance, under these consent conditions. If a platform wants to use that image for model training, another trust event is recorded: consent is requested, granted (or denied), terms are specified, and duration is defined.
If a deepfake is created, the Trust Ledger immediately registers a violation: a synthetic artifact claiming to represent you but lacking the provenance chain that would establish legitimate use. Platforms, model creators, and end users can all query the ledger to verify: does this image have trustworthy provenance? Can its consent lineage be traced?
From Ex Post Facto Protection to Real-Time Governance
This shifts us from a world where your likeness is “protected” under law after harm occurs to one where every interaction with your data is auditable in real time. That is the essence of data sovereignty: not possession, but governance through verified trust.
The technical architecture would resemble blockchain-based provenance systems, but with a crucial difference: the goal is not immutability but accountability. Trust events can be revoked, consent can be withdrawn, and the ledger must update to reflect these changes. The system must be both permanent (in the sense that history is recorded) and flexible (in the sense that current permissions are always mutable).
Consider how this would work in practice:
Creation: You upload a photo. The system generates a cryptographic hash and records the trust event, including the image created by [you], the timestamp [now], and the consent status [as defined by you].
Transformation: A platform wants to use your image for model training. It requests consent through the Trust Ledger. You grant time-limited consent: “Yes, for facial recognition research, for 12 months, non-commercial use only.”
Derivation: A model trained on your image generates a synthetic representation of your face. The model automatically queries: Do I have consent to create derivatives? The ledger responds: yes, but only for research, and only until [expiration date].
Violation: Someone creates a deepfake video of you. They cannot query the Trust Ledger (or they do, and it returns: no consent exists). Platforms equipped with verification tools can immediately flag this as potentially inauthentic, not because it looks fake, but because it lacks trust provenance.
Revocation: You withdraw consent. The ledger updates. Models trained on your data must either retrain without your data or seek renewed consent. Platforms must remove synthetic derivatives or face verifiable proof of violation.
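To make this lifecycle concrete, here is a minimal sketch in Python. It is illustrative only: the TrustLedger class, its event names, and the consent fields are invented for this example, and a real system would use asymmetric signatures and a distributed, tamper-evident store rather than an in-memory list.

```python
import hashlib
from datetime import datetime, timedelta, timezone

class TrustLedger:
    """Illustrative, in-memory ledger of trust events keyed by content hash."""

    def __init__(self):
        self.events = []  # append-only history of trust events

    @staticmethod
    def content_hash(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def record(self, event_type: str, subject: str, content_id: str, **details):
        event = {
            "type": event_type,        # e.g. "creation", "consent", "revocation"
            "subject": subject,        # the person whose likeness is at stake
            "content_id": content_id,  # hash of the image or derivative
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "details": details,
        }
        self.events.append(event)
        return event

    def active_consent(self, subject: str, purpose: str) -> bool:
        """Walk the history: the latest grant wins unless later revoked or expired."""
        granted = False
        for e in self.events:
            if e["subject"] != subject:
                continue
            if e["type"] == "consent" and e["details"].get("purpose") == purpose:
                expires = datetime.fromisoformat(e["details"]["expires"])
                granted = datetime.now(timezone.utc) < expires
            if e["type"] == "revocation" and e["details"].get("purpose") == purpose:
                granted = False
        return granted

    def has_provenance(self, content_id: str) -> bool:
        return any(e["content_id"] == content_id for e in self.events)


ledger = TrustLedger()

# Creation: you upload a photo; its hash and origin are recorded.
photo = b"...raw image bytes..."
photo_id = TrustLedger.content_hash(photo)
ledger.record("creation", subject="alice", content_id=photo_id)

# Transformation: a platform requests time-limited, purpose-bound consent.
ledger.record("consent", subject="alice", content_id=photo_id,
              purpose="facial-recognition-research",
              expires=(datetime.now(timezone.utc) + timedelta(days=365)).isoformat())

# Derivation: before generating a synthetic face, the model checks consent.
print(ledger.active_consent("alice", "facial-recognition-research"))  # True

# Violation: a deepfake circulates; its hash has no provenance chain at all.
fake_id = TrustLedger.content_hash(b"...synthetic video bytes...")
print(ledger.has_provenance(fake_id))  # False -> flag as lacking trust provenance

# Revocation: consent is withdrawn, and later queries must reflect it.
ledger.record("revocation", subject="alice", content_id=photo_id,
              purpose="facial-recognition-research")
print(ledger.active_consent("alice", "facial-recognition-research"))  # False
```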
This is not science fiction. The technical components are in place: cryptographic hashing, distributed ledgers, consent management platforms, and content authentication standards such as C2PA (Coalition for Content Provenance and Authenticity). What’s missing is the social and legal infrastructure to make these tools universal, interoperable, and enforceable.
V. The Moral Ledger: When Dignity Meets the Market
Denmark’s move sits uneasily beside another 2025 experiment that exposes the dark side of biometric commodification: Sam Altman’s Worldcoin, which offered cash payments for iris scans. The project promised a universal basic income funded by cryptocurrency, accessible to anyone willing to provide biometric data.
Brazil banned it immediately, declaring biometric barter incompatible with human dignity. The Brazilian data protection authority was blunt: “The human body is not a commodity. Biometric data cannot be traded for access to services or financial benefit.”
Yet the cultural logic persists. People are already trading fragments of selfhood for access, convenience, or cash. Every facial recognition opt-in, every biometric unlock, every personalized service that requires surrendering data is a micro-transaction in the attention economy, where privacy is the currency and surveillance is the cost of admission.
The Jacqmein Parable: The Face That Acted Without Permission
Scott Jacqmein thought he understood the bargain. As an actor, he licensed his likeness for specific commercial uses, including advertisements, promotional materials, and product endorsements. Standard practice in the entertainment industry. But then his face appeared in TikTok ads he never filmed, selling products he never endorsed, performing in scenarios he never agreed to.
The technology was straightforward: someone had used his licensed images to train a generative model, then used that model to create synthetic videos. Legally, the licensee argued, no violation had occurred; Jacqmein had granted permission to use his likeness, and AI generation was simply a new modality of that use.
Jacqmein disagreed. He had sold specific performances, not his face as a perpetual template. He had consented to discrete uses, not unlimited synthetic replication. The legal system offered no recourse because the law had not contemplated this scenario: consent given for analog purposes could be reinterpreted for digital infinity.
This is the parable Denmark forces us to confront. If ownership can be transferred, if consent can be reinterpreted, if the face can be licensed like a software API, can identity ever truly be reclaimed? Or does the moment of commodification, however well-intentioned, mark the point of no return?
The Worldcoin Warning: Selling Your Eyes for Crypto
Worldcoin’s model was even more explicit in its transactionalism. In exchange for an iris scan, users received cryptocurrency and access to a “proof of personhood” system that was supposed to protect them from AI impersonation. The pitch was seductive: your biometric data is valuable, so you should be compensated for it. Privacy is not a right; it’s an asset to be monetized.
The problem, as Brazil recognized, is that once we accept the premise that biometric data can be traded, we lose the moral ground to object to its extraction. If your iris has a market price, why shouldn’t employers require scans as a condition of employment? Why shouldn’t social media platforms require facial biometrics for account creation? Why shouldn’t governments mandate DNA databases for “public safety”?
The answer is the same reason we prohibit the sale of organs or votes: some things are so fundamental to human dignity that placing them in markets degrades both the thing being sold and the humanity of the seller. When we allow biometric barter, we accept the premise that personhood itself is divisible and marketable, that you can sell part of what makes you “you” and remain whole.
VI. The European Experiment: Digital Sovereignty as Geopolitical Strategy
Denmark’s proposal does not exist in isolation. It is part of a broader European project aimed at asserting digital sovereignty against the twin hegemonies of American platform capitalism and Chinese state surveillance. Understanding this geopolitical context is essential to understanding both the promise and the limitations of the copyrighted face.
Europe’s Digital Sovereignty Agenda
Since the GDPR’s implementation in 2018, Europe has positioned itself as the global leader in data protection and digital rights. The strategy is both defensive and aspirational: defensive in protecting European citizens from exploitative data practices, and aspirational in establishing European values, such as privacy, dignity, and consent, as universal norms.
The Digital Markets Act, Digital Services Act, AI Act, and now Denmark’s copyright proposal form a regulatory ecosystem designed to constrain platform power and rebalance the relationship between individuals and corporations. But there is tension between the European model and the reality of global digital infrastructure.
Platforms are American. Cloud infrastructure is American or Chinese. AI development is dominated by American and Chinese firms. Europe is a rule-maker trying to govern a terrain it does not control. The regulations are sophisticated, but enforcement is challenging. Compliance is often performative. The pace of technological change consistently outpaces the pace of legislative adaptation.
France’s Image Rights Precedent
France already uses image rights law to combat non-consensual media, particularly “revenge porn” and other forms of digital harassment. The French approach regards images as an extension of the right to private life, as outlined in Article 9 of the Civil Code. Violators can be held liable for damages, and platforms can be compelled to remove the content.
Yet enforcement remains difficult. Courts are slow. Platforms are uncooperative. And the damage—reputational, psychological, social—often occurs before legal remedy is possible. The French model offers lessons for Denmark: legal rights without enforcement mechanisms are aspirations, not protections.
The Open Future Vision: Democratizing the Data Commons
The Open Future think tank, based in Amsterdam, has proposed something more radical than either copyright or personality rights: a redistributive framework that treats AI training data as a commons benefiting all contributors. Their argument is elegant: AI doesn’t just exploit individual creators; it exploits the collective substrate of human knowledge, culture, and expression.
Every image on the internet contributes to training data. Every text, every video, every artifact of digital culture becomes part of the corpus that makes machine learning possible. The benefits of this collective contribution accrue overwhelmingly to corporations; platform companies and AI developers capture the value, while creators and everyday internet users receive nothing.
Open Future’s solution is to treat training data as a resource subject to shared governance, with mechanisms to ensure that value flows back to contributors. This might take the form of compulsory licensing schemes, revenue-sharing arrangements, or public ownership of foundational models.
The vision is compelling but faces enormous practical challenges. How do you identify and compensate millions of anonymous contributors? How do you prevent free-riding? How do you enforce commons governance across jurisdictional boundaries? How do you balance the collective good against individual rights?
These are not merely technical questions but profound political questions about how we organize digital society. Denmark’s copyright proposal is one answer: individualistic, property-based, and legally conservative. Open Future’s commons framework is another: collective, redistributive, and structurally radical. Both struggle with the same fundamental problem: we are trying to govern 21st-century technologies with 20th-century legal concepts.
VII. The Technical Architecture of Trust: Building Verifiable Sovereignty
If neither pure ownership nor pure commons governance can solve the data sovereignty problem, what can? The answer lies in building technical infrastructure that makes trust verifiable, consent auditable, and provenance traceable—what we might call the architecture of trust.
The Components of Trust Infrastructure
A comprehensive trust architecture requires several interoperating components:
1. Content Provenance Standards
The Coalition for Content Provenance and Authenticity (C2PA) has developed technical standards for embedding cryptographically signed metadata in digital content. When a photo is taken, the camera signs it with information about the device, the timestamp, and the location. When it’s edited, the editing software adds its signature. When it’s published, the platform adds its signature.
The result is a provenance chain, a verifiable history of the content’s journey from creation to publication. Deepfakes and synthetic media would lack this chain, making them identifiable as potentially inauthentic.
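The chaining idea can be sketched without the full C2PA machinery. The example below is not the C2PA format; it only shows how each actor appends a link that commits to both the current content hash and the previous link’s signature, so later tampering breaks verification. The HMAC “signature” with per-actor secrets is a stand-in chosen for brevity; real provenance standards rely on certificate-backed asymmetric signatures.

```python
import hashlib
import hmac

def sign(secret: bytes, payload: bytes) -> str:
    # Stand-in for a real signature; C2PA-style systems use certificate-backed keys.
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def add_link(chain: list, actor: str, secret: bytes, content: bytes, action: str) -> list:
    prev_sig = chain[-1]["signature"] if chain else ""
    content_hash = hashlib.sha256(content).hexdigest()
    payload = content_hash + prev_sig + actor + action
    link = {
        "actor": actor,              # camera, editing software, or publishing platform
        "action": action,            # "captured", "edited", "published"
        "content_hash": content_hash,
        "prev_signature": prev_sig,  # binds this link to the one before it
        "signature": sign(secret, payload.encode()),
    }
    return chain + [link]

def verify(chain: list, secrets: dict) -> bool:
    prev_sig = ""
    for link in chain:
        payload = link["content_hash"] + prev_sig + link["actor"] + link["action"]
        if link["prev_signature"] != prev_sig:
            return False
        if link["signature"] != sign(secrets[link["actor"]], payload.encode()):
            return False
        prev_sig = link["signature"]
    return True

secrets = {"camera": b"cam-key", "editor": b"edit-key", "platform": b"pub-key"}
photo = b"raw sensor data"
chain = add_link([], "camera", secrets["camera"], photo, "captured")
chain = add_link(chain, "editor", secrets["editor"], photo + b" cropped", "edited")
chain = add_link(chain, "platform", secrets["platform"], photo + b" cropped", "published")

print(verify(chain, secrets))   # True: every link in the history checks out
chain[1]["action"] = "generated"
print(verify(chain, secrets))   # False: tampering with the record breaks the chain
```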
But C2PA faces adoption challenges. Camera manufacturers must implement the standards. Software developers must support them. Platforms must display provenance information. And users must learn to check it. Without universal adoption, provenance becomes a weak signal; authentic content might lack it simply because it was created on old equipment.
2. Consent Management Platforms
Consent management today is primitive, relying on binary checkboxes in Terms of Service agreements. Advanced consent management would treat consent as a dynamic, context-specific, continuously adjustable parameter.
Imagine a personal consent management system where you maintain a dashboard of all consents you’ve granted: which platforms can use your data, for what purposes, and until when. You can revoke consent with a click. You can set graduated consent levels: “Yes for facial recognition, no for advertising, maybe for research, subject to review.”
Platforms would query your consent management system in real time: “User 12345 wants to upload this image: do I have permission to store it? To analyze it? To use it for training?” Your system responds with the current consent status. If you revoke consent, platforms must comply or face verifiable proof of violation.
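A rough sketch of what such a real-time consent query might look like, assuming a hypothetical ConsentDashboard with purpose-scoped, revocable grants; none of this corresponds to an existing platform API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentGrant:
    platform: str
    purpose: str                 # e.g. "storage", "analysis", "training", "advertising"
    expires: Optional[datetime]  # None means open-ended until revoked
    revoked: bool = False

@dataclass
class ConsentDashboard:
    subject: str
    grants: list = field(default_factory=list)

    def grant(self, platform: str, purpose: str, expires: Optional[datetime] = None):
        self.grants.append(ConsentGrant(platform, purpose, expires))

    def revoke(self, platform: str, purpose: str):
        for g in self.grants:
            if g.platform == platform and g.purpose == purpose:
                g.revoked = True

    def check(self, platform: str, purpose: str) -> bool:
        """What a platform would call before storing, analyzing, or training."""
        now = datetime.now(timezone.utc)
        return any(
            g.platform == platform and g.purpose == purpose and not g.revoked
            and (g.expires is None or now < g.expires)
            for g in self.grants
        )

dashboard = ConsentDashboard(subject="user-12345")
dashboard.grant("example-platform", "storage")
dashboard.grant("example-platform", "training")

print(dashboard.check("example-platform", "training"))     # True
dashboard.revoke("example-platform", "training")
print(dashboard.check("example-platform", "training"))     # False: revoked
print(dashboard.check("example-platform", "advertising"))  # False: never granted
```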
The technology for this exists, including OAuth protocols, GDPR consent management platforms, and data clean rooms. What’s missing is standardization and interoperability. Every platform has its own consent system. Your consents are siloed, non-portable, and revocable only through platform-specific processes.
3. Trust Verification Networks
To verify that content has legitimate provenance and proper consent, we need verification networks, decentralized systems where anyone can query: Does this content have trustworthy origins?
This is where distributed ledger technology becomes useful, not as a cryptocurrency mechanism but as a public verification layer. When content is created, a hash is recorded on the ledger, along with metadata about its provenance and consent. Anyone can query the ledger to check authenticity.
The ledger doesn’t store the content itself (which would be inefficient and privacy-violating) but only the cryptographic proof that content with specific properties exists and has a particular status of consent. If someone presents a deepfake, verification tools can query the ledger and find: no legitimate provenance exists for this content.
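A few lines make this privacy property concrete: the registry below never stores the media itself, only its hash and consent metadata, yet any verifier holding the content can check whether a matching provenance record exists. The structure and field names are invented for illustration.

```python
import hashlib

# The public ledger holds only hashes and consent metadata, never the media itself.
registry = {}

def register(content: bytes, owner: str, consent_terms: str):
    registry[hashlib.sha256(content).hexdigest()] = {
        "owner": owner,
        "consent": consent_terms,
    }

def verify(content: bytes) -> dict:
    """Any platform or user holding the content can check its provenance status."""
    return registry.get(hashlib.sha256(content).hexdigest(),
                        {"owner": None, "consent": "no legitimate provenance"})

register(b"original portrait bytes", owner="alice", consent_terms="editorial use, 2025 only")

print(verify(b"original portrait bytes"))   # traceable owner and consent terms
print(verify(b"synthetic deepfake bytes"))  # no legitimate provenance -> flag it
```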
4. Regulatory Enforcement Mechanisms
Technical infrastructure means nothing without enforcement. Platforms must be legally required to:
Implement provenance standards
Honor consent revocations immediately
Make consent status queryable
Remove content that lacks verifiable provenance when challenged
Pay meaningful penalties for violations
The DSA and AI Act create some of these obligations, but penalties remain too low and enforcement too weak. A robust trust architecture requires teeth—penalties substantial enough to change platform behavior, enforcement mechanisms fast enough to prevent harm, and legal standing for individuals to sue for violations.
The Interoperability Challenge
The most challenging problem is interoperability. Trust systems only work if they’re universal. A provenance standard adopted by 80% of cameras is a weak signal. A consent management system incompatible with 20% of platforms leaves gaps for exploitation.
This requires something rare in digital infrastructure: coordination across competing interests. It involves collaboration among camera manufacturers, software developers, platforms, regulators, and civil society to agree on common standards and then to implement them effectively.
History suggests this is possible. Email works because everyone has adopted standard protocols (SMTP, IMAP). The web works because everyone adopted HTML. Credit cards work because banks adopted common standards. However, history also shows that these coordination efforts often take decades and frequently require regulatory intervention.
VIII. Beyond the Face: Identity, Authenticity, and the Crisis of the Real
Ultimately, the copyrighted face is a symptom of a more profound crisis: the collapse of authenticity as a reliable signal in digital culture. When any image, video, or voice can be synthesized with sufficient fidelity to fool human perception, we lose the ability to trust our senses. The crisis is not merely legal or technical, but epistemological; a crisis of knowing itself.
The Authenticity Crisis
Humans evolved to trust sensory evidence. Seeing was believing. Hearing was knowing. The photograph, as Roland Barthes argued, became “a certificate of presence,” a proof that something existed, at some moment, before a camera.
Deepfakes shatter this certificate. A video of you committing a crime is no longer evidence of your guilt. A recording of you speaking words is no longer proof of your position. An image of you in a location is no longer confirmation of your presence. The evidentiary value of media collapses.
This affects not just individuals but institutions. Journalism loses credibility when any video can be dismissed as fake. Courts struggle when visual evidence can be synthesized. Elections become vulnerable when candidates can be impersonated with perfect fidelity.
We face what philosopher C. Thi Nguyen calls “epistemic exhaustion”: the cognitive cost of constantly verifying authenticity becomes so high that people give up and retreat into partisan information bubbles, trusting only sources that align with their pre-existing beliefs.
The Human Cost of Synthetic Replication
For individuals, the harm is more intimate and devastating. When deepfake pornography of you circulates online, the violation is not merely to your image but to your sense of self. Your face, your body, your voice, the most intimate markers of your identity, are puppeteered without your consent.
Victims describe a profound disorientation: seeing yourself do things you never did, watching your body perform in scenarios you never experienced. It is a digital dissociation, a rupture between self-perception and external representation.
Psychologically, it resembles trauma, an experience of powerlessness, violation, and loss of control over one’s own narrative. And like trauma, it often carries shame: victims feel complicit in their own violation, as if by existing in digital space they somehow invited exploitation.
The law has struggled to address this harm because it doesn’t fit traditional categories. It’s not quite defamation (because the content doesn’t claim to be real). It’s not quite fraud (because there’s often no financial deception). It’s not quite harassment (because the perpetrator may be unknown). It exists in a legal lacuna, a harm without a name.
Reclaiming Reality: The Politics of Authenticity
Denmark’s proposal, for all its limitations, recognizes something profound: in an age of perfect replication, authenticity becomes a political question. Who has the power to determine what is real? Who can impose their version of reality on digital spaces?
Currently, that power belongs to platforms; they decide what content to promote, suppress, and label as potentially inauthentic. But platforms are not neutral arbiters. They have economic incentives, political pressures, and ideological biases.
A proper data sovereignty framework would redistribute this power. Individuals would have the authority to make authoritative claims about their own identity: “This is me. That is not me. These are my words. Those are not my words.” And institutions, such as platforms, courts, and media, would be obligated to respect those claims unless they can present counter-evidence.
This inverts the current burden. Today, if a deepfake of you appears, you must prove it’s fake, often an impossible task. In a sovereignty framework, the burden would shift: anyone using your likeness would need to prove they have legitimate provenance and consent. The default presumption would rest with you, not with the content.
IX. The Futures of the Self: Three Scenarios
As Denmark’s proposal makes its way through legislative processes, and as other jurisdictions watch to see whether it succeeds or fails, we can imagine three possible futures for digital identity and data sovereignty:
Scenario 1: The Propertized Self (Current Trajectory)
In this future, Denmark’s copyright model becomes the global standard. Faces, voices, and biometric data are treated as intellectual property, governed through ownership and contract law.
The result is a hyper-commodified landscape of identity. Everyone owns their likeness, but that ownership is constantly negotiated away through Terms of Service agreements, employment contracts, and platform requirements. Legal control exists in theory but is practically unexercisable due to power asymmetries.
A parallel market emerges: people with valuable likenesses (celebrities, influencers, attractive individuals) can license their faces for substantial income. Others find their likenesses valueless, owned in principle but worthless in practice.
Deepfakes continue to proliferate because enforcement remains weak, courts remain slow, and platforms remain uncooperative. The copyright provides a legal basis for lawsuits after harm occurs, but it does not provide a mechanism to prevent damage from happening.
This is the pessimistic trajectory, the completion of neoliberal logic, where even the most intimate aspects of personhood are marketized, and the fiction of ownership obscures the reality of exploitation.
Scenario 2: The Trust Commons (Aspirational)
In this future, Europe’s regulatory efforts succeed in creating a robust trust infrastructure. Provenance standards become universal. Consent management becomes real-time and interoperable. Verification networks become ubiquitous.
Platforms are required to adopt these systems, and they actually comply. Creating synthetic media without proper provenance becomes technically difficult and legally risky. Consent violations are detected immediately and penalized severely.
A new social contract emerges: you maintain sovereignty over your digital identity through continuous governance rather than one-time ownership. Your consent is auditable, revocable, and respected. Your likeness cannot be used without your ongoing permission.
AI development continues, but within boundaries. Training data requires consent. Synthetic media requires provenance. The benefits of AI are distributed more equitably because the contributors to training data retain leverage over how their data is used.
This is the optimistic scenario; the realization of digital sovereignty as genuine self-determination, enabled by technology and enforced by law.
Scenario 3: The Authenticity Crisis (Dystopian)
In this future, neither legal protection nor technical infrastructure succeeds in containing synthetic media. Deepfakes become ubiquitous and indistinguishable from authentic content.
Society adapts by abandoning trust in recorded media entirely. Visual evidence becomes legally inadmissible. Journalism loses credibility. Political discourse becomes a pure narrative contest, untethered from factual constraint.
New forms of authentication emerge, such as live witness testimony, in-person verification, and blockchain-based identity systems, but these are expensive, exclusive, and create new forms of inequality. Those who can afford strong authentication are perceived as credible; others are dismissed as potentially synthetic.
Identity itself becomes unstable. When your face can be perfectly replicated, when your voice can be synthesized, when your behavior can be predicted and simulated, what remains of “you”? The self becomes a contested site, perpetually vulnerable to appropriation and replication.
This is the dystopian scenario: the complete collapse of digital trust and the emergence of a two-tiered society, comprising the authenticated and the dubious.
X. Conclusion: The Face in the Mirror
The question Denmark forces us to ask is not ultimately legal, but moral and existential: in an age of perfect replication, what does it mean to be oneself?
If your face can be synthesized, your voice can be cloned, and your behavior can be predicted, if artificial intelligence can generate a version of you that is, in some sense, indistinguishable from the “real” you, then what remains as the irreducible core of identity?
The answer cannot be purely legal. Copyright law can protect your economic interest in your likeness, but it cannot protect the ontological integrity of selfhood. Personality rights can shield your dignity from commercial exploitation, but they cannot shield your sense of self from the existential disorientation of seeing yourself act without your consent.
What we need is a re-enchantment of the human, a recognition that identity is not reducible to data, that personhood is not capturable in models, that the self exceeds its representations. This is not a mystical claim but a pragmatic one. If we treat human identity as nothing more than a collection of biometric data points and behavioral patterns, we have already conceded that replication is theft of the essential self.
The alternative is to recognize that identity is relational, contextual, and continuously enacted, that “you” are not a static essence but a dynamic process of self-creation in relationship with others. Your face matters not because it is property but because it is the interface through which you encounter the world and through which the world encounters you.
Data sovereignty, properly understood, is not about ownership but about maintaining the conditions under which that continuous self-creation remains possible; conditions of consent, dignity, and genuine agency.
Denmark’s copyrighted face is a stepping stone, not a destination. It signals a moral awakening to the crisis of digital identity, even if the proposed solution is inadequate. The future will require something more sophisticated: trust systems that make consent auditable, technical infrastructure that makes provenance verifiable, and social norms that make authenticity valuable.
But underneath all the technology and law, we need something more straightforward and more profound: the collective commitment that human identity deserves protection not because it is valuable but because it is human. Not because it can be monetized, but because it cannot.
Because in the end, sovereignty is not about who owns your face. It’s about whether you can still look in the mirror and recognize the person looking back, whether that face remains yours, not in the legal sense of possession, but in the lived sense of being.
In that recognition lies the difference between data and dignity, between replication and reality, between the ghost and the self. Denmark has opened the conversation. Now we must finish it, not with copyright amendments but with a reimagining of what it means to be human in an age of artificial reproduction.
The copyrighted face may protect your likeness. But only trust can protect your self.