The New Dangers of 'AI Replica' Identity Theft


Creating AI replicas without people's permission poses psychological risks. Who should be allowed to make them?

KEY POINTS

  • A new type of "AI replica identity theft" may cause psychological harms such as stress and anxiety.

  • Permission and informed consent are essential before creating or sharing a person's AI replica.

  • AI replicas can have psychological effects on the real person and their loved ones, which should be considered.

OpenAI, the creator of ChatGPT, has announced that it will launch its GPT Store this week, offering users the ability to sell and share customized AI agents, or GPTs, through the platform. The announcement makes this a critical time to understand the psychological effects and ethical issues surrounding AI replicas.

Virtual AI versions of prominent psychotherapist Esther Perel and psychologist Martin Seligman have been made without their knowledge or permission. A developer created an AI chatbot version of Perel to help him through relationship issues. A former graduate student of Seligman made a "virtual Seligman" in China to help others.

While fascinating and aimed toward the positive goal of spreading healing, the cases of Perel and Seligman raise the specter of a new kind of "AI replica identity theft" or "AI personality theft" that arises when someone develops an AI replica without the person's permission. AI replicas can take the form of an interactive chatbot or digital avatar and have also been called AI clones, virtual avatars, digital twins, AI personalities, or AI personas.

It is essential for developers and companies in this space to consider the psychological and ethical effects of creating a real person's AI replica without their knowledge or consent, even when the aim is good and whether the person is living or dead. When AI replicas are created and used without the person's permission, it crosses psychological, legal, and ethical boundaries; one expert has deemed it akin to "body snatching." The "theft of creative content" or "theft of personality" may trigger legal issues as well.

The technology to create AI replicas of real people is no longer limited to the realm of science fiction. AI models can be trained on personal data or publicly available online content. Platforms are attempting to prevent this data from being used without the permission of creators, but much of this data has already been scraped from the Internet and used to train existing AI models.

Control and Consent Are Key

The problem of "AI replica identity theft" is at the center of Elixir: Digital Immortality, a series of interactive performances I began in 2019, based on a fictitious tech company offering AI-powered digital twins. The play raises the deep psychological and ethical issues that arise when AI replicas operate autonomously, without the knowledge of the humans they are based on.

Since 2019, I have interviewed hundreds of people about their attitudes toward having AI versions of themselves or their loved ones, including how they would feel if one were operating without their permission or oversight. The psychological reaction to lacking control over one's AI replica was universally negative.

For many, AI replicas are digital extensions of one's identity and selfhood, and agency over one's AI replica is sacrosanct. People worry about AI replica misuse, safety, and security, and about the psychological consequences not only for themselves but for their loved ones. A recent research study has documented these fears of doppelgänger-phobia, identity fragmentation, and living memories.

The concept of creating AI replicas of real people is not new, especially in the space of the digital afterlife. In early 2016, Eugenia Kuyda, CEO of Replika, which offers digital conversational companions, created a chatbot of her close friend after he died, using his text messages as training data. James Vlahos, cofounder of HereAfter AI, created an AI version of his father, who had passed away. AI replicas of people who have died are referred to as thanabots, ghostbots, deadbots, or griefbots. The psychological consequences for loved ones who interact with griefbots are unknown at this time.

The promise of digital immortality and of securing a digital legacy are among the incentives to create an AI replica of oneself, but creating one of another person without their knowledge or permission remains problematic. It is vital not to overlook the need for informed consent.

Ethical and Responsible AI Replicas

Developers of AI replicas should consider the following principles:

  1. The use of a person's likeness, identity, and personality, including in AI replicas, should be under the control of the person themselves or a designated decision maker who has been assigned that right. Those who create their own AI replica should retain control over it and be able to monitor its activity. If the person is no longer alive, the right should pass to whoever is in charge of their digital estate.

  2. AI replicas (e.g., chatbots, avatars, and digital twins) should be considered a digital extension of one's identity and self and thus afforded similar protections, respect, and dignity. AI replicas can change one's self-perception, identity, online behavior, and sense of self. The Proteus effect describes how the characteristics of a person's avatar change the person's own behavior in virtual worlds, and it likely applies to AI replicas as well.

  3. AI replicas should disclose to users that they are AI and offer users a chance to opt out of interacting with them. This is an important feature for the trustworthiness of AI replicas in general. For AI replicas of people who are no longer living, these interactions could affect family members and loved ones psychologically and potentially interfere with grieving.

  4. AI replicas come with risks, including misuse and reputational harm, so informed consent should be required in the creation and use of AI replicas. Empirical research on deepfakes suggests that representations of a person, even if not real, still influence people's attitudes toward the person and can even plant false memories of that person in others. Users should be informed of these risks. One researcher has proposed Digital Do Not Reanimate (DDNR) orders.

  5. Creating and sharing an AI replica without the person's permission may cause psychological harm to the portrayed person, so consent from that person, or their representative, is essential. Having a digital version of oneself made and used without permission could produce psychological stress similar to the well-established negative emotional impacts of identity theft and deepfake misuse: fear, stress, anxiety, helplessness, self-blame, vulnerability, and a feeling of being violated.

Development of Regulation

Some are advocating for federal regulation of digital replicas of humans. The NO FAKES Act, a bill proposed in Congress, would protect a person's right to control the use of their image, voice, or visual likeness in a digital replica. This right would pass to heirs and survive for 70 years after the individual's death, similar to copyright law.

Advances in AI replicas offer exciting possibilities, but it is important to stay committed to responsible, ethical, and trustworthy AI.

For my discussion of digital immortality, see Will Digital Immortality Enable Us To Live Forever?

For more on The Psychology of AI, subscribe to my newsletter on Substack or follow me on LinkedIn.

Marlynn Wei, M.D., PLLC Copyright © 2023 All Rights Reserved.

For information about my psychiatry practice, see www.marlynnweimd.com.

References

DeLiema M, Burnes D, Langton L. The Financial and Psychological Impact of Identity Theft Among Older Adults. Innov Aging. 2021 Oct 5;5(4):igab043. doi: 10.1093/geroni/igab043. PMID: 34988295; PMCID: PMC8699092.

Hancock JT, Bailenson JN. The Social Impact of Deepfakes. Cyberpsychology, Behavior, and Social Networking. 2021 Mar;24(3):149-152. doi: 10.1089/cyber.2021.29208.jth.

Lee PYK, Ma NF, Kim IJ, and Yoon D. 2023. Speculating on Risks of AI Clones to Selfhood and Relationships: Doppelganger-phobia, Identity Fragmentation, and Living Memories. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 91 (April 2023), 28 pages. https://doi.org/10.1145/3579524

Lindemann NF. The Ethics of 'Deathbots'. Sci Eng Ethics. 2022 Nov 22;28(6):60. doi: 10.1007/s11948-022-00417-x. PMID: 36417022; PMCID: PMC9684218.
