
The New Dangers of 'AI Replica' Identity Theft


Creating AI replicas without people's permission poses psychological risks. Who should be allowed to make them?

KEY POINTS

  • A new type of "AI replica identity theft" may cause psychological harms such as stress and anxiety.

  • Permission and informed consent are essential before creating or sharing a person's AI replica.

  • AI replicas can have psychological effects on the real person and their loved ones, which should be considered.

ChatGPT’s creator OpenAI has announced it will launch its GPT Store this week, giving users the ability to share and sell customized AI agents, or GPTs, through the platform. This announcement makes it a critical time to understand the psychological effects and ethical issues surrounding AI replicas.

Virtual AI versions of prominent psychotherapist Esther Perel and psychologist Martin Seligman have been made without their knowledge and permission. A developer created an AI chatbot version of Perel to help him through relationship issues. A former graduate student of Seligman made a "virtual Seligman" in China to help others.

While fascinating and aimed toward the positive goal of spreading healing, the cases of Perel and Seligman raise the specter of a new kind of "AI replica identity theft" or "AI personality theft," in which someone develops an AI replica without the person's permission. AI replicas can take the form of an interactive chatbot or digital avatar and have been referred to as AI clones, virtual avatars, digital twins, AI personalities, or AI personas.

It is essential for developers and companies in this space to consider the psychological and ethical effects of creating a real person's AI replica without their knowledge or consent, even when the aim is good and whether the person is living or dead. When AI replicas are created and used without the person's permission, it crosses psychological, legal, and ethical boundaries; one expert has deemed it akin to "body snatching." The "theft of creative content" or "theft of personality" may trigger legal issues as well.

The technology to create AI replicas of real people is no longer limited to the realm of science fiction. AI models can be trained on personal data or publicly available online content. Platforms are attempting to prevent this data from being used without the permission of creators, but much of this data has already been scraped from the Internet and used to train existing AI models.

Control and Consent Are Key

The problem of "AI replica identity theft" is at the center of my play Elixir: Digital Immortality, a series of interactive performances that began in 2019, based on a fictitious tech company offering AI-powered digital twins. The play raises the deep psychological and ethical issues that arise when AI replicas operate autonomously, without the awareness of the humans they are based on.

I have interviewed hundreds of people since 2019 about their attitudes toward having AI versions of themselves or their loved ones, including how they would feel if one were operating without their permission or oversight. The psychological reaction to lacking control over one's AI replica was universally negative.

For many, AI replicas are digital extensions of one's identity and selfhood, and agency over one's AI replica is sacrosanct. People worry about AI replica misuse, safety, and security, and about the psychological consequences not only for themselves but for their loved ones. These fears, described as doppelgänger-phobia, identity fragmentation, and "living memories," have also been documented in a recent research study (Lee et al., 2023).

The concept of creating AI replicas of real people is not new, especially in the space of the digital afterlife. In early 2016, Eugenia Kuyda, CEO of Replika, which offers digital conversational companions, created a chatbot of her close friend after he died, using his text messages as training data. James Vlahos, cofounder of HereAfter AI, created an AI version of his father, who had passed away. AI replicas of people who have died are referred to as thanabots, ghostbots, deadbots, or griefbots. The psychological consequences of loved ones interacting with griefbots are unknown at this time.

The promise of digital immortality and of securing a digital legacy are among the incentives to create an AI replica of oneself, but creating a replica of someone else without their knowledge or permission remains problematic. It is vital not to overlook the need for informed consent.

Ethical and Responsible AI Replicas

The development of AI replicas should consider the following principles:

  1. The use of a person's likeness, identity, and personality, including AI replicas, should be under the control of the person themselves or a designated decision maker who has been assigned that right. Those interested in creating their own AI replica should retain control over it and be able to monitor its activity. If the person is no longer alive, that right should pass to whoever is in charge of their digital estate.

  2. AI replicas (e.g., chatbots, avatars, and digital twins) should be considered a digital extension of one's identity and self, and thus afforded similar protections, respect, and dignity. AI replicas can change one's self-perception, identity, online behavior, and sense of self. The Proteus effect describes how the appearance of a person's avatar changes that person's behavior in virtual worlds, and it likely applies to AI replicas as well.

  3. AI replicas should disclose to users that they are AI and offer users a chance to opt out of interacting with them. This is an important feature for the trustworthiness of AI replicas in general. For AI replicas of those who are no longer living, these interactions could psychologically affect family members and loved ones and potentially interfere with grieving.

  4. AI replicas come with risks, including misuse and reputational harm, so informed consent should be required in the creation and use of AI replicas. Empirical research on deepfakes suggests that representations of a person, even if not real, still influence people's attitudes toward that person and can even plant false memories of them in others. Users should be informed of these risks. One researcher has proposed Digital Do Not Reanimate (DDNR) orders, which would let people specify in advance that they do not wish to be digitally recreated after death.

  5. Creating and sharing an AI replica without the person's permission may psychologically harm the portrayed person, so consent from that person, or their representative, is essential. The negative emotional impacts of identity theft and deepfakes are well established: people whose identities are used without their permission can experience fear, stress, anxiety, helplessness, self-blame, vulnerability, and a sense of violation. (For how a platform might operationalize these principles, see the sketch after this list.)
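To make these principles concrete, here is a minimal sketch of a consent gate and AI disclosure in Python. Every name in it (ReplicaConsent, start_replica_session, the registry) is hypothetical and not drawn from any real platform: the idea is simply that no replica session starts without a consent record (principles 1 and 5), and every session opens with a disclosure and an opt-out (principle 3).

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ReplicaConsent:
    subject_id: str            # the real person the replica portrays
    granted: bool              # explicit, informed consent on file
    steward_id: Optional[str]  # designated decision maker, e.g., a digital-estate executor

class ConsentError(Exception):
    """Raised when no valid consent record exists for a replica."""

def start_replica_session(subject_id: str, registry: Dict[str, ReplicaConsent]) -> str:
    """Gate a replica session on consent, then lead with an AI disclosure."""
    consent = registry.get(subject_id)
    # Principles 1 and 5: refuse to run a replica without consent from the
    # portrayed person or their designated steward.
    if consent is None or not consent.granted:
        raise ConsentError(f"No informed consent on file for {subject_id}.")
    # Principle 3: disclose AI status up front and offer an opt-out.
    return ("You are speaking with an AI replica, not a real person. "
            "Reply STOP at any time to end the interaction.")

# Usage: a session runs only for subjects with consent on file.
registry = {"subject-001": ReplicaConsent("subject-001", granted=True, steward_id=None)}
print(start_replica_session("subject-001", registry))
# start_replica_session("subject-002", registry)  # raises ConsentError
```

A real system would also need to record the scope of consent (which uses, on which platforms) and support revocation, but the gating logic would look much like this.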

Development of Regulation

Some are advocating for federal regulation of digital replicas of humans. The NO FAKES Act, a bill proposed in Congress, would protect a person's right to control the use of their image, voice, or visual likeness in a digital replica. This right would pass to heirs and survive for 70 years after the individual's death, similar to copyright.

Advances in AI replicas offer exciting possibilities, but it is important to stay committed to responsible, ethical, and trustworthy AI.

For my discussion of digital immortality, see Will Digital Immortality Enable Us To Live Forever?

For more on The Psychology of AI, subscribe to my newsletter on Substack or follow me on LinkedIn.

Marlynn Wei, M.D., PLLC Copyright © 2023 All Rights Reserved.

For information about my psychiatry practice, see www.marlynnweimd.com.

References

DeLiema M, Burnes D, Langton L. The Financial and Psychological Impact of Identity Theft Among Older Adults. Innov Aging. 2021 Oct 5;5(4):igab043. doi: 10.1093/geroni/igab043. PMID: 34988295; PMCID: PMC8699092.

Hancock JT, Bailenson JN. The Social Impact of Deepfakes. Cyberpsychology, Behavior, and Social Networking. Mar 2021:149-152. http://doi.org/10.1089/cyber.2021.29208.jth

Lee PYK, Ma NF, Kim IJ, Yoon D. Speculating on Risks of AI Clones to Selfhood and Relationships: Doppelganger-phobia, Identity Fragmentation, and Living Memories. Proc ACM Hum-Comput Interact. 2023;7(CSCW1):Article 91. https://doi.org/10.1145/3579524

Lindemann NF. The Ethics of 'Deathbots'. Sci Eng Ethics. 2022 Nov 22;28(6):60. doi: 10.1007/s11948-022-00417-x. PMID: 36417022; PMCID: PMC9684218.


Will Digital Immortality Enable Us to Live Forever?

KEY POINTS

  • Digital immortality refers to uploading, storing, or transferring a person's personality into a digital entity or cyberspace.

  • As the technology of digital immortality becomes more popular and available, people will find new ways of using AI-generated digital personas.

  • Researchers are currently studying human-AI social relationships, and the psychological impacts are not yet entirely known.

A grieving mother meets her daughter in a virtual world. A Holocaust activist speaks at her own funeral using AI-powered video technology. Nirvana released a "new" AI-generated song, "Drowned in the Sun," decades after the death of Kurt Cobain. Holograms of late music icons perform for live audiences.

These are all real events, not science fiction. Artificial intelligence (AI), including deep learning methods such as neural networks, is making digital immortality more of a reality each day.

Digital immortality refers to the concept of uploading, storing, or transferring a person's personality into something digital, such as a computer, virtual human, digital avatar, or robot.

These entities are referred to as AI clones, replicas, agents, personalities, or digital personas.

Digital immortality was predicted by technologists decades ago.

In 2000, Microsoft researchers Gordon Bell and Jim Gray published the paper “Digital Immortality” and posited that it would become a reality during this century. In the same year, Raymond Kurzweil, an American inventor and computer scientist, predicted that by 2030, we would have the means to scan and upload the human brain and re-create its design electronically.

Timothy Leary, a psychologist known for his advocacy of psychedelics, wrote in Design for Dying, "If you want to immortalize your consciousness, record and digitize." A futuristic version of digital immortality is the hypothetical concept that technology will eventually allow us to upload our consciousness, thoughts, and whole "identity" into a digital brain that can persist indefinitely.

In the television series Upload, humans can “upload" themselves into a virtual afterlife. But the "mind uploading" process is still very much up in the air. Some startups have suggested they have a way to "back up" your brain, but the process would be fatal. There are also thornier philosophical questions about personal identity and whether consciousness is even transferable in the first place.

Other forms of digital immortality exist today. Maggie Savin-Baden and David Burden define digital immortality as an active or passive digital presence of a person after their death and refer to two categories of digital immortality: one-way and two-way.

One-way immortality is the passive "read-only" digital presence of people after death, such as static Facebook memorial pages.

Two-way immortality includes interactive digital personas, such as chatbots based on real people or interactive digital avatars trained on real people's data.

In the Black Mirror episode "Be Right Back," a widow meets and interacts with a virtual version of her late husband, a concept that inspired the programming of "griefbots," or bots that can interact and respond like loved ones after their death.

After losing her close friend in 2015, Eugenia Kuyda built a chatbot at her startup Luka that interacted and responded like her friend. Kuyda later went on to create Replika, a chatbot program that learns and mimics the user's language style. Users can also use Replika to create personalized “AI friends," or digital companions. Griefbots and human-AI social relationships are actively being studied, and the long-term psychological impact of such relationships is not entirely known.
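For context on the underlying technique: a chatbot that mimics a specific person's style is typically built by fine-tuning a language model on that person's message history. Below is a rough sketch using OpenAI's fine-tuning API; the transcript, file name, and persona name are invented for illustration, and, per the consent principles discussed above, such data should only be used with the person's informed permission.

```python
import json
from openai import OpenAI  # OpenAI Python SDK (v1)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Invented transcript: (prompt, reply) pairs drawn from a person's messages,
# gathered only with that person's informed consent.
transcript = [
    ("How was your day?", "Long! But I found a great coffee spot."),
    ("Can you send me that article?", "On it. Link coming in a sec."),
]

# Convert the pairs into the chat-format JSONL that fine-tuning expects.
with open("persona_train.jsonl", "w") as f:
    for user_msg, reply in transcript:
        f.write(json.dumps({"messages": [
            {"role": "system", "content": "Respond in Alex's conversational style."},
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": reply},
        ]}) + "\n")

# Upload the data and start a fine-tuning job; the finished model
# responds in the persona's style.
training_file = client.files.create(file=open("persona_train.jsonl", "rb"),
                                    purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id,
                                     model="gpt-3.5-turbo")
print(job.id)  # poll this job ID until the fine-tuned model is ready
```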

Savin-Baden and Burden have identified three categories of digital immortality "creators":

  1. Memory creators: digital memorialization of someone before and/or after their death, typically not made by the person themselves.

  2. Avatar creators: interactive digital avatars with some ability to conduct a limited conversation, with minimal likelihood of being mistaken for a living human (virtual humanoid).

  3. Persona creators: digitally interactive personas that evolve and can interact with the world, with a high likelihood of being mistaken for a living human (virtual human, chatbot, griefbot).


AI-powered digital personas are not just for the dead.

As the technology for digital immortality expands, companies and celebrities are increasingly finding ways of using their AI selves when they are still alive. Spotify created an immersive AI experience that offered fans a personalized "one-on-one" experience with the digital version of The Weeknd. Digital Deepak, an AI version of wellness figure Deepak Chopra, can lead a meditation. By collaborating with StoryFile, William Shatner made an interactive AI version of himself that can answer questions.

OpenAI is releasing a new feature for creating customized chatbots. These custom GPTs will be available for public download, and can be monetized, through the upcoming GPT Store.
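Custom GPTs are configured through ChatGPT's interface rather than in code, but the basic mechanic, a set of persona instructions prepended to every conversation, can be approximated with the standard chat API. A minimal sketch, with an invented persona:

```python
from openai import OpenAI  # OpenAI Python SDK (v1)

client = OpenAI()

# A custom persona is, at its simplest, standing instructions that are
# sent with every chat. The persona text here is invented for illustration.
persona = ("You are 'StudyBuddy', a patient tutoring persona. "
           "Always make clear that you are an AI assistant.")

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Can you explain spaced repetition?"},
    ],
)
print(reply.choices[0].message.content)
```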

Users want to use their AI-powered personas now.

In my interactive and immersive play Elixir: Digital Immortality, based on a fictional AI tech startup that offers AI-powered digital twins, a surprising user feature request came up repeatedly among audiences: people were curious to meet their AI digital personas while still alive. The idea of leaving an AI version of oneself as a legacy to others received a more lukewarm response, except in the case of educating and preserving stories for future generations.

The question of what to do with AI personas once they are created from individual data (i.e., what companies do with these personas once the creator dies) will likely become an ad hoc consideration, similar to how companies navigated social media profiles of deceased users. Legacy-building through AI personas will be a secondary consideration.

AI clones, replicas, digital twins, and agents will become the new reality.

Users will be motivated to find new ways to use AI personas and integrate them seamlessly into daily life. With the incentive of increased productivity, enhancing AI agency will become necessary for goals like content creation and task automation. This shift toward increased AI agency will raise new ethical and legal questions.

The expansion of digital immortality and popularization of AI agents raise a host of psychological, philosophical, ethical, and legal issues.

  • How will human-AI interactions affect us emotionally? What are the potential uses for having a digital persona while one is alive?

  • Will the creation and use of AI digital personas become socially accepted and integrated into daily life like social media profiles?

  • Will people be alerted to the fact they are interacting with or viewing an AI agent?

  • Who will be responsible for the actions of an AI agent?

  • What are the ethical limits of using one's own AI agent? What about using other people's AI agents without their consent?

  • Who owns and manages the digital persona, its data, and any profits from its activity? Will it become a part of one's digital estate?

  • What regulations should be in place regarding the use of a person's AI persona and its agency after the creator dies?

  • How would data security and privacy be ensured? How does one prevent the unauthorized use of a digital persona, including deepfake videos?

  • Does leaving a digital version of oneself interfere with the grieving process for others? Does having an AI help preserve one's legacy? How will AI digital personas transform the grieving process, and how will they fit into existing cultural rituals?

These questions are increasingly relevant as the technology for interactive AI agents advances and is integrated into our everyday lives.

Copyright © 2023 Marlynn Wei, MD, PLLC

Subscribe to The Psychology of AI by Dr. Marlynn Wei on LinkedIn or Substack.
