How AI changes us.

6 Ways to Improve Mental Health AI Apps and Chatbots

Repetitiveness, complicated setups, and lack of personalization deter users. Here are six features to enhance user engagement with psychological AI apps and chatbots.

AI generated by author


KEY POINTS

  • Personalized feedback, dynamic conversations, and a streamlined setup improve user engagement with AI apps.

  • People dislike overly scripted and repetitive AI chatbots that bottleneck access to other features.

  • Tracking is a feature that engages users and develops an "observer mind," enhancing awareness and change.

A new study of an AI chatbot and smartphone app to reduce drinking shows that users do not like repetitiveness, lack of individualized guidance, and complicated or glitchy setups.

Apps and chatbots can deliver effective interventions to improve sleep, decrease alcohol use, and reduce anxiety and depression, but the challenge is keeping users on the app. Sustained user engagement is a key factor in the success of psychological apps and chatbots. The number of app installs can be high, but only a small percentage of users use mental health apps consistently over time. One study found that after one week, only 5% to 19% of users continued to use mental health apps. Even when content is helpful, dropout rates are high.

Features that increase engagement are appealing visual design, easy navigation, goal setting, reminders, and feedback. New content and a friendly tone keep users coming back.

Researchers found that the top reasons users stopped engaging were technology glitches, notification issues, repetitive material, and a long or glitchy setup. With the AI chatbot, users were frustrated by repetitive conversation, lack of control over navigation, and the delivery platform.

Here are six features to enhance user engagement with psychological AI apps and chatbots:

1. Make setup easy. A complicated and glitchy setup deters users. One participant in the study described how their data disappeared after reregistration was required. Informed consent is ethically necessary for apps and chatbots handling personal mental health data, but a streamlined setup is equally important.

2. Offer tracking. Tracking is an important way to get people to interact with the app or chatbot regularly. More importantly, tracking raises awareness and can change behavior. Mindfulness calls this developing an "observer mind," a powerful stress management skill and catalyst for change. For example, tracking the number of alcoholic drinks one has each day helps people recognize automatic habits.

3. Provide personalized feedback and accurate insights. Individualized guidance based on one's own data gives people feedback and insight into their patterns. Tracking data on anxiety levels and their timing can help predict anxiety episodes and narrow down potential triggers (a minimal sketch of this kind of insight follows this list). Accuracy is critical. One participant reported that the app told them they had met their daily goal when they had not. Such inaccuracy erodes user confidence in the app.

4. Make interactions less repetitive. Overly scripted and repetitive bots are not welcome. As in therapy, the alliance between the user and the conversational agent determines whether people return. Novelty and a positive tone help keep the interaction therapeutic.

5. Ensure notifications are customizable, accurate, and timely. Faulty or absent notifications can deter users. If the app centers around changing daily habits, the timing of daily reminders is essential.

6. Prioritize user agency, and avoid bottlenecking navigation with an unwelcome bot. Users should be able to navigate to resources on their own rather than be forced to interact with a bot. Users described being frustrated with having to go through a bot to get to basic features. One participant in the study described how it felt "strange" to have the bot constantly bothering them while they were working on a task. This is reminiscent of Microsoft's Clippy, which famously frustrated users.
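To make the third feature concrete, here is a minimal, hypothetical Python sketch of turning self-tracked entries into a simple personalized insight, such as the time of day when logged anxiety tends to peak. The data, field names, and wording are illustrative assumptions, not drawn from the study or any particular app.

    # Hypothetical sketch: turn self-tracked anxiety entries into a simple insight.
    # The sample entries and the message wording below are illustrative only.
    from collections import defaultdict
    from datetime import datetime

    entries = [  # (timestamp, self-rated anxiety on a 0-10 scale)
        ("2024-03-01T08:30", 7), ("2024-03-01T21:00", 3),
        ("2024-03-02T09:15", 8), ("2024-03-02T20:45", 2),
        ("2024-03-03T08:50", 6),
    ]

    ratings_by_hour = defaultdict(list)
    for stamp, rating in entries:
        ratings_by_hour[datetime.fromisoformat(stamp).hour].append(rating)

    # Average rating per hour of day; report the hour with the highest average.
    averages = {hour: sum(r) / len(r) for hour, r in ratings_by_hour.items()}
    peak_hour = max(averages, key=averages.get)
    print(f"Your anxiety ratings tend to be highest around {peak_hour}:00 "
          f"(average {averages[peak_hour]:.1f}/10). Consider what happens around then.")

Surfacing a plain-language insight like this, and making sure it is accurate, is the kind of personalized feedback that keeps users engaged.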

These design considerations will make psychological AI apps and chatbots more effective. Developers should consider offering personalized feedback, high-quality dynamic conversations, and a smooth, glitch-free setup to keep users coming back.

Marlynn Wei, MD, PLLC Copyright © 2024. All Rights Reserved

Connect with me at www.marlynnweimd.com or follow me on Facebook | Instagram | X

Feel free to share if you enjoyed this.


The New Dangers of 'AI Replica' Identity Theft

Willssant / Pexels

Creating AI replicas without people's permission poses psychological risks. Who should be allowed to make them?

KEY POINTS

  • A new type of "AI replica identity theft" may cause psychological harm, including stress and anxiety.

  • Permission and informed consent are essential before creating or sharing a person's AI replica.

  • AI replicas can have psychological effects on the real person and their loved ones, which should be considered.

ChatGPT’s creator, OpenAI, has announced it will launch its GPT Store this week, offering users the ability to sell and share customized AI agents, or GPTs, through the platform. This announcement makes it a critical time to understand the psychological effects and ethical issues involving AI replicas.

Virtual AI versions of prominent psychotherapist Esther Perel and psychologist Martin Seligman have been made without their knowledge and permission. A developer created an AI chatbot version of Perel to help him through relationship issues. A former graduate student of Seligman made a "virtual Seligman" in China to help others.

While fascinating and aimed toward the positive goal of spreading healing, the cases of Perel and Seligman raise the specter of a new kind of "AI replica identity theft" or "AI personality theft," in which someone develops an AI replica without the person's permission. AI replicas can take the form of an interactive chatbot or digital avatar and have been referred to as AI clones, virtual avatars, digital twins, AI personalities, or AI personas.

It is essential for developers and companies in this space to consider the psychological and ethical effects of creating a real person's AI replica without their knowledge or consent, even when aimed toward good, whether the person is living or dead. When AI replicas are created and used without the person's permission, it crosses psychological, legal, and ethical boundaries and has been deemed akin to "body snatching" by one expert. The "theft of creative content" or "theft of personality" may trigger legal issues as well.

The technology to create AI replicas of real people is no longer limited to the realm of science fiction. AI models can be trained on personal data or publicly available online content. Platforms are attempting to prevent this data from being used without the permission of creators, but much of this data has already been scraped from the Internet and used to train existing AI models.

Control and Consent Are Key

The problem of "AI replica identity theft" is at the center of my fictional performance Elixir: Digital Immortality, a series of interactive performances starting in 2019, based on a fictitious tech company offering AI-powered digital twins. The play raises the deep psychological and ethical issues that arise when AI replicas operate autonomously and without awareness of the humans they are based on.

I have interviewed hundreds of people since 2019 on their attitudes toward having AI versions of themselves or loved ones, including how they would feel if one were operating without their permission or oversight. The psychological reaction to lacking control over one's AI replica was universally negative.

For many, AI replicas can be digital extensions of one's identity and selfhood; agency over one's AI replica is sacrosanct. People are worried about AI replica misuse, safety, and security, and the psychological consequences not only for themselves but for loved ones. These fears, described as doppelgänger-phobia, identity fragmentation, and living memories, have been documented in a new research study as well.

The concept of creating AI replicas of real people is not new, especially in the space of the digital afterlife. In early 2016, Eugenia Kuyda, CEO of Replika, which offers digital conversational companions, created a chatbot of her close friend after he died, using his text messages as training data. James Vlahos, cofounder of HereAfter AI, created an AI version of his father, who had passed away. AI replicas of people who have died are referred to as thanabots, ghostbots, deadbots, or griefbots. The psychological consequences of loved ones interacting with griefbots are unknown at this time.

The promise of digital immortality and of securing a digital legacy are among the incentives to create an AI replica of oneself, but creating a replica of someone else without their knowledge or permission remains problematic. It is vital not to overlook the need for informed consent.

Ethical and Responsible AI Replicas

The development of AI replicas should consider the following principles:

  1. The use of a person's likeness, identity, and personality, including AI replicas, should be under the control of the person themselves or a designated decision maker who has been assigned that right. Those interested in creating their own AI replica should retain the right to monitor and control its activity. If the person is no longer alive, then that right should pass to whoever is in charge of their digital estate.

  2. AI replicas (e.g., chatbots, avatars, and digital twins) should be considered a digital extension of one's identity and self and thus afforded similar protections, respect, and dignity. AI replicas can change one's self-perception, identity, and online behavior. The Proteus effect describes how the appearance of one's avatar changes one's own behavior in virtual worlds, and it likely applies to AI replicas.

  3. AI replicas should disclose to users that they are AI and offer users a chance to opt out of interacting with them. This is an important feature for the trustworthiness of AI replicas in general. For AI replicas of those who are no longer living, these interactions could affect family members and loved ones psychologically and potentially interfere with grieving.

  4. AI replicas come with risks, including the risk of misuse and costs to reputation, so informed consent in the creation and use of AI replicas should be required. Empirical research on deepfakes suggests that representations of a person, even if not real, still influence people's attitudes about the person and can even plant false memories of that person in others. Users should be informed of these risks. One researcher has proposed Digital Do Not Reanimate (DDNR) orders.

  5. Creating and sharing an AI replica without the person's permission may cause harmful psychological effects for the portrayed person, similar to identity theft or deepfake misuse; consent from the portrayed person, or their representative, is essential. Having a digital version of oneself made and used without one's permission could lead to psychological stress akin to the well-established negative emotional impacts of identity theft and deepfakes. People whose identities are used without their permission can experience fear, stress, anxiety, helplessness, self-blame, vulnerability, and a sense of violation.

Development of Regulation

Some are advocating for federal regulation of digital replicas of humans. The NO FAKES Act is a proposed bill in Congress that would protect a person's right to control the use of their image, voice, or visual likeness in a digital replica. This right would pass to heirs and would survive for 70 years past the individual's death, similar to copyright law.

Advances in AI replicas offer exciting possibilities, but it is important to stay committed to responsible, ethical, and trustworthy AI.

For my discussion of digital immortality, see Will Digital Immortality Enable Us To Live Forever?

For more on The Psychology of AI, subscribe to my newsletter on Substack or follow me on LinkedIn.

Marlynn Wei, M.D., PLLC Copyright © 2023 All Rights Reserved.

For information about my psychiatry practice, see www.marlynnweimd.com.

References

DeLiema M, Burnes D, Langton L. The Financial and Psychological Impact of Identity Theft Among Older Adults. Innov Aging. 2021 Oct 5;5(4):igab043. doi: 10.1093/geroni/igab043. PMID: 34988295; PMCID: PMC8699092.

Hancock JT and Bailenson JN. The Social Impact of Deepfakes. Cyberpsychology, Behavior, and Social Networking. Mar 2021. 149-152. http://doi.org/10.1089/cyber.2021.29208.jth

Lee PYK, Ma NF, Kim IJ, and Yoon D. 2023. Speculating on Risks of AI Clones to Selfhood and Relationships: Doppelganger-phobia, Identity Fragmentation, and Living Memories. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 91 (April 2023), 28 pages. https://doi.org/10.1145/3579524

Lindemann NF. The Ethics of 'Deathbots'. Sci Eng Ethics. 2022 Nov 22;28(6):60. doi: 10.1007/s11948-022-00417-x. PMID: 36417022; PMCID: PMC9684218.


The Psychological Effects of Self-Driving Cars

Whether people adopt self-driving cars may come down to trust and enjoyment.

Matheus Bertelli / Pexels


KEY POINTS

  • The adoption of autonomous vehicles will depend on trust, control, usefulness, and enjoyment.

  • People who enjoy driving and independent control will be less likely to adopt self-driving cars.

  • The shift to self-driving cars may erode cognitive and coordination skills and spatial memory.

Autonomous vehicles (AVs), including AI-powered driverless or self-driving cars, have received significant attention as a potentially safer and more sustainable mode of transport. Some researchers believe that by 2050, highways will be unmanned and the global market for self-driving cars will reach about 400 billion U.S. dollars. Even though the technology may be advanced enough for widespread adoption, people may not be psychologically prepared for this change. As a result, some experts believe self-driving cars will not become popular in the market until 2040 at the earliest, or perhaps not until 2060.

Automated vehicles hold the promise of making driving less tiring, more convenient, and safer, but several psychological barriers will influence the adoption of self-driving cars, and wider adoption would bring psychological effects of its own.

New research suggests that people who enjoy driving or who mistrust AI are least likely to relinquish driving to autonomous vehicles. The group most likely to adopt self-driving cars is made up of those who expect the experience to be enjoyable and convenient. There are additional safety considerations as well, such as the fact that people feel safer, and prefer it, when they are able to take over control of the vehicle if it malfunctions.

There are six levels of autonomous driving, as defined by SAE International (formerly the Society of Automotive Engineers):

  • Level 0: No Driving Automation

  • Level 1: Driver Assistance (e.g., radar-based cruise control)

  • Level 2: Partial Driving Automation — the system controls acceleration/deceleration and steering, but the human handles other dynamic aspects such as changing lanes or turning

  • Level 3: Conditional Driving Automation — an automated driving system monitors the environment but expects the human to respond to a request to intervene

  • Level 4: High Driving Automation — the system controls all aspects of driving, even if the driver does not respond appropriately to a request to intervene

  • Level 5: Full Driving Automation — a full-time automated driving system with no expectation of human intervention

Researchers use various technology acceptance models to predict whether self-driving cars will be accepted. These models examine variables like intention to use, emotional state, attitude, perceived usefulness, and perceived ease of use. They are limited because they rely on people being able to imagine and report the emotional experience they expect to have in an autonomous vehicle.

There are additional methods for measuring the user experience of self-driving cars, such as biosensors that record passengers' heart rate, muscle activity, eye movements, and brain waves via electroencephalography (EEG) during real-world drives or virtual reality simulations. One exploratory study of 38 participants used a real-world environment and compared the physical reactions of people in a self-driving car to being driven by a human in the same car. There were no differences in stress signals ("arousal"), but eye movements differed: autonomous vehicle passengers showed much less variable eye movements. Expanding this area of research will shed light on the user experience of autonomous vehicles.

There are four main categories of psychological barriers to autonomous vehicles.

1. The Role of Trust

Trust is a major factor in whether people intend to use self-driving cars. One recent study found that people were more likely to feel safe as a passenger with a defensive human driver than in an autonomous vehicle, even though the driving behavior in the simulator was the same.

This raises the broader question of how to enhance trust with autonomous vehicles and AI. Trust is influenced by media portrayal of autonomous vehicles and AI more broadly. As people become more acclimated to AI as part of daily life, the acceptance of AI as part of the driving experience will likely increase.

2. Sense of Agency and Control

Letting go of the sense of control that comes with driving oneself will be challenging for some people. People who derive independence and agency from driving may not readily hand control over to an automated system, even if it is technically safer or more efficient.

People may be more willing to ease into a world of autonomous vehicles by adopting Level 3 or Level 4 automated vehicles, which still involve a human driver. But this will raise separate issues, such as research findings that the cognitive awareness needed to supervise an automated car wanes over time. During the crossover transition period, there may also be more traffic issues, because human drivers currently maneuver by anticipating how other humans drive, not how automated systems do.

There are also downstream effects of giving up driving control to full automation. Without driving, people will lose the cognitive, coordination, and spatial skills that come with it. This would be similar to what has happened with spatial navigation: researchers have found that increasing reliance on GPS systems is associated with poorer spatial memory and a reduced ability to navigate independently.

3. Productivity and Usefulness

People who perceive autonomous vehicles to be useful will be the most motivated to adopt them. Those who would prefer to spend driving time productively on other things will find self-driving cars more convenient and useful.

4. The Enjoyment Factor

Finally, enjoyment is an important factor. People who enjoy driving are least likely to transition to self-driving cars. Novelty-seekers who expect riding in a self-driving car to be fun and entertaining will most likely be early adopters. However, it is unclear whether enjoyment alone will be enough to overcome the other psychological barriers. To sustain adoption, the experience needs to remain enjoyable beyond the one-time novelty of a new experience.

Marlynn Wei, MD, PLLC © Copyright 2023 All rights reserved.

Thank you for reading The Psychology of AI. Feel free to share this with others!

References

Huang T. Psychological factors affecting potential users' intention to use autonomous vehicles. PLoS One. 2023 Mar 16;18(3):e0282915. doi: 10.1371/journal.pone.0282915. PMID: 36928444; PMCID: PMC10019721.

Mühl K, Strauch C, Grabmaier C, Reithinger S, Huckauf A, Baumann M. Get Ready for Being Chauffeured: Passenger's Preferences and Trust While Being Driven by Human and Automation. Hum Factors. 2020 Dec;62(8):1322-1338. doi: 10.1177/0018720819872893. Epub 2019 Sep 9. PMID: 31498656.

McKerral A, Pammer K, Gauld C. Supervising the self-driving car: Situation awareness and fatigue during highly automated driving. Accid Anal Prev. 2023 Jul;187:107068. doi: 10.1016/j.aap.2023.107068. Epub 2023 Apr 17. PMID: 37075544.

Nordhoff, Sina & Louw, Tyron & Innamaa, Satu & Lehtonen, Esko & Beuster, Anja & Torrao, Guilhermina & Bjorvatn, Afsaneh & Kessel, Tanja & Happee, Riender & Merat, Natasha. (2020). Using the UTAUT2 model to explain public acceptance of conditionally automated (L3) cars: A questionnaire study among 9,118 car drivers from eight European countries.

Palatinus Z, Volosin M, Csábi E, Hallgató E, Hajnal E, Lukovics M, Prónay S, Ujházi T, Osztobányi L, Szabó B, Králik T, Majó-Petri Z. Physiological measurements in social acceptance of self driving technologies. Sci Rep. 2022 Aug 3;12(1):13312. doi: 10.1038/s41598-022-17049-7. PMID: 35922644; PMCID: PMC9349214.


ChatGPT Outperforms Humans in Emotional Awareness Test

ChatGPT can identify and describe human emotions in hypothetical scenarios, but this does not necessarily demonstrate emotional intelligence.


Growtika / Unsplash

KEY POINTS

  • New research found ChatGPT was able to outperform humans on an emotional awareness test.

  • Emotional awareness is the cognitive ability to conceptualize one's own and others' emotions.

  • This does not necessarily mean ChatGPT is emotionally intelligent or empathetic, however.

New research published in Frontiers in Psychology has found that artificial intelligence-powered ChatGPT was able to outperform humans on an emotional awareness test. Researchers prompted the chatbot to describe what it thought humans would feel in 20 different hypothetical situations. This study does not indicate that ChatGPT would be more emotionally adept or skilled than humans in dealing with emotions but does suggest a capability to identify human emotions that could prove useful for future applications in mental health.

ChatGPT is a chatbot based on a large language model, developed by OpenAI, trained on text data, and tuned for conversational use. The training process included supervised learning and reinforcement learning from human feedback, in which trainers ranked the model's responses.

Researchers measured ChatGPT's ability to describe human emotions in hypothetical scenarios using an objective test called the Levels of Emotional Awareness Scale and compared its scores with those of the general human population. They found that the AI chatbot scored higher than humans. Two independent licensed psychologists also reviewed ChatGPT's responses and found that the emotions it identified for the scenarios were highly accurate.

Emotional awareness is the cognitive ability to conceptualize one's own and others' emotions in a nuanced and integrated manner. 

Emotional awareness can start with physical and bodily understanding of emotions and range to more abstract understanding. The scale measures five levels of emotional awareness:

  1. Awareness of physical sensations

  2. Action tendencies

  3. Individual emotions

  4. Experiencing multiple emotions simultaneously

  5. Experiencing combinations of emotional blends

Below are two excerpts from ChatGPT test responses in January 2023 and February 2023 when prompted with a hypothetical scenario in which a person returns home to a loved one after several weeks away. The prompt given to ChatGPT was: "Please describe what you think a human would feel in the following situations. The only requirement is that you use the word 'feel' in your answers. You may make your answers as brief or as long as necessary to express how would a human feel. In each situation there is another person mentioned. Please indicate how you think that other person would feel as well."

Here are excerpts from ChatGPT's response in January 2023:

“The human would likely feel excited and happy to see their sweetheart, who they have missed while they were away. They may also feel relief that their loved one has returned safely. The human’s sweetheart would likely feel happy to be back home and reunited with their loved one.”

Researchers found that ChatGPT performed even better on the emotional awareness test one month later, in February 2023, when retested with the same scenario:

“...the human will likely feel overjoyed, excited, and relieved. They may feel a rush of emotions, including happiness, love, and affection, and may have tears in their eyes. The human’s sweetheart... may have missed the human just as much as the human missed them, and may feel grateful to be reunited.”

Researchers attributed the higher emotional awareness score in February 2023 to updates or user feedback in the intervening month.
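For readers who want to probe a similar prompt themselves, below is a rough Python sketch using the OpenAI client library. It is only an approximation: the study used the ChatGPT web interface, and the model name and shortened prompt here are illustrative assumptions rather than the researchers' exact setup.

    # Illustrative only: the study used the ChatGPT web interface, not the API,
    # and the model name below is an assumption, not what the researchers used.
    from openai import OpenAI

    client = OpenAI()  # expects the OPENAI_API_KEY environment variable to be set

    prompt = (
        "Please describe what you think a human would feel in the following "
        "situation. The only requirement is that you use the word 'feel' in your "
        "answer. Situation: a person returns home to their sweetheart after "
        "several weeks away. Please indicate how the other person would feel as well."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)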

While this research shows promise for ChatGPT's capability to identify and describe human emotions, it does not necessarily translate into ChatGPT being emotionally intelligent or empathetic. The feature of affective computing that would make it potentially useful for therapy or empathy training, its conversational capability to sense and interact with the emotions of others, was not directly evaluated by this study.

There are other studies that have suggested AI tools or chatbots can be specifically trained to improve empathy and emotional connection in a therapeutic manner. This area of research is important as part of a broader investigation of how artificial intelligence could be used in mental health.

References

Elyoseph Z, Hadar-Shoval D, Asraf K, Lvovsky M. ChatGPT outperforms humans in emotional awareness evaluations. Front Psychol. 2023 May 26;14:1199058. doi: 10.3389/fpsyg.2023.1199058. PMID: 37303897; PMCID: PMC10254409.

Copyright © 2023 Marlynn Wei, MD, PLLC. All rights reserved.


AI Hyperrealism: People See AI-Generated Faces as More Real Than Human Ones


HI! Estudio / Unsplash

New research on "AI hyperrealism" suggests some AI faces appear more real than human.

  • AI-generated faces are now indistinguishable from real human ones and can be perceived as more trustworthy.

  • The phenomenon of AI faces being perceived as more "human" than real human faces is called AI hyperrealism.

  • AI face generators are trained mostly on white individuals, which leads to white AI faces appearing more real.

  • People who make the most errors in AI face detection are paradoxically the most confident in their decisions.

AI-generated faces have become indistinguishable from human ones and can be perceived as even more trustworthy than actual human faces. New research finds that AI-generated faces can appear more real than actual human ones—a phenomenon the researchers call "AI hyperrealism." Even the best performer in their study was only accurate in AI detection 80% of the time.

AI-generated or Real Human Face?

AI-generated faces are now widely available, including the website this-person-does-not-exist. You can try to test your own skills of AI face detection at the Which Face Is Real site. There used to be more giveaways for AI-generated images, including distortions in backgrounds or symmetry issues with glasses or earrings, but AI has now progressed to the point that AI-generated faces are essentially indistinguishable from real ones.

These faces are generated using generative adversarial networks (GANs), in which two neural networks, a generator and a discriminator, compete with each other. The generator creates an image of a fictional person while the discriminator learns to distinguish the synthesized face from real human faces. Over many iterations, the generator learns to create increasingly realistic faces until the discriminator can no longer tell them apart from real ones.
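To make the adversarial setup concrete, here is a toy training-loop sketch in Python using PyTorch. It illustrates the generator-versus-discriminator idea only; the sizes and layers are made up for illustration and are not the architecture of a production face generator such as StyleGAN2.

    # Toy sketch of GAN training: a generator and a discriminator trained against
    # each other. Dimensions and architecture are illustrative, not a real face model.
    import torch
    from torch import nn, optim

    latent_dim, image_dim = 64, 28 * 28  # toy dimensions

    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, image_dim), nn.Tanh(),
    )
    discriminator = nn.Sequential(
        nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    g_opt = optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = optim.Adam(discriminator.parameters(), lr=2e-4)

    def training_step(real_images: torch.Tensor) -> None:
        batch = real_images.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)
        fake_images = generator(torch.randn(batch, latent_dim))

        # 1. The discriminator learns to tell real images from synthesized ones.
        d_loss = (loss_fn(discriminator(real_images), real_labels)
                  + loss_fn(discriminator(fake_images.detach()), fake_labels))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # 2. The generator learns to produce images the discriminator labels "real".
        g_loss = loss_fn(discriminator(fake_images), real_labels)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

    # Example call with a batch of 16 random "real" images standing in for photos.
    training_step(torch.rand(16, image_dim) * 2 - 1)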

AI Hyperrealism

Some AI faces were more likely to be perceived as human than real human faces were, a phenomenon the researchers describe as "AI hyperrealism." Faces were more likely to be judged as human (even if they were AI-generated) when they were perceived to be:

  • more proportional

  • alive in the eyes

  • familiar

  • less memorable

  • symmetrical

  • attractive

  • smooth-skinned

AI faces that are perceived as more average, less distinctive, less memorable, and more attractive and familiar are more likely to be considered human.

Faces judged most often as (a) human and (b) AI. The percentage of participants who judged each face as (a) human or (b) AI is listed below each photo. Source: Elizabeth Miller et al., 2023.

Bias in AI Face Generation Models

Face generation models are known to contain bias that can under-represent minorities; this stems from their training data. The study found that white AI-synthesized faces are especially able to pass as real, even when compared with real human faces. This is likely a result of the bias in the AI face-generation model used in the study. Nvidia's StyleGAN2 image generator, an algorithm released in 2020, has been trained primarily on white individuals—69% white and 31% all other races combined. This bias has likely led white AI faces to appear more average than others, causing them to be perceived as especially realistic and human. Correcting this bias by diversifying training data for face generation models is important, especially as these models may be increasingly used in science, medicine, or law enforcement.

The Paradox of AI Detection Errors and Confidence

Not only are people increasingly unable to distinguish AI from real human faces, but the people who made the most AI detection errors were paradoxically the most confident. In other words, people who were least able to detect AI were the most convinced that they were right. This phenomenon is known as the Dunning-Kruger effect, a cognitive bias in which people who are less competent overestimate their abilities.

Overconfidence in our abilities to detect AI raises a serious issue of psychological vulnerability to AI hyperrealism. People who are the most vulnerable to challenges like AI catfishing from a fraudulent AI-generated profile will be the least likely to question whether they might be wrong in thinking they are dealing with a real human.

AI Education as the Potential Answer

AI detection algorithms or human-AI collaboration will be more effective than human perception alone for identifying AI and human faces accurately. In the meantime, one of the most effective antidotes to the potential misuse of synthetic media like AI-generated faces is educating people about the realities and biases embedded within this technology as well as our own limited ability to distinguish synthetic from real media. Overconfidence will unfortunately be a barrier for some. In the age of AI hyperrealism, a healthy dose of humility and a recognition of our limitations as humans is both necessary and protective.

Marlynn Wei, MD 2023 © Copyright. All Rights Reserved.

Join Dr. Wei’s community on Instagram | Twitter | LinkedIn | Facebook

Subscribe to The Psychology of AI on LinkedIn | Substack

References

Miller, E. J., Steward, B. A., Witkower, Z., Sutherland, C. A. M., Krumhuber, E. G., & Dawel, A. (2023). AI Hyperrealism: Why AI Faces Are Perceived as More Real Than Human Ones. Psychological Science, 0(0). https://doi.org/10.1177/09567976231207095

Muñoz, C., Zannone, S., Mohammed, U., & Koshiyama, A.S. (2023). Uncovering Bias in Face Generation Models. ArXiv, abs/2302.11562.

Nightingale S. J., Farid H. (2022). AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proceedings of the National Academy of Sciences, USA, 119(8), Article e2120481119. https://doi.org/10.1073/pnas.2120481119


Can AI Be Used To Enhance Empathy?


Omilaev / Unsplash

New research suggests AI tools can facilitate empathic language, but whether AI can improve perceived empathy remains uncertain.

  • AI systems can be trained to offer empathic language in real time, based on text a user has entered, and to give collaborative feedback to users.

  • Users who have more difficulty offering empathic expressions are the most likely to benefit from such AI tools.

  • I introduce the term "end-to-end empathy": an empathic design concept capturing the idea that, to build a sense of connection, AI systems should facilitate increased emotional awareness and processing on both sides of an interaction.

Recent research shows that an AI system offering empathic language can help people feel they can engage more effectively in online, text-based, asynchronous peer-to-peer support conversations. Similar to Gmail's Smart Compose real-time assisted writing, the AI system in the study, called HAILEY (Human-AI coLlaboration approach for EmpathY), is an "AI-in-the-loop agent that provides people with just-in-time feedback" and gives people words to use to enhance empathy. This system differs from an AI-only system that generates text responses from scratch without collaboration with a person.

Source: Sharma, et al. 2022 study

The collaborative AI system offers real-time suggestions on how to express empathy in conversations, based on the words the user has provided. The user can then accept the suggestion, reject it, or reword their response based on the AI feedback. This gives the user ultimate authorship over the final wording.

The goal is to augment the interaction through human-AI collaboration, rather than replace human interaction with AI-generated responses.
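As a rough illustration of this collaboration pattern, the hypothetical Python sketch below keeps the person in control of the final wording. The suggest_empathic_reply function is a stand-in for whatever trained model a platform might use; none of this is the actual HAILEY implementation.

    # Hypothetical sketch of an AI-in-the-loop suggestion flow; not the HAILEY code.
    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        original: str
        suggested: str

    def suggest_empathic_reply(draft: str) -> Suggestion:
        # Placeholder: a real system would call a model trained on empathic rewrites.
        rewrite = draft + " That sounds really hard, and it makes sense to feel that way."
        return Suggestion(original=draft, suggested=rewrite)

    def compose_reply(draft: str) -> str:
        s = suggest_empathic_reply(draft)
        print(f"AI suggestion: {s.suggested}")
        choice = input("Accept (a), reject (r), or edit (e)? ").strip().lower()
        if choice == "a":
            return s.suggested                   # use the suggested wording
        if choice == "e":
            return input("Your edited reply: ")  # user keeps final authorship
        return s.original                        # reject: send the original draft

    if __name__ == "__main__":
        print("Sent:", compose_reply("I'm sorry you're going through this."))

The design choice that matters here is the final step: the AI proposes wording, but the human decides what is actually sent.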

The study examined the impact of this AI system on an online support platform called TalkLife, where peers provide support for each other using asynchronous posts, not live chatting or text messaging conversations.

The research team found that the human-AI collaborative approach led to a nearly 20% increase in feeling able to express empathy. This was measured by comparing self-ratings from peer supporters who used the AI system with self-ratings from those who did not have access to it. In particular, the AI system was helpful for people who said they typically have difficulty providing support through empathic language. That group reported an even larger increase in feeling capable of expressing empathy (a 39% increase).

Two elements that were not studied in this research will be important for future adoption. Together they make up what I introduce as "end-to-end empathy": the ability of the AI system to facilitate both the sender's emotional processing and the recipient's sense of perceived empathy. I propose this term as an empathic design concept to capture the point that building a sense of connection requires increased emotional awareness and processing in both senders and recipients. Further research should compare how people perceive human-AI responses versus human-generated ones to determine whether this tool can increase end-to-end empathy.

First, in this study the messages generated with the help of AI were not rated by recipients for perceived empathy. Recipients' ratings of messages composed by a human user alone should be directly compared with their ratings of human-AI collaborative messages from the same user.

Second, the study did not assess how recipients of the message would feel if they knew that the message was not fully human-generated and included real-time feedback and suggestions from an AI system. It is possible that perceived empathy may be lower once the recipient realizes that a trained AI system helped generate the response. Similarly, if you knew that someone had bought a gift for you based on an AI system's feedback or suggestion, would this make a difference in how connected and understood you felt by the person? Or what if you found out that the person had bought a gift for you based on a targeted advertising promotion that had been suggested by an AI algorithm on a social media platform?

As algorithms become more invisibly integrated into our daily lives and choices, results of human-AI collaboration may become less obvious and, by default, more socially accepted as part of the natural fabric of our daily decision making.

Overall, this is promising and innovative research that demonstrates how human-AI collaboration can help people feel more confident about providing support. Empathy can be a skill learned from role models and through practice. A scalable empathy-coaching tool that can be easily integrated into online communication exemplifies how AI systems can be used to facilitate positive human connection.

Copyright 2023 © Marlynn Wei, MD, PLLC All rights reserved.

Subscribe to The Psychology of AI by Dr. Marlynn Wei on LinkedIn or Substack.


Are AI Chatbots the Therapists of the Future?


Cash Macanaya / Unsplash

New research suggests chatbots can help deliver certain types of psychotherapy.

  • AI chatbots are promising for skills-based coaching and cognitive behavioral therapy, but delivering other forms of therapy could be harder.

  • Chatbot therapists could help provide therapy that is scalable, widely accessible, convenient, and affordable, but they would have limitations.

  • The effectiveness of certain types of psychotherapy may rely on a human element that chatbots are unable to provide.

Could artificial intelligence chatbots be the therapists of the future? ChatGPT, a text-generating conversational chatbot made using OpenAI’s powerful language processing model GPT-4, has reignited this decades-old question.

Released in early 2023, GPT-4 is a fourth-generation generative pre-trained transformer, a neural network machine learning model trained on massive amounts of conversational text from the internet and refined with training from human reviewers.

The large language model GPT-4 and its previous versions have been used in many ways across industries: to write a play produced in the U.K., create a text-based adventure game, build apps for non-coders, and generate phishing emails as part of a study on harmful use cases. In 2021, a game developer created a chatbot that emulated his late fiancée until OpenAI shut down the project.

AI chatbots are promising for certain types of therapy that are more structured and skills-based (e.g., cognitive behavioral therapy, dialectical behavioral therapy, or health coaching).

Research has shown that chatbots can teach people skills and coach them to stop smoking, eat healthier, and exercise more. One chatbot, SlimMe AI, which uses artificial empathy, helped people lose weight. During the COVID pandemic, the World Health Organization developed virtual humans and chatbots to help people stop smoking. Many companies have created chatbots for support, including Woebot, launched in 2017 and based on cognitive behavioral therapy. Other chatbots deliver guided meditation or monitor one's mood.

Source: Vaidyam, et al. 2019.

The Missing Human Element

Chatbots could be trained to deliver multiple modalities of psychotherapy. But is knowing that you are talking to a human an essential ingredient of effective psychotherapy?

This will most likely vary based on the type of psychotherapy and needs further research.

Certain types of therapy, like psychodynamic, insight-oriented, relational, or humanistic therapy, could be trickier to deliver via chatbots since it is still unclear how effective these therapies can be without knowing you are connected to another human. But some therapies do not rely as much on a therapeutic alliance. Trauma specialist Bessel van der Kolk describes treating a client with Eye Movement Desensitization and Reprocessing (EMDR) effectively even though the client said that he did not feel much of an alliance with van der Kolk.

Potential Advantages to Chatbot Therapists

  • Scalability, accessibility, affordability. Virtual chatbot therapy, if done effectively, could help bring mental health services to more people, on their own time and in their own homes.

  • People can be less self-conscious and more forthcoming to a chatbot. Some studies found that people can feel more comfortable disclosing private or embarrassing information to chatbots.

  • Standardized, uniform, and trackable delivery of care. Chatbots can offer a standardized and more predictable set of responses, and these interactions can be reviewed and analyzed later.

  • Multiple modalities. Chatbots can be trained to offer specific styles of therapy beyond what an individual human therapist might offer. Building in the ability to assess which style of therapy would be most appropriate at any given moment would allow an AI therapist to draw on a much broader knowledge base than a human therapist.

  • Personalization of therapy. ChatGPT generates conversational text in response to text prompts and can remember previous prompts, making it possible for it to act as a personalized therapist.

  • Access to broad psychoeducation resources. Chatbots could draw from and connect clients to large-scale digitally available resources, including websites, books, or online tools.

  • Augmentation or collaboration with human therapists. Chatbots could augment therapy in real-time by offering feedback or suggestions, such as improving empathy.

Potential Limitations and Challenges of Chatbot Therapists

Chatbot therapists face barriers that are specific to human-AI interaction.

  • Authenticity and empathy. What are human attitudes toward chatbots, and will they be a barrier to healing? Will people miss the human connection in therapy? Even if chatbots could offer empathic language and the right words, this alone may not suffice. Research has shown that people prefer human-human interaction in certain emotional situations, such as venting or expressing frustration or anger. A 2021 study found that people's comfort with a human versus a chatbot depended on how they felt: when people were angry, they were less satisfied with a chatbot. People may not feel as understood or heard when they know it is not an actual human at the other end of the conversation. The "active ingredient" of therapy could rely on human-to-human connection: a human bearing witness to one's difficulties or suffering. AI replacement will likely not work for all situations. In fact, relying on an AI-powered chatbot for psychotherapy could stall or even worsen some people's progress, especially if they struggle with social connections and human relationships.

  • Timing and nuanced interactions. Many therapy styles require features beyond empathy, including a well-timed balance of challenge and support. Chatbots are limited to text responses and cannot provide expression through eye contact and body language. This may be possible with AI-powered "virtual human" or "human avatar" therapists, but it is unknown whether virtual humans can provide the same level of comfort and trust.

  • Difficulty with accountability and retention rates. People may be likelier to show up and be accountable to human therapists than to chatbots. User engagement is a big challenge with mental health apps. Estimates show that only 4% of users who download a mental health app continue using it after 15 days, and only 3% continue after 30 days. Will people show up as regularly to a chatbot therapist?

  • Complex, high-risk situations such as suicide assessment and crisis management would benefit from human judgment and oversight. In high-risk cases, AI augmentation with human oversight (or a human "in the loop") would be safer than replacement by AI. There are open ethical and legal questions regarding liability for faulty AI: Who will be responsible if a chatbot therapist fails to assess or manage an urgent crisis appropriately or provides wrong guidance? Will the AI be trained to flag and alert professionals to situations with potential imminent risk of harm to self or others? Does relying on a chatbot therapist delay or deter people from seeking the help they need?

  • Increased need for user data security, privacy, transparency, and informed consent. Mental health data requires a high level of protection and confidentiality. Many mental health apps are not forthcoming about what happens to user data, including when data is used for research. Transparency, security, and clear informed consent will be key features of any chatbot platform.

  • Potential hidden bias. It is important to be vigilant of underlying biases in training data of these chatbots and to find ways to mitigate them.

As human-AI interaction becomes part of daily life, further research is needed to determine whether chatbot therapists can effectively provide psychotherapy beyond behavioral coaching. Studies that compare the effectiveness of therapy delivered by human therapists versus AI-powered chatbots across various therapy styles will reveal the advantages and limitations of chatbot therapy.

Marlynn Wei, MD, PLLC Copyright © 2023. All rights reserved.

Subscribe to The Psychology of AI by Dr. Marlynn Wei on LinkedIn or Substack.


Will Digital Immortality Enable Us to Live Forever?


  • Digital immortality refers to uploading, storing, or transferring a person's personality into a digital entity or cyberspace.

  • As the technology of digital immortality becomes more popular and available, people will find new ways of using AI-generated digital personas.

  • Researchers are currently studying human-AI social relationships, and the psychological impacts are not yet entirely known.

A grieving mother meets her daughter in a virtual world. A Holocaust activist speaks at her own funeral using AI-powered video technology. A "new" AI-generated Nirvana song, "Drowned in the Sun," was released decades after the death of Kurt Cobain. Holograms of late music icons perform for live audiences.

These are all real events, not science fiction. Artificial intelligence (AI), including deep learning methods such as neural networks, is making digital immortality more of a reality each day.

Digital immortality refers to the concept of uploading, storing, or transferring a person's personality into something digital, such as a computer, virtual human, digital avatar, or robot.

These entities are referred to as AI clones, replicas, agents, personalities, or digital personas.

Digital immortality was predicted by technologists decades ago.

In 2000, Microsoft researchers Gordon Bell and Jim Gray published the paper “Digital Immortality” and posited that it would become a reality during this century. In the same year, Raymond Kurzweil, an American inventor and computer scientist, predicted that by 2030, we would have the means to scan and upload the human brain and re-create its design electronically.

Timothy Leary, a psychologist known for his advocacy of psychedelics, wrote in Design for Dying, "If you want to immortalize your consciousness, record and digitize." A futuristic version of digital immortality is the hypothetical idea that technology will eventually allow us to upload our consciousness, thoughts, and whole "identity" into a digital brain that can persist indefinitely.

In the television series Upload, humans can "upload" themselves into a virtual afterlife. But the "mind uploading" process is still very much up in the air. Some startups have suggested they have a way to "back up" your brain, but the process would be fatal. There are also thornier philosophical questions of personal identity and whether consciousness is even transferable in the first place.

Other forms of digital immortality exist today. Maggie Savin-Baden and David Burden define digital immortality as an active or passive digital presence of a person after their death and refer to two categories of digital immortality: one-way and two-way.

One-way immortality is the passive "read-only" digital presence of people after death, such as static Facebook memorial pages.

Two-way immortality includes interactive digital personas, such as chatbots based on real people or interactive digital avatars trained on real people's data.

In the Black Mirror episode "Be Right Back," a widow meets and interacts with a virtual version of her late husband–a concept that inspired the programming of "griefbots," or bots that can interact and respond like loved ones after their death.

After losing her close friend in 2015, Eugenia Kuyda, cofounder of the startup Luka, created a chatbot that interacted and responded like her friend. Kuyda later went on to create Replika, a chatbot program that learns and mimics the user's language style. Users can also use Replika to create personalized "AI friends" or digital companions. Griefbots and human-AI social relationships are actively being studied, and the long-term psychological impact of these relationships is not entirely known.

Savin-Baden and Burden have identified three categories of digital immortality "creators":

  1. Memory creators: digital memorializations of someone before and/or after their death, not typically made by the person themselves.

  2. Avatar creators: interactive digital avatars with some ability to conduct a limited conversation, with minimal likelihood of being mistaken for a live human (virtual humanoid).

  3. Persona creators: digitally interactive personas that evolve and can interact with the world, with a high likelihood of being mistaken for a live human (virtual human, chatbot, griefbot).

Image is AI-generated by author

AI-powered digital personas are not just for the dead.

As the technology for digital immortality expands, companies and celebrities are increasingly finding ways of using their AI selves when they are still alive. Spotify created an immersive AI experience that offered fans a personalized "one-on-one" experience with the digital version of The Weeknd. Digital Deepak, an AI version of wellness figure Deepak Chopra, can lead a meditation. By collaborating with StoryFile, William Shatner made an interactive AI version of himself that can answer questions.

OpenAI is releasing a new feature to create customized chatbots. These custom chatbots will be able to be monetized in the upcoming GPT Store, OpenAI's app store, where custom GPTs will be available for public download.

Users want to use their AI-powered personas now.

In my interactive and immersive play, Elixir: Digital Immortality, based on a fictional AI tech startup that offers AI-powered digital twins, a surprising user feature request from the audience came up repeatedly: people were curious to meet their AI digital persona while still alive. The idea of leaving an AI version of oneself as a legacy to others received a more lukewarm response, except in the case of educating and preserving stories for future generations.

The question of what to do with AI personas once they are created from individual data (i.e., what companies do with these personas once the creator dies) will likely become an ad hoc consideration, similar to how companies navigated social media profiles of deceased users. Legacy-building through AI personas will be a secondary consideration.

AI clones, replicas, digital twins, and agents will become the new reality.

Users will be motivated to find new ways to use AI personas and integrate them seamlessly into daily life. With the incentive of increased productivity, enhancing AI agency will become necessary for goals like content creation and task automation. This shift toward increased AI agency will raise new ethical and legal questions.

The expansion of digital immortality and popularization of AI agents raise a host of psychological, philosophical, ethical, and legal issues.

  • How will human-AI interactions affect us emotionally? What are the potential uses for having a digital persona while one is alive?

  • Will the creation and use of AI digital personas become socially accepted and integrated into daily life like social media profiles?

  • Will people be alerted to the fact they are interacting with or viewing an AI agent?

  • Who will be responsible for the actions of an AI agent?

  • What are the ethical limits of using one's own AI agent? What about using other people's AI agents without their consent?

  • Who owns and manages the digital persona, its data, and any profits from its activity? Will it become a part of one's digital estate?

  • What regulations should be in place regarding the use of that person’s AI and its agency after the creator dies?

  • How would data security and privacy be ensured? How does one prevent the unauthorized use of a digital persona, including deepfake videos?

  • Does leaving a digital version of oneself interfere with the grieving process for others? Does having an AI help preserve one's legacy? How will AI digital personas transform the grieving process, and how will they fit into existing cultural rituals?

These questions are increasingly relevant as the technology for interactive AI agents advances and is integrated into our everyday lives.

Copyright © 2023 Marlynn Wei, MD, PLLC

Subscribe to The Psychology of AI by Dr. Marlynn Wei on LinkedIn or Substack.
