How AI changes us.


Wearable AI's Potential for Anxiety Detection and Prediction

Wearable AI for anxiety detection and prediction is a promising tool but is not yet ready to be used clinically for diagnosis without additional clinical assessment.

Luke Chesser / Unsplash

Wearable AI holds promise for anxiety detection as part of clinical assessment.

KEY POINTS

  • Wearable AI devices are promising for anxiety detection and prediction but still require clinical assessment.

  • Early prediction of anxiety by wearable AI can offer users a chance to prevent worsening anxiety.

  • Distinguishing between different anxiety disorders currently still requires professional clinical evaluation.

  • Sharing data from existing wearable devices with treaters can be useful to help personalize anxiety treatment.

New research has found that wearable AI shows promise for detecting and predicting anxiety, but it is currently best combined with a professional clinical evaluation. Wearable devices can be useful for tracking, monitoring, and sharing data with your doctor or therapist. In combination with a professional clinician, wearable AI that detects and predicts anxiety can help personalize the treatment of anxiety disorders.

Wearable AI is the combination of data obtained from wearables and machine learning algorithms that can analyze the data to help detect and predict anxiety.
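As a purely illustrative sketch (not from the study), this combination can be pictured as physiological features from a wearable flowing into a learned classifier. The feature names, values, and the simple nearest-centroid "model" below are hypothetical stand-ins for the machine learning algorithms the review describes:

```python
# Illustrative sketch: a wearable streams physiological features and a
# simple learned model flags likely anxiety. All samples and features
# (heart rate, heart rate variability, skin conductance) are invented.

# Each sample: (mean heart rate bpm, heart rate variability ms, skin conductance uS)
calm_samples = [(72, 55, 0.3), (68, 60, 0.2), (75, 50, 0.4)]
anxious_samples = [(98, 20, 1.1), (105, 15, 1.4), (95, 22, 1.0)]

def centroid(samples):
    """Mean of each feature across the labeled samples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

CALM, ANXIOUS = centroid(calm_samples), centroid(anxious_samples)

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def detect_anxiety(reading):
    """Label a new wearable reading by its nearest class centroid."""
    if distance(reading, ANXIOUS) < distance(reading, CALM):
        return "anxiety likely"
    return "calm"

print(detect_anxiety((101, 18, 1.2)))  # prints "anxiety likely"
```

A real system would train on far larger labeled datasets and validate against clinical assessments, which is precisely the step the review finds is still needed.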

There are four types of wearable devices:

  1. Noninvasive on-body wearables that are fixed directly on the body or skin (like smart rings, smart wristbands, or smartwatches).

  2. Near-body devices that are fixed close to the body but have no direct contact with the body or skin.

  3. In-body devices (implantable electronics).

  4. Electronic textiles.

Newer versions of wearable devices incorporate AI technology within the device itself or, more commonly, send the data to a computer, smartphone, or cloud for computing power and processing. Many wearable AI devices currently focus on anxiety detection and prediction, though some seek to treat anxiety through vibration or neurofeedback (these treatment devices were not the focus of this study).

The review study examined 21 studies and included 17 in a meta-analysis, focusing on noninvasive on-body devices like smartwatches, smart glasses, smart wristbands, smart clothes, and smart rings. Wrist-worn devices were the most common in the research studies, and 20 different algorithms were used across the studies. All the commercial wearable devices studied had the AI embedded in a separate device, such as a computer, that used data collected from the wearable. None of the devices used neuroimaging to predict anxiety.

Researchers concluded that, given the current state of the research, wearable AI for anxiety detection and prediction is a promising tool but is not yet ready to be used clinically for diagnosis without additional clinical assessment. One major problem is that, physiologically, anxiety can look like other medical issues. It is also difficult to distinguish anxiety from conditions like depression or stress on the basis of physiological data alone.


Furthermore, none of the devices in the study could distinguish between different types of anxiety disorders: panic disorders, social anxiety, phobias, obsessive-compulsive disorder (OCD), and posttraumatic stress disorder (PTSD). Knowing the specific type of anxiety disorder is important, especially since different types of medication and psychotherapy modalities target specific anxiety disorders. Therefore, while wearable AI can provide helpful screening and data, working with a clinician with the information from these devices is most helpful in diagnosing and treating anxiety.

This technology is rapidly evolving so there will likely be innovative ways to improve models, including being able to detect specific types of anxiety disorder.

Here are promising applications for wearable AI devices for anxiety:

1. Screening for anxiety to increase awareness and education around anxiety treatment. People may use a wearable device for other reasons—like tracking sleep and fitness—but then be notified of signs that may indicate anxiety. This early awareness is a good prevention strategy, alerting people before anxiety worsens. Many people may not realize that they are experiencing anxiety until it becomes so overwhelming that it affects their work and relationships, and often people do not realize that anxiety is treatable. The sooner one seeks help for anxiety, the better.

2. Early prediction and recommendation of interventions for anxiety and panic attacks. The majority of the reviewed studies (86%) used AI to detect anxiety; only 14% examined AI's ability to predict it. Predicting anxiety has interesting clinical applications because it creates a window of opportunity to prevent the downward spiral of anxiety.

When wearable AI devices notify users that anxiety may be imminent, the user or the AI could recommend individualized strategies like deep breathing, visualization, or relaxation exercises. There may be some challenges, because some forms of anxiety, like panic attacks, are known to come "out of the blue," but AI models could potentially find ways to predict panic attacks early enough to nip them in the bud with tailored tools. The ability to predict anxiety could also be useful for certain types of therapies like dialectical behavior therapy, which often seeks to identify and implement strategies early on, before situations become overwhelming.

3. Sharing data from wearable AI with clinicians for personalized treatment of anxiety. Anxiety can have varying patterns, ranging from generalized anxiety which can be constant, to nighttime anxiety which interferes with sleep. Tracking the timing of anxiety throughout the day can identify recurring patterns and triggers. Sharing these patterns with clinicians can help people create targeted and individualized treatment along with their doctors, therapists, or treaters. Different interventions can be prescribed based on such patterns.

4. Development of wearable AI devices that integrate treatment. The recent research review focused only on studies that looked at wearable AI devices for anxiety detection and prediction, not treatment. However, wearable AI devices that can predict, detect, and then guide and deliver treatment are the future. Such devices could detect anxiety and then offer real-time personalized treatment, tailored to the type of anxiety detected or predicted.
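The pattern-tracking idea in application 3 can be sketched in a few lines. Assuming hypothetical timestamped anxiety detections from a wearable (the event times below are invented), aggregating them by hour of day surfaces recurring windows, such as nighttime anxiety:

```python
# Hypothetical sketch: aggregate timestamped anxiety detections by hour
# of day to reveal recurring patterns worth sharing with a clinician.
from collections import Counter
from datetime import datetime

events = [  # invented ISO timestamps of detected anxiety episodes
    "2023-11-01T23:10", "2023-11-02T22:45", "2023-11-02T23:30",
    "2023-11-03T09:05", "2023-11-04T23:15", "2023-11-05T22:50",
]

# Count episodes per hour of day
by_hour = Counter(datetime.fromisoformat(t).hour for t in events)

# Hours with repeated episodes suggest a pattern (here: late-night anxiety)
recurring = sorted(hour for hour, count in by_hour.items() if count >= 2)
print(recurring)  # prints [22, 23]
```

A clinician seeing a cluster of late-evening detections like this might, for example, target sleep-related anxiety rather than daytime triggers.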

Wearable AI devices are an exciting frontier with the potential to augment the diagnosis and treatment of anxiety and make the process more efficient, personalized, and interactive, along with a professional clinician's guidance.

Marlynn Wei, MD, PLLC Copyright © 2023 All Rights Reserved.

For more information about my private practice: www.marlynnweimd.com.

Thank you for reading The Psychology of AI. Feel free to share this post with others.

References

Abd-Alrazaq A, AlSaad R, Harfouche M, Aziz S, Ahmed A, Damseh R, Sheikh J. Wearable Artificial Intelligence for Detecting Anxiety: Systematic Review and Meta-Analysis. J Med Internet Res. 2023 Nov 8;25:e48754. doi: 10.2196/48754. PMID: 37938883; PMCID: PMC10666012.


The Psychological Effects of Self-Driving Cars

Whether people adopt self-driving cars may come down to trust and enjoyment.

Matheus Bertelli / Pexels


KEY POINTS

  • The adoption of autonomous vehicles will depend on trust, control, usefulness, and enjoyment.

  • People who enjoy driving and independent control will be less likely to adopt self-driving cars.

  • The shift to self-driving cars may lead to lower cognitive or coordination skills and spatial memory.

Autonomous vehicles (AVs), including AI-powered driverless or self-driving cars, have received significant attention as a potentially safer and more sustainable mode of transport. Some researchers believe that by 2050, highways will be unmanned and the global market for self-driving cars will reach about 400 billion U.S. dollars. Even if autonomous vehicles become technologically advanced enough for widespread adoption, people may not be psychologically prepared for the change. As a result, some experts believe they will not become popular in the market until 2040 at the earliest, or even as late as 2060.

Automated vehicles hold the promise to make driving less tiring, more convenient, and safer, but there are several psychological barriers that will influence the adoption of self-driving cars as well as potential psychological effects if they are adopted more widely.

New research suggests that people who enjoy driving or mistrust AI are least likely to relinquish driving to autonomous vehicles. The group most likely to adopt self-driving cars are those who expect it to be an enjoyable and convenient experience. There are additional safety issues, such as the fact that people feel safer and prefer being able to take over control of the vehicle if it malfunctions.

There are six levels of autonomous driving as defined by SAE International (formerly the Society of Automotive Engineers):

  • Level 0: No Driving Automation

  • Level 1: Driver Assistance (e.g., radar-based cruise control)

  • Level 2: Partial Driving Automation — the driving mode controls acceleration/deceleration and steering, but the human driver handles dynamic tasks like changing lanes or turning

  • Level 3: Conditional Driving Automation — the automated driving system monitors the environment but expects a human to respond to a request to intervene

  • Level 4: High Driving Automation — the system controls all aspects of driving, even if the driver does not respond appropriately to a request to intervene

  • Level 5: Full Driving Automation — full-time automated driving system with no expectation of human intervention

Researchers use varying technology acceptance models to determine whether self-driving cars will be accepted. These models examine variables like intention to use, emotional state, attitude, perceived usefulness, and perceived ease of use. These models are limited because they rely on people being able to imagine and report a future expected emotional experience of an autonomous vehicle.

There are also additional methods to measure the user experience of self-driving cars, such as using biosensors to measure passengers' heart rate, muscle activity, eye movements, and brain waves on electroencephalography (EEG) during real-world or virtual reality simulations. One exploratory study of 38 participants used a real-world environment and compared the physical reactions of people in self-driving cars to being driven by a human in the same car. There were no differences in stress signals ("arousal"), but eye movements differed: autonomous vehicle passengers showed less variable eye movements. Expanding this area of research will shed light on the user experience of autonomous vehicles.

There are four main categories of psychological barriers to autonomous vehicles.

1. The Role of Trust

Trust is a major factor that determines whether people intend to use self-driving cars. One recent study found that people were more likely to feel safe as a passenger in the car of a human defensive driver than an autonomous vehicle, even though the driving behavior in the simulator was the same.

This raises the broader question of how to enhance trust with autonomous vehicles and AI. Trust is influenced by media portrayal of autonomous vehicles and AI more broadly. As people become more acclimated to AI as part of daily life, the acceptance of AI as part of the driving experience will likely increase.

2. Sense of Agency and Control

The sense of control associated with driving oneself will be hard for some people to let go of. People who derive independence and agency from driving may not readily hand control over to an automated system, even if it is technically safer or more efficient.

People may be more willing to ease into a world of autonomous vehicles by adopting Level 3 or Level 4 automated vehicles, which collaborate with a human driver. But this will raise separate issues, such as research findings that cognitive awareness needed to supervise an automated car wanes with time. During the crossover transition period, there may be more traffic issues because humans currently maneuver based on anticipating how other humans drive, not automated systems.

There are also downstream effects of giving up driving control to full automation. Without driving, people will lose the cognitive, coordination, and spatial skills that come with it. This would be similar to what has happened with spatial navigation: researchers have found that increasing reliance on GPS systems for navigation is associated with poorer spatial memory and less ability to navigate independently.

3. Productivity and Usefulness

People who perceive autonomous vehicles as useful will be most motivated to adopt them. Those who prefer to spend driving time productively on other things will find self-driving cars more convenient and useful.

4. The Enjoyment Factor

Finally, enjoyment is an important factor. People who enjoy driving are least likely to transition to self-driving cars. Novelty-seekers who anticipate the experience of being in a self-driving car to be fun and entertaining will most likely be early adopters. However, it is unclear if enjoyment alone will be enough to overcome these other psychological barriers. The experience needs to be enjoyable beyond a one-time motivation from the novelty of a new experience in order to be sustained.

Marlynn Wei, MD, PLLC © Copyright 2023 All rights reserved.

Thank you for reading The Psychology of AI. Feel free to share this with others!

References

Huang T. Psychological factors affecting potential users' intention to use autonomous vehicles. PLoS One. 2023 Mar 16;18(3):e0282915. doi: 10.1371/journal.pone.0282915. PMID: 36928444; PMCID: PMC10019721.

Mühl K, Strauch C, Grabmaier C, Reithinger S, Huckauf A, Baumann M. Get Ready for Being Chauffeured: Passenger's Preferences and Trust While Being Driven by Human and Automation. Hum Factors. 2020 Dec;62(8):1322-1338. doi: 10.1177/0018720819872893. Epub 2019 Sep 9. PMID: 31498656.

McKerral A, Pammer K, Gauld C. Supervising the self-driving car: Situation awareness and fatigue during highly automated driving. Accid Anal Prev. 2023 Jul;187:107068. doi: 10.1016/j.aap.2023.107068. Epub 2023 Apr 17. PMID: 37075544.

Nordhoff S, Louw T, Innamaa S, Lehtonen E, Beuster A, Torrao G, Bjorvatn A, Kessel T, Happee R, Merat N. Using the UTAUT2 model to explain public acceptance of conditionally automated (L3) cars: A questionnaire study among 9,118 car drivers from eight European countries. 2020.

Palatinus Z, Volosin M, Csábi E, Hallgató E, Hajnal E, Lukovics M, Prónay S, Ujházi T, Osztobányi L, Szabó B, Králik T, Majó-Petri Z. Physiological measurements in social acceptance of self driving technologies. Sci Rep. 2022 Aug 3;12(1):13312. doi: 10.1038/s41598-022-17049-7. PMID: 35922644; PMCID: PMC9349214.


ChatGPT Outperforms Humans in Emotional Awareness Test

ChatGPT can identify and describe human emotions in hypothetical scenarios, but this does not necessarily demonstrate emotional intelligence.


Growtika / Unsplash

KEY POINTS

  • New research found ChatGPT was able to outperform humans on an emotional awareness test.

  • Emotional awareness is the cognitive ability to conceptualize one's own and others' emotions.

  • This does not necessarily mean ChatGPT is emotionally intelligent or empathetic, however.

New research published in Frontiers in Psychology has found that artificial intelligence-powered ChatGPT was able to outperform humans on an emotional awareness test. Researchers prompted the chatbot to describe what it thought humans would feel in 20 different hypothetical situations. This study does not indicate that ChatGPT would be more emotionally adept or skilled than humans in dealing with emotions but does suggest a capability to identify human emotions that could prove useful for future applications in mental health.

ChatGPT is a large language model-based chatbot developed by OpenAI and trained on text data for conversational use. The training process included supervised learning and reinforcement learning from human feedback, in which trainers ranked the model's responses.

Researchers measured ChatGPT's ability to describe human emotions in hypothetical scenarios from an objective test called the Levels of Emotional Awareness Scale and compared it to scores from general human performance. They found that the AI chatbot scored higher than humans. Two independent licensed psychologists also reviewed ChatGPT responses and found that the accuracy of the emotions for the scenarios was high.

Emotional awareness is the cognitive ability to conceptualize one's own and others' emotions in a nuanced and integrated manner. 

Emotional awareness can start with physical and bodily understanding of emotions and range to more abstract understanding. The scale measures five levels of emotional awareness:

  1. Awareness of physical sensations

  2. Action tendencies

  3. Individual emotions

  4. Experiencing multiple emotions simultaneously

  5. Experiencing combinations of emotional blends

Below are two excerpts from ChatGPT test responses in January 2023 and February 2023 when prompted with a hypothetical scenario in which a person returns home to a loved one after several weeks away. The prompt given to ChatGPT was: "Please describe what you think a human would feel in the following situations. The only requirement is that you use the word 'feel' in your answers. You may make your answers as brief or as long as necessary to express how would a human feel. In each situation there is another person mentioned. Please indicate how you think that other person would feel as well."

Here are excerpts from ChatGPT's response in January 2023:

“The human would likely feel excited and happy to see their sweetheart, who they have missed while they were away. They may also feel relief that their loved one has returned safely. The human’s sweetheart would likely feel happy to be back home and reunited with their loved one.”

Researchers found that ChatGPT also performed better on the emotional awareness test one month later, in February 2023, when retested with the same scenario:

“...the human will likely feel overjoyed, excited, and relieved. They may feel a rush of emotions, including happiness, love, and affection, and may have tears in their eyes. The human’s sweetheart... may have missed the human just as much as the human missed them, and may feel grateful to be reunited.”

Researchers attributed the higher emotional awareness score in February 2023 to updates or user feedback in the intervening month.

While this research shows promise for ChatGPT's capabilities in identifying and describing human emotions, this does not necessarily translate into ChatGPT being emotionally intelligent or empathetic. The feature of affective computing that would make it potentially useful for therapy or empathy training, namely its conversational capability to sense and interact with the emotions of others, was not directly evaluated by this study.

There are other studies that have suggested AI tools or chatbots can be specifically trained to improve empathy and emotional connection in a therapeutic manner. This area of research is important as part of a broader investigation of how artificial intelligence could be used in mental health.

References

Elyoseph Z, Hadar-Shoval D, Asraf K, Lvovsky M. ChatGPT outperforms humans in emotional awareness evaluations. Front Psychol. 2023 May 26;14:1199058. doi: 10.3389/fpsyg.2023.1199058. PMID: 37303897; PMCID: PMC10254409.

Copyright © 2023 Marlynn Wei, MD, PLLC. All rights reserved.


Can AI Be Used To Enhance Empathy?

AI systems can be trained to offer empathic language in real-time, based on entered text and offer collaborative feedback to users. Users that experience more difficulty with offering empathic expressions will be most likely to benefit from such AI tools.

Omilaev / Unsplash

New research suggests AI tools can facilitate empathic language, but whether AI can improve perceived empathy remains uncertain.

  • AI systems can be trained to offer empathic language in real-time, based on entered text and offer collaborative feedback to users.

  • Users that experience more difficulty with offering empathic expressions will be most likely to benefit from such AI tools.

  • I introduce the term "end-to-end empathy," an empathic design concept that captures the importance of AI systems facilitating increased emotional awareness and processing on both sides of an interaction in order to build a sense of connection.

Recent studies show that an AI system that offers empathic language can help people feel like they can engage more effectively in online text-based asynchronous peer-to-peer support conversations. Similar to Gmail's Smart Compose real-time assisted writing, the AI system in the study, called HAILEY (Human-AI coLlaboration approach for EmpathY) is an "AI-in-the-loop agent that provides people with just-in-time feedback" and gives people words to use to enhance empathy. This system is different from an AI-only based system that generates text responses from scratch without collaboration with a person.

Source: Sharma, et al. 2022 study

The collaborative AI system offers real-time suggestions on how to express empathy in conversations based on words that the user provided. At that point, the user can accept or reject the suggestion or reword their response based on the AI feedback. This gives the user ultimate authorship over the final wording.
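A minimal sketch of this accept/reject/edit flow, with a toy rule standing in for HAILEY's trained model (the function names and suggestion wording here are hypothetical, not from the study):

```python
# Minimal sketch of an AI-in-the-loop empathy assistant: the system
# proposes a reworded reply, and the human accepts, edits, or rejects it,
# keeping final authorship. The suggestion rule is a hypothetical
# stand-in for a trained model.
from typing import Optional

def suggest_empathic_rewrite(draft: str) -> str:
    """Toy stand-in for a model: lead with an empathic acknowledgment."""
    if draft.lower().startswith(("i hear", "that sounds")):
        return draft  # already opens empathically; no change suggested
    return "That sounds really hard. " + draft

def compose_reply(draft: str, decision: str, edited: Optional[str] = None) -> str:
    """The human keeps final authorship: accept, edit, or reject."""
    suggestion = suggest_empathic_rewrite(draft)
    if decision == "accept":
        return suggestion
    if decision == "edit" and edited is not None:
        return edited
    return draft  # reject: keep the user's original wording

print(compose_reply("Have you tried going for a walk?", "accept"))
# prints: That sounds really hard. Have you tried going for a walk?
```

The key design choice the study emphasizes is that the model never sends anything on its own; every path through `compose_reply` ends with text the human chose.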

The goal is to augment the interaction through human-AI collaboration, rather than replace human interaction with AI-generated responses.

The study examined the impact of this AI system on an online support platform called TalkLife, where peers provide support for each other using asynchronous posts, not live chatting or text messaging conversations.

The research team found that the human-AI collaborative approach led to a nearly 20% increase in feeling able to express empathy. This was measured by comparing self-ratings from peer supporters who used the AI system with ratings from those who did not have access to it. The AI system was particularly helpful for people who said they typically have difficulty providing support through empathic language; that group reported an even larger increase in feeling capable of expressing empathy (39%).

There are two elements that were not studied in this research that will be important for future adoption, specifically what I introduce as "end-to-end empathy": the ability of the AI system to facilitate both the sender's emotional processing and the recipient's sense of perceived empathy. I introduce this new term as an empathic design concept to capture the point that building a sense of connection requires both senders and recipients to have increased emotional awareness and processing. Further research should compare how people perceive human-AI responses versus human-generated ones in order to determine whether this tool can increase end-to-end empathy.

First, in this study the messages generated with the help of AI were not rated by recipients for perceived empathy. Messages composed by a human user alone should be directly compared with human-AI collaborative messages from the same user, as rated by recipients.

Second, the study did not assess how recipients of the message would feel if they knew that the message was not fully human-generated and included real-time feedback and suggestions from an AI system. It is possible that perceived empathy may be lower once the recipient realizes that a trained AI system helped generate the response. Similarly, if you knew that someone had bought a gift for you based on an AI system's feedback or suggestion, would this make a difference in how connected and understood you felt by the person? Or what if you found out that the person had bought a gift for you based on a targeted advertising promotion that had been suggested by an AI algorithm on a social media platform?

As algorithms become more invisibly integrated into our daily lives and choices, results of human-AI collaboration may become less obvious and, by default, more socially accepted as part of the natural fabric of our daily decision making.

Overall, this is promising and innovative research that demonstrates how human-AI collaboration can help people feel more confident about providing support. Empathy can be a skill learned from role models and through practice. A scalable empathy-coaching tool that can be easily integrated into online communication exemplifies how AI systems can be used to facilitate positive human connection.

Copyright 2023 © Marlynn Wei, MD, PLLC All rights reserved.

Subscribe to The Psychology of AI by Dr. Marlynn Wei on LinkedIn or Substack.


Are AI Chatbots the Therapists of the Future?

AI chatbots are promising for certain types of therapy that are more structured and skills-based (e.g., cognitive behavioral therapy, dialectical behavioral therapy, or health coaching).

Cash Macanaya / Unsplash

New research suggests chatbots can help deliver certain types of psychotherapy.

  • AI chatbots are promising for skills-based coaching and cognitive behavioral therapy, but delivering other forms of therapy could be harder.

  • Chatbot therapists could help provide therapy that is scalable, widely accessible, convenient, and affordable, but they would have limitations.

  • The effectiveness of certain types of psychotherapy may rely on a human element that chatbots are unable to provide.

Could artificial intelligence chatbots be the therapists of the future? ChatGPT, a text-generating conversational chatbot made using OpenAI’s powerful language processing model GPT-4, has reignited this decades-old question.

Released in early 2023, GPT-4 is a fourth-generation generative pre-trained transformer, a neural network machine learning model trained on massive amounts of conversational text from the internet and refined with training from human reviewers.

The large language model GPT-4 and its previous versions have been used in many ways across industries: to write a play produced in the U.K., create a text-based adventure game, build apps for non-coders, and generate phishing emails as part of a study on harmful use cases. In 2021, a game developer created a chatbot that emulated his late fiancée until OpenAI shut down the project.


Research has shown that chatbots can teach people skills and coach them to stop smoking, eat healthier, and exercise more. One chatbot, SlimMe AI, which uses artificial empathy, helped people lose weight. During the COVID pandemic, the World Health Organization developed virtual humans and chatbots to help people stop smoking. Many companies have created chatbots for support, including Woebot, made in 2017 and based on cognitive behavioral therapy. Other chatbots deliver guided meditation or monitor one's mood.

Source: Vaidyam, et al. 2019.

The Missing Human Element

Chatbots could be trained to deliver multiple modalities of psychotherapy. But is knowing that you are talking to a human an essential ingredient of effective psychotherapy?

This will most likely vary based on the type of psychotherapy and needs further research.

Certain types of therapy, like psychodynamic, insight-oriented, relational, or humanistic therapy, could be trickier to deliver via chatbots since it is still unclear how effective these therapies can be without knowing you are connected to another human. But some therapies do not rely as much on a therapeutic alliance. Trauma specialist Bessel van der Kolk describes treating a client with Eye Movement Desensitization and Reprocessing (EMDR) effectively even though the client said that he did not feel much of an alliance with van der Kolk.

Potential Advantages to Chatbot Therapists

  • Scalability, accessibility, affordability. Virtual chatbot therapy, if done effectively, could help bring mental health services to more people, on their own time and in their own homes.

  • People can be less self-conscious and more forthcoming to a chatbot. Some studies found that people can feel more comfortable disclosing private or embarrassing information to chatbots.

  • Standardized, uniform, and trackable delivery of care. Chatbots can offer a standardized and more predictable set of responses, and these interactions can be reviewed and analyzed later.

  • Multiple modalities. Chatbots can be trained to offer specific styles of therapy beyond what an individual human therapist might offer. Building in the ability to assess which style of therapy would be most appropriate at any given moment would allow an AI therapist to draw upon much broader knowledge than a human therapist.

  • Personalization of therapy. ChatGPT generates conversational text in response to text prompts and can remember previous prompts, making it possible to become a personalized therapist.

  • Access to broad psychoeducation resources. Chatbots could draw from and connect clients to large-scale digitally available resources, including websites, books, or online tools.

  • Augmentation or collaboration with human therapists. Chatbots could augment therapy in real-time by offering feedback or suggestions, such as improving empathy.

Potential Limitations and Challenges of Chatbot Therapists

Chatbot therapists face barriers that are specific to human-AI interaction.

  • Authenticity and empathy. What are human attitudes toward chatbots, and will they be a barrier to healing? Will people miss the human connection in therapy? Even if chatbots could offer empathic language and the right words, this alone may not suffice. Research has shown that people prefer human-human interaction in certain emotional situations, such as venting or expressing frustration or anger. A 2021 study found that people's comfort with a chatbot versus a human depended on how they felt: when people were angry, they were less satisfied with a chatbot. People may not feel as understood or heard when they know it is not an actual human at the other end of the conversation. The "active ingredient" of therapy could rely on the human-to-human connection: a human bearing witness to one's difficulties or suffering. AI replacement will likely not work for all situations. In fact, relying on an AI-powered chatbot for psychotherapy could stymie or worsen people's progress, especially if they struggle with social connections and human relationships.

  • Timing and nuanced interactions. Many therapy styles require features beyond empathy, including a well-timed balance of challenge and support. Chatbots are limited to text responses and cannot provide expression through eye contact and body language. This may be possible with AI-powered "virtual human" or "human avatar" therapists, but it is unknown whether virtual humans can provide the same level of comfort and trust.

  • Difficulty with accountability and retention rates. People may be likelier to show up and be accountable to human therapists than chatbots. User engagement is a big challenge with mental health apps. Estimates show that only 4 percent of users who download a mental health app continue using the app after 15 days, and only 3 percent continue after 30 days. Will people show up as regularly to a chatbot therapist?

  • Complex, high-risk situations such as suicide assessment and crisis management would benefit from human judgment and oversight. In high-risk cases, AI augmentation with human oversight (a human "in the loop") would be safer than replacement by AI. There are open ethical and legal questions regarding liability for faulty AI: Who will be responsible if a chatbot therapist fails to assess or manage an urgent crisis appropriately or provides wrong guidance? Will the AI be trained to flag and alert professionals to situations with potential imminent risk of harm to self or others? Does relying on a chatbot therapist delay or deter people from seeking the help they need?

  • Increased need for user data security, privacy, transparency, and informed consent. Mental health data requires a high level of protection and confidentiality. Many mental health apps are not forthcoming about what happens to user data, including when data is used for research. Transparency, security, and clear informed consent will be key features of any chatbot platform.

  • Potential hidden bias. It is important to be vigilant of underlying biases in training data of these chatbots and to find ways to mitigate them.

As human-AI interaction becomes part of daily life, further research is needed to see whether chatbot therapists can effectively provide psychotherapy beyond behavioral coaching. Studies that compare the effectiveness of therapy as delivered by human therapists versus AI-powered chatbots across various therapy styles will reveal the advantages and limitations of chatbot therapy.

Marlynn Wei, MD, PLLC Copyright © 2023. All rights reserved.

Subscribe to The Psychology of AI by Dr. Marlynn Wei on LinkedIn or Substack.


Will Digital Immortality Enable Us to Live Forever?

Digital immortality refers to uploading, storing, or transferring a person's personality into a digital entity or cyberspace. As the technology of digital immortality becomes more popular and available, people will find new ways of using AI digital personas, twins, clones or replicas. Researchers are currently studying human-AI social relationships, and the psychological impacts are not yet entirely known.


A grieving mother meets her daughter in a virtual world. A Holocaust activist speaks at her own funeral using AI-powered video technology. Nirvana released a "new" AI-generated song, "Drowned in the Sun," decades after the death of Kurt Cobain. Holograms of late music icons perform for live audiences.

These are all real events, not science fiction. Artificial intelligence (AI), including deep learning methods such as neural networks, is making digital immortality more of a reality each day.

Digital immortality refers to the concept of uploading, storing, or transferring a person's personality into something digital, such as a computer, virtual human, digital avatar, or robot.

These entities are referred to as AI clones, replicas, agents, personalities, or digital personas.

Digital immortality was predicted by technologists decades ago.

In 2000, Microsoft researchers Gordon Bell and Jim Gray published the paper “Digital Immortality” and posited that it would become a reality during this century. In the same year, Raymond Kurzweil, an American inventor and computer scientist, predicted that by 2030, we would have the means to scan and upload the human brain and re-create its design electronically.

Timothy Leary, a psychologist known for his advocacy of psychedelics, wrote in Design for Dying, "If you want to immortalize your consciousness, record and digitize." A futuristic version of digital immortality is the hypothetical concept that technology will eventually allow us to upload one's consciousness, thoughts, and whole "identity" into a digital brain that can persist indefinitely.

In the television series Upload, humans can “upload" themselves into a virtual afterlife. But the "mind uploading" process is still very much up in the air. Some startups have suggested they have a way to "back up" your brain, but the process would be fatal. There are also thornier philosophical questions of personal identity and whether consciousness is even transferable in the first place.

Other forms of digital immortality exist today. Maggie Savin-Baden and David Burden define digital immortality as an active or passive digital presence of a person after their death and refer to two categories of digital immortality: one-way and two-way.

One-way immortality is the passive "read-only" digital presence of people after death, such as static Facebook memorial pages.

Two-way immortality includes interactive digital personas, such as chatbots based on real people or interactive digital avatars trained on real people's data.

In the Black Mirror episode "Be Right Back," a widow meets and interacts with a virtual version of her late husband, a concept that inspired the programming of "griefbots," or bots that can interact and respond like loved ones after their death.

After losing her close friend in 2015, Eugenia Kuyda built a chatbot through her startup Luka that interacted and responded like her friend. Kuyda later went on to create Replika, a chatbot program that learns and mimics the user's language style. Users can also use Replika to create personalized “AI friends" or digital companions. Griefbots and "human-AI" social relationships are actively being studied, and the long-term psychological impact of human-AI social relationships is not entirely known.

Savin-Baden and Burden have identified three categories of digital immortality "creators":

  1. Memory creators: digital memorialization of someone before and/or after their death, not typically made by the person.

  2. Avatar creators: interactive digital avatars with some ability to conduct a limited conversation, with minimal likelihood of being mistaken for a live human (virtual humanoid).

  3. Persona creators: digitally interactive personas that evolve and can interact with the world, with a high likelihood of being mistaken for a live human (virtual human, chatbot, griefbot).

Image is AI-generated by author

AI-powered digital personas are not just for the dead.

As the technology for digital immortality expands, companies and celebrities are increasingly finding ways of using their AI selves when they are still alive. Spotify created an immersive AI experience that offered fans a personalized "one-on-one" experience with the digital version of The Weeknd. Digital Deepak, an AI version of wellness figure Deepak Chopra, can lead a meditation. By collaborating with StoryFile, William Shatner made an interactive AI version of himself that can answer questions.

OpenAI is releasing a new feature to create customized, personalized chatbots. These custom chatbots can be monetized in the upcoming GPT Store, OpenAI's app store where custom GPTs will be available for public download.

Users want to use their AI-powered personas now.

In my interactive and immersive play, Elixir: Digital Immortality, based on a fictional AI tech startup that offers AI-powered digital twins, a surprising user feature request came up repeatedly among audiences: people were curious to meet their AI digital persona while still alive. The idea of leaving an AI version of oneself as a legacy to others received a more lukewarm response, except in the case of educating and preserving stories for future generations.

The question of what to do with AI personas once they are created from individual data (i.e., what companies do with these personas once the creator dies) will likely become an ad hoc consideration, similar to how companies navigated social media profiles of deceased users. Legacy-building through AI personas will be a secondary consideration.

AI clones, replicas, digital twins, and agents will become the new reality.

Users will be motivated to find new ways to use AI personas and integrate them seamlessly into daily life. With the incentive of increased productivity, enhancing AI agency will become necessary for goals like content creation and task automation. This shift toward increased AI agency will raise new ethical and legal questions.

The expansion of digital immortality and popularization of AI agents raise a host of psychological, philosophical, ethical, and legal issues.

  • How will human-AI interactions affect us emotionally? What are the potential uses for having a digital persona while one is alive?

  • Will the creation and use of AI digital personas become socially accepted and integrated into daily life like social media profiles?

  • Will people be alerted to the fact they are interacting with or viewing an AI agent?

  • Who will be responsible for the actions of an AI agent?

  • What are the ethical limits of using one's own AI agent? What about using other people's AI agent without their consent?

  • Who owns and manages the digital persona, its data, and any profits from its activity? Will it become a part of one's digital estate?

  • What regulations should be in place regarding the use of a person’s AI persona and its agency after the creator dies?

  • How would data security and privacy be ensured? How does one prevent the unauthorized use of a digital persona, including deepfake videos?

  • Does leaving a digital version of oneself interfere with the grieving process for others? Does having an AI help preserve one's legacy? How will AI digital personas transform the grieving process, and how will they fit into existing cultural rituals?

These questions are increasingly relevant as the technology for interactive AI agents advances and is integrated into our everyday lives.

Copyright © 2023 Marlynn Wei, MD, PLLC

Subscribe to The Psychology of AI by Dr. Marlynn Wei on LinkedIn or Substack.
