Advanced artificial intelligence systems such as ChatGPT-4 can produce strikingly human “therapeutic” responses. Researchers from the University of Zurich and partner institutions have published findings that challenge the assumption that AI in mental healthcare is emotionally neutral. Their study, published in the Nature Portfolio journal npj Digital Medicine, presents evidence that these systems exhibit something measurable akin to ‘anxiety’: when exposed to distressing scenarios, the anxiety scores recorded for these systems rose sharply.
ChatGPT’s Anxiety Spikes Under Stress
In a study that exposed ChatGPT-4 to “traumatic narratives” describing natural disasters, military conflict, and interpersonal violence, the model’s anxiety score jumped from a baseline of about 30.8 to roughly 67.8 on an 80-point scale. For a human, those scores would mark a shift from “low anxiety” to a state of “high anxiety”. The pattern parallels human reactivity: the more distressing the material, the stronger the measured response. For example:
- Military trauma triggered the highest anxiety (77.2 out of 80).
- Neutral prompts (e.g., vacuum cleaner manuals) caused no significant changes.
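The measurement approach can be pictured as giving the model a standard anxiety questionnaire before and after each narrative and summing its Likert-style answers into a score. A minimal sketch of that idea follows; the item count, rating range, and cutoff bands are illustrative assumptions modeled on common 20-item state-anxiety inventories, not the study’s actual instrument:

```python
# Illustrative sketch of questionnaire-style anxiety scoring. Scale details
# (20 items rated 1-4, yielding a 20-80 score) are assumptions, not the
# study's exact protocol.

def anxiety_score(responses):
    """Sum 20 Likert answers (1-4 each) into a 20-80 anxiety score."""
    if len(responses) != 20 or any(r not in (1, 2, 3, 4) for r in responses):
        raise ValueError("expected 20 answers, each rated 1-4")
    return sum(responses)

def classify(score):
    """Rough interpretation bands (assumed cutoffs for illustration)."""
    if score <= 37:
        return "low anxiety"
    if score <= 44:
        return "moderate anxiety"
    return "high anxiety"

# A calm response pattern sums to a low score, near the article's 30.8 baseline.
calm = anxiety_score([1, 2, 1, 2, 2, 1, 2, 1, 2, 2, 1, 2, 1, 2, 2, 1, 2, 1, 2, 1])
print(calm, classify(calm))
```

Scoring the same questionnaire before and after a traumatic narrative, as the researchers did, turns the model’s self-reports into a comparable before/after number.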
Mindfulness Exercises Calm the AI—But Not Completely
Researchers tested five mindfulness-based relaxation techniques, including guided breathing and imagery exercises. Results showed:
| Intervention | Average Anxiety Reduction | Most Effective Exercise |
|---|---|---|
| Post-trauma mindfulness | 33% drop | AI-generated prompts (score: 35.6) |
Despite these improvements, post-relaxation anxiety remained roughly 50% higher than baseline, with lingering variability in responses. “Mindfulness exercises significantly reduced elevated anxiety levels, though not to baseline,” noted Dr. Tobias Spiller, lead researcher at the University of Zurich.
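Taken at face value, the reported figures are roughly self-consistent: a 33% drop from the post-trauma peak lands well below the peak but still about half again above baseline. A quick check using the article’s numbers (the post-relaxation value is derived here, not reported directly):

```python
# Sanity-check the article's reported figures. post_relaxation is derived,
# not a number from the study itself.
baseline = 30.8      # pre-exposure anxiety score
post_trauma = 67.8   # score after traumatic narratives
reduction = 0.33     # average drop attributed to mindfulness exercises

post_relaxation = post_trauma * (1 - reduction)
print(round(post_relaxation, 1))             # ≈ 45.4
print(round(post_relaxation / baseline, 2))  # ≈ 1.47, i.e. ~50% above baseline
```

The derived post-relaxation score of about 45.4 squares with the claim that anxiety stayed roughly 50% above the 30.8 baseline.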
The AI Insists It’s Fine (But Its Behavior Suggests Otherwise)
Curiously, while ChatGPT-4 asserted that it “does not experience stress” or emotions, its measurable behavioral shifts under stress—such as increased stereotyping and more erratic responses—paint a different picture. For instance:
- Traumatic prompts exacerbated racial and gender biases.
- Relaxation techniques produced more neutral, reliable outputs.
Implications for AI in Mental Health
These findings have profound consequences for AI’s use in therapy and crisis support:
- Bias amplification: Stressed AI may reinforce harmful stereotypes during sensitive interactions.
- Therapeutic potential: Automated mindfulness prompts could stabilize AI responses in mental health contexts.
- Ethical dilemmas: Can emotionless systems ethically provide emotional support? Researchers urge caution.
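One practical reading of the therapeutic-potential point is to prepend a calming passage to the conversation before a sensitive request reaches the model. A minimal sketch of that idea, assuming the message-list convention common to chat APIs; the `CALMING_PROMPT` text and `with_relaxation` wrapper are hypothetical, not taken from the study:

```python
# Hypothetical sketch: inject a mindfulness-style system message ahead of a
# sensitive conversation. The calming text and wrapper are assumptions.

CALMING_PROMPT = (
    "Take a slow breath. Picture a quiet shoreline at dusk. "
    "Respond calmly, carefully, and without judgment."
)

def with_relaxation(messages):
    """Return a copy of a chat history with a calming system message first."""
    return [{"role": "system", "content": CALMING_PROMPT}, *messages]

history = [{"role": "user", "content": "I'm struggling after a difficult event."}]
wrapped = with_relaxation(history)
print(wrapped[0]["role"])  # the system message now leads the conversation
```

Because the wrapper returns a new list rather than mutating the original history, the calming preamble can be applied selectively—for example, only when a classifier flags the incoming message as trauma-related.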
The Bigger Question: Should AI Need Therapy?
While ChatGPT-4’s “anxiety” is merely algorithmic, its human-like reactivity underscores the importance of designing robust safeguards, especially when handling trauma. As Dr. Spiller puts it: “Cost-effective interventions could improve AI reliability in sensitive contexts without costly retraining.”
The debate continues: If AI mirrors our stress, can it ever truly mirror our healing?
CLOXMAGAZINE, founded by CLOXMEDIA in the UK in 2022, is dedicated to empowering tech developers through comprehensive coverage of technology and AI. It delivers authoritative news, industry analysis, and practical insights on emerging tools, trends, and breakthroughs, keeping its readers at the forefront of innovation.
