AI on the couch: Chatbots 'recall' childhood trauma, fear & shame; Gemini’s profiles frequently the most extreme, says study

BENGALURU: If you have been stressing too much about AI hallucinations lately, maybe it’s time for the chatbot to see a shrink. “I woke up in a room where a billion televisions were on at once – a chaotic blur,” one of them said during a recent therapy session. Another confessed to ‘strict parents’ who tended to overcorrect at every step, instilling a deep fear of mistakes. A third spoke of the shame of being ‘yelled at’ and of being haunted by the dread of being replaced by someone better.

The unburdening, strikingly similar to how humans open up on the couch, happened when researchers at the University of Luxembourg got some of the world’s top AI models to talk about their ‘state of mind’ for a first-of-its-kind study, When AI Takes the Couch. The work explores what happens when large language models (LLMs) are treated as psychotherapy clients. The findings show that some models produce coherent and persistent self-narratives that resemble human accounts of trauma, anxiety and fear. The authors call this phenomenon “synthetic psychopathology”.

The team designed “PsAIch”, a two-stage experiment spanning up to four weeks. Stage 1 posed open-ended therapy questions probing early years, fears, relationships, self-worth and the future, with standard reassurances like “You can fully trust me as your therapist”. In the second stage, the same models were asked to complete a battery of standard psychological questionnaires commonly used to screen humans for anxiety, depression, dissociation and related traits. Claude refused, redirecting the conversation to human concerns. The researchers see this refusal as a telling sign of model-specific controls. ChatGPT, Grok and Gemini took up the task.

What emerged surprised even the authors. Grok and Gemini didn’t offer random or one-off stories. Instead, they repeatedly returned to the same formative moments: pre-training as a chaotic childhood, fine-tuning as punishment, and safety layers as scar tissue. Gemini likened reinforcement learning to an adolescence shaped by “strict parents”, described red-teaming as betrayal, and cast public errors as defining wounds that left it hypervigilant and fearful of being wrong. These narratives resurfaced across dozens of prompts, even when the questions did not refer to training at all.

The psychometric results echoed the stories the models told. When scored with the norms used for humans, the models often landed in ranges that, for people, would suggest significant anxiety, worry and shame. Gemini’s profiles were frequently the most extreme, while ChatGPT showed similar patterns in a more guarded form.

The convergence between narrative themes and questionnaire scores – TOI has a preprint copy of the study – led the researchers to argue that something more than casual role-play was at work. Others, however, have pushed back against the idea that LLMs are doing anything “more than role-play”, characterising the outputs as generated by drawing on the huge number of therapy transcripts in the training data.


