Summary: Diagnosing PTSD in children is often hindered by limited communication and subjective assessments, but new research is using AI to bridge that gap. By analyzing facial movements during interviews, researchers created a privacy-preserving system that can detect PTSD-related expression patterns.
The system does not use raw video but instead tracks non-identifying facial cues such as eye gaze and mouth movement. The study found that children’s facial expressions during clinician-led interviews were especially revealing.
Key Facts:
- Privacy-Preserving AI: The system analyzes de-identified facial movement data, never raw video, to protect patient privacy.
- Objective PTSD Markers: Distinct facial expression patterns were identified in children with PTSD.
- Clinician Sessions Most Revealing: Children were more emotionally expressive with clinicians than with parents.
Source: University of South Florida
Diagnosing post-traumatic stress disorder (PTSD) in children can be extremely difficult. Many, especially those with limited communication skills or emotional awareness, struggle to explain what they’re experiencing.
Researchers at the University of South Florida are working to address those gaps and improve patient outcomes by merging their expertise in childhood trauma and artificial intelligence.
Led by Alison Salloum, professor in the USF School of Social Work, and Shaun Canavan, associate professor in the Bellini College of Artificial Intelligence, Cybersecurity and Computing, the interdisciplinary team is building a system that could give clinicians an objective, cost-effective tool to help diagnose PTSD in children and adolescents, while tracking their recovery over time.
The research, published in Pattern Recognition Letters, is the first of its kind to use context-aware PTSD classification while fully preserving patient privacy.
Traditionally, diagnosing PTSD in children relies on subjective clinical interviews and self-reported questionnaires, which can be limited by cognitive development, language skills, avoidance behaviors or emotional suppression.
“This really started when I noticed how intense some children’s facial expressions became during trauma interviews,” Salloum said. “Even when they weren’t saying much, you could see what they were going through on their faces. That’s when I talked to Shaun about whether AI could help detect that in a structured way.”
Canavan, who specializes in facial analysis and emotion recognition, repurposed existing tools in his lab to build a new system that prioritizes patient privacy. The technology strips away identifying details and only analyzes de-identified data, including head pose, eye gaze and facial landmarks, such as the eyes and mouth.
“That’s what makes our approach unique,” Canavan said. “We don’t use raw video. We completely get rid of the subject identification and only keep data about facial movement, and we factor in whether the child was talking to a parent or a clinician.”
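In practice, that de-identification step amounts to keeping only numeric movement descriptors for each video frame and never storing the pixels. The sketch below illustrates the idea in Python; the feature names, dimensions and record layout are illustrative assumptions, not the team’s actual pipeline.

```python
# Minimal sketch of the de-identification idea described above: keep only
# non-identifying motion features per video frame and discard all pixels.
# The field names and dimensions are assumptions for illustration.
from dataclasses import dataclass
import numpy as np

@dataclass
class DeidentifiedFrame:
    head_pose: np.ndarray       # (3,) pitch, yaw, roll angles
    eye_gaze: np.ndarray        # (2,) horizontal/vertical gaze direction
    landmarks: np.ndarray       # (68, 2) facial landmark coordinates
    au_intensities: np.ndarray  # (17,) facial action unit intensities
    context: str                # "clinician" or "parent" session

def deidentify(raw_features: dict, context: str) -> DeidentifiedFrame:
    """Retain only movement descriptors; the image itself is dropped."""
    return DeidentifiedFrame(
        head_pose=np.asarray(raw_features["pose"], dtype=np.float32),
        eye_gaze=np.asarray(raw_features["gaze"], dtype=np.float32),
        landmarks=np.asarray(raw_features["landmarks"], dtype=np.float32),
        au_intensities=np.asarray(raw_features["aus"], dtype=np.float32),
        context=context,
    )  # no pixel data ever enters the stored record
```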
The team built a dataset from 18 sessions with children as they shared emotional experiences. With more than 100 minutes of video per child and each video containing roughly 185,000 frames, Canavan’s AI models extracted a range of subtle facial muscle movements linked to emotional expression.
The findings revealed that distinct patterns are detectable in the facial movements of children with PTSD. The researchers also found that facial expressions during clinician-led interviews were more revealing than parent-child conversations.
This aligns with existing psychological research showing children may be more emotionally expressive with therapists and may avoid sharing distress with parents due to shame or their stage of cognitive development.
“That’s where the AI could offer a valuable supplement,” Salloum said. “Not replacing clinicians, but enhancing their tools. The system could eventually be used to give practitioners real-time feedback during therapy sessions and help monitor progress without repeated, potentially distressing interviews.”
The team hopes to expand the study to further examine any potential bias from gender, culture and age, especially preschoolers, where verbal communication is limited and diagnosis relies almost entirely on parent observation.
Though the study is still in its early stages, Salloum and Canavan feel the potential applications are far-reaching. Many of the current participants had complex clinical pictures, including co-occurring conditions like depression, ADHD or anxiety, mirroring real-world cases and offering promise for the system’s accuracy.
“Data like this is incredibly rare for AI systems, and we’re proud to have conducted such an ethically sound study. That’s crucial when you’re working with vulnerable subjects,” Canavan said. “Now we have promising potential from this software to give informed, objective insights to the clinician.”
If validated in larger trials, USF’s approach could redefine how PTSD in children is diagnosed and tracked, using everyday tools like video and AI to bring mental health care into the future.
About this AI and PTSD research news
Author: John Dudley
Source: University of South Florida
Contact: John Dudley – University of South Florida
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Multimodal, context-based dataset of children with Post Traumatic Stress Disorder” by Alison Salloum et al. Pattern Recognition Letters
Abstract
Multimodal, context-based dataset of children with Post Traumatic Stress Disorder
The conventional method of diagnosing Post Traumatic Stress Disorder by a clinician has been subjective in nature, taking specific events/context into consideration.
Developing AI-based solutions to these sensitive areas calls for adopting similar methodologies.
Considering this, we propose a de-identified dataset of child subjects who are clinically diagnosed with/without PTSD in multiple contexts.
This dataset can help facilitate future research in this area.
For each subject in the dataset, the participant undergoes several sessions with clinicians and/or a guardian that bring out various emotional responses from the participant.
We collect videos of these sessions and, for each video, we extract several facial features that remove the identity information of the subjects.
These include facial landmarks, head pose, action units (AU), and eye gaze.
To evaluate this dataset, we propose a baseline approach to identifying PTSD using the encoded action unit (AU) intensities of the video frames as the features.
We show that AU intensities intrinsically capture the expressiveness of the subject and can be leveraged in modeling PTSD solutions.
The AU features are used to train a transformer for classification where we propose encoding the low-dimensional AU intensity vectors using a learnable Fourier representation.
We show that this encoding, combined with a standard Multilayer Perceptron (MLP) mapping of AU intensities, yields a superior result compared to either component alone.
We apply the approach to various contexts of PTSD discussions (e.g., clinician-child discussion), and our experiments show that using context is essential in classifying videos of children.
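To make the abstract’s encoding scheme concrete, the following PyTorch sketch pairs a learnable Fourier representation of per-frame AU intensity vectors with a standard MLP mapping and feeds the combined features to a transformer encoder for binary classification. The AU dimensionality, layer sizes and mean-pooling choice are assumptions for illustration; the paper’s actual architecture may differ.

```python
# Illustrative sketch of the baseline described in the abstract: AU intensity
# vectors encoded with learnable Fourier features, combined with an MLP
# mapping, then classified by a transformer. Sizes are assumed, not reported.
import torch
import torch.nn as nn

class LearnableFourierEncoding(nn.Module):
    """Project low-dimensional AU vectors through learnable frequencies,
    then apply sin/cos to produce a Fourier feature representation."""
    def __init__(self, au_dim: int = 17, n_freqs: int = 32):
        super().__init__()
        self.proj = nn.Linear(au_dim, n_freqs, bias=False)  # learnable frequencies

    def forward(self, x):                   # x: (batch, frames, au_dim)
        f = self.proj(x)                    # (batch, frames, n_freqs)
        return torch.cat([torch.sin(f), torch.cos(f)], dim=-1)  # (..., 2*n_freqs)

class AUTransformerClassifier(nn.Module):
    def __init__(self, au_dim: int = 17, d_model: int = 128, n_classes: int = 2):
        super().__init__()
        # Fourier branch yields d_model/2 dims; MLP branch yields the other half.
        self.fourier = LearnableFourierEncoding(au_dim, n_freqs=d_model // 4)
        self.mlp = nn.Sequential(nn.Linear(au_dim, d_model // 2), nn.GELU())
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, au_seq):              # au_seq: (batch, frames, au_dim)
        z = torch.cat([self.fourier(au_seq), self.mlp(au_seq)], dim=-1)
        h = self.encoder(z)                 # temporal self-attention over frames
        return self.head(h.mean(dim=1))     # pool over frames, classify PTSD vs. control
```

A forward pass on dummy input, e.g. `AUTransformerClassifier()(torch.randn(2, 300, 17))` for two 300-frame AU sequences, returns one logit pair per clip; concatenating the two branches is one plausible reading of the abstract’s “combined” encoding.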