Executive Summary: Developing a methodology for using AI to identify social media discussions on mental health and well-being

 

We supported UNICEF in exploring innovative ways to understand online mental health and well-being discourse. Our project explored the use of AI to identify young people's online discussions of mental health, combining focus group input, annotated datasets, and experiments with large language models (LLMs) in English and Russian.

The research builds on a Proof of Concept in Ukraine and expands to Kazakhstan and Tajikistan, exploring scalable methodologies that could provide more evidence for UNICEF and its partners to tailor cost-effective, data-driven mental health interventions. Despite challenges such as data access and cultural nuance, the results demonstrate the potential of AI to provide nuanced insights into sensitive topics. Future work will refine these techniques, enabling more precise tools to inform regional policy, advocacy, and youth support.

 

  • Organisations, including UNICEF, need more information to help deliver better adolescent mental health support services. Many children and young people interact with the digital world at an unprecedented scale, especially via social media sites. Despite the non-representative nature of these sources and the risk of social engineering contamination (which should be evaluated and recognised), analysing data from them can give organisations additional insight into how adolescents discuss mental health issues online. Because such discussion is distributed, online and often anonymous, it offers a unique opportunity to observe uninhibited and authentic responses, which can be particularly valuable for sensitive and often stigmatised topics and demographics, such as mental health issues among adolescents. Used in tandem with traditional data-gathering methods and expertise, these insights can provide further evidence for UNICEF's work on mental health in Europe and Central Asia, including supporting government institutions and civil society organisations in providing more tailored online support services, and offering cost-effective, scalable insights into rapidly changing trends.

    This project builds on a Proof of Concept (POC) commissioned by UNICEF Ukraine, which explored the feasibility of using social media and internet searches to understand adolescent mental health discourse in Ukraine. Expert feedback on that work emphasised the importance of understanding intent, sentiment, cultural differences, and age-specific language when building holistic views of the discourse. The current project takes a more nuanced approach, developing a tool that enables greater insight and can be scaled by other UNICEF country and regional offices. The methodology can also be adapted to explore other context-specific topics beyond mental health.

    This work outlines a scalable, participatory methodology for using AI to identify young people discussing mental health online. To identify key phrases and language young people use online, focus group discussions with young people were conducted in partnership with UNICEF country offices (COs) in Kazakhstan and Tajikistan. These phrases were then used to gather datasets of posts from a social media site, which young people annotated to indicate whether the posts were about mental health and written by other young people. This annotated data was then used to develop and validate the AI approach.

    The AI phase aimed to assess accuracy on two sensitive tasks, identifying mental health content and identifying authors' age group, while providing insights into example-based learning in AI. We evaluated the performance of Large Language Models (LLMs) in detecting mental health content from young people in a wider dataset. Instead of relying on traditional training data, we optimised prompts (the specific questions posed to the model) to enhance accuracy. Different ways of framing questions were tested, along with strategies ranging from using no examples (zero-shot) to multiple examples (three-shot), to determine the most effective approach. These tasks were conducted in English and Russian to assess the models' multilingual capabilities. The model's performance was compared against human annotations made by diverse young evaluators, with the data split by age suitability (over and under 18). We also explored whether examples from older annotators, close to 18 years old, could assist in identifying content from younger authors. The annotations served to evaluate the model's performance against human judgments. A sketch of this set-up is given below.
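
    To make the set-up concrete, the sketch below shows one way the classification loop could look. The call_llm() helper, prompt wording, and example posts are illustrative assumptions rather than the study's actual code.

    ```python
    # End-to-end sketch of the classification loop. call_llm() is a stand-in
    # for a real model client (the study used Mistral); the prompt wording
    # and example posts are illustrative, not the study's exact prompt.
    def call_llm(prompt: str) -> str:
        """Hypothetical LLM call; replace with a real client. Stubbed here."""
        return "no"

    QUESTION = "Answer 'yes' or 'no': is this social media post about mental health?"

    def classify_posts(posts: list[str]) -> list[bool]:
        """Label each post True if the model answers 'yes'."""
        labels = []
        for post in posts:
            answer = call_llm(f"{QUESTION}\n\nPost: {post}\nAnswer:")
            labels.append(answer.strip().lower().startswith("yes"))
        return labels

    print(classify_posts(["I feel so anxious lately", "New phone, who dis?"]))
    ```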

    In a comparative analysis, the LLM consistently outperformed a more traditional lexical approach, which relied on pre-defined terms. The lexical method lacked precision across age groups and mental health classifications, confirming the usefulness of an AI-based approach. We also compared the use of training examples against a zero-shot classification approach; this baseline does not use examples to prime the model, allowing us to assess its inherent capabilities. We found that identifying mental health posts worked well at all shot levels, and the analysis of various shot configurations (adding examples) revealed no clear pattern in performance improvements. Age classification is a complex task that even humans find difficult; the automated processes show some promise in this area but do not perform as well as the identification of a topic. The analysis extended to Russian data, where results were less favourable than in English.

    LLMs showed promise in identifying mental health topics, though age classification remains challenging, particularly with varying annotator age groups and multilingual datasets. Future work will refine example-based prompting and explore additional data sources for improved accuracy.  

    This report recognises the limitations of this type of research. It aims to address gaps while establishing feasibility in a concrete, societally relevant context: monitoring mental-health-related discourse among young people in Kazakhstan and Tajikistan. Access to data remains a barrier; further, there are no established pathways to using this data, and experimentation requires high data and technical literacy. With scant topical research available, the feasibility of undertaking such an approach successfully remains unclear. The overarching aim of this work is to help build an efficient framework supporting UNICEF with advocacy, decision-making and policy evaluation.

  • This project can be thought of as a case study. We used a participatory approach to determine the language used by a specific group on a specific topic, in this case young people from Kazakhstan and Tajikistan discussing mental health. The result is a robust methodology that can be applied to any group and topic for which a distinctive set of language exists and can be extracted to identify the relevant conversations.

     

    The approach can be scaled along several parameters. The volume of data analysed can be increased to cover multiple social media sites, multiple years, other topics (here we consider mental health), and other modalities of content (here we focus on text). The scope of countries covered can be increased beyond two, and the languages used can be expanded (here we study English and Russian), noting that we found the Large Language Models used in this approach perform much better in English.

     

    Large Language Models do not require training data in the same way as previous AI approaches did. Here we experimented with using limited training examples. These do not show a vast improvement in results but do help in trickier cases and improve stability; the systems are usable without them. Annotated data is still required to evaluate the accuracy of such systems, and we have shown that results are very similar if annotation is sourced from young people who are over 18 rather than those under 18, while using very young children does not provide robust results. The option of using annotators who are over 18 but close to that age may increase the scalability of the approach, as these annotators require a less stringent safeguarding approach. A sketch of such an evaluation is shown below.
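
    As an illustration, evaluating model output against human annotations can use standard classification metrics. The labels below are invented for the example, and scikit-learn is assumed to be available; this is not the study's evaluation code.

    ```python
    # Illustrative evaluation of LLM labels against human annotations
    # (labels are made up; scikit-learn is assumed to be installed).
    from sklearn.metrics import precision_recall_fscore_support

    human_labels = [1, 0, 1, 1, 0, 0, 1, 0]  # annotators: about mental health?
    model_labels = [1, 0, 1, 0, 0, 1, 1, 0]  # LLM output for the same posts

    precision, recall, f1, _ = precision_recall_fscore_support(
        human_labels, model_labels, average="binary"
    )
    print(f"precision={precision:.2f}  recall={recall:.2f}  f1={f1:.2f}")
    ```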

     

    To increase methodological robustness, and if compute power were not a restricting factor, it would be appropriate to repeat this study using multiple Large Language Models; here we use one (Mistral). Contrasting such approaches would be useful, as previous work in this domain has used several models and combined their results; a sketch of one way to combine results is shown below. We filtered our dataset using keywords indicative of mental health or young people. This reduced the dataset and made our approach more computationally tractable; further research could consider whether this step is necessary or whether a Large Language Model could be used on a sparser and larger dataset.
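
    One common way to combine several models' outputs, which we assume here purely for illustration, is a simple majority vote over per-post labels; the source only states that previous work combined results, not how.

    ```python
    # Hypothetical sketch of combining labels from several LLMs by majority
    # vote; the models and labels here are placeholders.
    from collections import Counter

    def majority_vote(labels_per_model: list[list[str]]) -> list[str]:
        """Combine per-post labels from several models into one label each."""
        combined = []
        for post_labels in zip(*labels_per_model):
            combined.append(Counter(post_labels).most_common(1)[0][0])
        return combined

    # e.g. three models' yes/no labels for the same four posts
    print(majority_vote([["yes", "no", "yes", "no"],
                         ["yes", "yes", "yes", "no"],
                         ["no", "no", "yes", "no"]]))
    ```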

  • The annotation results indicate that this is a reasonably difficult task for humans to do, but one that can be automated. Older young people perform this task more consistently than younger annotators. The task of identifying posts by young people is more complex than identifying the topic of mental health: even young people find it difficult to identify posts by other young people, and younger annotators find this impossible.

    We have shown that this is a task that can be conducted by a Large Language Model. The results are generally very promising and do not seem to be very sensitive to the different approaches we have trialled.  

    The focus group discussions were sessions where young people were brought together to discuss the mental health issues that affect them, highlight the places where they talked about these issues online, and describe the language young people used when they did so. These sessions provide valuable data that can be used to assemble an initial dataset for annotation. Here we used this to reduce the size of the dataset and processed a smaller set with the Large Language Model. It may be possible to process a larger set without this filtering, although that may be much more computationally expensive. A sketch of the filtering step follows.
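
    As a minimal sketch of the pre-filtering step, posts can be kept only if they contain at least one phrase gathered from the focus groups. The phrase list below is illustrative, not the focus groups' actual output.

    ```python
    # Sketch of keyword pre-filtering: focus-group phrases shrink the
    # dataset before LLM processing. Phrases here are illustrative.
    FOCUS_GROUP_PHRASES = ["burnout", "panic attack", "feel anxious", "no energy"]

    def prefilter(posts: list[str]) -> list[str]:
        """Keep only posts containing at least one focus-group phrase."""
        return [p for p in posts
                if any(ph in p.lower() for ph in FOCUS_GROUP_PHRASES)]

    print(prefilter(["I feel anxious every morning", "Great goal last night!"]))
    ```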

     

    An annotation set is required to evaluate how the Large Language Model performs in new contexts. Younger annotators struggle with this task; annotators who are over 18 but still close to that age do not need extensive safeguarding, identify data more consistently, and understand the age-specific differences in the language used.

     

    Interacting with a Large Language Model is done via a prompt, where a question is asked or a task is described. When prompting, examples can be provided; the number of examples is called the number of shots, where a single positive and a single negative example count as one shot. There is no conclusive recommendation on the number of shots required: sometimes examples improve performance and sometimes they do not, and it is likely that the harder the task, the more the examples help. It may be possible to run in zero-shot mode (with no examples), which may reduce the cost of creating a system, although annotation would still be required for evaluation. A sketch of this prompt construction appears below.
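
    The sketch below assembles a prompt from zero or more shots, following the convention above that one shot is a positive plus a negative example. The wording and example posts are assumptions, not the study's actual prompts.

    ```python
    # Illustrative few-shot prompt assembly; one "shot" = one positive and
    # one negative worked example. Wording and examples are assumptions.
    SHOT_POOL = [
        (("I can't sleep and everything feels hopeless lately.", "yes"),
         ("Just got tickets to the match this weekend!", "no")),
        (("I keep having panic attacks before school.", "yes"),
         ("Does anyone know a good phone under $200?", "no")),
    ]

    def build_prompt(post: str, shots: int = 0) -> str:
        """Return a prompt containing `shots` example pairs (0 = zero-shot)."""
        parts = ["Answer 'yes' or 'no': is this social media post about mental health?"]
        for positive, negative in SHOT_POOL[:shots]:
            for example, label in (positive, negative):
                parts.append(f"Post: {example}\nAnswer: {label}")
        parts.append(f"Post: {post}\nAnswer:")
        return "\n\n".join(parts)

    print(build_prompt("Nobody understands how tired I am of everything.", shots=1))
    ```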

     

    We asked the Large Language Model to identify appropriate content in two ways: 1) by asking a question ('is this social media post about mental health?'), or 2) by asking it to categorise data into mental health and non-mental-health classes. We found that question answering may be more stable than the categorical approach; we feel this could be investigated in more detail. The two framings are sketched below.
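
    For illustration, the two framings can be expressed as prompt templates such as the following; the exact wording used in the study may differ.

    ```python
    # The two prompt framings compared in the study (wording illustrative).
    # 1) Question answering: ask a yes/no question about the post.
    QA_PROMPT = (
        "Is this social media post about mental health? Answer 'yes' or 'no'.\n"
        "Post: {post}\nAnswer:"
    )
    # 2) Categorisation: ask the model to assign the post to a class.
    CATEGORY_PROMPT = (
        "Classify the following post into exactly one category: "
        "'mental health' or 'not mental health'.\n"
        "Post: {post}\nCategory:"
    )

    post = "I keep having panic attacks before school."
    print(QA_PROMPT.format(post=post))
    print(CATEGORY_PROMPT.format(post=post))
    ```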

     

     

 
 

 