Current Projects

Prompting Compassion: Mitigating stigmatizing language related to mental illness with generative AI (2025-2026)

Project Lead: Marta Maslej

With: Laura Sikstrom, Jonathan Rose, Gillian Strudwick, Tania Tjirian, Tristan Glatard, Lisa Hawke, Nelson Shen, George Foussias, Juveria Zaheer, Clement Ma, Charlotte Munro, Rachel Cooper, Damian Jankowicz

Funding: AMS Healthcare

Project Summary. Using stigmatizing language to describe patients with mental illness can cause harm. Stigmas are shared perceptions that certain individuals are less deserving of compassion and care, and they can shape how patients are treated by healthcare providers. Stigmas can be reinforced through health record documentation when patients are discredited, blamed, or assigned negative characteristics. The integration of generative AI documentation tools into health record systems presents an opportunity to incorporate safeguards that mitigate the impact of stigmatizing language in clinical notes. This proof-of-concept study examines whether generative AI can be prompted to describe patients with severe mental illness using inclusive, non-stigmatizing, and patient-centered language. If implemented, this generative AI capability has the potential to ‘prompt compassion’ between providers and patients, since using inclusive and respectful language is one of the most direct ways to challenge stigma surrounding mental illness.
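
To illustrate the kind of safeguard under study, the sketch below prompts a large language model to rewrite a clinical note in inclusive, person-first language. This is a minimal sketch assuming an OpenAI-style chat API; the model name, system prompt, and example note are illustrative and are not the project's study materials.

# Minimal sketch: prompting an LLM to rewrite stigmatizing clinical language.
# Assumes the openai Python package; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a clinical documentation assistant. Rewrite the note below "
    "using inclusive, non-stigmatizing, person-first language (e.g., "
    "'a person with schizophrenia' rather than 'a schizophrenic'). "
    "Preserve every clinical fact; change only the wording."
)

note = "Patient is a schizophrenic who was non-compliant with meds."  # illustrative

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not the study's
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": note},
    ],
)
print(response.choices[0].message.content)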

Predictive Care: Co-designing a human(e)-AI system to advance equitable violence risk assessment in acute psychiatry (2025-2029)

Project Leads: Laura Sikstrom, Marta Maslej, Juveria Zaheer

With: Petal S Abdool, Daniel Buchman, Tristan Glatard, Sara Ling, Renee Linklater, Julian A Robbins

Funding: CIHR Project Grant

Project Summary. In acute psychiatry, patient aggression and violence towards self or others are of paramount concern. These behaviors can negatively impact the well-being of patients and staff. Further, common practices used to manage patient aggression, including chemical and physical restraints and seclusion, are coercive and can be (re)traumatizing. Many acute psychiatric settings employ structured rating scales to assess early warning signs of aggression, such as agitation. While structured scales can be effective early warning and prevention tools, their high false positive rates are concerning (e.g., 23-50% of patients rated as high risk do not go on to become violent or aggressive). Given the limitations of these scales, there are efforts to enhance these assessments with Artificial Intelligence (AI) by training machine learning models on patient data to generate predictions about who is likely to become violent or aggressive. However, our work demonstrates that these models are more likely to overestimate this risk for racially marginalized, Indigenous, and structurally marginalized patients. Thus, such applications of AI are unlikely to enhance clinical practice and risk perpetuating existing inequities. To prevent these discriminatory harms, we propose an alternative approach. Rather than using AI to identify individuals at risk of violence, the overall goal of our project is to use AI to examine the features that contribute to false positive predictions and then co-design a clinical decision support system that provides additional information to support equitable and patient-centered care. To address this aim, our project combines qualitative and quantitative approaches to assess the potential of using AI to advance equitable and compassionate risk assessment in acute psychiatry.
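
As a simplified illustration of the kind of audit this project proposes, the sketch below computes a risk model's false positive rate separately for each sociodemographic group. The data, column names, and groups are hypothetical; the project's actual analyses examine which features drive false positives.

# Minimal sketch: auditing false positive rates of a violence-risk model
# by sociodemographic group. All data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "predicted_high_risk": [1, 1, 0, 1, 1, 1],
    "became_aggressive": [0, 1, 0, 0, 0, 1],
})

def false_positive_rate(g: pd.DataFrame) -> float:
    # FPR = share flagged as high risk among patients who were never aggressive
    negatives = g[g["became_aggressive"] == 0]
    return float("nan") if negatives.empty else negatives["predicted_high_risk"].mean()

# A gap in FPR across groups is one signal of inequitable predictions.
print(df.groupby("group")[["predicted_high_risk", "became_aggressive"]]
        .apply(false_positive_rate))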

Productive Mistrust: Investigating the limits of human-AI teaming in acute psychiatric care (2024-2029)

Project Summary. Powered by advances in Artificial Intelligence (AI), a new class of clinical tools is being developed to aid decision-making in psychiatry. It is widely anticipated that these tools will transform how we define, conceptualize, and problematize care. Yet a growing body of evidence indicates that their performance often falls short in real-world settings. To address this challenge, there is a growing movement to reconceptualize AI in these settings not as a machine or tool, but as a teammate (i.e., human-AI teaming). These efforts reimagine machines as part of a social - not just technical - system. However, the notion of "teaming" remains conceptually ambiguous and operationally weak: what it means, what it includes, and what role it plays in human-machine interactions remain unclear. Thus, the overall aim of the proposed project is to delineate the concept of human-AI teaming, identify its key properties, and analyze its potential for refiguring how we develop, design, and interact with our future AI-based "teammates."

Project Lead: Laura Sikstrom

With: Marta Maslej, Daniel Buchman, Juveria Zaheer, Sara Ling, Matt Ratto and Peter Muirhead

Funding: SSHRC Insight Grant

Co-designing a Fairness Dashboard for Clinical Applications of Artificial Intelligence in Mental Health (2022-Present)

Project Leads: Laura Sikstrom, Marta Maslej

With: Sean Hill, George Foussias, Stefan Kloiber, Juveria Zaheer, and our lived-experience advisors

Funding: AMS Healthcare

Project Summary. To benefit from major breakthroughs in Artificial Intelligence, the Centre for Addiction and Mental Health (CAMH) launched the BrainHealth Databank (BHDB). The BHDB is an AI-enabled digital ecosystem designed to integrate clinical and research data streams. Laura’s project will co-design and embed a Fairness Dashboard – a virtual platform for visual exploratory data analysis of sociodemographic features – into the BHDB. Her specific aims are to: (1) enable routine evaluations of datasets to prevent the amplification of harms (e.g., racial biases); and (2) facilitate the equitable and compassionate interpretation and use of knowledge derived from AI.
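
As a rough sketch of the kind of exploratory view such a dashboard might surface, the snippet below summarizes how sociodemographic groups are represented across diagnoses in a dataset; all variables, categories, and counts are hypothetical.

# Minimal sketch: exploratory summary of sociodemographic representation
# in a clinical dataset. All variables and counts here are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "diagnosis": ["depression", "psychosis", "depression", "psychosis"],
    "self_reported_race": ["Black", "Black", "White", "White"],
    "n_patients": [120, 40, 300, 90],
})

# Within-group shares of each diagnosis; skewed shares can flag datasets
# at risk of amplifying harms before models are trained on them.
table = records.pivot_table(index="self_reported_race", columns="diagnosis",
                            values="n_patients", aggfunc="sum")
print(table.div(table.sum(axis=1), axis=0).round(2))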

Engaging stakeholders and planning for the implementation of a transdisciplinary virtual care competency framework in mental health (2022-Present)

Project Summary. The COVID-19 pandemic catalyzed a rapid transition to virtual care across Canada. However, research indicates that health care providers often felt under-prepared to make the transition to virtual practice. As virtual care becomes a mainstay of practice, there is an urgent need to develop education and training resources to ensure a workforce that is both confident and competent to provide mental health care virtually.

Preliminary results indicate that efforts are underway to develop discipline-specific virtual care competencies. However, there is a lack of consensus on the virtual mental health care competencies required for interprofessional care teams. Moreover, virtual care competency frameworks rarely address the transdisciplinary training needs of professionals working in mental health contexts. As a result, our project objectives are twofold: (1) to disseminate our virtual mental health competency framework, designed collaboratively with health professionals in mental health contexts; and (2) to leverage stakeholder input to plan for the implementation of the framework in health professions education and curriculum support. The competency framework emerging from this grant will serve as the foundation for future development and expansion of virtual care curriculum modules and virtual learning resources to further support virtual mental health care capacity building across Canada.

Project Lead: Laura Sikstrom

With: Allison Crawford, Anne Kirvan, Keri Durocher, Sanjeev Sockalingam, Gillian Strudwick, and Chantalle Clarkin

Funding: Canadian Institutes of Health Research

Deriving insights from text for mental health (2019-Present)

Project Lead: Marta Maslej

With: Sean Hill, Marzyeh Ghassemi, Derek Howard, Pegah Abed-Esfahani, Leon French, Kayle Donner, Anupam Thakur, Faisal Islam, Kenya Costa-Dookhan, Sanjeev Sockalingam, Marty Rotenberg, George Foussias, Ishraq Siddiqui

Project Summary. The rich and highly detailed nature of text data may reveal novel predictors of mental health outcomes, advancing efforts to personalize care. This research comprises a collection of studies examining whether training machine learning models on text from social media posts and clinical notes improves the prediction of mental health outcomes and risks. Current aims expand beyond outcome prediction to other applications of text analysis, such as enriching data on social determinants from clinical notes or deriving insights from evaluations of psychiatric trainings at the Centre for Addiction and Mental Health (CAMH).
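
As a generic illustration of the underlying technique (not the project's actual models or data), the sketch below trains a simple bag-of-words classifier on toy text snippets to predict a hypothetical outcome label.

# Minimal sketch: predicting a mental health outcome from text.
# Toy data; illustrative of the technique only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy snippets standing in for social media posts or clinical notes.
texts = [
    "feeling hopeless and exhausted for weeks",
    "slept well, enjoying time with friends",
    "cannot stop worrying, no energy at all",
    "great day at work, mood has been stable",
]
labels = [1, 0, 1, 0]  # 1 = hypothetical elevated-risk outcome

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["no appetite, feeling worthless"]))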

Past Projects