Data

Traditional methodologies used to develop machine learning algorithms often overlook the profound social and political underpinnings that shape biases in the training data. This oversight is consequential: unrecognized biases can perpetuate and exacerbate existing inequalities in mental healthcare.

To address these intricate challenges of AI in mental health, the Predictive Care team piloted a study on the use of patient data to predict inpatient risk of violence in the emergency department (ED). Our findings indicated a concerning trend: the likelihood that clinicians and AI systems predicted violence varied significantly with the patient's race. This racial bias extends beyond the ED and appears rooted in systemic racial biases; a significant contributor is racial profiling by law enforcement during the apprehension of individuals for emergency psychiatric care. Bias at this initial point of care has far-reaching implications, potentially skewing the data that feed the entire risk prediction process and leading to inequitable treatment of certain patient groups. In response to these insights, our ongoing research centers on developing machine learning models that are sensitive to these subtleties.
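As an illustration of the kind of disparity audit this line of research involves, the sketch below compares false-positive and false-negative rates of a binary violence-risk prediction across demographic groups. The column names ("race", "predicted_violent", "observed_violent") and the toy data are hypothetical placeholders, not drawn from the pilot study.

```python
# Minimal sketch of a per-group error-rate audit for a binary risk prediction.
# All column names and data below are hypothetical, for illustration only.
import pandas as pd

def per_group_error_rates(df: pd.DataFrame, group_col: str,
                          pred_col: str, label_col: str) -> pd.DataFrame:
    """Return false-positive and false-negative rates for each demographic group."""
    rows = []
    for group, g in df.groupby(group_col):
        negatives = g[g[label_col] == 0]   # cases with no observed violence
        positives = g[g[label_col] == 1]   # cases with observed violence
        fpr = negatives[pred_col].mean() if len(negatives) else float("nan")
        fnr = (1 - positives[pred_col]).mean() if len(positives) else float("nan")
        rows.append({group_col: group, "n": len(g), "fpr": fpr, "fnr": fnr})
    return pd.DataFrame(rows)

# Toy data standing in for ED risk predictions.
data = pd.DataFrame({
    "race": ["A", "A", "A", "B", "B", "B"],
    "predicted_violent": [1, 0, 1, 0, 0, 1],
    "observed_violent":  [1, 0, 0, 0, 0, 1],
})
print(per_group_error_rates(data, "race", "predicted_violent", "observed_violent"))
```

A gap in false-positive rates between groups would indicate that one group is disproportionately flagged as violent when no violence occurs, which is the kind of inequity that bias-aware modelling aims to detect and mitigate.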

Outputs