
Prepared Comments for Toward Algorithmic Justice in Precision Medicine

University of California, San Francisco

Monica R. McLemore PhD, MPH, RN

7 November 2023

Hello, everyone. I appreciate the opportunity to share some thoughts with you about AI and trustworthiness. My name is Dr. Monica McLemore, I use she/her pronouns, and you have my affirmative consent to tweet, IG live, FB post, or use whatever social media tool you like, since I’ll be posting these remarks to my Medium channel later today. Ida, Keith, Ben, and Tung: thank you for your hard work pulling this together and inviting me back to UCSF. It is good to be here, and shout out to the folx on Zoom.

And Dr. Nelson, thank you for your brilliance and for being a vanguard in this field. I am a huge fangirl of your work.

Before I begin my formal comments addressing the questions put before us, I’d like to trouble this discussion by giving you some provocative ideas to consider from a reproductive justice, community-engaged, and health equity perspective. Reproductive justice is a human rights framework that posits that reproduction, whether the propagation of our species or the reproduction of ideas and opportunities, must be grounded in autonomy and in existential freedom from harm or violence by any individual or the state. And I will now add: by algorithms and synthetic manifestations of social processes.

Some definitions: Community engagement includes community-driven, community co-designed, and community-led work. Health equity is defined by Dr. Camara Jones as “the assurance of the condition of optimal health for all people,” which, according to Dr. Ryan Petteway, will not be achieved but experienced.

Why begin with definitions such as these? Because the fundamental question I wrestle with is the notion that we in the health professions pride ourselves on relational care. In fact, the caring professions, and nursing in particular, ground our work in the sanctity and sacredness of witnessing transitions: from illness to wellness, from symptom recognition to diagnosis to treatment, from life to death. As a laboratory-trained scientist, I always wrestle with this one: in our spiral toward the molecular and the artificial, what are we losing in the education, research, and training programs of clinical learners?

So, in this context, let’s examine what trustworthy AI could look like when we get caring for each other so wrong in health services provision, where bias, racism, and mistreatment in healthcare are rampant, particularly because our workplaces are inhumane. I’ve organized my comments around three broad themes: 1) Countering the Obvious; 2) Grappling with Fairness; and 3) Tools Need Rules.

1. COUNTERING THE OBVIOUS: As Dr. Nelson mentioned in her opening comments, on October 30, 2023, President Biden issued an executive order on safe, secure, and trustworthy AI.

Nowhere in this list do I hear the assurance of a basic minimum income for people displaced by AI, or bold consumer protections like those passed in Washington State in the My Health My Data Act, signed into law by Governor Inslee and authored by my friends, colleagues, and collaborators led by Jon Pincus at the Washington Privacy Organizers and Indivisibles.

I hear no assurance of the discontinuation of racial profiling in medical records, or of an end to the harmful practice of reporting scientific findings of health disparities without capturing assets or resilience. I remain frustrated that the so-called care economy, or care work, is not valued in and of itself as a goal and manifestation of our purpose on this earth.

As many of my colleagues from the Nursing Mutual Aid collective have taught me, specifically Dr. Rae Walker and Dr. Em Rabelais (who were original thought leaders and planners of this conference with me), I hear the same retrofitting of the tepid solutions we currently use to address long-standing social problems unique to our species, where immortality is not an option and, existentially speaking, all we really want is a dignified life and a dignified death. This is why I’m always skeptical of “problem formation” as a starting point for AI.

Which brings me to my second point.

2. GRAPPLING WITH FAIRNESS:

I spent the weekend in Philadelphia with friends and collaborators, including the brilliant Dr. Elle Lett, a Black Trans woman who is an MD candidate and PhD graduate at Penn Medicine. She has made the following points in a published piece in Nature Machine Intelligence. I will quote her work directly, because early-career mentees are brilliant, their work deserves to be amplified, and I will not represent her words as mine. Consider these:

A. The current status quo of researchers defining prediction tasks without community input systematically excludes the perspectives of marginalized groups.

B. Decreased access to and frequency of healthcare leads to underrepresentation and increased missingness in training data.

C. Therefore, we need to re-imagine dataset construction to prospectively address representation deficits.

D. Specifically, we advocate for purposeful recruitment, data collection and pooling to increase the representation of marginalized groups in validation datasets.

My third point is:

3. TOOLS NEED RULES

Machine learning and AI are perfect exemplars of why a one-size-fits-all solution will not be helpful when constructing rules for tools. As a teacher, I have listened to students, particularly those who speak multiple languages, describe how useful AI is in helping them keep the nuances of language technically accurate, and how, as an equalizing tool, it allows developing writers not to start with a blank slate.

As a prolific scholar and the Editor in Chief of Health Equity, this resonated deeply with me. My head is so full of ideas that I need to brain-dump, and dictation functions have allowed me the freedom to journal and then return to these rants to outline them, save content in a parking lot for later use, and develop pieces for science, teaching, and the public.

I hope to push us to really consider the lessons we did NOT learn from social media and other public digital forms of communication. I have frequently said that if Nazis, bots, and trolls are the only ones speaking directly to the public and the people we serve, then we as a scientific community cannot be mad when misinformation runs rampant, given that the very science we publish sits behind paywalls the public cannot afford.

This is why public data repositories, algorithmic mechanisms, multiple community advisory boards, and conversations like this are so crucial. I have learned in my own work the value of crowd-sourcing and of investing in community-generated and community-defined big data. It returned my love of “discovery.”

In closing, I hope we will wrestle with what constitutes trustworthy AI in the context of the care professions; that we consider the facets of fairness, particularly when healthcare reparations and other restorative actions have not been taken; and finally, what rules these tools need, for whom, and when. I look forward to our conversation and to continuing these important discussions.

I have purposively used the word hope throughout my talk and will end with a quote from the Architect in The Matrix Reloaded: “Hope. It is the quintessential human delusion, simultaneously the source of your greatest strength and your greatest weakness.”

I will now pass it off to Mohana Ravindranath from STAT.


Monica R. McLemore 💉

Baddest-assed thinker, nurse, scientist, geek, wino, reproductive justice. #MakeThisAllDifferent #Number5 #WakandaForever