Is AI for the Unhoused a Lifeline or an Experiment?

A pioneering but deeply contentious initiative in California is deploying artificial intelligence to bridge the vast healthcare gap for the state’s unhoused population, placing advanced technology at the intersection of medical ethics and social desperation. At the heart of this effort is a program from Los Angeles-based health technology firm Akido Labs, which uses its proprietary AI model, Scope AI, to help diagnose and treat individuals living in shelters and street encampments. The approach seeks to address a critical shortage of medical professionals willing and able to conduct street outreach, a systemic failure that contributes to severe health disparities and a significantly lower life expectancy for this vulnerable community. As the technology is tested on the streets, it forces a difficult conversation, weighing the promise of expanded access against the serious risks of deploying an unproven system on those with the fewest resources to withstand its failures. The initiative stands as a bold test case, raising fundamental questions about whether technology can safely and ethically serve society’s most marginalized members or whether it simply represents a new form of high-tech experimentation.

The Digital Doctor’s Visit

The operational framework of Scope AI is designed to integrate seamlessly into the existing street outreach model, empowering non-medically trained workers to initiate clinical processes traditionally reserved for licensed professionals. The workflow starts when an outreach worker, equipped with a tablet or laptop, engages with a patient in their own environment. As the conversation begins, the Scope AI software listens, records, and transcribes the entire interaction. The system is not passive; it analyzes the patient’s responses in real time and dynamically suggests relevant diagnostic questions, creating a guided, interactive interview. This adaptive questioning allows the worker to delve deeper into symptoms and medical history without formal clinical training. Once the AI-guided intake is complete, the system synthesizes the gathered information into a preliminary assessment, which includes suggested diagnoses, recommendations for follow-up tests such as chest x-rays, and potential prescriptions. This entire data package—the interview transcript, patient information, and AI-generated suggestions—is then securely transmitted to a licensed physician, who remotely reviews it and can approve, modify, or reject the plan, ensuring that human oversight remains the final step in the medical decision-making process.
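To make the workflow concrete, here is a minimal Python sketch of an intake pipeline of this general shape. Scope AI’s actual interfaces are not public, so every name below is hypothetical, and a simple keyword table stands in for the real model’s dynamic questioning.

```python
from dataclasses import dataclass, field

@dataclass
class IntakeRecord:
    """Hypothetical data package sent to the reviewing physician."""
    transcript: list[str] = field(default_factory=list)
    suggested_questions: list[str] = field(default_factory=list)
    assessment: str = ""
    physician_decision: str = "pending"  # approve / modify / reject

# Trivial keyword rules standing in for the model's adaptive questioning.
FOLLOW_UPS = {
    "cough": "How long have you had the cough, and is it productive?",
    "pain": "Where is the pain, and how severe is it on a scale of 1-10?",
    "rash": "When did the rash appear, and has it spread?",
}

def suggest_question(patient_reply: str) -> str | None:
    """Return a follow-up question if the reply mentions a known symptom."""
    for keyword, question in FOLLOW_UPS.items():
        if keyword in patient_reply.lower():
            return question
    return None

def run_intake(replies: list[str]) -> IntakeRecord:
    """Transcribe replies, suggest follow-ups, and draft an assessment."""
    record = IntakeRecord()
    for reply in replies:
        record.transcript.append(reply)
        question = suggest_question(reply)
        if question:
            record.suggested_questions.append(question)
    # A real system would synthesize diagnoses, tests, and prescriptions here;
    # this placeholder simply flags the record for remote physician review.
    record.assessment = "Preliminary assessment drafted; pending physician review."
    return record

# Example: the outreach worker records replies, then the package is queued
# for a licensed physician to approve, modify, or reject.
record = run_intake(["I've had a bad cough for weeks", "Also chest pain at night"])
print(record.suggested_questions)
print(record.assessment)
```

In a production system the keyword table would be replaced by the model itself, and the completed record would travel to the reviewing physician over a secure channel rather than being printed.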

This innovative model is sustained financially by Medi-Cal, California’s Medicaid program, through its CalAIM (California Advancing and Innovating Medi-Cal) initiative, a framework designed to expand coverage to include a wider array of social services and community-based support systems. By funding programs that use technology like Scope AI, CalAIM aims to address the social determinants of health directly, recognizing that stable housing, access to food, and consistent medical care are deeply interconnected. This financial backing is crucial, as it allows community organizations to leverage cutting-edge technology without bearing the full cost, making it possible for smaller, trusted groups to provide a level of care previously out of reach. The integration of Scope AI into the state-funded healthcare landscape represents a strategic shift, treating homelessness not just as a housing crisis but as a complex public health issue that requires novel, technology-driven solutions. This public-private partnership is positioned as a scalable model for other regions grappling with similar challenges, turning street encampments and shelters into potential points of entry into the formal healthcare system.

Arguments for Algorithmic Intervention

Proponents of the Scope AI program argue forcefully that it is a vital tool for increasing efficiency and expanding access to care for a population that is systematically underserved. Akido Labs reports a significant impact on productivity in Los Angeles and Kern counties, where the technology has been deployed. Before its introduction, a street medicine doctor could typically manage a caseload of around 200 patients. With Scope AI handling the time-intensive initial intake, documentation, and preliminary assessment, that number has surged to nearly 350 patients per doctor. This increase in capacity is not just an abstract metric; it translates directly into more unhoused individuals receiving timely medical attention, a critical advantage when studies indicate that nearly a quarter of homeless Californians report being unable to obtain necessary medical care. By automating routine tasks, the model allows skilled physicians to focus their expertise on the most complex cases, effectively multiplying their impact without a corresponding increase in personnel. This efficiency is presented as a pragmatic solution to the chronic shortage of healthcare providers in street medicine.
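The arithmetic behind the capacity claim is simple to check. Using only the two per-doctor figures cited above (the multi-doctor projection is illustrative, not a reported number):

```python
# Back-of-the-envelope check on the reported caseload figures.
before, after = 200, 350
gain = (after - before) / before
print(f"Capacity increase per doctor: {gain:.0%}")               # 75%
print(f"Patients reachable by 10 doctors: {10 * before} -> {10 * after}")
```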

Beyond the quantitative gains in efficiency, supporters believe the model can foster deeper trust and cultivate stronger patient relationships, elements that are often missing in traditional healthcare interactions with the unhoused. By offloading the burden of clinical questioning to the AI, outreach workers—who are often members of the community and have established rapport—can focus on the human element of care. Steve Good, CEO of the partner organization Five Keys, notes that this allows workers to address a patient’s holistic needs rather than engaging in the rushed, transactional nature of a brief doctor’s visit. This approach also empowers community-based organizations, such as Reimagine Freedom, to transition from merely providing health education to delivering tangible medical care in a safe and familiar environment. Furthermore, health equity advocates like Stella Tran of the California Health Care Foundation argue for the importance of proactive technological inclusion. They warn that if vulnerable populations are excluded from the development and testing of new AI, the benefits will inevitably flow to wealthier communities, thereby widening the existing healthcare gap. Involving social service providers in the rollout is seen as an essential step in developing the necessary safety protocols to ensure the technology serves everyone equitably.

A Cascade of Concerns and Criticisms

Despite the promises of efficiency and access, the use of diagnostic AI on such a vulnerable population has drawn significant criticism and raised serious ethical and practical alarms. A primary point of contention is the technology’s inherent lack of contextual understanding. Brett Feldman, Director of USC Street Medicine, argues forcefully that the health of a person experiencing homelessness is inextricably linked to their living conditions—a complex and dynamic variable that an algorithm cannot possibly comprehend. He provides a compelling example of treating a patient with scabies who had no access to a shower or laundry facilities. A standard, algorithm-suggested prescription for a medicated body wash would have been utterly useless. The human-centered solution required prescribing an oral medication and navigating the complex logistics of ensuring the patient could receive a second dose a week later. Feldman contends that such nuanced, environment-dependent clinical decisions are far beyond the capabilities of an AI and a remote physician who has not personally witnessed the patient’s living situation. This contextual blindness, critics argue, risks producing treatment plans that are clinically sound on paper but practically impossible to implement on the street.

This potential for decontextualized care leads to a graver concern: any misstep by the AI could have outsized and devastating consequences. For a housed patient with a stable support system, a problematic prescription or a missed diagnosis might be resolved with a simple phone call or a follow-up visit. For an unhoused patient who may lack a phone, reliable transportation, or even a consistent location, a minor medical issue can quickly escalate into a life-threatening crisis. The margin for error is perilously thin. This elevated risk has fueled sharp ethical objections, with some critics arguing that the program constitutes a form of experimentation on a marginalized group that cannot provide fully informed consent. The idea of rolling out a relatively new form of diagnostic technology on individuals who lack the resources and stability to seek recourse if something goes wrong is seen by many as an unacceptable gamble. These ethical dilemmas are compounded by the broader, ongoing debate about the reliability and fairness of AI diagnostics. A 2024 study, for example, found that one AI model was significantly more likely to misdiagnose breast cancer in Black women compared to white women. This highlights the real danger that AI systems, if not meticulously trained on diverse and representative data sets, can perpetuate and even amplify existing racial and socioeconomic biases in healthcare—a particularly pressing concern for the diverse unhoused population.
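The bias concern is, at least in principle, measurable before deployment. The sketch below, using entirely made-up data, shows the kind of subgroup audit that can surface the disparity the 2024 study describes: comparing false-negative rates across demographic groups.

```python
# Minimal sketch of a subgroup fairness audit; the labels and predictions
# here are fabricated purely to illustrate the calculation.

def false_negative_rate(labels: list[int], preds: list[int]) -> float:
    """Share of true positives (label == 1) that the model missed (pred == 0)."""
    positives = [(l, p) for l, p in zip(labels, preds) if l == 1]
    if not positives:
        return 0.0
    misses = sum(1 for _, p in positives if p == 0)
    return misses / len(positives)

# Hypothetical per-group labels (1 = disease present) and model predictions.
groups = {
    "group_a": ([1, 1, 1, 0, 1], [1, 1, 1, 0, 1]),
    "group_b": ([1, 1, 1, 0, 1], [0, 1, 0, 0, 1]),
}

for name, (labels, preds) in groups.items():
    print(f"{name}: false-negative rate = {false_negative_rate(labels, preds):.0%}")
# A large gap between groups is exactly the kind of signal that should
# block deployment until the model is retrained on more representative data.
```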

The Verdict That Has Not Yet Arrived

The deployment of AI in street medicine represents a high-stakes convergence of technological ambition and profound social need. Proponents see a future where algorithms help scale care, reach more people, and empower community workers, turning a tablet into a powerful diagnostic tool. They argue it is a necessary innovation for a crisis that traditional methods have failed to resolve. Critics, in contrast, view it as a perilous experiment, one that outsources complex human judgment to a machine incapable of understanding the lived reality of homelessness. They raise alarms about algorithmic bias, the potential for catastrophic errors, and the ethics of testing such a system on society’s most vulnerable. As the program moves from pilot to established practice, the central question remains unanswered. The initiative forces a critical evaluation of what society values in healthcare: the efficiency and scale offered by technology or the irreplaceable, context-aware compassion of human-to-human interaction. Ultimately, the program’s legacy will depend on whether it is remembered as a genuine lifeline that brought care to the forgotten or as a cautionary tale about the unforeseen consequences of good intentions.
