Two futures, already arriving
There are two pictures of what AI does for the poor, and both are in motion right now. One is the version every AI marketing deck shows you. AI tutors that lift literacy in rural villages. AI diagnostic tools that bring specialist care to towns with no doctor. AI legal aid for people who would never see a lawyer. AI agricultural advice that triples a smallholder’s yield. AI translation that lets a single mother in a refugee camp file paperwork in three languages she does not read.
The other picture is quieter. AI denying benefits because a model decided the claim looked fraudulent. AI ranking job applicants by zip code, by name, by speech pattern. AI setting bail and sentencing recommendations based on data that already encoded a generation of bias. AI deciding which apartments a renter is shown. AI credit scoring that locks out the people most in need of credit. AI surveillance that follows the poor into every shelter, clinic, and food line, while it never enters the gated neighborhood three miles away.
Both futures are real. Neither is hypothetical. The question is which one is going to compound, and at what rate, over the next decade — and which one we are actually optimizing for when nobody is watching.
Why the poor are the first test population
This is not new. Every powerful technology in modern history has been tested on people who could not refuse it. Pharmaceuticals on incarcerated populations. Welfare reforms on single mothers. Pesticides on farmworkers. Surveillance on immigrants. The pattern is consistent because the incentives are consistent. People at the bottom have less voice, less money, less media access, and weaker legal recourse when the test goes wrong.
AI inherits that pattern by default. Public agencies deploy it first because their budgets are tight and their clients cannot leave. Vendor pitches sound exactly the same as the welfare-to-work pitches of thirty years ago: faster decisions, fewer staff, better targeting. The poor are not the consumer. They are the dataset.
Where AI quietly harms the poor
These are not edge cases. They are the central deployment surface of automated decision systems today. Each one is well documented. Each one is also barely visible from inside a normal tech career.
- Benefits denial — fraud detection models that disproportionately flag the people they were supposed to serve.
- Hiring filters — resume screeners that downrank addresses, names, gaps, and accents.
- Tenant screening — opaque models deciding which renters get a viewing.
- Criminal-legal risk scoring — bail, sentencing, and parole shaped by tools the defendant cannot inspect.
- Predatory finance — automated upsells that move the financially fragile toward worse instruments faster.
- Healthcare triage — risk algorithms that allocate care by historical spending, which tracks income, not need (a toy sketch of this mechanism follows the list).
- Migration and asylum — voice analysis, document verification, and credibility scoring on people who cannot push back.
- Education tracking — adaptive systems that quietly lower the ceiling on what a child is shown.
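The healthcare-triage item deserves one concrete illustration, because the mechanism is easy to miss: even a perfectly accurate model reproduces a biased proxy. The sketch below is a toy simulation, not any deployed system. Every number, and the `low_access` variable, is invented for illustration; it assumes only that recorded spending runs below true need for people who face barriers to care.

```python
# Toy sketch of the proxy-label problem: a triage model trained to predict
# historical spending, rather than medical need, inherits the access gap
# baked into the spending data. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True medical need is identically distributed in both groups.
need = rng.normal(50, 10, n)
low_access = rng.random(n) < 0.5  # half the population faces access barriers

# Historical spending tracks need AND access: at the same level of need,
# people who cannot reach care spend less.
spending = need * np.where(low_access, 0.6, 1.0) + rng.normal(0, 5, n)

# A model fit to the spending label would simply learn this column, so we
# rank on the proxy directly: the top 10% by spending get extra care.
flagged = spending >= np.quantile(spending, 0.90)

# Among people whose true need is in the top 10%, who actually gets flagged?
high_need = need >= np.quantile(need, 0.90)
print("share of genuinely high-need people flagged for extra care:")
print("  low-access :", round(flagged[high_need & low_access].mean(), 3))
print("  full-access:", round(flagged[high_need & ~low_access].mean(), 3))
```

The point of the sketch is that nothing in the pipeline is malicious or even inaccurate. The harm enters at the moment someone chose spending as the training label, because it was the column that existed.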
The moral question is not new
Every faith tradition I have spent serious time inside has a version of the same teaching. The measure of a society is how it treats the least of these — the poor, the foreigner, the orphan, the prisoner, the sick, the old, the addict, the unhoused. The technical word might be alignment. The older words are justice and mercy. They have been waiting on us for a long time.
AI does not change that question. It scales it. A bias that used to require a biased human in the loop can now run a million times a second with no human in the loop at all. A merciful default that used to require a clerk who chose to help can now be replaced with a confidence threshold nobody on the team can explain. The moral architecture does not disappear when you automate the decision. It just becomes harder to see.
The rescue mission lens, applied
Years of rescue mission work taught me a small set of questions you learn to ask about any program, policy, or tool that touches people in crisis. They translate almost without modification to AI.
- Who is in the room when this is designed, and who is missing from the room?
- Whose data was used to train it, and did they consent in any meaningful sense?
- When it makes a mistake, who is hurt — and who hears about it?
- Is there a human being the person affected can actually reach if the system says no?
- Is there a path back to dignified service, or only an appeal form?
- Does it treat the person as a citizen or as a risk score?
- Would the people who built it be willing to live under the same system, with no power to override it?
What good AI for the poor would look like
Almost all of it is unsexy. There are no billion-dollar product categories here. There is, however, an enormous amount of compounding human good available to a society that decides to build it.
- Plain-language navigation of benefits, housing, healthcare, immigration, and legal systems.
- A reliable, free, multilingual interface to public services for people who cannot read the forms.
- Caseworker support that increases the number of people the system can actually pay attention to, instead of replacing the caseworkers.
- Eviction-prevention and benefits-uptake tooling that finds people before the cliff.
- Translation, transcription, and accessibility infrastructure built as a public good.
- Independent audit and inspection tools so journalists, lawyers, and pastors can examine the systems being used against the people they serve.
- Privacy defaults strong enough that vulnerable people do not have to choose between asking for help and being permanently legible to the state.
Accountability is the load-bearing requirement
Every honest version of pro-poor AI eventually lands on the same word. Accountability. Not transparency — transparency without consequences is a press release. Not explainability — explainability that nobody reads is a compliance asset. Accountability means somebody is responsible, and the responsibility carries weight when the system harms a person who cannot afford to fight it.
That accountability is going to look like institutions, not models. Public advocates with subpoena power over public-sector AI. Mandatory human review of high-stakes denials, with the cost of that review carried by the operator and not the applicant. A real right to inspect, contest, and exit any automated decision touching housing, employment, benefits, healthcare, or freedom. Until those exist, "responsible AI" is a marketing category.