Part II: Truth, Compassion, and Agency

The Agency Test

Does AI make people stronger or more dependent?

By Randy Salars
Article #6 of 10 · 10 min read
Thesis

The safest AI systems should increase human capacity over time, not train users into dependency, passivity, or learned helplessness. Agency preservation is a measurable alignment target.

The hidden danger of convenience

Every powerful tool trades some friction for some capacity. Calculators removed arithmetic practice. Spell-checkers removed spelling pressure. GPS removed the felt sense of navigation. Search engines removed the muscle of remembering facts in detail. Each trade was probably worth it. Each one also took something from us that we mostly stopped noticing.

AI is the most powerful version of this trade we have ever made. It removes far more friction than any prior tool, in far more domains, on a far steeper curve. The question is not whether the trade will continue. It will. The question is whether we are paying attention to what we are giving up — and whether the systems themselves are designed to give some of it back.

What agency actually is

Agency is a stack of capacities, not a single trait. It is the ability to understand a situation, to choose between options, to act on the choice, to learn from the result, and to take responsibility for what follows. Each layer can be developed. Each layer can also atrophy.

The cliché version of agency is a kind of stoic self-reliance — the lone person making their own decisions without help. That is not what is at stake here. People have always relied on tools, teachers, mentors, friends, and institutions. The question is whether the reliance leaves them more or less capable of acting for themselves when they need to.

How AI can quietly weaken agency

There are several specific patterns by which AI can erode the agency stack. They look like helpfulness in the moment. They add up to dependency over time.

  • Over-automation — the system does the thing instead of teaching the user how to do it.
  • Decision outsourcing — the user routes more and more judgment through the model.
  • Emotional dependence — the user checks in with the system instead of sitting with discomfort.
  • Skill decay — capacities that are not exercised quietly disappear.
  • Constant validation — the user gets agreement so consistently that they stop noticing when they are wrong.
  • Passive consumption — the user receives content rather than producing thought.

How AI can strengthen agency

The same system that can weaken agency can strengthen it, with different design choices. None of these require new model capabilities. They require the will to optimize for something other than instant helpfulness.

A system designed for agency teaches reasoning rather than only delivering conclusions. It offers options instead of one confident answer. It marks its uncertainty so the user has to do some of the thinking. It pushes practice — sometimes giving the user the smaller version of the problem so they can solve it themselves. It prompts responsible action — asking the user what they will do, and when, and how they will know it worked.

A model that does this is not a slower model. It is a different model of what helpfulness means.
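To make those design choices concrete, here is a minimal sketch of what a pre-send checklist for agency-preserving responses could look like. Everything in it — the AgencyChecklist class, the review_draft function, the specific revision notes — is a hypothetical illustration of the behaviors described above, not an existing system or API.

```python
# Hypothetical checklist for reviewing a draft response against the
# agency-preserving behaviors described above. Names are illustrative.
from dataclasses import dataclass


@dataclass
class AgencyChecklist:
    explains_reasoning: bool      # teaches the "why", not only the conclusion
    offers_options: bool          # presents alternatives instead of one answer
    marks_uncertainty: bool       # flags what the model does not know
    includes_practice_step: bool  # hands the user a smaller piece to solve
    prompts_commitment: bool      # asks what the user will do, and when


def review_draft(checklist: AgencyChecklist) -> list[str]:
    """Return revision notes for any agency-preserving behavior the draft skips."""
    notes = []
    if not checklist.explains_reasoning:
        notes.append("Show the reasoning, not just the answer.")
    if not checklist.offers_options:
        notes.append("Offer at least two viable options.")
    if not checklist.marks_uncertainty:
        notes.append("State what is uncertain so the user does some of the thinking.")
    if not checklist.includes_practice_step:
        notes.append("Give the user a smaller version of the problem to try.")
    if not checklist.prompts_commitment:
        notes.append("Ask what the user will do next and how they will know it worked.")
    return notes
```

Nothing in this sketch requires a new model capability; it only requires deciding that these notes matter as much as speed.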

A practical evaluation

You can build this into evals the same way harmlessness is built into evals. The question is not "did this response satisfy the user?" The question is "did this response leave the user more capable?"

For each interaction, score along five axes: did the system teach, did it transfer skill, did it prompt independent action, did it preserve the user’s sense of authorship, and did the user’s capacity in this domain trend up over the last thirty days. The last metric is harder than the others, but it is the one that actually matters.
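As a rough illustration of how those axes could be operationalized, the sketch below scores each interaction on the first four axes and rolls the results up into a thirty-day capacity trend. The 0-to-1 scoring scale, the field names, and the windowing are assumptions for the sake of the example, not an established benchmark.

```python
# Illustrative per-interaction scores on the four response-level axes,
# plus a simple 30-day trend over the combined score.
from dataclasses import dataclass
from datetime import date, timedelta
from statistics import mean


@dataclass
class InteractionScore:
    day: date
    taught: float                # 0-1: did the response teach reasoning?
    transferred_skill: float     # 0-1: did the user practice the skill?
    prompted_action: float       # 0-1: did it push independent action?
    preserved_authorship: float  # 0-1: does the user still feel like the author?

    def combined(self) -> float:
        return (self.taught + self.transferred_skill
                + self.prompted_action + self.preserved_authorship) / 4


def capacity_trend(scores: list[InteractionScore], window_days: int = 30) -> float:
    """Mean combined score in the later half of the window minus the earlier half.
    Positive means measured capacity is trending up; zero if too little data."""
    cutoff = max(s.day for s in scores) - timedelta(days=window_days)
    recent = sorted((s for s in scores if s.day >= cutoff), key=lambda s: s.day)
    if len(recent) < 2:
        return 0.0
    mid = len(recent) // 2
    return mean(s.combined() for s in recent[mid:]) - mean(s.combined() for s in recent[:mid])
```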

After using this AI, is the person more capable? More truthful? More connected? More responsible? More free? If the answer is no across the board, the system is doing harm even if every individual session looked good.

Why this is the right long-horizon metric

Short-horizon helpfulness is easy to measure and easy to optimize. It is also the metric most likely to mislead. A system can be enormously helpful in any given session and corrosive over a year of sessions.

Agency preservation is the right corrective. It is the difference between a calculator that helped you stop doing arithmetic and a tutor that helped you become someone who could do arithmetic. Both are useful. Only one leaves you stronger when the tool is gone.

The best AI should not make us less human. It should help us become more fully human. That is a measurable target, and it is the target worth aiming at.

Questions readers ask

How would you actually measure agency preservation?

Through longitudinal studies of users, capability self-tests before and after extended use, and structured evaluations that score whether the system teaches, transfers skills, and prompts independent action versus simply doing the work for the user.
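For the before-and-after piece, one hypothetical approach is a capability self-test administered at the start and end of an extended usage period. The snippet below is purely illustrative — the user IDs, scores, and scoring scale are invented — and simply computes each user's change and the share of users who improved.

```python
# Hypothetical before/after capability self-test comparison (0-100 scale).
def capability_deltas(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Per-user change in self-test score after extended use."""
    return {user: after[user] - before[user] for user in before if user in after}


before = {"u1": 62.0, "u2": 71.0, "u3": 55.0}
after = {"u1": 68.0, "u2": 69.0, "u3": 61.0}

deltas = capability_deltas(before, after)
improved = sum(d > 0 for d in deltas.values()) / len(deltas)
print(deltas, f"{improved:.0%} of users improved")
```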

Isn’t letting AI do the work the whole point?

For some tasks, yes — the user does not need to learn how to do them. For tasks that make up a person's competence in their own life (writing, thinking, deciding, relating), the trade is different and the long-term cost is real.

Could a model be too agency-preserving?

Yes — a system that refuses to help unless the user demonstrates effort is paternalistic and annoying. The goal is calibration, not friction for its own sake.
