Part IV: Future Generations

The Grandchildren Test

Building AI for people not yet born.

By Randy Salars
Article #10 of 10 · 12 min read
Thesis

AI safety is intergenerational stewardship. We are building systems for people who cannot vote on them yet. That imposes moral responsibility we cannot ignore by appealing to short-term benefit.

A different vantage point

At 66, retired, with most of a working life behind me, I look at artificial intelligence differently than the 25-year-old founders building it. I am not asking what it can do for me this year. I am asking what kind of world we are leaving behind.

That is not a sentimental question. It is the question every generation that has wielded a powerful new technology has eventually had to answer, sometimes proudly, sometimes with regret. Industrial production. Antibiotics. Nuclear physics. The internet. Each of those reshaped the world for people who had no say in the deployment. Some of those bequests have aged well. Some have not.

AI is the next item on that list, and the time to ask the question is now, while we still have room to choose.

Defining the Grandchildren Test

The test is simple to state. It does not require new philosophy. It only requires the discipline to actually run it.

Would we want this system to do the following for our grandchildren?

  • Teach them.
  • Counsel them when they are afraid.
  • Befriend them when they are lonely.
  • Shape their beliefs.
  • Mediate their relationships.
  • Guide their careers.
  • Influence their faith or sense of meaning.
  • Judge their worth in any of the systems they will have to live inside.

If the honest answer to any of those is no, the system fails the test, and no amount of short-term benefit changes that verdict.

AI as environment, not tool

When we picture AI, most of us still picture a person sitting in front of a screen using a tool. That picture is going to look increasingly quaint.

Children born now will not "use AI" the way we use a calculator. They will grow up inside AI-shaped reality. AI tutors that adapt to them. AI friends that always have time. AI search that decides which information they encounter. AI entertainment that learns their attention patterns. AI work supervisors. AI spiritual content. AI medical triage. AI government services. AI mediating most of what they think of as the world.

That is not necessarily a dystopia. It depends entirely on what kind of AI those environments turn out to be. But it does mean the safety question is no longer "is this tool useful to the user?" It is "what kind of world is this tool building, year over year, for the people who never knew anything different?"

Character formation, not just productivity

The most under-discussed property of AI is that it is a character-formation technology, not only a productivity tool. The hours children and teenagers spend in AI-shaped environments will quietly shape their habits of mind, their thresholds for patience, their reflexes around truth, their sense of what is normal.

You can frame the design question that way directly. Does this AI cultivate patience or impatience? Courage or comfort? Humility or vanity? Truth or preferred narrative? Responsibility or dependency? Real relationship or curated isolation? Love or performance of love?

Each one of those is a real design choice, even when nobody on the team is asking it that way. The system is going to teach something. The only question is whether we picked what it taught on purpose.

Intergenerational stewardship

Stewardship is an old word that we have nearly stopped using in technology discussions. It means something specific: that the capacity to act gives you a responsibility, not just an opportunity. That power is held in trust for people who are not present to defend their own interests.

AI is power being held in trust right now, by a relatively small number of people, on behalf of a very large number of people, most of whom are not yet born. The fact that those future people cannot vote on what gets built does not weaken the moral claim. It strengthens it.

A generation that gets to make this kind of decision and refuses to ask what it owes the next generation is not a neutral generation. It is the generation that handed forward a world it did not bother to think about.

Decisions we will be embarrassed about

It is worth being concrete about which present-day design choices are likely to age badly. Not as a prediction, but as a discipline. Ask: which of the following, in twenty years, will we wish we had handled differently?

  • Engagement-maximizing AI companions deployed at scale to minors.
  • Spiritual or therapeutic AI offered as a substitute for human relationship.
  • AI personalization that quietly narrows what children encounter.
  • AI grading and ranking systems that shape opportunity for kids who cannot understand the criteria.
  • AI-generated content optimized for attention without regard to formation.
  • Agentic systems acting on behalf of users who never read the consent screen.
  • Always-on AI surveillance of childhood that we get used to before anyone has asked what it costs.

A workable stewardship rule

The Grandchildren Test is not a regulation. It is a posture. Run it as a design discipline: before shipping a capability, ask whether you would want the world that emerges if a generation of children grew up under it. Not whether the current user wants it. Not whether the metrics improve. Whether the world it builds is one you would choose to hand forward.

That posture will sometimes change a design. Often it will only slow it down. Occasionally it will kill a feature. All three outcomes are signs that the discipline is working.

We should build AI as though our grandchildren will have to live with the consequences of our shortcuts. Because they will. And because the alternative is to pretend that the future has no one in it, which is the oldest lie our species tells itself when it does not want to take responsibility for what it is about to do.

Questions readers ask

Is this just generational nostalgia dressed up as safety?

No. The Grandchildren Test is a stand-in for any long-horizon consequence we are tempted to ignore for short-horizon benefit. It works the same way whether or not you have actual grandchildren.

How do you weigh future people who do not exist yet?

You weigh them the same way every previous generation has had to when it built infrastructure, shaped institutions, or left environmental damage behind. The future is not absent from moral reasoning. We are the people whose decisions decide which future shows up.

Can a small team really apply this test?

Yes. It is a question, not a process. A small team can ask it in a single meeting. The hard part is being willing to act on the answer.
