
When Reality Becomes the Exception Handler

The Age of Outsourced Reality: Part 2 of 4

Danny Nathan

Mar 15, 2026

13 min read

What You’ll Find This Week

HELLO {{ FNAME | INNOVATOR }}!

This is Part 2 of what I envision will be a 4-part series on The Age of Outsourced Reality. In Part 1: When Thinking Becomes Optional, Reality Becomes a Product, I looked at what happens when AI makes answers cheap and humans stop doing the messy middle work that builds true understanding.

This week, I’m following the same thread to its next logical outcome: when judgment becomes a service, reality becomes a product. Capabilities like digital twins and real-world simulation allow us to test nearly infinite possibilities of what our future might look like. But what happens when we defer to those simulated outcomes for decision-making?

Read on to find out...

Here’s what you’ll find:

  • This Week’s Article: When Reality Becomes the Exception Handler

Don’t Miss Our Latest Podcast

This Week’s Article

When Reality Becomes the Exception Handler

In Part 1, I made a simple point: when AI hands you packaged answers, you stop doing the messy work that builds judgment. The output still ships, but the scrutiny that makes your judgment trustworthy starts to disappear.

In Part 2, I’m taking this idea a step further: when simulation moves from informing decisions to steering them, it starts to function like authority. We're not just outsourcing thinking, we're starting to outsource reality.

Digital twins and model-driven systems are built on a seductive idea. Build a living model of a city, a supply chain, a transit network, even a person. Feed it real-time data. Run millions of “if this then that” scenarios.

With that capability at your disposal, you can stop guessing and start steering.

But what happens when we put too much trust in those outputs? Once simulation is good enough to be defensible, we stop using it to inform decisions and start using it to make them. The simulation layer gets wired into everything: approvals, budgets, routing, staffing, pricing, insurance, and policy. “That’s what the simulation says…” becomes the fastest way to end any debate.

Once the twin is wired into approvals and policy, it’s no longer just an analysis tool. It decides who gets funded, who gets routed, who gets insured, what gets staffed, what gets priced, what gets flagged. And whoever controls the twin and its simulations controls what gets measured, what gets optimized, and what counts as success.

Power concentrates fast because building and running these systems takes compute, data access, and institutional trust. Only a few control those resources. Those lucky few don’t just predict outcomes.

They set the rules that produce outcomes.

Last week I talked about humans getting comfortable letting the AI model make judgment calls. This week, I’m diving into what happens when we hand the twin and its simulations real authority. When the decision system is wired into gates and incentives, it doesn’t just influence decisions. It becomes the decision-maker, and “the system says” becomes an excuse nobody has to own.

Reality becomes the exception handler. Reality only matters when it disagrees with the model.

nist.ai.100-1.pdf

Artificial Intelligence Risk Management Framework

The AI RMF is intended to be practical, to adapt to the AI landscape as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities so society can benefit from AI while also being protected from its potential harms.


From Tool to Authority: How the Decision System Gets Promoted

A decision system becomes a problem the moment it stops being input and starts being a gate.

Most teams don’t decide to hand control to a simulation. They upgrade in steps because each step feels rational, and even responsible.

Step 1: Dashboard (Visibility)
It starts as instrumentation: a shared view of the system. You pull data into one place so everyone can see what’s happening. Metrics become the shared language, and the org moves faster because decisions are grounded in the same data and everyone’s using the same definitions.

But the dashboard also sets the agenda. If it’s not measured, it’s harder to defend. If it’s not visible, it’s easier to cut.

Step 2: Digital Twin (Planning Through Simulation)
A dashboard doesn’t predict. It records. Over time, that record gets valuable for one reason: it captures how the system behaves across a wide variety of conditions. Different customers. Different routes. Different supply shocks. Different policies. Different seasons. Different incentives.

That’s the foundation for a digital twin of your business. Not a spreadsheet that extrapolates. A model of relationships and constraints that can be poked and stressed before you touch the real system.

Now the questions change. You stop asking, “What happened last week?” and start asking, “If we change this lever, what happens next?” Pricing. Staffing. Inventory. Routing. Fraud thresholds. Underwriting rules. In a twin, you can run the counterfactuals. You can simulate outcomes and pick the path that looks best.

Once you can simulate, the decision process changes. The twin becomes the fastest way to justify a decision because it can produce an answer with numbers attached. People don’t stop disagreeing. They just learn that objections need to be expressed in the system’s language to count.
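To make the counterfactual step concrete, here’s a deliberately toy sketch of a twin-style simulation run. Everything in it is hypothetical: the demand curve, the coefficients, and the candidate prices are invented for illustration, not drawn from any real system.

```python
# Toy "digital twin" sketch: a simplified model of one business relationship
# (price vs. demand), used to run counterfactuals before touching the real
# system. All coefficients here are hypothetical.

def simulate_weekly_profit(price: float, base_demand: float = 1000.0,
                           elasticity: float = -1.5, unit_cost: float = 4.0) -> float:
    """Predict weekly profit if we set `price`, under the model's assumptions."""
    # Constant-elasticity demand curve: demand falls as price rises.
    demand = base_demand * (price / 10.0) ** elasticity
    return (price - unit_cost) * demand

# Run the counterfactuals: "if we change this lever, what happens next?"
candidates = [6.0, 8.0, 10.0, 12.0, 14.0]
results = {p: simulate_weekly_profit(p) for p in candidates}

best_price = max(results, key=results.get)
print(f"Twin recommends price={best_price:.2f} "
      f"(predicted profit {results[best_price]:.0f})")
# The trap: the recommendation is only as good as the assumed demand curve.
# Anything the model can't represent never shows up in `results`.
```

The recommendation comes back with numbers attached, which is exactly why it wins arguments: an objection that can’t be expressed as a different curve or coefficient has no way into the comparison.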

A simulation-based digital twin model for data-driven decision optimization

Digital twins (DTs) are digital representations of physical systems that run in real-time using enterprise data. This work introduces a simulation-bas…

www.sciencedirect.com/science/article/pii/S277266222500102X

Step 3: Policy (Rules)
Once simulation is driving planning, the next pressure is standardization. You bake the decision logic into rules so decisions scale. It becomes part of the playbook: who qualifies, what gets prioritized, how risk is calculated, what gets flagged, what gets reviewed.

Once it’s in the process, it’s hard to remove. Teams don’t revisit the decision logic because revisiting it means reopening old fights. The system becomes “how we do things here.”

Step 4: Gates (Permission)
Finally, the system moves upstream. It stops advising and starts approving. It can deny credit, change pricing, route traffic, assign work, trigger audits, throttle services, and decide who gets access.

Now you get authority laundering. No one owns the decision because the decision is “what the system says.” And once enough gates depend on it, arguing with the model starts to look like arguing with reality.
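A minimal sketch of what that gate looks like in code, with all thresholds and field names invented for illustration: the system decides on its own unless the model is uncertain, so humans only ever see the cases it routes to them.

```python
# Minimal sketch of a model-driven "gate" (thresholds hypothetical):
# the system stops advising and starts approving, and reality only
# enters through the manual-review branch -- the exception handler.

from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    model_score: float  # produced upstream by the twin/simulation layer

APPROVE_ABOVE = 0.80   # auto-approve threshold (an encoded policy choice)
DENY_BELOW = 0.40      # auto-deny threshold (another encoded choice)

def gate(app: Application) -> str:
    """Decide without a human unless the model score is in the gray zone."""
    if app.model_score >= APPROVE_ABOVE:
        return "approved"        # no one owns this decision
    if app.model_score < DENY_BELOW:
        return "denied"          # "that's what the system says"
    return "manual_review"       # the only place human judgment survives

decisions = [gate(Application(f"a{i}", s))
             for i, s in enumerate([0.95, 0.10, 0.55])]
print(decisions)  # ['approved', 'denied', 'manual_review']
```

Notice where the authority actually lives: not in `gate`, which is trivial, but in whoever set `APPROVE_ABOVE` and `DENY_BELOW`. Narrow the gray zone and human review quietly disappears, with no single decision anyone has to defend.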

Why this keeps happening
Models reduce uncertainty. Less uncertainty reduces debate. Less debate shifts power from people to the decision system. Power sticks because it looks like objectivity, even when it’s just encoded choices.

By the time you notice, you’re not using the system to run the organization. You’re running the organization to satisfy the model.

The Impact of Digital Twin Technology on the Policy Lifecycle in Government Agencies and the Role of Value-Sensitive Design in Guiding its Ethical Development

dl.acm.org/doi/full/10.1145/3715885.3715891

The Reality Stack: Where Power Shifts

In Part 1, I showed that individuals are happy to rely on AI models to make judgment calls. A human sees a clean answer from a model and thinks, “cool, I don’t have to grind through this.” The output feels concrete, so it feels true. And, in the same way that search changed how humans “store and load” information, AI is enabling us to offload judgment. Accepting the output becomes our default.

Now scale that habit across the entire organization.

Once people treat AI model output as the most defensible version of reality, the organization starts building decision systems around it. Simulation stops being a nice-to-have. It becomes the planning tool. Planning becomes policy. Policy becomes automation. And now “the system says” is not just a personal shortcut. It’s how the system runs.

Here’s the part most teams miss. A digital twin doesn’t replace reality. It replaces how reality gets represented inside the organization.

Because the twin can only work with what it can ingest. If something can’t be measured cleanly, it gets simplified. If it can’t be simplified, it gets ignored. If it gets ignored, it becomes harder to argue for. Not because it stopped mattering. Because it stopped being legible to the system.

This isn’t just “bias,” it’s leverage. Whoever controls the inputs, how they get modeled, and how outputs are shown to humans gets to decide what the organization treats as reality.

What Data Can’t Do

When it comes to people—and policy—numbers are both powerful and perilous.

www.newyorker.com/magazine/2021/03/29/what-data-cant-do

If you want a simple way to see the leverage points, it’s this:

Reality → Measurement → Twin → Interface → Behavior

  • Reality is messy. Context exists whether you can encode it or not.

  • Measurement is a choice. It decides what counts and what disappears.

  • Twin compresses those choices into defaults. Assumptions become rules.

  • Interface turns rules into signals. Scores, thresholds, alerts, rankings.

  • Behavior adapts to signals. People optimize for what gets rewarded.

Over time, org-wide behavior shifts to match what the system rewards. Those choices become the next round of data the system learns from. The outputs we shape to fit the simulation become the inputs that reinforce it.

We start training reality to look like the system, not training the system to reflect reality.
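That feedback loop can be sketched as a toy simulation. The dynamics below are entirely invented (the 1.3 and 0.8 factors are arbitrary): the point is only the shape of the loop, where effort on the measured proxy compounds and unmeasured work decays.

```python
# Toy sketch of the feedback loop (all dynamics invented for illustration):
# Reality -> Measurement -> Twin -> Interface -> Behavior -> back into the data.
# Each round, people shift effort toward the measured proxy, because the
# proxy is the only thing the interface rewards.

rounds = 5
true_value_effort = 1.0   # work on outcomes the metric can't see
proxy_effort = 1.0        # work on moving the measured number

for r in range(rounds):
    # Interface: the system rewards only what it measures.
    measured = proxy_effort
    # Behavior: people reallocate toward whatever was rewarded last round.
    proxy_effort += 0.3 * measured
    true_value_effort *= 0.8          # unmeasured work quietly shrinks
    print(f"round {r}: measured={measured:.2f}, "
          f"unmeasured={true_value_effort:.2f}")

# The measured number climbs every round while unmeasured work decays:
# the outputs shaped to fit the simulation become its next inputs.
```

Nothing in the loop is malicious; every individual reallocation is locally rational. The drift comes from the structure, which is why no one meeting ever decides to abandon the unmeasured work.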

2208.04289v1.pdf

Digital Twins: Potentials, Ethical Issues, and Limitations

After Big Data and Artificial Intelligence (AI), the subject of Digital Twins has emerged as another promising technology, advocated, built, and sold by various IT companies. The approach aims to produce highly realistic models of real systems


How Model Authority Shows Up in Real Life

Once simulation becomes the de facto planning tool, people don’t need to be told to optimize for it. They’ll optimize for the simulation because it becomes the only way to win approval.

The Metric Becomes the Mission

A team is told to improve retention, reduce cost, or lower risk. The system defines the target, the dashboard tracks it, and the number becomes the conversation. So the team learns the quickest way to move the number, not the best way to improve the system.

They call it “execution.” But the effect is the same: the organization starts optimizing what it can measure, even when the metric is only a proxy for the outcome it actually wants.

Edge Cases Get Treated Like Defects

Digital twins are built to generalize. Edge cases are expensive. They don’t fit neatly into categories, and they make predictions look messy. The system starts pushing those cases into smaller and smaller buckets: exceptions, manual review, special handling, “out of scope.”

All of that sounds reasonable until you realize the outcomes. Anything that doesn’t fit the system stops being a first-class citizen. It becomes friction that gets quietly erased in the name of efficiency.

Dissent Gets Translated or Ignored

If a concern can’t be expressed in the system’s language, it loses status. It stops being a warning and starts being “an opinion.” The value of experience and intuition are diminished until the path of least resistance becomes compliance: follow the model, cite the model, and move on.

The system doesn’t just predict outcomes, it defines what the organization is allowed to notice, what it rewards, and what it treats as credible. The organization doesn’t drift away from reality in one decisive moment. It drifts one metric, one threshold, one ignored edge case at a time.

Part 2 of 4: The Everyday Business Metrics that Crush Innovation

Part 2: The Efficiency Paradox – When Lean Operations Lead to Lean Innovation

www.innovatedisruptordie.com/p/the-efficiency-paradox-when-lean-operations-lead-to-lean-innovation

What Gets Lost

When the decisioning system becomes the defining principle, the organization becomes a slave to the predictions. Anything the model can’t represent becomes noise.

First, you lose individual accountability. “The system says” becomes the easiest way to make a call without owning it. The person doesn’t have to defend a judgment; they just have to cite the output. The organization hardens that behavior into workflows, and process becomes the shield. The conversation shifts from “why did you choose this?” to “did you follow the steps?”

Then you lose context. The system can’t carry everything humans know. It can only carry what it can ingest, compress, and score. Local knowledge, edge conditions, history, intent, and nuance get demoted unless they can be translated into a variable. Over time, teams stop explaining decisions in human terms and start pointing at outputs.

Then you lose friction tolerance. Once “prove it in the model” becomes the standard response, pushback just turns into extra work. You have to translate context into the system’s language, gather more data, run more scenarios, and still risk being told you’re slowing things down. Eventually, people simply stop fighting the system.

Innovation needs slack, ambiguity, and a tolerance for short-term inefficiency. If the system doesn’t reward exploration, exploration looks like waste. So the system keeps selecting for what’s measurable, predictable, and easy to defend. It optimizes the existing business, not the next one.

And that’s how a planning tool becomes a governance mechanism. The only arguments that survive are the ones the system can parse.

And that’s how innovation dies.


Remember: you can innovate, disrupt, or die! ☠️

Innovate, Disrupt, or Die is created by the team at Apollo 21.

Apollo 21 works with ambitious enterprises and promising entrepreneurs to develop and translate innovation strategies into new ventures, products, and technologies that generate transformational growth and longevity.

© 2026 Apollo 21, LLC.