
There’s a moment coming — for some, it’s already here — where AI won’t just support decisions. It’ll make them. On its own. No human in the loop. And before you picture a dystopian control room with blinking lights and no one at the wheel — let’s bring it back to something simple: NDAs.

The Humble NDA Checker

Let’s say you’ve built a GenAI system that scans incoming NDAs. It reviews every clause, flags anything unusual, and presents it in a nice, colour-coded table: red, amber, green. It doesn’t make the decision; it just gives you clarity. So far, so good.

Now fast forward a few months. The model has seen hundreds of NDAs. Thousands. It knows the patterns. It knows what you normally sign and what you don’t. The obvious next step? It just starts saying “yes” or “no” for you. Automatically. No human required. Inbox > GenAI > Signature > Sent. Job done.

Is that terrifying? Not necessarily.

When AI Autonomy Works

If you’ve built the right infrastructure, the monitoring, the audit trails, the error tracking, then autonomy isn’t the scary part. In fact, it can be safer.
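To make that concrete, here’s a minimal sketch in Python of the two stages: the advisory clause review and the later autonomous “decide” step. The `review_nda` stub and its keyword rules are hypothetical stand-ins for a real model call, not a production reviewer.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    GREEN = "green"
    AMBER = "amber"
    RED = "red"

@dataclass
class ClauseReview:
    clause: str
    risk: Risk
    rationale: str

def review_nda(clauses: list[str]) -> list[ClauseReview]:
    """Stand-in for the GenAI review step: score every clause.

    A real system would call a model here; this stub just flags terms
    the business has historically pushed back on.
    """
    reviews = []
    for clause in clauses:
        text = clause.lower()
        if "perpetual" in text or "unlimited liability" in text:
            reviews.append(ClauseReview(clause, Risk.RED, "term outside normal policy"))
        elif "injunctive relief" in text:
            reviews.append(ClauseReview(clause, Risk.AMBER, "unusual but sometimes acceptable"))
        else:
            reviews.append(ClauseReview(clause, Risk.GREEN, "matches standard template"))
    return reviews

def decide(reviews: list[ClauseReview]) -> str:
    """The 'no human required' step: sign only if nothing is red."""
    if any(r.risk is Risk.RED for r in reviews):
        return "reject"
    if any(r.risk is Risk.AMBER for r in reviews):
        return "escalate"  # still route edge cases to a human
    return "sign"

if __name__ == "__main__":
    nda = ["Confidentiality survives for a perpetual term.",
           "Standard mutual non-disclosure obligations apply."]
    reviews = review_nda(nda)
    for r in reviews:
        print(f"[{r.risk.value:>5}] {r.rationale}: {r.clause}")
    print("decision:", decide(reviews))
```

Note that even the “autonomous” version keeps an escalation path: amber clauses go back to a human, so the model only acts alone where its track record is strongest.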

If your GenAI system has a lower error rate than the humans it replaced…

If every decision is logged, explainable, and reversible…

If the system’s performance is improving over time and you can prove that…

Then frankly, letting the model decide might be the smart thing to do. What makes it risky isn’t the autonomy. It’s the opacity.
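As a sketch of what “logged, explainable, and reversible” could look like in practice, here’s one way to append each autonomous decision to an audit trail and compare the model’s error rate against a human baseline. The file path, model version, and error figures are illustrative assumptions, not measurements.

```python
import json
import time
import uuid
from pathlib import Path

LOG = Path("nda_decisions.jsonl")  # append-only audit trail

def log_decision(document_id: str, decision: str, rationale: str,
                 model_version: str) -> str:
    """Record every autonomous decision so it stays explainable and reversible."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "document_id": document_id,
        "decision": decision,
        "rationale": rationale,
        "model_version": model_version,
        "reversed": False,  # a later entry can flip this to undo the decision
    }
    with LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

def error_rate(wrong: int, total: int) -> float:
    return wrong / max(total, 1)

if __name__ == "__main__":
    log_decision("nda-0042", "sign", "all clauses green", "v3.1")

    # Autonomy earns its keep only if the model beats the humans it replaced.
    HUMAN_BASELINE = 0.04          # assumed historical human error rate (4%)
    model = error_rate(12, 1000)   # assumed model error rate on the same task (1.2%)
    print(f"model {model:.1%} vs human {HUMAN_BASELINE:.1%}:",
          "autonomy justified" if model < HUMAN_BASELINE else "keep a human in the loop")
```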

The Real Danger: Unseen Decisions

What should keep you up at night isn’t AI acting alone; it’s AI acting unseen. When no one’s watching. When there’s no trail. When you’re not even sure who (or what) made the decision. That’s where things get dicey: in hospitals, on battlefields, in financial institutions, wherever decisions are made that humans can’t cognitively comprehend, and no one’s asking how or why they were made.

Trust = Oversight + Evidence

If we want to empower GenAI to act, we need four things, sketched in code after the list:

• Transparency: Why did it choose that option?

• Traceability: Can we see what it’s done before?

• Control: Can we stop it if something goes wrong?

• Benchmarks: Is it actually better than the humans?
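Here’s a minimal sketch of how those four properties might wrap an autonomous decision function. `GuardedAgent`, its `decide_fn` callback, and the error-rate figures are hypothetical; the point is only to show where each safeguard hooks in.

```python
import threading

class GuardedAgent:
    """Wrap an autonomous decision function with the four safeguards above."""

    def __init__(self, decide_fn, audit_log: list, human_error_rate: float):
        self.decide_fn = decide_fn                # must return (decision, rationale)
        self.audit_log = audit_log                # traceability
        self.human_error_rate = human_error_rate  # benchmark to beat
        self.kill_switch = threading.Event()      # control

    def decide(self, document: str) -> str:
        if self.kill_switch.is_set():
            raise RuntimeError("autonomy suspended; route to a human")
        # Transparency: the agent must hand back a rationale with every decision.
        decision, rationale = self.decide_fn(document)
        self.audit_log.append({"doc": document, "decision": decision, "why": rationale})
        return decision

    def still_better_than_humans(self, model_error_rate: float) -> bool:
        # Benchmarks: revoke autonomy the moment the model stops winning.
        return model_error_rate < self.human_error_rate

if __name__ == "__main__":
    agent = GuardedAgent(lambda doc: ("sign", "all clauses green"),
                         audit_log=[], human_error_rate=0.04)
    print(agent.decide("nda-0042"), agent.audit_log)
    print("keep autonomy?", agent.still_better_than_humans(0.012))
    agent.kill_switch.set()  # the plug, pulled: further calls to decide() now refuse
```

The kill switch is a `threading.Event` so that any supervising process can flip it, which is one way of staying “close enough to pull the plug.”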

Without that, autonomy becomes abdication. So, should we let GenAI make decisions without us? Only if we know exactly what it’s doing — and we’re still close enough to pull the plug when needed. Because intelligence is useful.

But accountable intelligence? That’s where the real value lies.