
Here’s a scenario that’s becoming uncomfortably common: You email a company’s customer service. Seems simple enough. Behind the scenes, an AI scans your message and triages it. “Great,” you think. “Smart use of automation.” Except that AI doesn’t resolve your issue — it forwards your email to technical support. Which, it turns out, has its own AI. That AI does its own triage. It decides your problem belongs to second-line support. Second-line has… yes, you guessed it… another AI. Which routes your message somewhere else. And now, you — the human with the problem — are stuck in a Kafkaesque maze of machines politely forwarding your issue around like a hot potato no one wants to hold. No context. No continuity. Just a well-intentioned mess of disconnected automation.

The Real Risk Isn’t AI — It’s Unobserved AI

The issue here isn’t that AI’s being used. It’s how it’s being used — separately, silently, and without any overarching visibility.

Departments have automated their own little kingdoms in isolation. Each team builds their own GenAI workflow. And because no one’s observing the whole thing end-to-end, what you get isn’t efficiency. It’s entropy. Think of it as the AI equivalent of everyone bringing their own musical instrument to an orchestra and playing a different tune. Triage is a good use case. Email classification can save time. But unless you’ve stitched the system together — with humans in the loop and proper oversight in place — you’re not streamlining. You’re fragmenting.

What Good Looks Like

• A single source of truth across the customer journey

• Shared context passed through each handoff — AI or human

• End-to-end visibility of the workflow (and the ability to step in when it breaks)

• A very clear answer to the question: who’s actually responsible for resolving this?
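The principles above can be sketched in code. This is a minimal, hypothetical Python illustration — the names (`Ticket`, `hand_off`, the team labels) are mine, not from any real system — of how shared context and a single accountable owner might travel with a request through every handoff, AI or human:

```python
from dataclasses import dataclass, field

@dataclass
class HandoffRecord:
    handler: str           # which team or model held the ticket
    summary: str           # what they concluded before passing it on
    is_human: bool = False

@dataclass
class Ticket:
    ticket_id: str
    customer_message: str
    owner: str                                      # one accountable resolver at any time
    history: list[HandoffRecord] = field(default_factory=list)

    def hand_off(self, new_owner: str, summary: str, is_human: bool = False) -> None:
        """Transfer ownership while keeping the full context trail."""
        self.history.append(HandoffRecord(self.owner, summary, is_human))
        self.owner = new_owner

    def context(self) -> str:
        """Everything the next handler (AI or human) needs to see."""
        trail = "\n".join(f"- {h.handler}: {h.summary}" for h in self.history)
        return f"{self.customer_message}\n\nHandoff trail:\n{trail}"

# Each handoff records who held the ticket and why it moved on,
# so no handler ever starts from zero.
t = Ticket("T-1", "My invoice is wrong and I can't log in.", owner="triage-ai")
t.hand_off("tech-support", "Login issue detected; billing issue not handled here.")
t.hand_off("billing-team", "Password reset sent; invoice question remains.")
print(t.owner)      # always exactly one accountable owner
print(t.context())  # the full trail travels with the ticket
```

The point isn’t the data structure — it’s that context and accountability are carried explicitly rather than lost at each forward.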

Because at the end of the day, the customer doesn’t care how many models you’ve trained. They just want a straight answer from a person who knows what’s going on.

So no — the problem isn’t AI.

It’s what happens when every team builds their own without looking left or right. And until we design systems that talk to each other, we’re not solving customer problems. We’re just making them harder to explain.