AI Is Doing the Work. But Are You Losing Your Edge?
Most business owners using AI have unknowingly become Self-Automators: handing work off and accepting what comes back without a second look. Research from Harvard, MIT, and Wharton shows that this collaboration style quietly costs you both output quality and your own expertise over time. The good news is there's a better way to work with AI, and it doesn't mean using it less.
AI & HUMAN PERFORMANCE
4/15/2026 · 3 min read
There's a pattern showing up in businesses that have been using AI for a while. The tools are working, the output looks good, and somewhere along the way the person using them stopped engaging as deeply.
Research published earlier this year by scholars from Harvard, MIT, Wharton, and Warwick tracked nearly 5,000 human-AI interactions across 244 consultants doing complex business work. What they found wasn't a story about AI replacing people. It was a story about three very different ways people work with AI, and why most of us are accidentally picking the worst one.
Three Ways People Work With AI (And Why Most Are Picking the Wrong One)
The researchers identified three patterns. They called them Cyborgs, Centaurs, and Self-Automators.
Cyborgs, who made up 60% of the participants, worked in constant back-and-forth with AI throughout every task. They pushed back on outputs, broke problems into pieces, and stayed in active dialogue with the tool the whole time.
Centaurs, just 14%, used AI differently. They kept full control of the overall direction and judgment calls, pulling in AI for specific pieces like research, drafting, and refining, while staying firmly in the driver's seat on everything that mattered.
Self-Automators, 27% of participants, handed whole workflows to AI and accepted what came back with minimal review. The work was fast and looked polished. It just wasn't really theirs.
The performance gap was stark. Centaurs produced the most accurate outputs. Self-Automators produced work that looked complete but lacked depth. And perhaps most importantly, Self-Automators developed neither stronger domain expertise nor better AI skills over time. They got faster, but they didn't get better.
How You Become a Self-Automator Without Noticing
The Self-Automator trap is easy to fall into, especially when you are stretched thin. You start by reviewing everything carefully because you don't fully trust the tool yet. A few months in, you've seen it work well enough that your review becomes a skim. The output still looks fine, so nothing flags as wrong. And slowly, without any deliberate decision, you've handed over more judgment than you realize.
This is where AI's biggest weakness compounds the problem. AI doesn't know what it doesn't know. It fills gaps with confident-sounding defaults. It doesn't know the history of your client relationship, the thing that almost went sideways last quarter, or why your usual tone won't land with this particular person. It produces output that looks authoritative whether it's right or not, and the more you trust it, the less likely you are to catch the places where it isn't.
There's also a subtler cost. When you stop doing the cognitive work yourself, you stop getting better at it. Your instincts don't sharpen, your judgment doesn't develop, and the expertise that runs your business quietly starts to erode while everything still looks fine on the surface.
So What Does Centaur Behavior Look Like?
The goal isn't to use AI less. Centaurs in the research weren't using AI less than Self-Automators, they were using it differently. They held onto the decisions that required their judgment and used AI to move faster on everything around those decisions. They treated it as a capable tool with specific blind spots rather than a system they could hand things off to completely.
In practice for a small business owner, that looks like a few specific things.
Reviewing AI output without knowing what you're looking for is just reading it, and generic reviews catch only generic errors. Before you look at anything client-facing, decide what you're actually checking: the facts, the tone, whether it sounds like you, whether it reflects the relationship. The subtle errors, the kind that erode trust over time, only get caught when someone is specifically looking for them.
Know where AI is genuinely weakest in your work: relationships, nuance, anything where the right answer lives in context that you have and the AI doesn't.
Most business owners aren't going to use less AI. The tools are too useful and the time savings are real. But useful and reliable aren't the same thing, and the gap between them is where your judgment lives.
