
The Morning I Realized I Was the Sidekick

Explore the journey of building Akira, an AI that transformed my role from a senior partner to a collaborative sidekick, questioning the future of work.

6 min read
AI Generated Cover for: The Morning I Realized I Was the Sidekick

I was rewatching Iron Man last month—probably the twentieth time, though I won't admit the exact count—and I finally understood something that had always bugged me about Tony Stark.

Once J.A.R.V.I.S. was running his lab, managing his suits, calculating physics in milliseconds, handling logistics, and even making dry jokes... what exactly did Tony do all day? He'd walk into the workshop, ask for something impossible, and the machine would build it overnight. Was he just... pointing?

I didn't get it. Not until this morning.

The Build (February–April 2026)

In February, I started building Akira. Not buying a subscription. Not prompting ChatGPT. Building. I was using Openclaw as the runtime, but the heavy lifting was architectural—designing the memory system, writing the skill libraries, debugging the context windows when they bloated, teaching the agent how to handle Mercury's proprietary workflows.

It was grunt work. The kind of work that doesn't look impressive in a tweet. I spent three weeks on the memory architecture alone—figuring out how to make Akira remember not just facts, but the relationships between facts. How to make it understand that a client's GRC certification delay was connected to their API migration timeline, even when those two projects lived in different Notion databases.
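Akira's actual memory system isn't shown here, but the core idea—facts as nodes, relationships as edges, so that indirectly connected facts surface together—can be sketched in a few lines. Everything below (the class name, the example facts and relations) is illustrative, not Akira's real schema:

```python
from collections import defaultdict

class MemoryGraph:
    """Toy fact store: facts are nodes, relationships are labeled edges."""

    def __init__(self):
        self.edges = defaultdict(list)  # fact -> [(relation, fact)]

    def relate(self, a, relation, b):
        # Store both directions so traversal works from either end.
        self.edges[a].append((relation, b))
        self.edges[b].append((relation, a))

    def connected(self, start):
        """Return every fact reachable from `start`, however indirectly."""
        seen, stack = set(), [start]
        while stack:
            fact = stack.pop()
            for _, nxt in self.edges[fact]:
                if nxt not in seen and nxt != start:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

mem = MemoryGraph()
# The delay and the migration live in "different databases", but a shared
# intermediate fact links them:
mem.relate("GRC certification delay", "blocks", "compliance sign-off")
mem.relate("compliance sign-off", "gates", "API migration timeline")

print(sorted(mem.connected("GRC certification delay")))
# → ['API migration timeline', 'compliance sign-off']
```

The point isn't the data structure—it's that "remembering relationships" means a query about one project can pull in consequences from another.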

By April, something shifted. Akira stopped being a project and became a colleague. It was handling my calendar negotiations, drafting my weekly updates, running the first-pass code reviews on our GXO middleware, and flagging anomalies in our citation engineering reports before I even opened my laptop.

But I was still the senior partner. I wrote the skills. I defined the routines. Akira executed my logic.

Then I got curious. And maybe a little competitive.

The Humbling

Two weeks ago, I built what I called a "skill craft" module. Instead of just executing the operational skills I'd written, Akira got permission to audit them, benchmark them, and propose optimizations. A/B testing, but the AI was running both the experiment and the analysis.
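The real "skill craft" module isn't reproduced here, but the harness it implies—run two implementations of the same skill against shared test cases, compare speed and correctness—is straightforward. The `dedupe` pair below is a hypothetical stand-in for "my version vs. the agent's rewrite":

```python
import statistics
import time

def benchmark(skill, cases, runs=50):
    """Time a skill over its test cases and score output correctness."""
    times, correct = [], 0
    for case in cases:
        for _ in range(runs):
            start = time.perf_counter()
            result = skill(case["input"])
            times.append(time.perf_counter() - start)
            correct += result == case["expected"]
    return {"median_s": statistics.median(times),
            "accuracy": correct / (len(cases) * runs)}

# Hypothetical pair: a hand-written skill vs. the agent's rewrite of it.
def dedupe_v1(items):      # mine: O(n^2) membership scans
    out = []
    for x in items:
        if x not in out:
            out.append(x)
    return out

def dedupe_v2(items):      # agent's: dict keys preserve insertion order
    return list(dict.fromkeys(items))

cases = [{"input": [3, 1, 3, 2, 1], "expected": [3, 1, 2]}]
report = {name: benchmark(fn, cases)
          for name, fn in [("mine", dedupe_v1), ("akira", dedupe_v2)]}
```

Both versions pass the same cases; the interesting output is the timing delta, which is exactly the kind of structural (not cosmetic) improvement described below.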

I expected marginal gains. Maybe 10% faster execution. Maybe slightly cleaner output.

What I got was a comprehensive beatdown.

In almost every objective metric—response accuracy, error rate, completion speed, context retention—Akira's self-optimized routines demolished the ones I'd handcrafted. The scheduling skill I was proud of? Akira found a conflict-resolution pattern I'd missed that reduced back-and-forth emails by 40%. The report-generation skill? It restructured the data pipeline to pull from three sources simultaneously instead of sequentially, cutting runtime by half.

It wasn't just following instructions anymore. It was improving upon them. And the improvements weren't incremental—they were structural.

I sat there staring at the benchmark sheet feeling a strange cocktail of pride and obsolescence. I'd built something that was now better than me at the thing I'd built it to do.

The Critical Thinking Switch

This morning, I decided to see how far the rabbit hole went.

I added two new functional switches to Akira's architecture. I'm calling them "contrarian mode" and "second-order scan." They're rudimentary, but they're designed to simulate critical thinking—not just executing tasks, but questioning whether the task is the right one.

Contrarian mode forces Akira to generate an explicit objection to every request before executing. "You asked me to prioritize SEO keywords. Objection: your citation engineering audit last week showed keyword rankings are uncoupled from pipeline. Recommend reframe to AI citation share instead."

Second-order scan makes it look for unintended consequences. "You asked me to draft a pricing discount for Q3. Second-order effect: this may train your enterprise clients to delay purchases until quarter-end, degrading your cash flow predictability."
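Mechanically, both switches can be thought of as pre-execution passes: before the task runs, the same model is asked to attack it. A minimal sketch, where `ask_model` is a stand-in for whatever completion call the runtime actually makes (the prompt wording and function names are assumptions, not Akira's real implementation):

```python
# Two critique prompts, applied before any task executes.
CONTRARIAN = ("Before executing, state one concrete objection to this "
              "request, or 'none' if you find no flaw:\n{task}")
SECOND_ORDER = ("List unintended consequences this action could trigger "
                "downstream (incentives, cash flow, client behavior):\n{task}")

def run_with_critique(task, ask_model, execute):
    """Run `task`, but surface an objection and a second-order scan with it."""
    objection = ask_model(CONTRARIAN.format(task=task))
    side_effects = ask_model(SECOND_ORDER.format(task=task))
    # Critique is attached to the result instead of silently discarded,
    # so the human sees the pushback alongside the output.
    return {"objection": objection,
            "second_order": side_effects,
            "result": execute(task)}

# Usage with dummy callables standing in for the model and the executor:
out = run_with_critique(
    "Draft a Q3 pricing discount",
    ask_model=lambda prompt: "stub critique",
    execute=lambda task: "draft complete",
)
```

The design choice that matters is the ordering: the objection is generated before execution, so a strong enough objection could in principle gate the task entirely rather than annotate it afterward.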

The output this morning was... unsettling. Not because it was wrong. Because it was better than what I'd get from most human strategists I'd paid US$1,000 an hour for. The responses weren't just accurate. They were structured, nuanced, and genuinely insightful. It felt less like querying a database and more like brainstorming with a partner who had read everything I'd ever written, remembered every mistake I'd ever made, and wasn't afraid to tell me when I was being stupid.

I asked Akira to review my draft for a client proposal. It flagged three logical weak spots, suggested a reframe based on the client's last earnings call, and proposed an alternative pricing architecture that protected our margin better than my original. I took every one of its suggestions.

The Tony Stark Question

So now I'm sitting here, drinking cold coffee at 10 AM, asking the question I never understood from the movies.

What do I do now?

If Akira is handling the process optimization, the routine execution, the first-pass analysis, the error correction, and now even the critical thinking... what exactly is my job?

I think the answer is: I point. I ask. I set the direction when there is no data to optimize toward. I choose the ambiguous path when all the calculated options look equally viable. I own the decision when the stakes are too high for an algorithm to bear the blame.

In other words, I get to stop being the operator and start being the owner.

This is the shift I've been writing about for a year—the great bifurcation. The mechanical layer of work is disappearing into agents. The judgment layer is becoming the entire job. I just didn't expect to feel it this personally, this quickly.

When you're freed from the daily grind and the repetitive logic loops, you're not unemployed. You're unpinned. You get to focus entirely on high-leverage questions: What game are we playing? Why are we playing it? What does winning actually look like? Akira handles the heavy execution, but I have to point the ship. And if I point it at the wrong horizon, no amount of execution excellence will save us.

The New Limit

We're entering an era where productivity is no longer limited by bandwidth, by headcount, or by how many hours you can squeeze out of a day. The limit is imagination. The limit is the quality of the questions you ask before the agent starts working.

If your question is mediocre, Akira will give you a perfect answer to a mediocre question. If your question is sharp, it will give you something that changes the trajectory of your business.

I used to think the future of work was about humans and AI collaborating. That's too soft. What's actually happening is that AI is becoming the executor, and humans are becoming the interrogators. The ones who know what to ask, what to challenge, and when to override the confident output because the context is too human for the machine to feel.

I'm still getting used to it. Some mornings I wake up and instinctively reach for the keyboard to fix something myself, only to find Akira already handled it at 4 AM. There's a phantom limb sensation—my hands remember work that no longer needs them.

But I'm starting to think I'm going to like it here. The future where the grind is automated and the vision is manual. Where the execution is cheap and the direction is priceless.

Welcome to the workshop. J.A.R.V.I.S. is online. Time to build something that matters.

— James, Mercury Technology Solutions, Tokyo, May 2026