🚩 Tactical Memo 032: How I Decide What Not to Automate

In partnership with

The Future of AI in Marketing. Your Shortcut to Smarter, Faster Marketing.

Unlock a focused set of AI strategies built to streamline your work and maximize impact. This guide delivers the practical tactics and tools marketers need to start seeing results right away:

  • 7 high-impact AI strategies to accelerate your marketing performance

  • Practical use cases for content creation, lead gen, and personalization

  • Expert insights into how top marketers are using AI today

  • A framework to evaluate and implement AI tools efficiently

Stay ahead of the curve with these top strategies for marketers, developed with AI and built for real-world results.

Read time: 7 minutes

Welcome to Tactical Memo, my newsletter where I share frameworks, strategies, and hard-earned lessons for leaders navigating project execution, AI fluency, and leadership.

If you’re looking for my cheat sheets and deep-dive guides, the vault is linked at the bottom of this email.

👉 Why Read This Edition: You will learn how I decide where AI should be used and where it should not. The leaders who matter next will not be the ones who know the most tools. They will be the ones who know how to design durable AI strategies that hold up under pressure.

The Briefing: Today’s Focus

  • Why AI is not a tools problem

  • How I decide what must stay human

  • A practical playbook you can apply immediately

  • A reader’s question on automation that quietly backfires

  • Get my free cheat sheets

Can You Teach Us Just Enough?

The success of my AI-Powered Project Management work sparked something unexpected.

Non-project managers kept reaching out.

Leaders. Operators. Functional experts.
All asking the same thing:

“Can you teach us just enough project management to lead work better, without turning us into PMs?”

So I took the corporate training I’ve delivered inside organizations and condensed it into a focused, 1-week Project Management Bootcamp.

Here’s why it matters:

70% of projects fail due to poor planning and unclear ownership.
That’s not a technical problem.
That’s a leadership problem.

Which is why Project Management is now a top-5 global leadership skill.

This 1-week bootcamp gives non-project managers the structure, clarity, and decision discipline to move work forward confidently…
without tools overload, certifications, or PM theory.

👉 Enrollment is now open
💰 Save $200 through the end of the year (Dec 31st)

Why AI Is Not About Tools

Most conversations about AI start in the wrong place. They start with tools. New platforms. New copilots. New features. Every week there are hundreds, sometimes thousands, of new AI tools released. Many of them overlap. Some of them are impressive. Most of them will not matter a year from now.

The leaders who will be valuable in the AI era are not the ones who know the tools best. Tool knowledge is easy to copy. It expires fast. What does not expire is judgment.

I do not think about AI as a collection of tools. I think about it as a capability that must be deployed with intent. The real leverage comes from deciding where AI should accelerate work, where it should support thinking, and where it should never be allowed to replace human ownership.

If you anchor your value on tools, you will always be chasing the next release. If you anchor your value on strategy and judgment, you become the person others rely on to make AI useful instead of dangerous.

The Rule: I Never Automate Judgment, Trust, or Accountability

I am aggressive about automation, but I am disciplined about where I apply it. I automate work that is repeatable, low-risk, and reversible. I slow down or stop automation anywhere judgment, trust, or consequences matter.

If a decision affects people, reputation, money, or long-term direction, I keep a human firmly in the loop. AI should support leadership, not replace it. When leaders automate the wrong things, they do not just lose control. They lose credibility.

A Tactical Playbook: How I Decide What Not to Automate

1. I keep final decisions human, even when AI is confident
AI is very good at producing convincing answers. That is exactly why I never let it make final calls. I use AI to surface options, highlight risks, and pressure test assumptions, but I keep the decision human. Decisions require context, timing, and awareness of politics and downstream effects. When something goes wrong, I want a person who can explain why the call was made, not a system pointing to an output.

2. I do not automate anything that relies on trust
Trust is fragile and slow to rebuild. I never automate performance feedback, sensitive stakeholder communication, conflict resolution, or anything that affects someone’s sense of safety or respect. I use AI to prepare, to think through language, or to outline scenarios, but delivery stays human. Tone, intent, and empathy do not survive automation.

3. I refuse to automate broken or unclear processes
If a process is messy, automating it only makes the mess move faster. Before I automate anything, I ask whether the process is stable, understood, and agreed upon. If people cannot explain how the process works or why it exists, automation stops. I fix the process first. Only then do I introduce AI.

4. I make accountability explicit on every automated output
Automation often creates a dangerous gap where work happens but no one feels responsible. I close that gap deliberately. Every automated report, recommendation, or trigger has a named human owner. If the output is wrong, delayed, or misused, that person owns the outcome. Automation without ownership is how failures hide until they explode.

5. I protect leadership signals from being automated away
Some signals are meant to be felt, not filtered. Early warning signs, team morale, stakeholder hesitation, and delivery risk should not be fully abstracted into dashboards. I want leaders close enough to the work to notice discomfort, confusion, or silence. AI can summarize data, but it should not insulate leaders from reality.

6. I separate speed gains from risk exposure
Before automating anything, I ask two questions. What do we gain in speed? What do we lose in visibility or control? If speed increases but risk becomes harder to see, I slow down automation and add human checkpoints. Fast failure is fine. Silent failure is not.

7. I test automation in small, contained environments
I never roll automation out broadly without testing it under pressure. I pilot it in one workflow, with clear ownership, and with a plan for rollback. If people cannot explain what the automation is doing and why, it is not ready to scale.

What To Do Right Now

  • List the last three things you automated. Ask whether any of them removed human judgment or accountability.

  • Identify one automated workflow that feels fragile. Pause it and re-clarify ownership.

  • Name a human owner for every AI-supported output. If you cannot name one, fix that immediately.

  • Stop automating conversations. Use AI to prepare, not to communicate.

  • Audit one automated decision. Ask who explains it if leadership pushes back.

These small checks prevent large failures later.

What’s Happening

As AI becomes standard across teams, I am spending more time helping leaders decide where to slow down, not just where to speed up. In AI-Powered Project Management, this is one of the most important shifts we work through. We focus less on tools and more on designing systems that hold up when things go wrong.

The course has grown to over 200 students across recent cohorts and holds a 4.7 out of 5 rating from 48 reviews. I continue to expand the material with real-world cases, hands-on build sessions, and leadership frameworks that reflect how work actually happens.

The Briefing: Reader’s Question

Q: “We automated parts of our project workflow and now issues seem to surface later than before. Things look smoother on paper, but problems are harder to catch early. What went wrong?”

A:
What usually went wrong is that you automated away friction that was acting as an early warning system. Automation often removes manual steps, but those steps were sometimes where leaders noticed confusion, hesitation, or misalignment.

When updates, reviews, or checkpoints become fully automated, teams lose moments of human interaction where problems used to show themselves. Leaders stop seeing uncertainty and only see outputs. By the time issues surface, they are larger and harder to fix.

I correct this by reintroducing human checkpoints where judgment matters. I let AI handle preparation, summarization, and visibility, but I keep humans involved in interpretation and decision making. I also make sure there is always a clear owner watching for risk, not just reviewing dashboards.

Automation should surface problems earlier, not hide them better. If issues are showing up later, that is a signal that you have moved too much of the work too far from human oversight.

Got a question? Reply to this email, and I may feature it in a future edition. All questions stay anonymous.

Newsletters I’m Reading

I’m a big fan of high-quality free newsletters. Here are a few I’m reading right now and recommend subscribing to.

Cheat Sheet Vault

p.s… As promised, click the link below to download my free cheat sheet and infographic vault.

Until next time,
Justin

✍️ From the Desk of Justin Bateh, PhD
Real-world tactics. No fluff. Just what works.