Process Tax: When Agile Teams Are Busy With Rituals Instead of Work
Process · Agile · Automation · AI · Teamwork

Phuoc Nguyen · February 15, 2026 · 16 min read

4 PM Friday, and the ticket still isn't done

Minh — a developer on my team — just pinged me on Google Chat.

"Hey, about the checkout flow ticket I started at the beginning of the week. An finished testing yesterday, I fixed the bugs this morning, and her retest passed. But Linh hasn't done acceptance yet. I don't know if she's seen it or is just busy..."

I look at the clock. Friday, almost end of day.

A simple ticket, coded on Tuesday, stuck for 3 days because everyone is waiting for someone.

Sound familiar? If you've worked in Agile teams long enough, you've definitely lived through this — either as Minh waiting for QC, or as An the QC waiting for Dev to fix bugs, or as Linh the PO drowning in 47 other things and forgetting there's a ticket to accept.


The spiral where everyone is stuck, everyone has their struggles

Let's zoom in on what's actually happening.

Tuesday, 10:30.

Minh finishes coding. Opens Jira, moves ticket to "Ready for QC". Then thinks: "An often misses Jira notifications, better send a message too."

Opens Google Chat: "Hey An, ticket ABC-123 is done. Here's the demo video..." Attaches 3-minute video. Send.

14:00.

An — the team's QC — is testing 4 other tickets simultaneously. Sees Minh's message but isn't free yet. "I'll get to it, let me finish these first."

17:30.

An starts testing Minh's ticket. Finds 2 bugs. Writes quickly to Google Chat because she's in a hurry to leave: "Minh, checkout fails when cart is empty, and the pay button overlaps the text."

Wednesday, 9:00.

Minh reads the message but isn't sure what it means. "What does 'fails when cart is empty' mean? What error shows? At which step?"

He replies asking for clarification. An is in a meeting. Two hours later she answers. Minh can finally start fixing.

14:00.

Minh finishes the fix. Moves ticket back to "Ready for QC". Sends message: "Hey An, I fixed it, please retest."

An is testing another ticket; the sprint is almost over and there are still 6 tickets untested. "OK, let me finish this one first."

Thursday, 11:00.

An retests, passes. Moves ticket to "Ready for PO Review". Sends message to Linh: "Hey Linh, ticket ABC-123 passed QC, please do acceptance."

14:00.

Linh — the team's PO — is in a meeting with stakeholders about Q2 roadmap. Sees notification blinking but can't check. Her mind is thinking about 5 new features that need requirements written.

17:00.

Meeting ends. Linh opens Google Chat and sees 31 unread messages. She scrolls through, sees An's message, but there are more urgent things: a stakeholder just sent an email that needs an immediate reply. "I'll look tomorrow."

Friday, 10:00.

Linh remembers there's a ticket to review. Opens Jira, doesn't see staging link. Opens Google Chat to ask: "An, which environment did you test on?"

An is in the standup meeting. An hour later she replies. By then Linh is in another call.

16:00.

Minh pings me.


Do you see it?

Minh waited for An to test, then waited for An to retest, then waited for Linh to accept. 3 days for a ticket that took half a day to code.

An is testing 6 tickets at once, getting interrupted by Devs asking for bug details, having to remind PO to do acceptance.

Linh has 47 things on her mind: roadmap, stakeholders, new requirements, 5 tickets to accept, 3 meetings every day.

Nobody did anything wrong. Everyone's trying to follow the process. But the process itself is creating friction.

I call this "process tax" — the hidden cost teams pay every day, but nobody measures because it's scattered across dozens of small tasks.

"We need AI to solve this!"

That's what I heard at the retro the following week.

"I think we should use AI to automatically write notification messages for PO. That way Dev doesn't waste time writing, and messages will have more complete information."

Sounds reasonable. Very "2026". But something felt off to me.

I asked: "If AI writes the message, will Linh read it faster?"

Silence.

"Is the problem that messages aren't well-written, or that Linh doesn't know which one to read first?"

More silence.

And that's when I realized: the team was trying to use AI to solve a problem that a simple webhook could handle.


Three layers of process — and the mistake of skipping ahead

After years of observation, I've distilled one principle:

Convention first. Automation next. AI last.

Most teams jump straight to AI or automation while skipping convention. It's like building a house without pouring the foundation: it looks nice, but it can collapse at any moment.

Layer 1: Convention — The foundation everyone ignores

Back to the story of Minh, An and Linh.

The real problem isn't lack of AI to write messages. The problem is nobody knows where information lives.

Minh comments on Jira, then sends a Google Chat message, then attaches a separate video. An doesn't know whether Dev has fixed the bugs because the info is scattered everywhere. Linh doesn't know whether QC has finished testing. Everyone opens Chat to ask.

Solution? No code needed. Just have the team sit for 30 minutes and agree:

"All ticket information must be on Jira. No side chats."

When Minh moves the ticket to "Ready for QC", he fills in a template that already lives in the Jira ticket:

  • Staging URL: https://staging.app.com/checkout
  • Test account: test@email.com / 123456
  • Main changes: 3 lines description
  • Demo video: Google Drive link

An opens Jira, sees staging URL, test account, change description right away. No need to ask. Linh also knows exactly where to find information when it's time for acceptance.

Cost: $0. Time: 30-minute meeting.

Reduces 80% of messages like "hey where's the staging link?", "what's the test account?", "has PO reviewed yet?"
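
A convention like this can even be checked mechanically. Here's a minimal sketch of a validator that a webhook handler could run when a ticket moves to "Ready for QC" — the field labels are my own naming; match them to whatever your team's template actually says:

```python
# Sketch: verify a ticket description contains the agreed template fields.
# The labels below are hypothetical -- align them with your team's template.
REQUIRED_FIELDS = ["Staging URL:", "Test account:", "Main changes:", "Demo video:"]

def missing_template_fields(description: str) -> list[str]:
    """Return the template fields absent from a ticket description."""
    return [field for field in REQUIRED_FIELDS if field not in description]

description = """
Staging URL: https://staging.app.com/checkout
Test account: test@email.com / 123456
Main changes: empty-cart guard, button layout, copy fix
"""
print(missing_template_fields(description))  # the demo video link is missing
```

If the list is non-empty, the bot can comment on the ticket instead of letting QC discover the gap an hour later.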

Layer 2: Automation — Eliminating the "reminding each other"

OK, the team now has its conventions. But An can still forget to check Jira because she's testing other tickets. Linh can still miss things in a pile of 31 notifications.

This is where automation comes in — but simple automation, not AI.

Webhook + Bot:

When Minh moves the ticket to "Ready for QC", Jira automatically sends a message to the team's Google Chat space: "🔔 Ticket ABC-123 is Ready for QC. Assignee: @An." Minh doesn't need to type anything. The system does it.
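
This bot is little more than a translation step. A minimal sketch, assuming Jira's standard issue webhook payload shape and a Google Chat incoming-webhook URL (the actual HTTP call is reduced to a comment):

```python
# Sketch of the notification bot: turn a Jira webhook payload into a Google
# Chat message. Field paths follow Jira's issue webhooks; adjust to your instance.
from typing import Optional

def chat_notification(payload: dict) -> Optional[str]:
    """Build a Chat message when a ticket enters 'Ready for QC', else None."""
    issue = payload["issue"]
    if issue["fields"]["status"]["name"] != "Ready for QC":
        return None  # only announce the transition the team agreed on
    assignee = (issue["fields"].get("assignee") or {}).get("displayName", "unassigned")
    return f"🔔 Ticket {issue['key']} is Ready for QC. Assignee: @{assignee}"

# Delivery is a single POST to a Chat incoming-webhook URL, e.g.:
#   requests.post(WEBHOOK_URL, json={"text": message})

payload = {"issue": {"key": "ABC-123", "fields": {
    "status": {"name": "Ready for QC"},
    "assignee": {"displayName": "An"}}}}
print(chat_notification(payload))  # 🔔 Ticket ABC-123 is Ready for QC. Assignee: @An
```

Jira Automation or a ten-line cloud function can host this; the point is that no human types the message.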

Timer + Reminder:

If a ticket sits in "Ready for PO Review" for over 24 hours with no action from Linh, the bot auto-sends a reminder. Over 48 hours? It escalates to the shared channel and tags the Scrum Master too.
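
The reminder rule is deliberately dumb: no AI, just thresholds. A sketch, using the 24- and 48-hour thresholds above:

```python
# Escalation rule for tickets stuck in a status: nothing before 24h, a direct
# reminder to the assignee at 24h, a channel escalation (tag Scrum Master) at 48h.
def reminder_action(hours_in_status: float) -> str:
    if hours_in_status >= 48:
        return "escalate"   # post to shared channel, tag Scrum Master
    if hours_in_status >= 24:
        return "remind"     # ping the assignee directly
    return "none"

# A scheduler (cron, Jira Automation, etc.) runs this per stuck ticket
# and acts on the result.
print([reminder_action(h) for h in (3, 30, 50)])  # ['none', 'remind', 'escalate']
```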

Daily Summary:

Every morning at 8:30, the bot aggregates data from Jira and posts to Chat: how many tickets are Ready for QC, how many are waiting for PO, which are overdue, which are blocked. The team reads it in 5 minutes before standup, and standup only discusses blockers instead of everyone reciting status again.
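
The daily summary is an aggregation job, nothing more. A sketch, with simplified ticket dicts standing in for whatever your Jira query returns:

```python
# Sketch of the 8:30 daily summary: roll ticket statuses up into one Chat post.
from collections import Counter

def daily_summary(tickets: list) -> str:
    counts = Counter(t["status"] for t in tickets)
    overdue = [t["key"] for t in tickets if t.get("hours_in_status", 0) > 24]
    lines = [f"{status}: {n}" for status, n in sorted(counts.items())]
    if overdue:
        lines.append("Overdue (>24h): " + ", ".join(overdue))
    return "\n".join(lines)

tickets = [
    {"key": "ABC-123", "status": "Ready for QC", "hours_in_status": 30},
    {"key": "ABC-124", "status": "Ready for PO Review", "hours_in_status": 5},
    {"key": "ABC-125", "status": "Ready for QC", "hours_in_status": 2},
]
print(daily_summary(tickets))
```

Schedule it at 8:30, post the string to the team space, and standup starts from blockers instead of status.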

Cost: 1-2 days of one developer's time to set up the webhooks.

ROI: Saves 20-30 hours overhead per sprint.

And most importantly: still no AI needed.


Layer 3: AI Agent — When do you actually need it?

Only after convention and automation are stable does AI enter the picture.

And what should AI do? Not mechanical things like notifying or reminding, but things that require reasoning.

Story 1: User story and 47 test cases

Monday, grooming session.

Linh presents a new user story: "User can filter products by price, brand, rating, and color. Can select multiple filters at once."

Sounds simple. But An QC starts getting a headache.

"How many price ranges? If user selects 2 different price ranges, then what? Do brand and rating filters conflict? If no products match, what displays? Performance with 100,000 products?"

Writing test cases for this feature could take 2-3 days.

This is where AI is truly useful. An pastes in the user story, the AI generates a draft of 47 test cases — happy path, edge cases, combination cases, performance cases. An reviews them in 2 hours, adjusts a few things, done.

Saves 60-70% of test case writing time.

Story 2: The bug report Dev can't understand

"Bug: checkout doesn't work."

This is the bug report Minh received from... An herself, on a day when the deadline was crushing her.

"An, what does 'doesn't work' mean? Which step? What environment? What error?"

"Sorry Minh, I'm testing 5 tickets at once, no time to write details."

Neither is wrong. An is under pressure. Minh needs information.

AI can help here. An just describes the bug briefly or sends a screenshot, and the AI generates a fully formatted bug report: clear title, detailed steps to reproduce, expected vs. actual, environment, severity. An reviews it in 30 seconds and submits. Minh reads it and knows exactly what to do.
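
What makes this work is that the AI's output lands in a fixed shape the team agreed on, so Dev always knows where to look. A sketch of that shape — the field set is my assumption; adjust it to your tracker:

```python
# Sketch of the agreed bug-report shape. In practice an LLM drafts the field
# values from An's one-line description; this shows the output contract.
BUG_TEMPLATE = """\
Title: {title}
Steps to reproduce:
{steps}
Expected: {expected}
Actual: {actual}
Environment: {environment}
Severity: {severity}"""

def format_bug_report(**fields: str) -> str:
    return BUG_TEMPLATE.format(**fields)

print(format_bug_report(
    title="Checkout fails with an empty cart",
    steps="1. Empty the cart\n2. Open /checkout\n3. Click Pay",
    expected="Validation message shown, Pay disabled",
    actual="500 error page",
    environment="staging, Chrome 121",
    severity="High",
))
```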

Story 3: "AC #3 conflicts with current policy"

This is the sentence I wish I'd heard sooner — before Dev coded for 3 days and then discovered it.

Linh writes requirement for new feature: "User can delete paid orders within 24 hours."

Sounds reasonable. Dev starts working.

3 days later, while coding, Minh discovers: "Wait, our Refund Policy says paid orders can only be cancelled, not deleted. Because we need to keep records for accounting. These two conflict."

Goes back to ask the PO. Clarifies. Redesigns. Loses another 2 days.

AI could catch this before Dev starts coding. It reads the new requirement, compares with all existing features and policies in the system, then flags: "AC #3 allows deleting paid orders, but Refund Policy says only cancel is allowed. Please clarify with PO."

This is something usually only senior developers or experienced BAs catch. AI can do it because it can read all documentation in seconds.
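Under the hood, this usually needs a retrieval step first: shortlist the documents worth comparing before asking the model. A deliberately naive sketch using keyword overlap — a real system would use embeddings, and the overlap threshold is my own:

```python
# Sketch of the retrieval step before the LLM call: rank policy documents by
# shared vocabulary with the new requirement, so the model only compares the
# few that plausibly matter. Naive word overlap; real systems use embeddings.
def shortlist_policies(requirement: str, policies: dict, min_overlap: int = 2) -> list:
    req_words = set(requirement.lower().split())
    hits = []
    for name, text in policies.items():
        overlap = len(req_words & set(text.lower().split()))
        if overlap >= min_overlap:
            hits.append((overlap, name))
    return [name for _, name in sorted(hits, reverse=True)]

policies = {
    "Refund Policy": "paid orders can only be cancelled, records kept for accounting",
    "Shipping Policy": "delivery within 5 business days",
}
print(shortlist_policies("User can delete paid orders within 24 hours", policies))
```

The shortlisted policies plus the new requirement then go to the model with one question: do these conflict?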


The costly lesson from my friend's team

I have a friend who's an Engineering Manager at another company. He told me this story.

His team was very enthusiastic about AI. They built an AI bot in Slack that could:

  • Automatically classify bugs by severity
  • Suggest assignees based on expertise
  • Even auto-reject PRs if code quality score was below threshold

Sounds cool, right?

3 months later, the team nearly fell apart.

What happened?

One day, the AI classified a bug as "Low severity" because it only affected "a small edge case". In reality, that edge case was the payment flow of their biggest enterprise customer. The bug wasn't fixed in time, the customer escalated, and the company lost a million-dollar deal.

Another time, the AI auto-rejected a senior developer's PR because "code smell detected". It was a hotfix for a production incident. The senior dev had to spend 30 minutes overriding the bot to merge, while production was burning.

Lesson:

AI is an advisor, not a gatekeeper.

AI can suggest, recommend, flag. But final decision authority must stay with humans. Especially decisions that can affect production or customers.


The boundary few people talk about

There's something I want to say that goes against everything above.

Not every interaction should be optimized.

Last week, another team in the company asked me: "How do we optimize the grooming meeting? It takes 2 hours every week — too costly."

They wanted: PO writes detailed requirements, sends ahead, everyone reads, no meeting needed. AI summarizes and extracts action items.

I asked: "In grooming, does Dev ever ask a question that makes PO realize the requirement isn't clear?"

"Yes, often."

"Does QC ever contribute a perspective that neither PO nor Dev thought of?"

"Yes."

"So if you skip the meeting, where will those conversations happen?"

Silence.

The grooming meeting looks like waste — 2 hours, many people, back-and-forth discussion. But it actually creates shared understanding. People don't just know "what to do"; they also understand "why" and "how".

I've seen teams try to make grooming async. The result: everyone understood things differently because they were reading text without context. The bug count doubled. Clarification questions tripled.

My test:

"Does this communication create new shared understanding, or just transfer information that already exists from one place to another?"

If just transferring information — automate it, that's waste.

If creating new understanding — keep it, that's investment. Don't let AI or bots get in the middle.


How much does process tax really cost?

I tried to calculate.

In a 2-week sprint, an average developer loses roughly 10-14 hours to communication overhead:

  • Updating ticket status and comments: 30 min/day × 10 days = 5 hours
  • Writing notification messages to PO/QC: 15 min/ticket × 5 tickets = 1.25 hours
  • Waiting for replies and asking for missing info: hard to measure, but estimated 2-3 hours
  • Copying information between tools: 15-30 min/day = 2.5-5 hours
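
The line items above, summed as a quick sanity check:

```python
# Back-of-envelope communication overhead per developer per 2-week sprint (hours).
overhead = {
    "status updates & comments": (5.0, 5.0),    # 30 min/day x 10 days
    "notification messages":     (1.25, 1.25),  # 15 min/ticket x 5 tickets
    "waiting & chasing info":    (2.0, 3.0),    # rough estimate
    "copying between tools":     (2.5, 5.0),    # 15-30 min/day x 10 days
}
low = sum(lo for lo, _ in overhead.values())
high = sum(hi for _, hi in overhead.values())
print(f"{low}-{high} hours per developer per sprint")  # 10.75-14.25
```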

QC loses a similar amount. The PO loses even more, because they're the hub of all communication.

Multiply that across a team of 6 and you get 60-85 hours per sprint of communication overhead.

That's nearly one full-time developer just "telling each other where things are."

Most of this overhead can be reduced 80% with convention + simple automation. No AI needed.

But few teams do it. Why?

"No time to optimize process."

Ironic, isn't it? The unoptimized process is eating all the time.


Closing: World-class process isn't the one using the most AI

Back to Minh's story.

After that retro, my team didn't build an AI bot. Instead, we did 3 things:

Week 1-2: Agreed on conventions. Ticket templates. Communication rules. Took 2 meetings, 30 minutes each.

Week 3-4: Set up automation. Jira webhooks send automatic notifications. Reminders for overdue tickets. Daily summary. Took one developer 1.5 days.

Week 5+: Started adding AI for things requiring reasoning. Generating test cases. Reviewing requirements for conflicts. Drafting release notes.

Results after 2 months:

  • Cycle time reduced 35%
  • Messages like *"hey, where's the staging link?"* nearly disappeared
  • QC no longer waits wondering if PO has reviewed
  • And most importantly: nobody pings me at 4 PM Friday anymore

World-class process isn't the fanciest process or the one using the most AI.

It's the process where every step has a reason, every tool is in the right place, and everyone knows what they need to do without asking.

Convention reduces ambiguity.

Automation reduces manual work.

AI reduces cognitive load.

Humans keep the most important things: judgment, creativity, and shared understanding.

Start from Convention. Always.

Simple processes beat complex tools. Every time.


Appendix

A. Detailed workflow by phase

Phase 1: Requirement & Planning

| Step | Who | Does What | Layer |
| --- | --- | --- | --- |
| 1.1 | PO | Write PRD in Google Docs | Convention |
| 1.2 | AI | Analyze PRD, find gaps and conflicts | AI Agent |
| 1.3 | PO | Resolve AI comments, update PRD | Convention |
| 1.4 | Team | Grooming meeting | Human |
| 1.5 | AI | Extract meeting notes and action items | AI Agent |
| 1.6 | PO | Create Jira ticket following template | Convention |
| 1.7 | AI | Generate draft test cases from ticket | AI Agent |
| 1.8 | QC | Review and adjust test cases | Human |

Phase 2: Development

| Step | Who | Does What | Layer |
| --- | --- | --- | --- |
| 2.1 | Dev | Code and create PR | Human |
| 2.2 | AI | Auto-review PR | AI Agent |
| 2.3 | Other Dev | Human code review | Human |
| 2.4 | Dev | Merge PR, CI/CD runs | Convention |
| 2.5 | Bot | Auto-comment staging URL on Jira | Automation |
| 2.6 | Dev | Transition ticket to Ready for QC | Convention |
| 2.7 | Bot | Auto-notify QC | Automation |

Phase 3: Testing

| Step | Who | Does What | Layer |
| --- | --- | --- | --- |
| 3.1 | QC | Test according to test cases | Human |
| 3.2 | QC | Find bug, describe briefly | Human |
| 3.3 | AI | Generate full bug report | AI Agent |
| 3.4 | QC | Review bug report, submit | Human |
| 3.5 | Bot | Auto-notify Dev of new bug | Automation |
| 3.6 | Dev | Fix bug | Human |
| 3.7 | QC | Retest, pass | Human |
| 3.8 | Bot | Auto-notify PO | Automation |

Phase 4: Acceptance & Sprint Closure

| Step | Who | Does What | Layer |
| --- | --- | --- | --- |
| 4.1 | PO | Review and verify AC | Human |
| 4.2 | PO | Accept or Reject with comment | Convention |
| 4.3 | Bot | Auto-notify team of result | Automation |
| 4.4 | Bot | Generate sprint report | Automation |
| 4.5 | AI | Draft release notes | AI Agent |
| 4.6 | Team | Sprint Retrospective | Human |
| 4.7 | AI | Analyze retro data | AI Agent |

B. Anti-patterns to avoid

1. Using AI instead of Convention

  • ❌ "Use AI to write explanation messages for PO because there's no template."
  • ✅ Agree on template first. AI only helps where template can't cover.

2. Over-Automate Communication

  • ❌ "Everything through bots, team doesn't talk directly anymore."
  • ✅ Automate information transfer. Keep conversations that create shared understanding.

3. AI as Gatekeeper

  • ❌ "AI auto-rejects PR if code quality is low."
  • ✅ AI suggests, human decides.

4. Jump straight to AI

  • ❌ "Build AI chatbot before having webhook notifications."
  • ✅ Webhook (2 hours setup) first, AI bot (2-3 weeks) after.

5. Not measuring

  • ❌ "Add AI because it sounds cool."
  • ✅ Measure before and after: cycle time, reopen rate, blocked time.

C. Implementation roadmap

| Phase | Timing | What to Do | Expected Impact |
| --- | --- | --- | --- |
| Foundation | Week 1-2 | Convention: templates, DoD, communication rules | Reduce info-asking messages by 40% |
| Basic Auto | Week 3-4 | Webhook notifications, reminders, daily summary | Reduce manual notifications by 80% |
| Advanced Auto | Week 5-6 | Sprint report, DoD check, deploy sync | Save 10-15 hours/sprint |
| AI Layer 1 | Week 7-8 | AI analyzes PRD, generates test cases, reviews code | Reduce reopens by 20-30% |
| AI Layer 2 | Week 9-12 | AI chatbot, release notes, retro analysis | Increase productivity 25-35% |
| Optimize | Ongoing | Measure metrics, adjust, remove ineffective parts | Continuous improvement |