AI Raises the Floor, It Doesn't Lower the Ceiling: A Realistic View for Developers
Last Friday, Truong - a junior developer who'd been on the team for 2 years - messaged me privately.
"Hey, I'm really worried. AI is getting better at writing code every day. I keep reading that some startup cut 50% of its devs, that some CEO says they don't need juniors anymore. I just started my career - will I have time to build enough experience before I'm replaced?"
I read the message but didn't reply immediately. Because this question doesn't have a simple answer.
Every day I open social media and see: "AI writes code 10x faster than developers", "Startup X cuts 50% of dev team thanks to AI", "CEO Y declares no need for junior developers anymore."
Those headlines are written to shock, not to reflect reality.
But I'm also not in the camp of "AI can't replace developers, don't worry." Both extremes are wrong. One makes people panic. The other makes people complacent.
Reality lies in the middle. And to explain this to Truong, I need to tell a few stories.
The story of Minh and the CRUD service
Minh is a mid-level developer on the team. Last month, Minh was assigned a task: create a new microservice to manage inventory. Quarkus framework, Docker, CI/CD, health check, basic auth. Standard stuff.
Before, Minh would copy from an old project, rename things, adjust config, rewrite models. About half a day's work.
This time Minh tried something different. Described the requirements to AI, let it generate the skeleton. 15 minutes later, Minh had a complete project structure. Review, customize a few things, run tests. 1 hour total instead of half a day.
"Amazing," Minh thought. "AI can do everything now."
The next week, Minh was assigned another task: design the API contract for a new service, coordinate with the Mobile team to ensure they could consume it. Versioning strategy, backward compatibility, pagination approach.
Minh asked AI. AI gave a generic template.
Minh brought that template to the meeting with the Mobile team. They asked: "Cursor-based or offset-based pagination? Our app has an infinite scroll use case." Minh didn't know. "What's the error response format? iOS needs error codes to show localized messages." Minh wasn't sure either.
AI couldn't participate in that conversation. Because API design isn't just writing endpoints - it's negotiating between people.
Minh learned a lesson: AI is great at generating code, but not great at negotiating with stakeholders.
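For the curious, the Mobile team's pagination question does have a concrete shape. Here's a minimal sketch of a cursor-based contract - all names are hypothetical, and it assumes the user list is non-empty and sorted by ID. The point is that the cursor, not an offset, anchors the next page, so infinite scroll stays stable even when rows are inserted between requests:

```go
package main

import "fmt"

// User and UserPage are invented types for illustration. A real API
// would expose NextCursor as an opaque string, not a raw ID.
type User struct {
	ID   int
	Name string
}

type UserPage struct {
	Items      []User
	NextCursor int // 0 means no more pages
	HasMore    bool
}

// ListUsers returns up to limit users whose ID is greater than cursor.
// Assumes all is non-empty and sorted by ID ascending.
func ListUsers(all []User, cursor, limit int) UserPage {
	var items []User
	for _, u := range all {
		if u.ID > cursor {
			items = append(items, u)
			if len(items) == limit {
				break
			}
		}
	}
	page := UserPage{Items: items}
	// There's another page only if we filled this one and didn't hit the end.
	if len(items) == limit && items[len(items)-1].ID < all[len(all)-1].ID {
		page.NextCursor = items[len(items)-1].ID
		page.HasMore = true
	}
	return page
}

func main() {
	users := []User{{1, "An"}, {2, "Binh"}, {3, "Chi"}, {4, "Dung"}, {5, "Em"}}
	p1 := ListUsers(users, 0, 2)
	p2 := ListUsers(users, p1.NextCursor, 2) // resume from the cursor, not an offset
	fmt.Println(len(p1.Items), p1.NextCursor, len(p2.Items), p2.NextCursor)
	// → 2 2 2 4
}
```

With offset-based pagination, a row inserted while the user scrolls shifts every later page, so page 2 re-shows an item page 1 already rendered - exactly the infinite-scroll problem the Mobile team was worried about.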
The story of Hieu and the production bug
Hieu is a senior engineer. One night, production had a strange bug: random request timeouts, no clear pattern. Monitoring showed latency spikes but couldn't identify the cause.
Hieu asked AI: "Random latency spikes, no pattern, what could cause this?"
AI replied: "Could be garbage collection, database connection pool, network congestion, or race condition."
Correct. But meaningless. Because all of those could be the cause. AI could list options, but couldn't diagnose.
Hieu had to do what AI couldn't: read logs, correlate timelines, reproduce the issue on staging, add instrumentation, measure each component. After 4 hours, he found it: a goroutine leak was gradually exhausting the thread pool.
AI could suggest "might be a resource leak" - but it couldn't help prove or fix it. AI doesn't have access to production metrics, doesn't know how the infrastructure runs, doesn't know the quirks of the library version the team uses.
Hieu learned a lesson: AI is great at brainstorming hypotheses, but not great at debugging production.
The story of An and the costly mistake
An is the QC lead, and also knows how to code. After seeing AI generate tests pretty well, An decided to use AI to write tests for an important module: payment processing.
AI generated a beautiful test suite. Happy path, edge cases, null input, boundary values. An reviewed it, looked good, merged.
Two months later, production had a bug. A user was charged twice. Investigation revealed: there was a race condition when two requests arrived simultaneously. The test suite didn't catch it because AI didn't know to test concurrent scenarios.
An looked back at the test suite. All tests were single-threaded. AI generated tests based on code structure, not based on how the system actually runs in production.
An learned a lesson: AI generates tests for what it can see, but doesn't test for what it can't imagine.
So what is AI good at, and what is it bad at?
From these stories, I extracted a pattern.
AI is good at tasks with clear patterns, small scope, easy to verify.
- Scaffold a new project - clear pattern; the output either runs or it doesn't.
- Generate unit tests for existing functions - simple logic; tests pass or fail.
- Convert code between paradigms - callbacks to async/await, class components to hooks - mechanical, fixed patterns.
- Write complex SQL queries - describe the requirement, AI generates the query, run it on staging to verify.
AI is bad at tasks requiring context beyond code, requiring judgment, requiring coordination with people.
- Design a database schema for a system that must scale 100x in 2 years - AI doesn't know the business roadmap.
- Debug race conditions in production - AI doesn't have access to the runtime environment.
- Choose the tech stack for a startup - AI doesn't know the team's experience, the hiring pool, or the budget.
- Negotiate API contracts with another team - AI can't attend meetings.
Remember the article about scalpels and paring knives? AI is like an extremely sharp knife - but you have to be the one who decides where to cut.
What's actually being phased out
I told Truong: "What's being phased out isn't people. It's a way of working where the main value is typing code."
Before, a junior created value mainly by turning requirements into code. PM says "need an API to get user list," junior writes controller, service, repository, test. Takes 1-2 days.
Now AI does it in 5 minutes.
If all the value you bring is converting specs to code, then yes, you're competing directly with AI.
But good developers have never just typed code. They ask PMs: "Does this requirement conflict with feature X?" They discover: "If we do it this way, we'll have to rewrite it in 3 months when we scale." They propose: "Instead of building from scratch, using library Y and customizing would be faster and less buggy."
AI can't do any of these. They require understanding the context, the organization, and where the product is headed.
Remember the article about the soul that drives AI? AI is the body - it has hands to code, eyes to read. But the soul that decides what to do, where to go - that's you.
A week in the life of a developer who knows how to use AI
To help Truong visualize concretely, I told him about a week in Minh's work life - after Minh learned from his early mistakes.
Monday, Minh needs to create a new service. Instead of typing from scratch, he describes the requirements to AI and lets it scaffold. 15 minutes to a skeleton, then review and customize. Half a day's work becomes an hour. He spends the time saved thinking about the edge cases AI didn't cover.
Tuesday, the team has legacy code no one dares refactor because there are no tests. Minh uses AI to generate basic test coverage. Not perfect, but enough for a safety net: from 0% to 60% in a few days instead of months. But he writes the concurrent-scenario tests himself - because he knows AI won't think of them.
Wednesday, he needs a complex SQL query - a 5-table JOIN, a subquery, a window function. Minh tells AI what he needs; it generates the query in 30 seconds. Verify on staging, done. But when deciding whether to add an index, Minh analyzes the query plan himself - AI doesn't know the real data distribution.
Thursday, before merging a PR, Minh pastes the code into AI for review. AI points out a query in a loop that should be batched, and an exception being swallowed. Useful. But when Hieu - the senior engineer - reviews, he asks: "Is this design consistent with team conventions? Other modules do it differently." AI doesn't know the team's conventions.
Friday, he needs to integrate a new API. Instead of reading docs for 2 hours, Minh asks AI for example code. 10x faster. But when the API behaves strangely, he has to read the real docs - because AI sometimes hallucinates API behavior.
Pattern: AI does the mechanical parts, Minh does the parts requiring judgment. The two complement each other.
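That Thursday review comment - "query in a loop should be batched" - is the kind of mechanical fix AI spots reliably. In miniature, with a fake store (all names invented) that counts round trips:

```go
package main

import "fmt"

// FakeDB stands in for a real database and counts round trips,
// which is the cost that matters here.
type FakeDB struct {
	trips int
	names map[int]string
}

// GetUser costs one round trip per call.
func (db *FakeDB) GetUser(id int) string {
	db.trips++
	return db.names[id]
}

// GetUsersIn costs one round trip total - the moral equivalent of
// SELECT ... WHERE id IN (...).
func (db *FakeDB) GetUsersIn(ids []int) map[int]string {
	db.trips++
	out := make(map[int]string, len(ids))
	for _, id := range ids {
		out[id] = db.names[id]
	}
	return out
}

func main() {
	db := &FakeDB{names: map[int]string{1: "An", 2: "Binh", 3: "Chi"}}
	ids := []int{1, 2, 3}

	// Before: one query per loop iteration - N round trips.
	for _, id := range ids {
		_ = db.GetUser(id)
	}
	fmt.Println("loop round trips:", db.trips) // 3

	db.trips = 0
	// After: one batched query.
	_ = db.GetUsersIn(ids)
	fmt.Println("batched round trips:", db.trips) // 1
}
```

What AI can't tell Minh is whether the batch belongs here at all - maybe the caller already has this data, or team convention says the repository layer should do the batching. That's the part Hieu's review covers.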
Tasks your brain must handle itself
There are tasks where if you outsource to AI, you'll pay dearly. I listed them for Truong.
Design database schema for a new system. How far to normalize? What indexes? AI can suggest, but it doesn't know that in 6 months the team will query by time range heavily, or that this table will have 100 million rows. Wrong schema means painful migration.
Debug complex production issues - race conditions, memory leaks, intermittent failures. AI suggests hypotheses, but the process of proving and fixing requires understanding the runtime environment that AI doesn't have access to.
Architectural decisions - event-driven or request-response, microservice or monolith, SQL or NoSQL. AI lists pros/cons, but trade-offs in the specific context of the team - team size, deadline, legacy systems, budget - AI can't weigh.
Security threat modeling - not just scanning for SQL injection. But: "If an attacker has user A's token, can they access user B's data?" Requires understanding the overall architecture and adversarial thinking.
Deciding when to ship - is the code "good enough"? Refactor more or ship first then iterate? Is this technical debt acceptable temporarily? Judgment based on deadline, risk appetite, team capacity - all human factors.
Onboarding new people to the codebase - explaining: "This code looks weird because of its history: the requirements were different then, the team discussed it 3 times and decided to keep it." AI can explain the what; it can't explain the why.
AI raises the floor, doesn't lower the ceiling
Finally, I told Truong the most important thing:
AI is raising the floor, not lowering the ceiling.
The floor - easy, repetitive, low-value tasks - AI will gradually take over. A junior used to spend 2 days writing CRUD; now AI does it in 5 minutes. The floor has been raised.
But the ceiling - the most complex problems, requiring judgment, context, coordination - remains beyond AI's reach. Designing systems for 10 million users. Debugging production incidents at 3 AM. Negotiating technical decisions with 5 different teams. The ceiling hasn't been lowered at all.
So the survival strategy isn't racing to type code faster than AI - you will lose.
It's moving upward:
- Understanding business more deeply
- Designing systems better
- Making more accurate technical decisions
- Communicating with stakeholders more effectively
In other words: do the things AI is worst at.
Reply to Truong
I messaged Truong back:
"You have 2 years of coding experience. That's a good foundation. Now spend the next 2 years learning how to design systems, learning to understand business, learning to work with people.
Use AI to code faster - but invest the time you save into things AI can't do.
Good developers won't be replaced by AI. They're the ones using AI to become even better.
Don't fear AI. Fear not knowing how to use it."
Truong read it and replied:
"I understand now. So I'll keep learning, but learn differently."
That's the right answer.
AI raises the floor; it doesn't lower the ceiling.
The best developers won't be replaced by AI. They'll be the ones using AI to become even better.