5 Red Flags: Diagnosing the Key Problems in IT Product Development

You have a strong feeling something is wrong with your product and how it is being developed. You cannot name it exactly because you are not technical, but you see the signs. The tech team says they are “working on it”, but progress keeps slowing down. Estimates stopped meaning anything. You hear about problems from customers, and the system keeps surprising the team. You feel like you are losing control, but you do not know how to fix it or even how to talk about it.
Projects in this state rarely recover on their own. Without action, you will face either an expensive rescue or years of fighting a system that slows you down while competitors pass you.
In this article, I will show five red flags we often see in IT products right before a crisis, based on years of turning projects around. After reading, you will know which daily signals matter most, what they block in your product, and what is really causing the chaos.
RED FLAG #1: Product development is unpredictable and increasingly slower
What it looks like:
Estimates aren’t taken seriously – “they said 2 weeks, so it’ll be a month” – and the team can’t explain why or predict how big the delay will be.
The roadmap changes every sprint, because “unexpected issues” keep coming up.
More and more time is needed to deliver smaller and smaller things.
Every “simple change” turns out to require modifications in many places.
The business stops trusting product commitments, because things are delayed too often.
Releases happen rarely and are massive – “we batch changes and ship everything at once.”
What we most often find as the root cause:
Technical debt has accumulated to the point where every change triggers an avalanche of other changes.
Lack of automated tests – every change requires lengthy manual testing.
The team is firefighting instead of building – half the time goes into fixes.
A manual deployment process turns releases into a costly “event” – so they happen rarely, with changes bundled together.
I’ve seen this firsthand:
In one project, we came in as a “special ops” team to replace a critical integration that the product’s core functionality depended on. At first glance, it looked straightforward: swap one integration for another, handle the differences, and we’re done.
Reality was brutal. The system had no automated tests, and the architecture meant that every change triggered modifications across dozens of places. Trying to push it through “by force” was doomed to fail.
So we made a decision that seemed almost paradoxical: before starting the actual provider switch, we spent the first few weeks writing tests and fixing the architecture. It was challenging – the system was asynchronous and tightly coupled to external services, so any change to working code had to be introduced carefully. Legacy code without tests, poor architecture – classic problems of systems that grew without solid foundations. Only then did we move on to the real task.
The result? The actual provider switch – once we had tests and a cleaned-up architecture – took less time than the original plan that assumed we’d do it without that foundation. That doesn’t mean it was easy – writing tests for legacy code is always a fight with surprises, and the client had to accept that for the first few weeks there was no tangible progress in the form of visible product changes. But the project succeeded, and later the client admitted they had expected it to take three times longer. Nobody had assumed we’d add tests or refactor the architecture. Investing in the foundations turned out to be the key.
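The "tests first, then refactor" move from this story is often done with characterization tests: before touching legacy code, you pin down what it currently does, quirks included, so the later rewrite can be verified against the same behavior. A minimal sketch (the `calculate_fee` function and its fallback quirk are hypothetical stand-ins for untested legacy logic):

```python
def calculate_fee(amount: float, provider: str) -> float:
    """Legacy function: we capture its behavior as-is, quirks included."""
    if provider == "old_provider":
        return round(amount * 0.029 + 0.30, 2)
    # Unknown providers silently fall back to a flat fee -- a quirk we
    # record in a test rather than "fix" before the provider switch.
    return 1.00

def test_standard_fee():
    # Pin the current happy-path output before any refactoring.
    assert calculate_fee(100.0, "old_provider") == 3.20

def test_unknown_provider_quirk():
    # Pin the surprising fallback too: the refactor must not change it
    # until the business decides it should change.
    assert calculate_fee(999.0, "anything_else") == 1.00

test_standard_fee()
test_unknown_provider_quirk()
```

With a net of tests like this in place, swapping the provider becomes a change you can verify in minutes instead of a leap of faith.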
RED FLAG #2: The engineering team doesn’t communicate risks until they become problems
What it looks like:
You only hear about an issue when it’s already too late to change course – “well… we knew this might happen…”
The team tends to say “we’ll do it, no problem,” even when internally everyone feels it will be a problem.
Risk shows up in conversations only as a “blocker” or “unexpected issues,” never as an early warning.
Engineers discuss potential problems among themselves, but don’t communicate them outside the team.
The team waits until someone asks about risks, instead of raising them proactively.
What we most often find as the root cause:
The engineering team doesn’t understand that identifying and flagging risks is part of their job.
There’s no formal process or dedicated time to discuss risks – no point in the sprint or planning where the team deliberately reviews potential threats.
No shared language or framework for talking about risk – the team doesn’t know how to communicate probability, impact, and mitigation in a way the business understands.
No habit of asking “what could go wrong” during planning – the focus stays on the happy path only.
A “don’t bring problems without solutions” culture can amplify this – the team is afraid to raise uncertainty without having a ready answer.
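A shared language for risk does not require heavy tooling. Even a lightweight risk register with explicit probability, impact, and mitigation fields gives engineers and the business the same vocabulary to decide "fix now" vs. "accept." A minimal sketch (the entries and the 1-5 scoring are illustrative, not a formal methodology):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a lightweight risk register the whole team can read."""
    description: str
    probability: int   # 1 (unlikely) .. 5 (almost certain)
    impact: int        # 1 (minor) .. 5 (critical)
    mitigation: str = "none proposed yet"

    @property
    def score(self) -> int:
        # A simple probability x impact score to order the conversation,
        # not a substitute for judgment.
        return self.probability * self.impact

register = [
    Risk("No automated tests around the payment flow", 4, 5,
         "add characterization tests before the next change"),
    Risk("Only one engineer knows the deployment process", 3, 4,
         "document it and pair on the next release"),
]

# Review the register highest score first, e.g. during sprint planning.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.description} -> {risk.mitigation}")
```

The point is not the scoring formula; it is that a risk becomes a named, owned item the business can decide about, instead of a vague worry engineers keep to themselves.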
I’ve seen this firsthand:
We took over a project where the amount of technical debt exceeded even our boldest expectations. Risks had been accumulating for years – almost no tests, architectural and infrastructure issues, bugs in business logic. The worst part was that nobody was managing those risks.
At first, we were running from one fire to the next, but the fires didn’t stop – it was burning the team out. At some point we changed our approach: we compiled a list of problems, extracted the critical themes, and presented the client with four major items with a combined estimate of 1.5 months of work for part of the team. The conversation was tough – we showed the scale of neglect and had to admit that technical debt is always hard to estimate. You can spend an unlimited amount of time improving foundations, so we had to agree with the client on a realistic threshold for how far we go. Some things we had to postpone, some we had to accept as risk.
More importantly, we developed a way of working where the client understood that flagging risks is not complaining – it’s part of product management. Their role is to make conscious decisions: “we fix this now” or “we accept the risk and move on.”
Because ignoring problems and accumulating technical debt is a straight path to a situation where one day a list of 15 critical risks lands on the table, requiring months of work. And then the conversation with the business isn’t just difficult – it often ends the partnership. A client who suddenly learns the true scale of neglect has every right to feel misled and simply switch vendors.
RED FLAG #3: Product decisions are based on intuition, not data
What it looks like:
Sprint planning starts with “I think customers need…” instead of “the data shows…”
Priority discussions end in a vote – or with the loudest person in the room making the call.
Nobody can answer “how do we know this is the most important thing right now?” without falling back on anecdotes.
After shipping a new feature, the team doesn’t know whether it succeeded – because “success” was never defined.
The roadmap looks like a wishlist, with no clear criterion for “why this and not that.”
The team can’t distinguish one loud customer’s opinion from a real problem affecting most users.
What we most often find as the root cause:
There are no defined product metrics tied to business goals.
Nobody tracks real product usage – no analytics, no product telemetry, no customer feedback loop.
Data is collected, but nobody analyzes it – because “there’s no time” or “we don’t know how to interpret it.”
The organizational culture rewards speed – decisions are made before anyone checks the data, even when it’s available.
Product hypotheses aren’t treated as hypotheses – they’re treated as certainties that must be built.
There’s no habit of asking “what happens if we don’t do this?” and “how will we measure whether it worked?”
I’ve seen this firsthand:
A client was pushing hard for a specific feature. We felt it was too complex for the product at that stage, but the client wouldn’t back down. We were stuck – two instincts were colliding: the client’s gut feeling that it was needed, and ours that it wasn’t.
Instead of fighting with opinions, we built the feature in the simplest possible form and started measuring. As it turned out, over the course of a month only 0.01% of users used it. The numbers made it possible to make a decision without emotion – we rolled the feature back, because it added complexity to the system without delivering value.
But metrics aren’t only about protecting you from bad decisions – sometimes they reveal opportunities you wouldn’t notice otherwise. In another project, we built a feature and after launch its usage exceeded our wildest expectations many times over. That gave us a clear signal: users need this, it’s worth investing more in it. The data showed us a product direction that intuition would never have suggested.
Data doesn’t replace intuition – it validates it. Sometimes it saves us from costly mistakes; sometimes it points to a direction we wouldn’t have seen on our own.
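Measuring adoption like in the stories above does not require a full analytics platform on day one. Even a rough counter answers "what share of active users touched this feature?" A minimal sketch (the event names and the in-memory store are illustrative; a real product would use an analytics or telemetry tool):

```python
from collections import defaultdict

# In-memory event store standing in for a real analytics backend:
# event name -> set of user ids that triggered it.
events: dict[str, set[str]] = defaultdict(set)

def track(event: str, user_id: str) -> None:
    """Record that a given user triggered a given event."""
    events[event].add(user_id)

def adoption_rate(feature_event: str, activity_event: str) -> float:
    """Share of active users who used the feature at least once."""
    active = events[activity_event]
    if not active:
        return 0.0
    used = events[feature_event] & active
    return len(used) / len(active)

# Simulated month of usage: 1,000 active users, 2 tried the new feature.
for i in range(1000):
    track("session_started", f"user-{i}")
track("new_feature_used", "user-7")
track("new_feature_used", "user-42")

print(f"adoption: {adoption_rate('new_feature_used', 'session_started'):.2%}")
```

With a number like this on the table, "roll it back" stops being an opinion and becomes a decision.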
RED FLAG #4: The engineering team doesn’t understand the business and doesn’t share ownership of outcomes
What it looks like:
Engineers treat a user story as a checklist of requirements to implement, not a problem to solve.
Conversations with the engineering team feel like translating a foreign language – lots of words, little understanding.
The team proposes solutions without understanding the context – “we’ll do it this way because it’s best practice,” not “we’ll do it this way because in your case the most important thing is X.”
During planning, nobody on the team ever asks for the business rationale.
Technology becomes an end in itself – refactoring, new frameworks, migrations, but nobody can explain how it will translate into business outcomes.
What we most often find as the root cause:
The engineering team doesn’t have access to customers or business metrics – either the organization blocks it, or the team doesn’t ask for it.
There’s no shared language between business and tech – each side speaks its own jargon.
The Product Owner or PM becomes a bottleneck and a filter, instead of facilitating the conversation.
The team never sees the impact of their work in the form of happy (or unhappy) users.
I’ve seen this firsthand:
One of the systems we took over technically “worked” – but only in theory. In practice, it handled only perfect, happy-path scenarios. Any deviation from the standard flow and the application would break in ways that were completely unpredictable for the user.
When we joined the project after the previous vendor, we quickly saw why: the team had been tightly isolated from the business. They received written requirements, checked them off, and whenever doubts came up – they decided on their own how something should work. Without consultation. The result? The system formally “met the requirements,” but it completely missed the real business need.
One of the first things we did was reduce the distance between the engineering team and the business. We started working very closely with the operations and business teams. At one point we spent a few days on-site with the client, meeting the operations team daily to see firsthand what their work looked like. Uncertainties stopped being guesses – we went and asked. We deployed almost daily so feedback would arrive as early as possible.
That’s the foundation: an engineering team that doesn’t understand why they’re building something will deliver code that matches the specification – but not necessarily code that solves the problem.
RED FLAG #5: The engineering team learns about problems from users
What it looks like:
You learn that something is broken directly from users, not from monitoring.
The team doesn’t know what’s happening in production in real time – diagnosing issues comes down to guesswork and manual checking.
Performance degradation is noticed by users (“it’s painfully slow”), not by the team.
There are no answers to basic questions: “is the system up?”, “how many users are online?”, “are there errors?”
When a problem finally surfaces, the team can’t say how long it’s been happening or what impact it had on users.
What we most often find as the root cause:
There are no proper observability tools – either they don’t exist, or they’re configured so poorly that they’re useless.
Monitoring focuses on infrastructure (are servers responding) instead of users (can customers actually get their work done).
Logs are unreadable or chaotic – during incident analysis you end up grepping through hundreds of lines with no context.
There are no technical or business alerts.
Errors get lost in the logs and nobody reviews them systematically – the team starts looking only after users report issues.
The team has no habit of regularly checking production health.
I’ve seen this firsthand:
The vast majority of systems we take over don’t have proper observability in place. The logs are unreadable, added in random places, and during incident analysis they tell you very little. The standard situation after a takeover looks like this: some functionality fails, and we hit a wall – it’s hard to tell what went wrong.
Only after adding logs in the right places, with a real understanding of the functional flow, can we say something meaningful the next time the error happens – and ultimately fix the issue so it doesn’t come back. That’s why one of the first things we usually do is work on the system’s observability, and then create alerts for specific events – errors go straight into the team messenger as notifications, and we immediately know something went wrong.
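The "logs with context, errors as notifications" setup described above can be approximated even with Python's standard library: structured log lines that carry business context, plus a handler that forwards errors to the team channel. A minimal sketch (the `order_id` field is a placeholder, and the alert handler collects messages in memory where a real one would POST to a chat webhook):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, so incident analysis can
    filter by order_id instead of grepping raw text with no context."""
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
            # Context passed via `extra=` ends up as record attributes.
            "order_id": getattr(record, "order_id", None),
        }
        return json.dumps(payload)

class AlertHandler(logging.Handler):
    """Forward ERROR records to the team messenger. Here we just
    collect them; a real handler would call a chat webhook."""
    def __init__(self) -> None:
        super().__init__(level=logging.ERROR)
        self.alerts: list[str] = []

    def emit(self, record: logging.LogRecord) -> None:
        self.alerts.append(self.format(record))

logger = logging.getLogger("shop")
logger.setLevel(logging.INFO)
stream = logging.StreamHandler()
stream.setFormatter(JsonFormatter())
alerts = AlertHandler()
alerts.setFormatter(JsonFormatter())
logger.addHandler(stream)
logger.addHandler(alerts)

logger.info("payment accepted", extra={"order_id": "A-1"})
logger.error("payment provider timeout", extra={"order_id": "A-2"})

# Only the ERROR reaches the alert channel; routine INFO stays in the logs.
assert len(alerts.alerts) == 1
```

The principle scales up directly: the same "error record, with context, pushed to where the team already looks" idea is what dedicated observability tools implement properly.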
In one system, shortly after the takeover – while we were still working on the foundations – the system completely stopped responding. Hours of analysis showed that a poorly designed architecture meant that heavy traffic froze the entire system. The problem was that nobody knew anything was wrong until users started calling. If we’d had monitoring and alerts at that stage, we would have noticed the issue within seconds and reacted before anyone even felt it.
Summary
Now you know which red flags can show up in your product – and what’s really behind them. Identifying the problem is the first step – but awareness alone won’t fix the situation.
Each of these flags has a root cause, often buried deep in processes, architecture, or organizational culture. If you’ve recognized one flag, you can start targeting a specific problem. But if you’re seeing more than one red flag? Then you’re likely dealing with a systemic crisis that requires deeper intervention.
Some teams can fix this internally – if they have the time, the will, and the mandate to introduce change. Others need an external perspective and support, because the crisis has already spread across so many areas that it’s hard to prioritize the right actions.
At Pragmatic Coders, we specialize in pulling projects out of difficult situations where most problems are systemic, not isolated – where comprehensive intervention is needed. If you feel your project needs that kind of help, get in touch with us.
