Avoiding Analysis Paralysis in Product Development – Büşra Coşkuner

Here’s what you can learn from this episode of Pragmatic Talks:
The importance of goal-oriented thinking
Büşra Coşkuner stresses that the most important principle in product development is to be goal-oriented. Before deciding what to build, teams must first understand their primary goal. This goal changes depending on the product’s maturity and context.
- Early-stage products: The main goal is often learning. Prioritization focuses on activities that teach the most about the target users, their problems, and potential solutions.
- Mature products: The goal might shift to growth or optimizing revenue.
- Customer outcomes: The best goals are focused on the customer. Instead of thinking about features, think about what the customer is trying to achieve. Helping a user get from one page to another is not their real goal; understanding their true need is essential.
How to prioritize features and ideas
Prioritization is not about using a single framework but about a way of thinking based on your current goal. Once the goal is clear, it becomes easier to filter ideas.
- Start with a hypothesis: Analyze your data and form a hypothesis about what is happening with your product (e.g., “Why are users dropping off after the paywall?”).
- Filter based on your goal: Any idea that does not help test your current hypothesis or achieve your current goal should be put aside. This creates focus.
- Big impact over low-hanging fruit: Especially for new products, it is better to focus on big, impactful changes that will make a difference, rather than getting distracted by small, easy tasks that have little impact.
Managing risk with experiments and evidence
For big and risky ideas, it is crucial to gather evidence step-by-step instead of building the full feature immediately. Büşra explains a process of moving from learning to confirmation.
- Discovery vs. Validation: First, use experiments to discover and learn about the problem space (e.g., through interviews, surveys). Then, use experiments to validate and confirm your assumptions (e.g., with prototypes or manual solutions).
- Gathering commitment: A good way to validate an idea is to ask for commitment from potential users. This can be their time, money (pre-orders, letters of intent), or referrals.
- Confidence level: When prioritizing, confidence is a key factor. Confidence is not a feeling; it is based on the evidence you have collected. Market feedback gives high confidence, while internal opinions give very low confidence.
How to avoid analysis paralysis
It is common for teams to get stuck analyzing and experimenting without delivering anything. This is known as “analysis paralysis”.
- Time-box your research: Set a strict time limit for any analysis or discovery phase, just like a technical spike. For example, give yourself one or two weeks to learn what you need to learn.
- Follow the build-measure-learn cycle, starting from “learn”: 1. What do we want to learn? 2. How will we measure whether we learned it? 3. What is the smallest thing we can build to run that measurement? This keeps the process focused and moving forward.
Measuring what matters
You should not measure everything. Instead, focus on metrics that connect your team’s work to the company’s business success.
- Connect to business goals: Use tools like KPI trees, impact mapping, or Pirate Metrics to show how improving a user action leads to a positive business result. This proves that the product team’s work is valuable and not just a “hobby”.
- Focus on the core value: Identify the key user actions that lead to the product’s core value (the “happy path”). Measure these steps to understand what is working and what needs to be improved.
- Clean your metrics: Regularly review your tracking tools and remove events that are no longer useful. This keeps your data clean and relevant.
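The “connect to business goals” idea above can be sketched as a tiny Pirate Metrics (AARRR) funnel check. This is a minimal illustration, not anything from the episode: the stage names, counts, and the “focus on the weakest stage” heuristic are all assumptions.

```python
# Hypothetical Pirate Metrics (AARRR) funnel: the stage names and
# counts below are made up for illustration, not real product data.
funnel = [
    ("acquisition", 10_000),   # visitors who landed on the product
    ("activation", 3_200),     # completed the core "happy path" action
    ("retention", 1_400),      # came back within 30 days
    ("revenue", 260),          # converted to a paid plan
    ("referral", 40),          # invited at least one other user
]

def stage_conversions(funnel):
    """Conversion rate of each stage relative to the previous one."""
    rates = {}
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        rates[name] = n / prev_n
    return rates

rates = stage_conversions(funnel)
# The weakest stage is one candidate for the team's next goal/hypothesis.
weakest = min(rates, key=rates.get)
for name, rate in rates.items():
    print(f"{name:<11} {rate:.1%}")
print("focus next on:", weakest)
```

A KPI tree or impact map would add the causal story behind each stage; this sketch only shows the mechanical step of tying user actions to a business funnel.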
The future of product management with new technology
Technologies like AI and no-code tools are making it faster and cheaper to build products. This has a significant impact on product management.
- Faster feedback loops: The biggest advantage is the ability to get feedback from the market much faster. This helps teams learn quickly if they should stop, continue, or change direction.
- Focus on strategy, not just execution: As building becomes easier, the role of product management becomes even more focused on strategy, discovery, and ensuring the team is building the right thing for the right market.
- A continuing trend: This acceleration is not new. For the last 10–20 years, technology has helped teams build and learn faster. AI is simply the latest and most powerful step in this evolution.
Full transcript of the episode
Introduction
Wiktor Żołnowski: Welcome to Pragmatic Talks, a podcast and video series where we discuss startups, contemporary digital product development, modern technologies, and product management. This episode is brought to you by Pragmatic Coders in collaboration with Ace! Agile Software Development and Product Management conference. We believe that everyone should have equal access to knowledge about product development and entrepreneurship, and also, everyone should have the opportunity to apply it in pursuit of making our world a better place. Through this series, we aim to create an impact on the future world. In today’s episode, we are joined by Büşra Coşkuner, a distinguished product management coach and trainer. Büşra is renowned for her expertise in helping product teams make data-driven decisions and improve product discovery and validation processes. Her innovative approaches have empowered numerous teams to create impactful products that delight customers. In this episode, we delve into the nuances of prioritizing product features, making informed decisions based on data, and the importance of understanding customer needs. Büşra also shares insights on navigating the complexities of product life cycles and optimizing for business impact. And now, ladies and gentlemen, please welcome Büşra Coşkuner. Welcome to the next episode of Pragmatic Talks that we are recording at the Ace Conference in Kraków in 2024. Today with us, we have a brilliant guest, Büşra, who is an expert in product development and product management, with whom I would love to talk today about prioritization and decision-making in terms of what to develop and what requirements to choose when developing your products. So welcome, Büşra.
Büşra Coşkuner: Thank you for having me.
Where to start when building a product
Wiktor Żołnowski: So the first question that is a very wide question and a very open-ended question that I would love to start with, and then we’ll move more into details, is: okay, let’s assume that I’m building a product. How should I decide which features, which requirements to start with? Where to start? What to build first? How to make a decision later on which requirements to choose? A very wide topic, let’s start from here.
Büşra Coşkuner: This is a super wide topic, and the perfect product manager answer will be, “It depends,” because this is really, really wide. Let’s start with this point: every product goes through the product life cycle curve, or different stages of the product life cycle. Depending on the product’s maturity, the set of things you’re looking for – in general, not specific backlog items – will be really different. When you’re at the very beginning, you are trying to decide what activities to do in order to figure out what you actually want to build. When the product is more mature, you’re trying to optimize either for growth or, if it’s a cash cow, for how you can keep making cash with your product. So it’s very specific to the maturity, and also to the industry of the product and the product type – whether it’s software as a service or e-commerce. Therefore, there is no real answer to that question, to be honest. We could talk for an hour or even longer about all the different options we could take.
The importance of goal-oriented thinking
Büşra Coşkuner: In general, as a rule of thumb, think in a goal-oriented way – and by goal, I don’t mean KPIs, but more the why, the current direction. This could be a user-oriented goal, which we call an outcome – basically the user’s goal, right? So what is the goal of my target group, or the customer, depending on B2B or B2C – again, it depends a lot? What is my target group trying to achieve here? I will slap the person in the face – I definitely will not, it’s just a metaphor – but I do get really mad when somebody says something like, “Oh yes, the person tries to get to the next step.” Getting from this page to that page is not their goal. It’s a means to an end. What is it that they’re really trying to do? In an online shop, it’s obviously not buying something. It’s really not even about the buying activity, right? They’re trying to find something relevant to their current need, and you, as a product person, need to figure out what that is and then help them get it through the action of buying it. Well, even buying is a solution. They could borrow, right? There are rental platforms; they could rent. Even buying is a solution; it’s not really the thing that they’re trying to do.
Depending on the goal that you have in mind, you will need to do something different. I’m a product management trainer and coach, and I do multiple things on the side. One business is that I run courses, so when I build my courses, I have a different goal in mind. Then I have a matcha business in Switzerland – I sell matcha – and that has its own goal. And I’m right now also working on a software as a service for copywriting that has a completely different goal. They are all in a different life cycle. With the SaaS product, my main goal is learning. When my main goal is learning, I will prioritize those activities first that will help me learn the most about my current learning goal: the target group. Once I know the most about my target group, maybe my next learning goal will be the problem space. Once I think I have learned enough, my goal will change. It will become, “Okay, figure out the best combination of problems based on the target group, and what a good solution for that is.” My goal changes based on what’s next. And because my goal changes, the way I choose what to do next changes as well.
So again, when you’re early in your journey, this happens more frequently and faster. When you’re in the growth stage, or later, when the product is pretty mature, these changes don’t happen so quickly anymore, right? You have pretty stable goals over a stable timeline. And then the type of goal might be different. What I always advocate for – and I know that in some companies this is an ideal scenario that will never quite happen, so reality looks a bit different – is: start from the customer and figure out what their goal is, their main goal. Then figure out what the behaviors are that take them to that goal, or what they should be, so that you can help them actually achieve it. Always start with the customer. Always start with their need. Especially, start by talking to your customers. I think that’s universal. Regardless of which stage you are in, it’s always good to talk to the customer. The farther away you are from your customers, the more difficult it will be for you to be successful.
Who should talk to customers?
Wiktor Żołnowski: What do you mean by “you”? Is it like a product manager or as a company?
Büşra Coşkuner: As a company. So, should everyone in the company be involved in this kind of discussion, or just specific roles? It depends. When you’re a startup – I mentioned that the software as a service product is something I build together with a friend, my matcha business is something I run together with my husband, and the course business is completely my own. But when I was employed, I would have a team, right? When I started at Doodle, we were 35 people, so it was easy to have this exchange and talk about things that would spread automatically through the company. And that’s fine. But then we scaled, and within two years or so, we were about 100 people. There, you don’t have those short communication lines anymore, so the people who are involved change. You start having those product development teams – or, as I like to call them, product teams, because it’s not about development, it’s about the whole creation. You have those cross-functional teams. And then it’s a matter of which roles you have, but definitely a matter of the responsibilities of the people. When you have a product manager together with a product designer and the tech lead, for example, that’s the perfect kernel of the team that goes out and figures these things out, right? One thing that we also advocate for is to bring in those different views to figure out what to build.
Filtering ideas and making decisions
Wiktor Żołnowski: So let’s assume that we have a list of ideas that we want to implement. And let’s say that we already have some product; we are already on production, we have some users, we have this list of ideas. How to choose which of them to start with? Or how to filter those ideas or sort them out? How to better decide what to do next, especially in the startup environment when we have a limited budget, or even in the corporate world where we usually have limited time more than budget even, and some deadlines? So where to start? How to figure it out? There are, of course, different techniques for how you can do that. Which one is your favorite?
Büşra Coşkuner: I will not do any name-dropping, because I do not have a favorite. Again: goals, thinking, right? Back to first principles. I’m always a fan of starting with a hypothesis, in a first-principles way. So again, it depends – for example, a data-heavy environment versus a not-data-heavy environment. What’s your hypothesis on what is happening right now? How does your product perform, and in which part, and what is your goal? Again, goal-oriented, right? Once you know what your goal is – let’s say, hypothetically, you’re building a new product and you’ve just launched. You just introduced a paywall, and you see the number of users go down. At first it looks like, “Oh, shit.” No – this is normal. You have to understand: once people need to pay, you only keep those people who really see value and want to continue working with it. And now you can create your hypothesis about what’s happening. Either way, you have to look into the data: what’s the conversion rate from trial to paid, for example? How many did we really lose? If we end up with only 0.5% of the users we had before, we might be in big trouble. And now you start to hypothesize: okay, maybe the price is too high. Maybe we show the paywall in the wrong places. Maybe the overall value isn’t good enough. Once you have these hypotheses, your goal, and your analysis of what you think is happening, you can decide and filter. You pick the one you think is probably what’s happening, and every idea that does not help you fix this one issue, or test this one hypothesis, is out.
So I’m an advocate of knowing what you’re trying to do, or trying to test, or trying to learn, right? That’s your goal. Then filter out everything that does not fit it. The question then becomes, of course: where do these ideas come from? For anything where people said, “I believe this is going to be it, let’s build that,” without any basis – really just a good breakfast or something like that – you need to find more data points, more evidence, that says, “Yeah, build it,” right? If you don’t have that, you have to dig deeper to understand whether you should really build it. But some ideas come from a real channel. Another example from Doodle: we were building new products and were in frequent feedback loops with our users, and most of what came from there was, of course, feature requests. So what we did is we built something like a hypothesis heat map. A feature requested by multiple users wouldn’t simply be the feature we would build. We would look into it: what’s happening? Is there something that is not working well? What do we have to fix? And then we would find out whether the feature they are requesting is the right feature, or whether we should build something else to fix the problem behind the feature request, right?
At the same time, sometimes it’s about our own goal. For one of the products we had launched, which I was responsible for, we were wondering why our users don’t invite their guests into the Doodle system – so that the whole scheduling happens in Doodle – but rather send the links out. And there was a very, very small feature request at the bottom of our list asking for a text field to customize a message. That sounded like it could be the thing. And because it was so easy to build, we said, “Let’s just build it and see what happens.” You don’t have to test everything: it was easy to build, but also very easy to revert if it didn’t work – we would just take it out. And boom, we doubled the number of people who invited others into the poll instead of sending out the link. These are just examples to show that you can look at the same idea list along different dimensions, and to advocate for not just starting from the ideas, but actually trying to figure out what’s really happening and what goal you’re trying to achieve.
Low-hanging fruit vs. big impact
Wiktor Żołnowski: The goal comes first. Thanks to knowing what the goal is, what the job to be done is, we can filter out the requirements that don’t fit the goal we are currently focused on – because I think it’s also important to focus on one goal at a time, or at least limit the number of goals to a minimum, and eliminate everything else or just put it at the bottom of the backlog, where we may or may not come back to it later. So we limited that number. And then, what I understand from what you’re saying is to find the things that are low-hanging fruit, like the last one you mentioned: something that is easy to build, where we don’t need to test it, interview more users, or collect data, because it will be easier to just implement it, release it, and see if it works. That’s one approach, for things that are easy to do. For more complex things, you recommend finding more data, evidence, insights – qualitative and quantitative. Yes?
Büşra Coşkuner: One thing I would change in your summary, however, is that I’m not advocating for finding the low-hanging fruit. I actually believe the low-hanging fruit is a bit of a blocker for us, because we have learned – in business school or wherever – “Look for the low-hanging fruit, because that’s what will bring us a lot of money fast.” Again, it depends. Especially when you’re early with the product, you actually want to ignore the low-hanging fruit and find the big chunks. It can be a completely new feature for your product. It can be an improvement of your existing product. It can be a completely new product that is supposed to push the main product. So there are different ways of solving the impactful thing you have just found – the job to be done of the user, or the goal, or the outcome, or however you want to call it; it’s all jargon. How you solve it is a separate discussion from what you’re focusing on. And what you’re focusing on, especially when you’re early, is big stuff. The thing that makes a bang and makes everyone look at you. Big stuff.
Managing risk and gathering evidence
Wiktor Żołnowski: But what about risk? Should the riskiest things come first? Or maybe not building the biggest, riskiest things, but testing, validating, or looking for more evidence for the riskiest things at the beginning? If you think something is going to be the most impactful thing, but at the same time it’s big – as in complex, difficult, and risky – then it’s also your riskiest assumption, which you want to de-risk and find evidence for before deciding whether you should keep thinking about it at all, right?
Büşra Coşkuner: It’s all an “it depends” decision, right? If it’s impactful but easy to build and without a lot of risk, there are two ways: either you really build it, or you try to figure out why you think it’s such low-risk stuff. If it’s big and impactful, can it really be low risk? Maybe yes, right? Then it’s fine – maybe no one ever noticed it before. You never know. But in general, the rule is: you try to find the impactful stuff, and if it is also risky, you want to make sure you are not shooting yourself in the knee or the foot or wherever. So first, the “it depends” part: if it’s small, your gut feeling says it might have a big impact, and it looks pretty easy to build and easy to revert – so the risk is really, really low – try it out. If it’s something very big, as in very risky, you want to make sure you have multiple checkpoints to gather evidence, like experiments, right? When we say “test,” we actually mean experiments to gather input, learn from it, and understand the issue better – the problem, the outcome, the potential solution. But we really do it step by step. There are lots of experiment types, but in the end they all boil down to some mix of qualitative input through interviews or surveys. Some people see surveys as quantitative input; for me, they’re still qualitative, because surveys still don’t really tell you much about numbers, but more – if you set them up in a good way – about intentions. But you start with this kind of low-evidence thing.
So I like to distinguish between discovery and validation. So experiments to learn versus experiments to confirm. And first, we want to discover and learn about the problem space or the target customer or whatever is risky about it, right? So we want to learn about it. Can we see any causation between X and Y? Correlation is something that we can see in data, but causation is really difficult if you don’t have a data scientist and the right setup for that.
Wiktor Żołnowski: Yeah, true data and one-way experiments, exactly.
Büşra Coşkuner: And then from there, we can move on to some type of prototyping, right? It might be a paper prototype, maybe wireframing – first low-fidelity, then maybe high-fidelity – but some sort of prototyping. Lego Serious Play is also a sort of prototyping, right? It helps us understand whether we got it right and whether what we have in mind somehow resonates. But it’s still low evidence. Then we slowly move on to experiment types that help us validate – things like, “Okay, if this idea resonates, would you pay for it? Would you sign a letter of intent? Would you help me get a warm intro to your manager? Would you spare some more time and look at another iteration?” Anything that sounds like commitment, right? Time is commitment, money is commitment, referrals are commitment – any type of commitment. Then we move further and actually build some stuff. It can still be just a clickable prototype, or – we have no-code tools by now – we can use those, right? Just make sure the core value is delivered, even if you deliver it manually. There are experiment types like Wizard of Oz or concierge where you do the work manually: in the first, people don’t see that you’re doing it manually; in the second, they know and see it, but you still deliver it manually. Fine. So you continue learning. You are still learning while you’re confirming: you confirm that the value proposition is the right one, but at the same time you are learning the details about your offer – what you need to tweak, what is working well, what is not, what is resonating, what is not. And then you can go build it. But again, this was a very general description, right? What you can do depends on the type of product you’re working on, and so on.
And for anything in between – between no risk and high risk – you try to find the balance, understanding how many experiments you really need in order to learn and confirm. Maybe you have learned enough and only need to confirm. That’s the kind of balance you have to find.
The role of confidence level in prioritization
Wiktor Żołnowski: I watched your presentation today here at the conference, and I strongly recommend it to everyone. Especially one slide where you show a table with criteria, like the impact you already mentioned and a few others, each rated from one to ten – and then, in the last column, a multiplier for the confidence level. I think that’s what you’re talking about right now: if something is at least somewhat impactful, not too difficult to do, and you have high confidence in it, then sometimes it will end up with a better score. And that high score will help you as a product manager decide that maybe this requirement, this solution, is worth building first and foremost. Could you tell us more about this confidence level?
Büşra Coşkuner: Yeah, sure. The slide I showed was an example of ICE scoring – Impact, Confidence, Effort. Impact is something you have to define. What does impact even mean? You have to define in your company what it actually means. Is it business impact? Is it impact on customer behavior? How do we actually measure impact? A score is a score, but you need something that defines what it really means. And the confidence level is directly linked to the evidence you have. If it’s anecdotal evidence – “Oh, in a company I worked at in the past we had XYZ, let’s build it here too; it worked well there, I’m sure it will work here as well” – this is anecdotal. Just because it worked there doesn’t mean it’s going to work for you here. So your confidence level cannot be really high. Anything we do that creates real market feedback increases the confidence level. Anything that is more internal is actually zero confidence – it’s only opinions. One may be a more formed opinion, an opinion from a more experienced person, but it’s still an opinion. Even a competitive analysis showing that all the competitors have it – “We have to have it too” – is a signal, and maybe that’s medium confidence: “Okay, if everyone has it, why don’t we? Let’s consider it.” But the main point is not even the score in the end. Even if we expect a mediocre impact, are quite confident about it, and it’s easy to build, it’s your decision in the end. You have to make a call. And it’s all a bet – a bet that the feature is going to deliver what you expected. Maybe at that moment a mediocre result is good enough, right? And you expect it to be. Then go build it.
But if you say, “No, no, no, we don’t have the time, the resources, or the willingness to spend anything on anything mediocre,” then you still reject it, even though the score might be high. So you shouldn’t get too attached to the scores; they are just one more input for your decision-making.
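The ICE scoring discussed above can be sketched in a few lines. This is a hypothetical illustration, not the exact scheme from Büşra’s slide: the formula variant (impact × confidence ÷ effort), the three-level confidence scale, and the idea names and ratings are all assumptions made for the example.

```python
# Hypothetical ICE-style scoring: impact rated 1-10, effort rated 1-10
# (higher = more work), and confidence used as a multiplier derived from
# the kind of evidence collected. All names and numbers are illustrative.
CONFIDENCE = {            # confidence comes from evidence, not feelings
    "opinion": 0.1,       # internal opinions: very low confidence
    "competitor": 0.5,    # "all competitors have it": medium at best
    "market": 1.0,        # real market feedback: high confidence
}

def ice_score(impact, effort, evidence):
    """One common ICE variant: impact times confidence, divided by effort."""
    return impact * CONFIDENCE[evidence] / effort

ideas = [
    ("redesign paywall placement", ice_score(impact=8, effort=4, evidence="market")),
    ("clone competitor feature", ice_score(impact=7, effort=5, evidence="competitor")),
    ("stakeholder's pet feature", ice_score(impact=9, effort=2, evidence="opinion")),
]

# Sort high to low; the score informs the decision, it does not make it.
for name, score in sorted(ideas, key=lambda pair: pair[1], reverse=True):
    print(f"{score:5.2f}  {name}")
```

Note how the opinion-backed idea scores lowest despite high impact and low effort ratings, which matches the point that confidence multiplies everything else.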
Avoiding analysis paralysis
Wiktor Żołnowski: What do you think about the situation that founders or product managers fall into where they are testing everything and are paralyzed by all the uncertainty, all the things they do not know yet? Instead of just delivering something for a couple of weeks, they keep testing and experimenting and, at the end of the day, do not deliver anything. I’ve seen it a couple of times.
Büşra Coşkuner: Analysis paralysis, exactly.
Wiktor Żołnowski: I’ve seen it a couple of times. I always thought that when I built my own product, I would never get there. And after two or three months of building my product, guess what? I was there. I needed a person from outside our team to come, see what we were doing, and ask two or three questions like, “Okay guys, what is your goal? How is this bringing you closer to that goal?” And we were like, “Oh no, it’s not moving us closer.” Yeah.
Büşra Coşkuner: Wait – you just answered your own question. Okay. So the best thing you can do is really time-box it, just like a technical spike, right? We time-box that for a reason. And when we time-box an analysis because we don’t want to get into paralysis, it helps us shift our mindset so that we ask ourselves, “What do we really need to learn?” We have this much time, maybe a week, maybe two. “What do we really want to learn? What do we really need to learn?” It’s the build-measure-learn cycle – actually, it starts with learn. “What do we want to learn?” “Okay, great. We want to learn this. How do we measure that? How do we know if we learned what we wanted to learn? We need to measure this.” “Okay, what’s the smallest thing we can do – the build part – in order to measure this so that we can learn that?” When you think of it this way, it’s not a guarantee, it’s just a help to not get into this situation too often. I think everybody gets into this situation. We were in it as well, right? With the new product I was describing at Doodle, I was like, “Okay, we need to understand this and that better.” And at one point, our CPO said the same thing: “Let’s launch. What do we really want to learn? We want to learn if this is valuable enough for users that they would cross that line. And we cannot learn that if we don’t launch. Launch it as it is. It already offers the main value, the core value of what we had in mind. We will see if the core value resonates.”
What to measure and why
Wiktor Żołnowski: Measuring things. What to measure? Everything?
Büşra Coşkuner: Everything? No, no, no – and then spend most of your time analyzing that data? That’s a different type of analysis paralysis, right? Then you are not making a decision, because if you move this metric up, that metric goes down, so you cannot do this, and you need to revert that. So again, it’s all about goals. When we want to decide what to measure, we need to look from different angles. One thing we need to look at is: what’s the core value of the product? What are the core actions that lead to the core value? What is the happy path? Here we already have inputs that help us understand which steps to measure, and when you know which steps to measure, you usually also know what to measure. Getting every corner case is difficult, and you don’t want that. What you want is to understand what is really happening in your product. You will inevitably start out tracking a lot of things, so what I advise teams now is to do a cleansing every now and then: check in your tools whether there are events that are not fired at all, or only very rarely. Then get rid of them – clean it up, remove them.
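The event “cleansing” described above can be sketched as a simple pass over an event-count export from an analytics tool. The event names, counts, and threshold below are hypothetical, for illustration only.

```python
# Hypothetical tracking-event cleanup: flag events that never fire, or
# fire very rarely, as candidates for review and removal. Event names,
# counts, and the threshold are made up for illustration.
event_counts_30d = {
    "checkout_completed": 12_450,
    "cart_viewed": 48_900,
    "legacy_promo_clicked": 3,     # leftover from a removed campaign
    "beta_toggle_used": 0,         # feature flag long gone
    "search_performed": 31_200,
}

RARE_THRESHOLD = 10  # fires per 30 days below which an event is suspect

def cleanup_candidates(counts, threshold=RARE_THRESHOLD):
    """Events that fired fewer than `threshold` times: review, then remove."""
    return sorted(name for name, n in counts.items() if n < threshold)

print(cleanup_candidates(event_counts_30d))
```

As the anecdote that follows shows, a rare event is not always dead – it may be an anomaly worth investigating first – so the output is a review list, not an automatic delete list.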
I had one team where we could see that users jumped from the cart to a seemingly random page. And that was a pattern. The discussion was, “What should we do now?” We looked at that random page, and it contained information that would basically create trust. So imagine you’re a new user of a shop, and the shop is completely new, and you don’t know if you can trust it. You want to buy, right? Or you hesitate and want to check if you can trust this shop. So users would go to that specific page that shared some information about the company and so on and so forth. And that page looked completely broken, so zero trust, right? Of course they would abandon the cart. What we did then was fix that page. And once we saw that this weird behavior pattern didn’t happen anymore, or was quite normal, as in: after looking at that page, users usually went back to the cart, it really fixed itself once the page was fixed. We said, “Okay guys, it’s your decision, but now that you see everything is going okay, you can also remove that tracking,” right? So find those anomalies and then figure out if it’s still an anomaly and whether you really still need to track it or not.
So that’s the tracking aspect of metrics. The other part is the business aspect of metrics. Here it’s important that we as product teams make sure we can connect our success to business success, because otherwise everything we do is a hobby, to be honest. There are different methods and frameworks that help you do this, such as a KPI tree. I love impact mapping for that, for example, because it directly connects business goals with user outcomes and helps you eventually find the features you want to build. And once you have an impact map, you can put metrics on it and say, “Okay, we need to measure this. Here’s a metric that we need to check,” right? You can do that very, very nicely with an impact map. But any method that helps you build this connection between business goals and product goals works, as in hopefully outcomes, because we are customer-centric, hopefully. And if not, you should think about it. You know, there’s a thing called customer centricity. Anyway, that’s my Berliner sarcasm. Anything that helps us connect those levels is a success driver, because this way we can prove whether our work helps or not. And then we also know what we should not focus on and where we should not spend money.
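An impact map of the kind she describes is essentially a tree from business goal to actors to user outcomes to deliverables, with a metric attached where the outcome connects back to the goal. A minimal sketch, with every goal, actor, and metric invented for illustration:

```python
# Hypothetical impact map: goal -> actors -> impacts (user outcomes) -> deliverables.
# Each impact carries the metric that proves the connection to the business goal.
impact_map = {
    "goal": "Grow subscription revenue by 20%",
    "actors": [
        {
            "who": "new trial users",
            "impacts": [
                {
                    "outcome": "reach the core value faster",
                    "metric": "trial-to-paid conversion rate",
                    "deliverables": ["guided onboarding", "sample project"],
                }
            ],
        }
    ],
}

def metrics_for_goal(imap):
    """Collect every metric that ties product work back to the business goal."""
    return [
        impact["metric"]
        for actor in imap["actors"]
        for impact in actor["impacts"]
    ]

print(metrics_for_goal(impact_map))  # → ['trial-to-paid conversion rate']
```

The useful property is that any deliverable without a path up to the goal, or any impact without a metric, is immediately visible as a gap.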
Pirate Metrics. I love Pirate Metrics. They are so easy, and they really help you connect those worlds, right? When I say Pirate Metrics or customer journey maps, I really mean maps, not funnels. So not a linear thing; it’s a map that flows the way people actually use your product and how one step leads to another. Like, what part of your product is the activation part, or where can users experience activation? And what part of your product shows retention, right? So see it as a flow. These maps totally help you figure out what’s happening in your product. And when you find those paths that lead to the conversion, the core action, and fix and improve them, then you can show the connection to the business goals, right? So that part of the metrics topic is also important: we really need to understand, or create, the connection to the business. Hopefully that was actionable; since I don’t have anything I can draw on, I hope it was not too abstract.
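Treating Pirate Metrics (AARRR) as a map rather than a strict funnel could look like the sketch below: instead of forcing users through a linear sequence, it records which stages each user actually touched, based on their event stream. The stage-to-event mapping and all event names are made up for illustration.

```python
# Hypothetical mapping from AARRR stages to the product events that signal them.
STAGE_EVENTS = {
    "acquisition": {"visited_landing_page"},
    "activation": {"completed_first_poll"},
    "retention": {"returned_within_7_days"},
    "revenue": {"started_subscription"},
    "referral": {"invited_colleague"},
}

def aarrr_counts(users):
    """Count how many users touched each stage.

    users: dict of user_id -> set of observed event names.
    A user can appear in several stages in any order — a map, not a funnel.
    """
    return {
        stage: sum(1 for events in users.values() if events & stage_events)
        for stage, stage_events in STAGE_EVENTS.items()
    }

# Example: u3 paid without the "retention" event ever firing.
users = {
    "u1": {"visited_landing_page", "completed_first_poll", "returned_within_7_days"},
    "u2": {"visited_landing_page"},
    "u3": {"visited_landing_page", "completed_first_poll", "started_subscription"},
}
print(aarrr_counts(users))
# → {'acquisition': 3, 'activation': 2, 'retention': 1, 'revenue': 1, 'referral': 0}
```

Gaps between adjacent counts point at the paths worth fixing, which is exactly the connection to the business goals she describes.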
The future of product management with changing technology
Wiktor Żołnowski: Yeah, but I think that everyone who is more interested can watch your presentation from today as well. And they can definitely find all the presentations that you gave in the past on the internet. So that’s great. Okay, last but not least, what about the technology that is changing right now? What about things such as, and I don’t want to talk about AI only, of course, things that are changing or have actually changed in the last, let’s say, 10 years? I remember that 10 years ago, building a product required a lot of people: a bunch of developers and engineers who were building stuff for a long time, like one or two years, before things got done and got to production. Right now, pretty much the same systems and platforms are built by a team of three to five people in six to seven months. That shows that things are built faster. This feedback loop is much shorter than it used to be, and it’s getting shorter with every new invention. Things are speeding up. How do you foresee the future of product management in this context?
Büşra Coşkuner: This is a super interesting question. And funny enough, two weeks ago, there was a Craft conference, and I gave a talk about that. Ta-da!
Wiktor Żołnowski: I haven’t watched it yet because most probably it’s not even released.
Büşra Coşkuner: I’m not sure if it was recorded, to be honest. So for anyone who wants to see that talk, sorry, next time. Next time you have to invite me. Kind of. So my view on that is, I’m coming at this from the business perspective, okay? I know developers say, “No, no, it’s not going to replace our jobs. It’s so stupid. It’s not even intelligent. Why do we even call it intelligent? It’s not. It’s just an LLM.” Yeah, it’s just an LLM today. It’s only been one and a half years since OpenAI released it to the public, and look what we are doing with it today. I mentioned my SaaS for copywriting. Its core value, the core feature, runs on an LLM. And I built it initially completely on my own with a no-code tool and an LLM, before I asked my friend to help me. So where’s the developer I needed to build it? I didn’t need one. Now I have Timothy, and he is the product engineer in our team. It’s way more fun than building it alone, and he can obviously make way more complex connections today than I could on my own with no-code and AI. However, I believe that as long as you have something extra beyond just coding, you’re going to have a bright future. You have to figure out what that extra is, but you have to have it.
And the business perspective on this is the following: now that we can do more in the same time, I have the option either to keep my team as it is and do more in the same time, or to save costs and keep, I don’t know, only x% of my staff and build the same in the same time with fewer people. It depends on the manager, what they want and what they’re trying to achieve, which decision they make. It doesn’t matter if an LLM cannot do it today. The system, however it’s going to look in the end, will do it tomorrow. Maybe not an LLM alone, but an LLM in a system. And the fact that we now get feedback loops much faster, because we are faster to market, is, if you ask me, incredibly valuable, because we want fast feedback in order to understand quickly whether we should stop, continue, or pivot. At the same time, I blame us as humans for insisting, and we keep insisting, for so long on coding something before we release it to get feedback. We could do it without coding. We could get feedback early on without coding and still be fast.
Wiktor Żołnowski: Sure. This is something I actually have in my mind in exactly the same way. And not necessarily only because of AI, LLMs, and that kind of stuff. I see this technology progress even in programming languages, frameworks, and tools, not necessarily AI itself. All of those things sped up product development in the last decade, not just AI. AI is just the last year and a half, and in programming, maybe only the last year. And it’s moving forward very, very fast. I wouldn’t even try to guess what we will be talking about in the next two years in terms of AI and building new products, but also no-code and that kind of stuff as well. Maybe it’s a little bit philosophical, but not necessarily just philosophical, because we already have data to support these kinds of theories.
Büşra Coşkuner: Yeah, exactly. And I really like that we can get feedback so, so fast, because in the end, when we build a product, it’s about this, right? It’s about understanding if it resonates in the market. And the market is the combination of our target group, maybe a geographical region, the industry, the competitors in the field we want to move in, right? It’s a combination of multiple actors and criteria. That’s the market. And if we can get feedback from the market early on, meaning not only customer feedback, but also: we make a move in the market, how will the competitor react to that? Aha, that’s another piece of information, right? Will they go into panic mode like Google did when OpenAI released ChatGPT? Or will they stay totally calm because there’s no harm, right? This is also a signal to everyone else in the market about what’s going to happen. And these are valuable and very important developments in the tech scene. But as you say, it’s like anything that has been happening in the last 10, 20 years, right? It helped us get feedback faster, and that’s good. We want that.
Wiktor Żołnowski: That’s good. And there are other things; I mentioned market research and checking the market. Recently, at our company, we created an AI market research tool, a simple LLM-based tool where you describe your idea, answer a few questions, and get an answer about your competition, market size, and other things. Those things used to take us a week or two to find out. Right now, it’s a matter of seconds, not even. This is amazing, and this is super cool. Of course, there is some small level of hallucination from time to time, but most of the information there is pretty valid. Even for my own startup that I’m building after hours, I thought, “What could I expect? Just an AI that would be stupid at the end of the day.” Actually, the result was almost exactly the same as what we had figured out before, during the long time we spent on market research. But what the AI did better was the competition analysis. It provided me with more competitors than I had ever found before. So that was actually a wow moment.
Büşra Coşkuner: Yeah, and of course, today there definitely needs to be a human to check whether the results are maybe super stupid or even a lie, right? And then we have the whole discussion about intellectual property and so on. We have these problems today, but we will fix them. We will fix them. Another example: I joined David Bland’s course two or three weeks ago, where he shared his ChatGPT prompts that are supposed to help you generate hypotheses to then experiment on the viability, feasibility, and desirability aspects. And the assumptions that ChatGPT gave back were really good. In the end, we came up with almost all of the same assumptions. There were maybe two that were new to me and that I liked. But this thing, as you said, did it in seconds. My friend and I spent at least a couple of hours thinking about the hypotheses, plus all the things that come after such a meeting, right? “Oh yeah, there’s another assumption.” And then after the meeting, “Oh yeah, there’s another assumption. Let’s also put this one at the top of our deck,” and so on and so forth. And this thing did it in seconds. Things are changing.
Wiktor Żołnowski: And we are living in really interesting times. Okay, so thank you very much. Could you also tell people where they can find you, where to find the knowledge you share with others, and how best to contact you?
Büşra Coşkuner: Sure. The best way you can find me is on LinkedIn. Connect with me, however you prefer. And there I share a lot of practical tips and tricks regularly because that’s really my goal. I want to make it easier. I want to make sure that abstract product theory is more tangible and practical. And that’s what I share.
Wiktor Żołnowski: Thank you. Thank you very much for today’s discussion. And thank you to all of you for watching. I strongly recommend and invite you to subscribe to our YouTube channel so you never miss a new episode. Thank you.
Outro
Wiktor Żołnowski: Thank you. Pragmatic Talks is delivered to you by Pragmatic Coders, the first-choice software development partner for startup founders. Be sure to catch all new episodes. Subscribe to our YouTube, Spotify, or Apple Podcast channels. And if you are thinking about building your own startup or struggling with product development, contact us and find out what we can do together.
