Gemini CLI in Product Management – Faster Decisions, Better Context

Here’s what you can learn from this episode of Pragmatic Talks:
The core philosophy of using AI in product management
- Beyond saving time: The main goal of using AI is not just to get time back. It is to reinvest that time into higher-value activities, such as deeper problem analysis and more human-to-human interaction with clients and teams.
- Increasing value and reducing risk: By automating repetitive tasks, a product manager can focus on strategic thinking, getting closer to the customer’s problem, and ultimately delivering more value or reducing project risks.
- AI leverages expertise: The tools shown do not replace a product manager’s skills. Instead, they enhance and speed up the work of an experienced professional who understands the project context and can validate the AI’s output.
The product manager’s AI toolkit and setup
- Core tool – Gemini CLI: Dariusz uses Gemini CLI, a console-based AI tool, to manage project information and automate tasks.
- Interface – Visual Studio Code (VS Code): He uses VS Code, a developer’s code editor, to have a unified view of his project files and the command-line terminal in one place. This makes the workflow much easier to manage.
- Context is key – The `gemini.md` file: A central file named `gemini.md` acts as the “brain” of the project for the AI. It contains all high-level context like the product vision, strategy, user personas, and a decision log. This file is always loaded into the AI’s memory.
- Managing context smartly: To avoid overwhelming the AI’s limited memory, more detailed information (like specific business processes) is kept in separate files. These files are loaded into the context only when needed for a specific task. A hypothetical project layout is sketched just after this list.
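To make the setup concrete, here is a hypothetical project layout consistent with what the episode demonstrates. The directory and file names are illustrative, and Gemini CLI conventions such as the `.gemini/` directory may differ by version:

```text
betalis/
├── gemini.md                  # always-loaded “brain”: vision, strategy, personas, decision log
├── processes/
│   └── user-onboarding.md     # detailed context, loaded only when needed
├── templates/
│   └── user-stories.md        # output template for generated user stories
└── .gemini/
    ├── settings.json          # MCP server definitions (Jira, Miro, Slack, email, Perplexity)
    └── commands/
        └── unhappy.toml       # custom command for risk / unhappy-path analysis
```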
A step-by-step workflow for generating user stories
- Step 1: Generate risks from a business process: Starting with a documented business process (e.g., user onboarding), a custom command is used to ask the AI to identify potential risks and “unhappy paths.”
- Step 2: Human review: The AI-generated list of risks is always reviewed by the product manager. This is a critical step to remove hallucinations or irrelevant points based on expert knowledge of the project.
- Step 3: Create user stories using a template: The refined list of risks is then used to generate user stories. A predefined template is crucial here. The demo showed that without a template, the AI produces generic and less useful user stories. With a template, it creates structured stories complete with acceptance criteria.
- From hours to minutes: This automated process can turn a task that would normally take a few hours of manual work into a process that takes only a few minutes, plus review time.
Integrating AI with essential product tools like Jira and Miro
- Using MCP servers: The Gemini CLI can connect to external tools like Jira, Miro, Slack, and email through a protocol called MCP. This allows the AI to perform actions in other applications.
- Automating Jira backlog creation: The user stories created in the previous step were automatically pushed to a Jira project, creating a backlog with just one command. This avoids manual copy-pasting and saves significant time, especially with large backlogs.
- Facilitating team collaboration in Miro: The same user stories were then sent to a Miro board as sticky notes. This prepares a workspace for the development team to have refinement meetings, discussions, and brainstorming sessions.
- Creating a feedback loop: It is also possible to read information back from Miro (e.g., technical notes added by the team) and use the AI to update the user stories in the local files or directly in Jira.
How to get started and the right mindset
- Standardize before you automate: AI works best on structured, standardized processes. If your workflow is chaotic, automating it will only create faster chaos. First, define your templates and processes, then apply AI.
- Learn from developer resources: The documentation for tools like Gemini CLI is often written for developers. A product manager needs to read “between the lines” to adapt these tools for their own use cases.
- Embrace new tools: Product managers can benefit from becoming comfortable with developer-like environments such as the command line and VS Code, as it can also improve communication and understanding with the development team.
Full Transcript
Wiktor Żołnowski: Welcome to the next episode of Pragmatic Talks. Today, with me is Dariusz Mozgowoj, with whom we are going to discuss how he is using AI in his day-to-day job as a product manager. Today you will hear about automation that is speeding up the work but is also increasing the ROI in terms of delivered value.
Dariusz Mozgowoj: Hello everyone. The way of work which we’re going to show you today definitely gives me a lot of time back, but this is not the ROI I’m looking for. I’m looking for giving more value or reducing the risk by going deeper into the problem. I mean, I’m using this time to get closer to the client, getting closer to the problem. It definitely pays off.
Wiktor Żołnowski: This episode will be about using AI to actually increase the human-to-human interaction in product development. Stay with us, and I hope you will enjoy this episode. In this podcast, we talk to founders and experts to share real stories and lessons from building and scaling digital products and companies. Pragmatic Talks is for those who want to understand how digital products are really built and grown. No fluff, no buzzwords, just honest conversations.
How product managers can use AI
Wiktor Żołnowski: AI in product development, AI in the product manager role–how can AI help product managers like you work with clients, with stakeholders, and with the development team? I know that you have a bunch of things to show us today, but maybe let’s start with a short explanation of how you are using AI in your day-to-day job at this moment.
Dariusz Mozgowoj: AI on the market can give us a lot of opportunities. Some of them are related to the fact that when you’re working on projects, you’re working with a lot of context over the course of the project. This context grows, and it’s quite difficult for a product manager to manage it all in their head. I’ve taken existing software, Gemini CLI, which was designed to manage context for developers, and I use it to manage a different kind of context: the knowledge of the project. What it gives me is speed. It gives me the chance to quickly connect the dots between the knowledge on my side and the knowledge I get from our clients or from the team, and compose it into an additional piece of knowledge that adds value. I can also connect it with a whole range of tools that help me speed up my work. I see two main areas of improvement when I’m using it this way: I get a lot more time for thinking strategically, and I have the option to broaden my view of the data I possess in the project.
Switching to the demo
Wiktor Żołnowski: I think we should switch to a demo of how you’re actually using it, since a picture can tell more than a thousand words, and a video can tell you even more. So for anyone who is just listening to this podcast right now, I strongly recommend you save it for later, for whenever you have time to actually watch the video. It will be a good opportunity to see what we are talking about; without it, you may miss some context and not get the full value from this episode. Of course, if you are not driving or riding a bicycle or running or whatever, you can just go to our next or previous episodes and listen to them, but this episode is mainly meant to be watched, not just listened to. So let’s start with the demo.
Setting up the environment and context
Dariusz Mozgowoj: So first of all, as I previously mentioned, I’m using the Gemini CLI. Pure Gemini CLI is console-based software, so managing the files – even though we don’t have a lot of them – is quite difficult; you have to jump into the shell in the terminal to look for the files you need. So I decided to use something called VS Code. It’s an IDE for developers, and it allows me to see the terminal and the files in one place, which is what is presented on my screen at the moment. I’ve created a fake project – I’m not showing you real data because I don’t have permission from my clients – but it works exactly the same, and it’s prepared to show you what this process looks like in a real environment. I have the files on the left; I will comment on them soon, but first of all, you see the terminal. When I type the installed command `gemini`, it starts Gemini CLI, and you see it has already shown up; it also shows the initialization and that we have five MCP servers connected, and at the moment I’m already in the project. So look at these files. The most important file is `gemini.md`, and this file is like the heart or brain of our project. It contains general information related to the project: let’s say the vision, maybe a bit of strategy, the personas we’re working with, maybe information about our clients as well – whom we communicate with and how. You can also see we have a description of our business model, and at the end of it I put the product decision log. So if something changes, or the client or the market needs something from me, I put it here, so the model knows that something is different from the beginning, and it will bring this information up when we discuss.
Wiktor Żołnowski: This is like the basic documentation of the project – all of the strategic decisions, the vision of the product, personas, clients, etc. – so the AI, the model, will have access to the context of this product that you are building.
Dariusz Mozgowoj: Yes, and the key is that this `gemini.md` file is loaded into the memory of the model each time you start working with it. So when you are working on your files in this project, this project context is always there. Then, let’s say we have some discovery workshop and we define things like processes; I put these business processes in a separate directory. You can see here we have the dummy project, which is called Betalis. It’s software for people who want to take care of their health and track their health indicators. During the workshop with our client, we found a couple of business processes which build up the entire business – let’s say one of these is user onboarding – and I put each of these processes into a separate file. This is additional context. So when I work in Gemini CLI, I’m working with the `gemini.md` context, the project context, and additional context, like the process context.
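For readers who want to picture the file, a minimal `gemini.md` skeleton along the lines Dariusz describes might look like this. The section names and entries are illustrative, not copied from the demo:

```markdown
# Betalis – project context

## Vision
Software that helps people take care of their health and track their health indicators.

## Strategy
...

## Personas
- Patient: tracks health indicators, connects a wearable device
- ...

## Clients and communication
Whom we communicate with and how.

## Business model
...

## Product decision log
- <date>: <what changed, who requested it, and why>
```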
Managing context to avoid mistakes
Wiktor Żołnowski: Are those other files loaded into the model’s memory by default, or do you need to ask for them?
Dariusz Mozgowoj: I need to ask for them.
Wiktor Żołnowski: Yeah, so that’s like a more detailed context that you only reach for when you are working in that context. I assume that way the LLM is not losing the context, not exceeding the capacity of its operational memory.
Dariusz Mozgowoj: I mean, adding so many files that if I loaded them at once, it would overwhelm the memory of the model, and then you would lose the information.
Wiktor Żołnowski: I think this is one of the most common mistakes when people are using LLMs, using AI both for creative work and for coding: they provide everything they have as context and count on the AI to figure out what is what. People do not understand that AI has a lot of limitations, especially in terms of the size of the context it can process in its operational memory in the one thread that you’re using.
Generating risks and unhappy paths
Dariusz Mozgowoj: Exactly. So that way you have this general file that is always loaded, and you have more specific context for each discussion you have with the model. On top of that, as you already said, it allows me to combine different contexts as I want when working on a specific case in a project. Here I can type whatever I want. Let’s say I would like to do something with a file. Gemini has some tools built into the software that allow me to write files, read files, and ask standard prompts, but I don’t think that’s relevant at this moment. The most important thing is that I can use specific commands to speed up my work. Let’s say we do something repeatedly. We have a process – this is usually a happy path – and we would like to discover the risks around the process to cover those as well in our work, let’s say to help the client understand how difficult it is. I can show you here that we can put this in a special format. It’s very easy: it’s just a file with the extension `.toml`, and the file is basically a description and a prompt. This is the prompt which takes into account the process I showed previously, whatever process it is, and it also uses an argument. When you run it, it will generate, as you see in this example, the risk table and the potential unhappy paths related to the process that is the input.
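For reference, a command file like the `unhappy` one could look roughly like this. This is a sketch based on the Gemini CLI custom-command format (a TOML file with a `description` and a `prompt`, where `{{args}}` is replaced by whatever you type after the command); the prompt wording is our illustration, not the actual file from the demo:

```toml
# .gemini/commands/unhappy.toml – illustrative sketch, not the file from the demo
description = "Generate a risk table and unhappy paths for a documented business process"

prompt = """
Using the project context from gemini.md and the business process
description passed below, identify realistic risks and unhappy paths.

Process to analyse: {{args}}

Output: (1) a risk matrix with probability, impact, and proposed
mitigation for each risk, and (2) the corresponding unhappy-path
scenarios.
"""
```

Invoked as something like `/unhappy @user-onboarding.md`, this is what produces the risk table shown next.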
Wiktor Żołnowski: Okay, okay, so let’s try.
Dariusz Mozgowoj: So if I want to do this, I have this file under the `commands` directory – I can have some subdirectories, but those subdirectories then become part of the name of the tool. When I type the slash, the slash opens the context for the tools. You see there are a lot of tools; most of these are the built-in Gemini tools, but whatever you add to `commands` becomes a tool too, and you can bring it into the context of the prompt. I can start to type – you see the name is `unhappy` – so let’s try this `unhappy`. When I press `enter`, everything I put after the command becomes the argument, and this argument is placed inside the `unhappy.toml` file. Let’s say I want to find the unhappy paths for the process we discussed, the user onboarding. You use the at-mark (@) and then the name of the file, so this is `user-onboarding.md`. When I press enter, it starts working with the model, and it needs tokens to do the work; it will use its tokens and produce the output. Sometimes the model decides it will produce the result and save it into a file, and sometimes the result is presented to us first and we are asked to confirm it; what happens is a bit different each time. You can put specific instructions in a command so the output is always written to certain files. I didn’t do that because it’s not very important to me, and when you don’t, sometimes it writes to the file and sometimes not. You see that at the moment it’s working… it just gives you some funny conversational messages like, you see, “this is the fantasy.” Sometimes it fails, so we need to do it again – something is happening with Gemini; usually it works the second time. And you see what happened: it reads the files – it’s showing us it reads `user-onboarding.md` and reads `gemini.md` – so we know the context is in. Now it’s defining onboarding steps, assessing potential failure points, and it starts to work. It takes a bit, but oh yeah, now it produces the output. Also, when you’re working on a file that already exists and you want to modify it, you see a diff showing which lines have been added and removed. It’s asking if we want to allow it to apply the change to `risk-user-onboarding.md`. Let’s go. So it created a new file and is presenting the payload to us here. You can see it completed the risk analysis and generated the file, and this file showed up here. I can open this file, and you see this is a scenario: the unhappy scenario is ‘Email verification failure’ – ‘User registers, email verification fails, the email is delayed, goes to spam, or the link expires.’ And then we also have the risk matrix, with the probability, impact, and proposed mitigation for all these risks, which have then been translated into the scenarios.
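The risk matrix Dariusz describes is a simple table. Reconstructed from the scenario read out in the demo, one row might look like this; the probability, impact, and mitigation values are illustrative:

```markdown
| # | Unhappy path                                                           | Probability | Impact | Proposed mitigation                                              |
|---|------------------------------------------------------------------------|-------------|--------|------------------------------------------------------------------|
| 1 | Email verification failure: email is delayed, goes to spam, or expires | Medium      | High   | Resend button, longer link validity, in-app verification status   |
```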
Wiktor Żołnowski: So basically, you just replaced the role of a test analyst who is looking for these kinds of paths to cover with tests and to create the test scenarios or test cases like this for developers or for anyone else to test it. And of course, I believe that developers now can use it for some kind of BDD or test-driven development or acceptance-driven development.
Dariusz Mozgowoj: Yeah, the tests for that, absolutely. I will show in a couple of minutes how to deliver this to developers, but one disclaimer here: whatever comes out of the model, I don’t trust it. It’s not that I will just send it to developers at this moment, they will code it, and everybody will be happy. We need to remember that models sometimes hallucinate. So the first thing after I get this information is that I read it and reflect on it using my experience and the knowledge I have. If I see something that is wrong, or a scenario that is already covered, I remove it. In a normal case, I will show you: I can write here to the model, “we have this file” – you see that when I put the @ and start to write the name of the file, the name shows up, so that’s how I bring this information in – and I say, “Okay, I don’t like scenario number one. It’s already covered. Remove it from the list.” And it will happen. ‘Removal strategy’ – it sounds very dramatic. As I said, it works exactly like for developers, because VS Code and Gemini are designed for them: we see the diff with what was removed from the file, and it’s already removed. I don’t need to accept it. If the change is too big, it sometimes asks me to accept it, but in this case, I was very clear that I want to remove it; maybe that helps. We can check now – the first scenario was email verification failure, so let’s check. Now we have scenario two as the first: wearable connection failure. Okay, so we have a set of scenarios, and what we can do is take this `gemini.md` context, which is already there, and these scenarios, and produce user stories. Why not? Let’s try to do this.
Creating user stories with templates
Dariusz Mozgowoj: Now I’m using something different; this will not be a command like `unhappy`, but a template. I strongly recommend using templates generally when working with LLMs – I mean, structure and standardize the output; it’s very helpful. Here I have a user story template, so my user story consists of the user story description and some acceptance criteria, maybe some ‘Given-When-Then’ scenarios, and if we have some technical notes or something that is out of scope, we know those will be filled in as well. That information usually comes later: we enrich the data after we discuss with the team and get information from them, and then we can enrich the user stories with technical notes. And maybe when we discuss with the client, we see that we don’t want to do something, and that goes into out of scope. But that’s the beginning; we just have this structure. I can ask it, let’s say: “Okay, please” – you’re very kind when communicating with AI – “yes, `Based on @risk-user-onboarding.md create user stories based on the template @templates/user-stories.md. After that, write it to risk-user-stories.md.`” So I can define for myself where the final results should land. This also takes seconds, not hours.
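A template like the one Dariusz describes – story, acceptance criteria, Given-When-Then scenarios, technical notes, out of scope – might be sketched as follows. This is illustrative, not the actual `templates/user-stories.md` from the demo:

```markdown
## User story
As a <persona>, I want <capability> so that <benefit>.

## Acceptance criteria
- ...

## Scenarios (Given-When-Then)
- Given <precondition>, When <action>, Then <expected outcome>.

## Technical notes
<filled in later, after refinement with the team>

## Out of scope
<filled in later, after discussion with the client>
```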
Wiktor Żołnowski: Comparing that work that you just did with this acceptance criteria to someone who would have to do this manually, write it down, figure it out, I believe that would be a couple of hours. Maybe there were a couple of scenarios, so maybe two hours or something like this, but still, those are two hours that you just shrunk into a minute.
Dariusz Mozgowoj: A minute, yeah – well, maybe a couple of minutes, because after that, as I previously mentioned, I need to go through it and check. But this check is where I am using my experience. I’m not writing; I’m just using my brain to review it and check whether everything is fine, which is much faster than writing it yourself and then also checking your own work. Exactly. Okay, so you see it also showed us the payload, but we also have a new file, `risk-user-stories`. We can show this here. So this is the first user story: “As a patient, I want to be informed and guided when my wearable device fails to connect so that I can understand the issue, try to resolve it, or find an alternative way to input my data.”
Wiktor Żołnowski: Sounds good to me. I’ve seen much worse user stories written by some people who were applying for our product owner or product manager jobs, so this one is actually quite good.
Dariusz Mozgowoj: The LLM is smart – but, as I said, it’s only as smart as the context we provide. So the more context and the more precise the information we put in, the better the output from the model will be.
The importance of structured templates
Wiktor Żołnowski: Can we do an experiment right now? Could we ask it to actually write user stories without providing the template?
Dariusz Mozgowoj: We can try. Let’s see what happens – just to spot the difference. Or maybe, you know, it already has it in its memory, so most probably it won’t be very different, but let’s try. I can use the Gemini tools to reset the memory, because if I ask it now to produce user stories again, it would probably use the same structure – it’s already in the context of this thread. So it’s like a memory refresh.
Wiktor Żołnowski: We just erased its brain.
Dariusz Mozgowoj: Yeah, we are now in a clean memory state, and we can ask it again: “Okay, please” – I shouldn’t use please; of course you can, I don’t mind – “`Okay, please create the best user story you can based on the scenarios from @risk-user-onboarding.md. Then write it to the file temp.md.`” We’ll see what happens. It could be that it reaches out to the internet to check what that means, because web fetching is one of the standard tools, but I don’t know. We’ll see. I’ve never actually tried.
Wiktor Żołnowski: Regardless of the results, pragmatically, we always want to have things standardized, so this kind of solution that you showed with templates for user stories could be very good for just using the standards that we have.
Dariusz Mozgowoj: It seems that it doesn’t work. I will try again: “`Memory refresh. Create best user stories based on the scenarios from…` and write it to the file, okay.” We see, actually, that memory refresh doesn’t remove everything. Oh, I see what happened – `temp.md` already exists. Okay, let’s try without it; I’ll move it to trash. A few moments later – so here we go, it finally got there. We see that for scenario two, ‘Wearable connection failure,’ it just wrote a simple user story. It doesn’t provide the acceptance criteria; it’s just a plain user story, and it starts ‘As a user…’ rather than ‘As a patient…’, because it’s not getting the ‘patient’ persona just from ‘wearable connection fails’. ‘…I want to receive a clear error message, have the option to retry, be able to…’ – so it’s actually packing in a lot of information that could be separate user stories.
Wiktor Żołnowski: Yeah, so as you can see, it’s not so obvious if you are not using templates, if you are not providing context, then those models are actually generating something that is not very useful. And seeing something like this, when I mentioned before that I’ve seen many worse user stories written by people who attempted to join us as product managers, this is the level of people who are juniors in product management. I’m not surprised that so many people are claiming that AI is not working or is not good enough to actually replace anybody’s work.
Dariusz Mozgowoj: In most cases, it’s not that the AI is not good enough; it’s that people are not using it properly or don’t know how to. This is a tool, and it still has some limitations. And if one doesn’t understand those limitations and doesn’t know how to work around them so they are not blockers anymore, then it doesn’t work. You have to understand what you’re doing, and then, with your knowledge as a specialist, you can get very nice and valuable output. It can leverage your expertise, not the other way around. Okay, so now we have the file. I will maybe remove these dummy user stories, but we have the user stories which were created with the template, and now I have them here. I would like the team to check them; we can discuss them during refinement, we need to estimate them, and then, when we get more information, we can enrich them. But first, that information needs to be somewhere transparent, in a place where we are all looking at it.
Integrating with Jira and other tools
Wiktor Żołnowski: Yeah, we don’t want to hold our backlog in these kinds of MD files somewhere in the repository. There are much better tools for that.
Dariusz Mozgowoj: Yeah, so we have to somehow move them to our tool. We’re using Jira, so of course we could copy-paste this. It takes a while if, let’s say, you have ten, but we are a little bit smarter than that. Gemini CLI allows us to define MCP servers. So what is an MCP server? For those who are not familiar with AI, MCP is a protocol which allows models to connect to external tools and services. Those services usually provide their own API or their own MCP servers. When you connect to such a tool, you don’t necessarily get all the possibilities; the owner can define which functions you can use. Sometimes it’s quite open, and fortunately, Atlassian is very open and gives us a lot. So Jira is a nice tool to use together with AI – I’ll agree with that. Maybe we can find other tools that are also very good for this, but Jira definitely is. I’ve created a Jira project, and here it is, you see. What I need to know is the key, which is GC, and sometimes the board number if I have more than one. Of course, these can also be put into the context or into the command, but for this purpose, I didn’t do that, so I can show you that it works when I provide this information. But first of all, let’s show the people watching us what we are talking about: the MCPs I have. The MCP servers are defined in the settings here, in a JSON structure. There are some keys in there, but I will change them after the recording to be sure we stay secure. There are different ways to define those MCP servers; I can describe them shortly. First, you can dockerize some MCPs, run the container on your personal computer, and connect to it – my Jira MCP actually works in this mode. Second, you can take an MCP from a repository and run it on your computer, so it runs locally, not in a container. And the third one, which I sometimes use, is n8n. In n8n, there is a specific node called an MCP trigger; you can add tools to it, and it provides you the URL to the MCP, so when you have your n8n server and you create a workflow with the MCP trigger, you can use it here as well. So yeah, sometimes it’s helpful. To see your MCPs – you see, I again type the slash, so I have access to the tools – `mcp` is the standard tool, but you need to add `list` if you want to see what you have there. And you see that I have a couple of them; actually, I showed you at the beginning that it started with five MCP servers. The first two, Email and Slack, are from n8n – this is also very nice because we’re using it often here in Pragmatic Talks. The Miro MCP is running on my local machine; I just run `npm run dev` – for those who don’t know, that just runs the software from the repository directly. Perplexity and Jira, these two, are from Docker. And you see Perplexity Ask and Jira Native, and the tools provided by each MCP. For Perplexity we have three, but we don’t need them anymore, I guess. For Jira, there’s a lot: getting things from different places and creating things in different places. There are a lot of things you can do with Jira, and the same with Miro.
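As a rough illustration of the three setups Dariusz mentions, an `mcpServers` block in the Gemini CLI settings JSON might look like this. The field names follow the Gemini CLI documentation at the time of writing, but the image name, paths, and URL are placeholders, not the real configuration from the demo:

```json
{
  "mcpServers": {
    "jira/native": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "JIRA_API_TOKEN", "example/jira-mcp"],
      "env": { "JIRA_API_TOKEN": "<secret>" }
    },
    "miro": {
      "command": "npm",
      "args": ["run", "dev"],
      "cwd": "/path/to/local/miro-mcp-checkout"
    },
    "email": {
      "httpUrl": "https://your-n8n-host/mcp/email-trigger"
    }
  }
}
```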
For Email and Slack, because they are n8n MCPs, I only added the tools I needed; of course, anytime we go into n8n, we can extend them and make more functions available. Okay, so let’s go to Jira, because, as we remember, we created these user stories. We have a couple of them – it was five, minus one, because we started from the second one, so that’s four user stories. Let’s create them. To call the MCP, I use the same approach as with the files, but the MCP is not visible when I’m using the at-mark – I see only the files, not the MCPs – so I need to remember the name. Mine is `jira/native`; as for the name, you can put ‘jira’ or just ‘j’ if you want in the settings JSON file. And then we call it. We don’t need to specify the tool, the function; we describe in a human way what we want to do, and the MCP layer chooses the right tool for the job we are describing. If you don’t describe it well, it can jump to a different tool and create a mess, so you need to be very specific when you’re defining what you want from the MCP. So now we call the MCP by the name I have in my settings JSON; it knows which function to use, I just need to describe the job. I describe it like “`Create all user stories from @risk-user-stories.md in the project with key GC and board number 32.`” This key and board number could be put into the context; I didn’t do that here, to show you what’s actually going in. When I execute it, the MCP calls the proper tool, by my assumption, and uses our user stories file to create them in our backlog in Jira, with the descriptions based on the data in the file. Let’s check what happens. So now it’s reading the files and processing this data, and it should call the MCP server. Okay, so I need to confirm. We have four, because we have only four scenarios, so it creates only four user stories. We see the payload which has been sent to the MCP server, and it informs us that it created four user stories. Let’s refresh, and here we are. As you see, we have these four user stories, all of them with a number, and we also have them as a file, so we can save this information into our context. You see that they also have acceptance criteria, they have scenarios, and the same descriptions as we had previously in the file. So it automates the process. Let’s say that after a workshop with the client, you create a user story map with hundreds of user stories, you do the same things I did in Gemini CLI, and you would like to create the backlog to start the project with the client. It now takes minutes.
Wiktor Żołnowski: I can imagine that we have a transcription of a meeting, for example; we make a summary of the meeting, we put that transcription in as context, and then we can generate the entire backlog based on the summary.
Dariusz Mozgowoj: Yeah, what we are currently doing is preparing a standardized notes format for each meeting with a client or stakeholders, especially the initial workshops, etc. Based on the standardized input, we can get even better output from the LLM, exactly. But as we discussed at the beginning, I’ve created user stories from, say, some discussion with the client, maybe after some workshop, and these user stories need to be refined – the entire team needs to look at them and define how to do this. Here, at the moment, we only define what we want to achieve.
Using Miro for team collaboration
Dariusz Mozgowoj: As I previously mentioned, among the MCPs I have here, we have Miro. Let’s check if we can add the sticky notes to Miro so the team can work with this Miro board.
Wiktor Żołnowski: Let’s do this. Let’s try to add it to Miro and try to do some human work, not only AI work, on that.
Dariusz Mozgowoj: Okay, fortunately, our MCP is called ‘miro’, so I don’t need to remember much of the name. So I call it the same as Jira, by the name: “`Add sticky notes to the board based on the user stories we have.`”
Wiktor Żołnowski: Do you need to provide the name of the board, or you already have it in the settings?
Dariusz Mozgowoj: As you remember, when you connect to Miro, you have to define the board, so it’s always the same one. The bad thing about the Miro MCP is that it doesn’t recognize positions on the board, so it puts the notes somewhere – sometimes in big stacks of sticky notes. Miro also doesn’t understand, when you’re reading from it, whether something is lying on a frame, for example. You cannot say, “I have a frame with a name; read all the data which is on the frame.” It doesn’t work. I found a manual workaround: you name the sticky notes with some keyword, like a tag – for example, our project is Betalis, so we call it ‘Betalis’. And now when I write, “Read me all the sticky notes which have the tag ‘Betalis’,” it works. So it’s not perfect. I’m pretty sure that sooner or later they’ll figure it out, but in my opinion, at the moment, the architecture of Miro doesn’t allow you to make the distinction that something is lying on something else. It would help a lot, because usually you just create a frame and put the sticky notes there, but it’s not possible now. So if anyone from Miro is watching us, please fix it, or put it on your backlog, or explain to us how to do it, because at the moment the Miro MCP doesn’t help me with this. So… ‘shipping else,’ whatever that means – these loading messages are like a trick to keep impatient people busy. Okay, it seems it works. Let’s check Miro. Okay, let’s find where they are. And now we have everything that was in our backlog put onto sticky notes. It maybe doesn’t look very nice because it’s not formatted well, but it’s already there. I can put a couple of enters here and there, and I can start a meeting with the team. And of course, we can create our own command that will, for example, only put the user story title on the sticky note, or add spacing, formatting, etc. – like a template for the sticky notes. But this shows how quickly we can go from our work with the project context to some output like user stories, and then follow up with the other parties to discuss. Let’s imagine we have a refinement with the team and we are adding additional sticky notes – say, some technical items for user story A, and we label them ‘User Story A’. When we finish, we can tell the MCP server to read it back, and then we have additional information. So we can say: let’s enrich the information we have based on what we got from the team, for example, and it works as well.
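Paraphrasing the two directions of the tagging workaround as prompts – the wording here is ours; only the ‘Betalis’ tag convention comes from the demo:

```text
miro  Add sticky notes to the board for each user story in
      @risk-user-stories.md, and include the tag "Betalis" in each note.

miro  Read me all the sticky notes which have the tag "Betalis" and
      summarize any technical notes the team added per user story.
```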
Wiktor Żołnowski: So basically, we can tell the AI to actually read the results of our brainstorming, of our refinement here in Miro, and put all of the things that we discussed with our team into the user stories that are in Jira.
Dariusz Mozgowoj: So we don’t need to carry them over by hand. Usually, I first put them into the files in Gemini CLI, because I believe that with each iteration we do, we expand our context, and the more context, as I said at the beginning, the better for us. Of course, we need to manage this context smartly, because if a file is too big or doesn’t have proper sections, the model goes crazy.
Wiktor Żołnowski: It just came to my mind that we could also create a command for that. So we can have a multi-step process: first we update the MD file, then we update the Jira stories, and the other way around.
Dariusz Mozgowoj: Yeah, yeah, we can define what we want exactly to be precise to the LLM. But the challenge for the user who is using this is to think in context. If you see that your context is too big, maybe it’s time to split it by some key, or maybe it’s time to compress it. Sometimes also, when you have a very long history, maybe not everything is relevant at the moment, and maybe all this information you have already has some outcome, so let’s compress it to the outcome and keep only the outcome.
Wiktor Żołnowski: Yeah, and just remember that AI models have a limited context window, so it will not remember all of the things that you are telling it. It will just remember the last conversations and just a couple of files that you can add to this context window.
Other automation possibilities
Dariusz Mozgowoj: Okay, let’s say you are my client, and I would like to ask you something about the user story I created. I can send you an email too. I don’t go into my email client to copy over everything I want; I just use my command center, because, as you saw previously, one of the MCPs I have in my tool is the email one, based on n8n. At the moment, in this fake project, I don’t have a template for a very nice, structured email to a client, but hopefully this was enough for you to see the idea.
Wiktor Żołnowski: To be clear, whenever you work with clients and stakeholders, I would recommend you call them rather than write an email. But in case you’re living in a world where emails are still the main form of communication, you can use this kind of MCP service and the CLI for automating your work and sending emails or Slack messages as well.
Dariusz Mozgowoj: Yeah, I can do that. But at this moment, my Slack messages only go to my internal team, so they’re not related to this fake project. But normally during daily work – even yesterday evening – I created a bunch of user stories for today’s refinement, and I sent all these user stories with their numbers to the team so that in the morning, before they start work, they know which user story numbers to look at before the refinement starts. I also have Perplexity Ask here; this can help product managers or project managers do research. Let’s say I have a new feature to implement, and maybe I have some competitors defined in my `gemini.md`. I can ask Perplexity from here: “I have the context of my new feature, which we discussed is good for business. Let’s check how it works and what our competitors have in the same context.” Awesome. And after a couple of minutes, we get a document which summarizes all this information, so it helps us make a decision.
Final thoughts and learning resources
Wiktor Żołnowski: Okay, so I think that we’ve shown enough for today. I believe that many people are right now thinking, “Okay, that looks a little bit like magic.” So where could someone learn how to work that way? What are the learning sources?
Dariusz Mozgowoj: The documentation is geared towards developers, so I would say you need to read between the lines, because everything I do is based only on the Gemini documentation – and actually, everything is there. One thing is reading the documentation – “read the manual” – and the second thing is that you need to figure out your own processes. People who want to work like this need to figure out how they work, what it is they repeat every day, and they need to create their own commands and their own templates for this kind of stuff. And I’m pretty sure that someone who has already standardized their work one way or another will have no problem learning another tool, adding it on top of their standardization, and automating a lot of things to speed up their work. But if someone works in a very chaotic way, they may struggle a lot with using AI.
Wiktor Żołnowski: When you’re automating chaos, you just have a lot of messy chaos. Yeah, that’s true.
Dariusz Mozgowoj: I saw other people trying different stuff. I’m using Gemini CLI here, but I also know people abroad who are using Claude Code to do the same thing, and it works the same; this is absolutely not the only tool you can use for that. What is scary for people is the amount of work that may need to be put in before this tool and this process pay off. For me, it took a couple of weeks, doing stuff after work. But my assumption is that I can explain it to someone who has a bit of technical skill – you don’t need to be a developer, you don’t need to actually understand code to start working with this. It looks scary, but with someone who can guide you, after two or three hours you can start using it, and then it brings more and more value over time.
Wiktor Żołnowski: Yeah, and I think that product managers who start using tools like Visual Studio Code or console commands, even if they never did in the past, will better understand how developers work, and that may also help with communication and with being efficient in this role. Okay, so of course, if you have any comments or questions, do not hesitate to put them in the comments under this episode on YouTube or wherever else you see it, and let us know if you would like to see a more detailed workshop or webinar where we show you, step by step, how to configure your own product or some dummy product using the CLI, so you can use it on your own. Or just let us know if there is any other topic around using AI in our day-to-day jobs that might be interesting for you. So thank you very much, and don’t forget to subscribe to our channel.
