AI Agents: How They Work

Here’s what you can learn from this episode of Pragmatic Talks:
What are AI agents and how do they differ from AI workflows?
- AI agents defined: An AI agent is an autonomous system that uses AI, especially Large Language Models (LLMs), to make decisions and take actions. You can delegate tasks to it, and it will work on its own with less need for human interaction.
- Agents vs. workflows: The main difference is adaptability. An AI workflow follows a fixed, predefined sequence of steps. An AI agent is more dynamic – it can decide which tools to use and in what order based on the situation. It can also react to unexpected results, like errors, and try a different approach to complete its goal.
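To make the difference concrete, here is a minimal Python sketch (not from the episode; the LLM call and tools are hypothetical placeholders) contrasting a fixed workflow with an agent loop that picks its own next step:

```python
# Minimal sketch contrasting a fixed AI workflow with an AI agent loop.
# call_llm, search_web, and send_email are hypothetical placeholders:
# plug in a real LLM client and real tools.

def call_llm(prompt: str) -> str:
    return "FINISH"        # placeholder: a real call would return model output

def search_web(query: str) -> str:
    return "page text"     # placeholder tool

def send_email(body: str) -> str:
    return "sent"          # placeholder tool

# AI workflow: a fixed, predefined sequence of steps, always in the same order.
def summarize_and_email_workflow(url: str) -> None:
    page = search_web(url)                           # step 1: always fetch
    summary = call_llm(f"Summarize this:\n{page}")   # step 2: always summarize
    send_email(summary)                              # step 3: always email

# AI agent: the LLM decides which tool to call next, step by step,
# and can stop early or change course based on what it observes.
def agent(goal: str, max_steps: int = 4) -> str:
    tools = {"search_web": search_web, "send_email": send_email}
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = call_llm(
            "Given the history below, reply with 'tool_name: argument' or 'FINISH'.\n"
            + "\n".join(history)
            + f"\nAvailable tools: {list(tools)}"
        )
        if decision.startswith("FINISH"):
            break
        name, _, arg = decision.partition(":")
        observation = tools[name.strip()](arg.strip())   # act
        history.append(f"{decision} -> {observation}")   # observe and adapt
    return call_llm("Write the final answer:\n" + "\n".join(history))
```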
How to build an AI agent
- Start with the problem: Before building an agent, clearly define your objective. For many problems, a simpler, more predictable AI workflow is sufficient and often the better choice.
- Tools for building: You can use open-source frameworks designed for creating agents. Alternatively, you can code an agent from scratch using popular languages like Python or TypeScript. There are also low-code and no-code platforms that can help build AI workflows.
Fine-tuning an LLM vs. providing context
- Fine-tuning is rarely needed: Fine-tuning a model is usually not necessary. It is mainly for changing the *style* of an LLM’s responses, for example, to make it sound like a specific character like Yoda.
- Context is more important for knowledge: To give an agent up-to-date or private information, it is better to provide this data as context. This is done by dynamically fetching information from a database or document (a technique known as RAG) and giving it to the LLM with the user’s request.
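A minimal sketch of that idea follows, with a placeholder LLM call and a deliberately naive retrieval step (a real system would use embeddings and a vector database):

```python
# Minimal sketch of the RAG idea: fetch relevant text at request time and
# pass it to the LLM as context. call_llm and the keyword-overlap "retrieval"
# are placeholders for illustration only.

def call_llm(prompt: str) -> str:
    return "answer"   # placeholder for a real LLM call

def answer_with_context(question: str, knowledge_base: list[str]) -> str:
    # 1. Retrieve: rank documents by (naive) relevance to the question.
    words = set(question.lower().split())
    ranked = sorted(
        knowledge_base,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    context = "\n\n".join(ranked[:3])

    # 2. Augment: put the retrieved text into the prompt.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

    # 3. Generate: the model answers from the supplied, up-to-date context
    #    rather than from what it memorized before its training cutoff.
    return call_llm(prompt)

print(answer_with_context("What is our refund policy?",
                          ["Refunds are accepted within 30 days.",
                           "Office hours are 9 to 5."]))
```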
Practical examples of AI agents and workflows
- A personal AI assistant: Jakub created a simple AI agent integrated with Slack. He asked it to summarize a long article from a web link and email him the summary. The agent decided on its own to use several tools: first to browse the web, then to summarize the text, and finally to send an email. This example shows that agents are powerful but can be slow, sometimes taking minutes to complete a task.
- An automated candidate screening tool: Pragmatic Coders uses an AI workflow to help their recruitment team. The tool automatically analyzes a candidate’s information (CV, LinkedIn profile) against the job requirements and creates an evaluation score. This saves the recruiters a lot of time on a repetitive task.
Limitations, costs, and security of AI agents
- Limitations: LLMs have a limited “context window,” which means there is a limit to how much information they can process at one time.
- Costs: Building and running AI systems involves costs for cloud infrastructure, computation, and paying for access to powerful models like GPT-4.
- Security and privacy: Protecting private data is very important. To keep data secure, you can use local, self-hosted LLMs that do not send data to third parties. Another good practice is to only send the necessary, non-sensitive parts of the data to the LLM.
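A small illustration of the "send only what is necessary" practice; the field names and record are invented for the example:

```python
# Sketch of data minimization before calling a hosted LLM.
# The field names are made up for illustration.

SAFE_FIELDS = {"job_title", "years_of_experience", "skills"}

def redact(record: dict) -> dict:
    """Keep only the fields the LLM actually needs for the task."""
    return {key: value for key, value in record.items() if key in SAFE_FIELDS}

candidate = {
    "name": "Jane Doe",              # personal data, never leaves our systems
    "email": "jane@example.com",     # personal data, never leaves our systems
    "job_title": "Backend Developer",
    "years_of_experience": 7,
    "skills": ["Python", "PostgreSQL"],
}

prompt = f"Evaluate this candidate profile: {redact(candidate)}"
# Only job_title, years_of_experience, and skills are sent to the model.
print(prompt)
```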
The future impact of AI agents
- Transforming the job market: AI agents will not just replace jobs but will transform them. They will handle boring, repetitive tasks, which will increase productivity. The best way to adapt is to be curious, explore AI tools, and learn how to use them to improve your work.
- The evolving role of software developers: The job of a developer will change. Instead of writing every line of code, developers will act more like orchestrators who design systems and delegate tasks to AI. They will use low-code and no-code tools more often for building things quickly.
Read the full transcript
Wiktor Żołnowski: Welcome to the next episode of Pragmatic Talks. Today, we’ll try to cover the topic that everyone is talking about right now, which is AI agents. With me today is Jakub Pruszyński. Jakub is one of our senior developers and tech leads, who has recently specialized in AI development, and AI agent development in particular. So, welcome, Jakub.
Jakub Pruszyński: Hi, everyone.
Wiktor Żołnowski: So let’s start at the beginning. Everyone is talking about AI agents, AI systems, workflows with AI systems, LLMs, and other stuff. But what are AI agents?
What are AI agents?
Jakub Pruszyński: AI agents are a kind of autonomous system where you can delegate making decisions and taking actions to AI, basically. So the agent can react to user requests and demands, and you can just leave it to work on its own.
Wiktor Żołnowski: How does it differ from regular software?
Jakub Pruszyński: AI agents, in simple terms, are this kind of application or system which fully utilizes AI to take actions, make decisions, and follow the request of the user. And they allow us to reduce the number of human interactions in the system.
Wiktor Żołnowski: What do you mean by ‘utilize AI’? What are they using?
Jakub Pruszyński: They are like traditional systems but with a different architecture. The key components of those applications are requests to large language models, which can, in a very unique and contextual way, understand the requests and needs of the user.
AI agents vs. AI workflows
Wiktor Żołnowski: Okay, there’s a lot of buzz around AI agents nowadays, and people are talking about it. Many people are calling things that probably are not AI agents by the name of AI agents, which maybe isn’t wrong because I think that the border between traditional workflows and proper AI agents is moving; it’s not a strict line. But from your perspective, from your experience, what is the difference between an AI agent and a workflow that is utilizing some kind of LLM or other AI?
Jakub Pruszyński: They are quite similar in the overall picture, but there is one big difference. AI workflows are a sequence of predefined steps, and they cannot change the flow. Agents, on the other hand, have similar components like tools, memory, and so on and so forth, but they can adapt to the current outputs and previous actions within the system. So basically, agents make decisions about which tools to use, which website to browse, for example, and based on that, they decide which path of the workflow to follow. It’s even more complex than that.
Wiktor Żołnowski: Is it even more complex than that?
Jakub Pruszyński: Yeah, that’s one aspect. The other aspect is that they can react in a very adaptive way to the outputs of the previous action. So, for instance, if an error occurs, an agent can react to that, maybe tweak the parameters that were used in the previous action a little bit, and thanks to that, it can continue the process.
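As a concrete illustration of this error-recovery behaviour, here is a minimal Python sketch in which a failed tool call is fed back to the model so it can propose corrected parameters; the tool and LLM call are hypothetical callables supplied by the caller:

```python
# A sketch of the error recovery described above: when a tool call fails,
# the agent shows the error to the LLM and asks for adjusted parameters
# instead of giving up. `tool` and `call_llm` are hypothetical callables.
import json

def run_step(tool, params: dict, call_llm, max_retries: int = 2):
    for attempt in range(max_retries + 1):
        try:
            return tool(**params)                      # take the action
        except Exception as error:
            if attempt == max_retries:
                raise                                  # give up after N retries
            # Feed the failure back to the model and ask for new parameters.
            suggestion = call_llm(
                f"The tool call failed with: {error!r}\n"
                f"Previous parameters: {json.dumps(params)}\n"
                "Reply only with corrected parameters as JSON."
            )
            params = json.loads(suggestion)            # tweak and retry
```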
The inner workings of an AI agent
Wiktor Żołnowski: How do those agents interact with the external world?
Jakub Pruszyński: They are data-oriented, so everything needs to be provided to them in the context. They need to understand that they have access to this kind of tool and that they can act with those tools in this or that way. Basically, they understand the big picture of that data, and that’s something new on the market because previously, tools and AIs were very specialized in narrow domains. Right now, they can understand the broader context and human language in a way that was previously unavailable to us.
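In practice, "providing tools in the context" often means passing the model a short description of each tool. The snippet below is only illustrative; it resembles the function-calling schemas used by several hosted APIs, but the exact format depends on the provider and framework you use:

```python
# Illustrative only: one common way to describe a tool to an LLM is a
# JSON-style schema similar to the "function calling" formats used by
# several hosted APIs. The exact shape depends on your provider.
web_search_tool = {
    "name": "web_search",
    "description": "Search the web and return the most relevant page text.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "What to search for."},
        },
        "required": ["query"],
    },
}

# The agent sends this description together with the user's request, so the
# model knows the tool exists and how it can be called.
```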
Wiktor Żołnowski: Let’s talk a bit more about this data and also about the ability of how to train an AI agent, or if AI agents are able to learn on their own. Is it possible for agents to learn on their own, on the go, when they are utilizing some things? And how do you train or pre-train this kind of agent with a set of data?
Jakub Pruszyński: Sure. Basically, you can say that they learn during the process because when they interact with internal tools or something similar, they make conclusions and observations about the previous actions and potential demands and requests from the user. So they can make a conclusion about what they should use and how to achieve the user’s needs and goals in the quickest and most optimal way.
Building your own AI agent
Wiktor Żołnowski: If I wanted to create my own AI agent, how could I achieve that? How can I do this?
Jakub Pruszyński: First, you should probably ask yourself if you really need an agent because, in my opinion, we should apply the same rules as in general software development. So basically, we should define our objective, our problem, in a very comprehensive way. We should understand what we want to achieve, how we want to achieve it, and why. Those are the basics. After that, we can start with something very, very simple because probably in most cases, a simple automation or AI workflow is enough. Thanks to that, we have something simpler, something cheaper, and something that will fulfill our request in a more predictable way.
Wiktor Żołnowski: Okay, and let’s assume that I already have some goal. I already have some comprehensive documentation of how I want to achieve this goal, what I want to build. So what tools can I use to create my own agent?
Jakub Pruszyński: There are a few companies that are trying to fill that gap in the market. They create custom frameworks which are, in most cases, open source or otherwise available free of charge. On the other hand, you can just create something from scratch using the most popular languages like Python or TypeScript. They’re great for that, and they’re supported by big players in the market.
Wiktor Żołnowski: So there are either some kind of low-code/no-code solutions that I can use for workflows and connect with some LLMs, or I can code it on my own in Python or TypeScript.
Jakub Pruszyński: Yeah.
Fine-tuning LLMs vs. providing context
Wiktor Żołnowski: Okay, let’s talk a little bit more about LLMs. I know that it’s possible to fine-tune LLMs. Is it always necessary to do fine-tuning, to do relearning of the LLM when, for example, I would like to create a simple chatbot that will utilize our company knowledge base to answer, let’s say, employees’ questions or some of our clients’ questions? Should I use fine-tuning for that, or are there any other ways to actually teach my model to answer these kinds of questions?
Jakub Pruszyński: To answer this question truthfully, there are very few scenarios where you need to fine-tune the model. In most cases, very carefully crafted prompting is more than enough to achieve very good results that fulfill most of your scenarios. Fine-tuning, basically, is a matter of adjusting an already pre-trained model to very specific cases. For instance, if you’d like to, I don’t know, maybe make your chatbot sound like Yoda, then fine-tuning is something very useful for you because you can give a few hundred examples, for instance, of how to answer some questions in Yoda style. And thanks to that, it will follow this pattern throughout all conversations.
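For readers curious what such fine-tuning data looks like in practice: it is typically a few hundred example conversations demonstrating the desired style. The chat-message JSONL layout below mirrors the format used by some hosted fine-tuning APIs, but check your provider’s documentation for the exact shape:

```python
# Sketch of a style fine-tuning dataset: a few hundred examples like these
# teach the model *how* to answer (Yoda-style), not new facts.
# The chat-message JSONL layout mirrors formats used by some hosted
# fine-tuning APIs; check your provider's docs for the exact shape.
import json

examples = [
    {"messages": [
        {"role": "user", "content": "Should I refactor this module?"},
        {"role": "assistant", "content": "Refactor it you should, hmm. Cleaner, your code will be."},
    ]},
    {"messages": [
        {"role": "user", "content": "Is the build passing?"},
        {"role": "assistant", "content": "Passing, the build is. Worry, you need not."},
    ]},
]

with open("yoda_style.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```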
Wiktor Żołnowski: Okay, so it’s rather about the way the LLM is answering than the knowledge that it possesses, because knowledge could be accessed anyway.
Jakub Pruszyński: Yeah, yeah, it boils down to modifying the way the LLM understands the context of the conversation and generates the final answer based on it, rather than finding some hidden gems in the knowledge that you provided to it.
Wiktor Żołnowski: I know there are quite a few ways to provide some data to the LLM, especially in the context of agents, like RAG or a simple database and long-term memory. So could you tell us a bit more about how to do this?
Jakub Pruszyński: It’s a very simple way of thinking about it because all LLMs are pre-trained, so they have some kind of cutoff date, after which new knowledge is not provided to them. And we, as developers or as a market, needed a way to provide contextual, important data to them to answer new questions related to recent events, something like that. And that’s why we figured out that we can do it by dynamically fetching some context-related information from the database, from memory, or something like that. Thanks to that, we can provide recent events to the LLM, make use of its knowledge and understanding of how to process text, and based on that, we generate something related to the actual needs.
Practical examples of AI agents
Wiktor Żołnowski: Cool. I know that you already have a few examples of AI agents. I also know that you have your own AI assistant, an AI agent that is assisting you. Maybe you can show us an example of how it’s used and how this kind of agent might be useful for regular people or for developers.
A personal AI assistant
Jakub Pruszyński: Sure. I created a simple application integrating with Slack. Basically, I don’t know Python, but I created a whole application using AI, just from scratch. So it’s worth trying.
Jakub Pruszyński: As you can see, the application is quite simple; it’s a chat. I’ve already integrated a few tools. For instance, I can manage my personal budget, I can manage my to-do lists, I can scrape websites and search for information based on that. And on top of everything, I can send notifications, emails, and so on and so forth. I hope to add more tools and more possibilities along the way, but right now, it’s still limited. So let’s ask it for something simple. I have a link to recent research from Anthropic, and I don’t have time right now to read it, so I will ask it to summarize it and send me the summary as an email. So I send it. And here is an interesting observation: it takes some time. That’s why agents are not always a good solution for every problem, because they operate in minutes or hours, so you shouldn’t expect the answer in real time. As you can see here in the application logs, the agent is currently limited to only four steps, just to reduce costs and potential problems. It decided, based on our query, that it needed to search the web. After that, it scraped the page. It summarized it using another tool, this time a document processor. After preparing the summary, it decided to use Resend, which is a great tool if you need to send an email to yourself. And after that, it sent it and prepared the final answer. Here’s the final answer. So basically, it tells me that it summarized the article and sent it to my personal email.
Wiktor Żołnowski: You can see that there is an email with the title ‘The Ethics and Insights of Claude’. Nice. I can imagine that people will not read articles anymore. Everything is going to be generated by AI and read by AI.
Jakub Pruszyński: Yeah, on one hand, it’s a little bit sad. On the other hand, it’s just another tool that people can use or not.
Wiktor Żołnowski: Yeah, I have a prediction about it because probably after many, many years, we’re going to see the same trend as in food or handmade craft. So basically, those things which are natural or crafted by hand are going to be much more expensive. Yeah, they will be, because the cost of creating them will still be much higher than automation. But yeah, I also predict–and I’m not sure it will be in many, many years; I think it will be in the next couple of years–that these handmade, handcrafted things or human-made things or written things will be pretty valuable. But still, it will be a niche. It will be only a small number of people who will actually be using it or chasing it and paying for it. Most people will just use all of the stuff that will be cheaper to generate and create.
Jakub Pruszyński: Yeah, the world is changing. It’s awesome how these kinds of solutions are changing the world.
Wiktor Żołnowski: And a small disclaimer for our audience here: when we are recording this, it’s the middle of February 2025. If you are watching it at, let’s say, the end of summer 2025, probably most of the things that we are telling you, most of the things that we are showing you, are already outdated. Please check our channel for newer episodes where we cover this kind of topic again, probably, with fresh content, fresh news, and fresh tools that will be available. Because the AI tools, the AI world, and AI utilization are developing so fast right now that things that were up-to-date a couple of weeks ago are now totally outdated and irrelevant. So please stay up-to-date with all of the AI tools and news. Also, if you are not familiar with any AI tools and maybe you used some AI tools a half year ago or a year ago, they are totally different right now. They are much better than they used to be. So please try to use them and see on your own what you could use them for and how you could use them. Most probably, you will be surprised by the progress those tools have made in the last couple of months. So try it on your own.
Jakub Pruszyński: I totally agree.
An automated candidate screening tool
Wiktor Żołnowski: Yeah, so let’s get back to the topic. Here we have the second example of the tool, which I think is more business-oriented, not a personal one. This is the tool that we recently started using at Pragmatic Coders. That’s the tool that allows us to simply add some scoring and some evaluation for the candidates who are applying for our open job positions. So please tell us more.
Jakub Pruszyński: This is basically an example of how we can utilize AI workflows. In this particular scenario, we are watching candidates in our system called Recruitee. We collect data about job offers, their descriptions, and their requirements, and based on that and on the information about the candidate–their LinkedIn profile, their experience, their CV–we prepare an evaluation which is going to be helpful for our recruiters in deciding whether or not to hire this particular candidate.
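For illustration, a workflow like this can be sketched in a few lines: fetch the job requirements and the candidate data, ask an LLM for a structured evaluation, and hand the result to the recruiter. The helpers below are hypothetical placeholders, not the actual Recruitee integration:

```python
# A hedged sketch of the screening workflow described above. The fetch_*
# helpers and call_llm are hypothetical placeholders with canned data.
import json

def fetch_job_offer(job_id: str) -> str:
    return "Senior Python developer, 5+ years, PostgreSQL"   # placeholder

def fetch_candidate(candidate_id: str) -> str:
    return "CV text, LinkedIn summary, GitHub highlights"     # placeholder

def call_llm(prompt: str) -> str:
    return '{"score": 82, "strengths": [], "gaps": []}'       # placeholder

def evaluate_candidate(candidate_id: str, job_id: str) -> dict:
    prompt = (
        "You are helping a recruiter pre-screen candidates.\n"
        f"Job requirements:\n{fetch_job_offer(job_id)}\n\n"
        f"Candidate profile:\n{fetch_candidate(candidate_id)}\n\n"
        'Reply only with JSON: {"score": 0-100, "strengths": [...], "gaps": [...]}'
    )
    return json.loads(call_llm(prompt))

# The recruiter still makes the final decision; the workflow only prepares
# the evaluation and saves time on the repetitive screening work.
print(evaluate_candidate("candidate-123", "job-456"))
```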
Wiktor Żołnowski: It’s basically doing the same thing that our recruiters did some time ago. They were manually checking LinkedIn, manually checking their GitHub profile, CV, and other stuff, which was pretty time-consuming and boring and can be automated right now.
Jakub Pruszyński: Exactly. And as you can see on the right, here is an example of such an evaluation. So, that’s you.
Wiktor Żołnowski: Yeah, that’s me. It’s good news. I would hire myself, probably. Quite a score. I wonder if the model is biased or what.
Jakub Pruszyński: No, you are just a good developer. There are many, many cases where simple AI workflows can be applied as time-savers for our departments. The only limit is our imagination; usually, it’s quite hard to spot those cases at first, but after some practice, it gets easier.
Wiktor Żołnowski: So this is why I’m encouraging all of our people at Pragmatic Coders–not only software developers but also our office administration, financial administration, HR, marketing–to try things on their own, to build something with tools like Make or n8n or other workflow management tools and to see what’s possible. Then, if they see something that could be automated, they can do it on their own, or they can come to Jakub or someone else to ask for help, and we can do it for them. Because sometimes–or rather, very often–technical knowledge is required to connect with some other system, to utilize some tools, or even to properly design the prompts that are used for the LLM. So yeah, some technical knowledge might be needed in many cases. So, by the way, if you would like to automate some of your processes, you can always reach out to us, and we can definitely help you with creating your own agents, with hosting them, securing them, maintaining them over time, and developing them further. And this is something that I think is very important: whenever we build something for ourselves at Pragmatic or for our clients, I see that we start with something like this, let’s say something simple. And when it starts working, everyone has a million ideas on how to extend it and what we could do next. For example, I remember that for this case, the first idea was: ‘Whoa, it’s great! So let’s do the next step. Let’s allow the LLM to decide which candidate to send an interview invitation to, which candidate to move into the process.’ And then, you know, exchange emails, answer questions, do this kind of stuff. And then maybe even feed it with the data from the further steps in the process, from the later interviews, and then add another scoring, another evaluation based on the feedback from the interviewers. Or maybe even join it with a recording of the interviews and combine the feedback with the recording, teach the LLM… a bunch of ideas on how to develop it. So I think the future is that creating an AI agent won’t be a project that you start, finish, and then it just sits there working; it will be more like a product that evolves all the time, with people improving it, adding more things for it to do, more tools to use, making it more and more complex.
Limitations, costs, and security
Wiktor Żołnowski: And that brings me to the other question: what are the limitations?
Jakub Pruszyński: That’s a tricky question, but you can basically think of it in two categories. The first one is related to the data because, as we mentioned, the context window–so the amount of information that you can load and keep the focus of the LLM on–is limited. That’s one thing. On the other hand, the cost related to computation, to maintaining infrastructure, and so on and so forth–that’s the big factor in the development of AI.
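The context window is measured in tokens, not characters, so a common first check is counting tokens before sending a document. The sketch below uses the tiktoken tokenizer (used with OpenAI models; other model families have their own tokenizers), and the limits shown are just examples:

```python
# The context window is measured in tokens, not characters. A quick way to
# check whether a document will fit is to count tokens before sending it.
# tiktoken is the tokenizer library used with OpenAI models; other models
# use different tokenizers, so treat the numbers as approximate.
import tiktoken

def fits_in_context(text: str, limit_tokens: int = 128_000) -> bool:
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text)) <= limit_tokens

document = "example text " * 10_000
if not fits_in_context(document, limit_tokens=8_000):
    # Too big: split it into chunks, summarize pieces, or retrieve only
    # the relevant parts (RAG) instead of sending everything at once.
    print("Document does not fit; chunk or retrieve instead.")
```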
Wiktor Żołnowski: And by the cost of infrastructure, do we mean the cost of the SaaS, like software-as-a-service that you are using, or mainly the infrastructure as in hardware or the cloud?
Jakub Pruszyński: Mainly our infrastructure, the cloud, yeah.
Wiktor Żołnowski: The question that I often hear from our clients is whether the data used by this kind of agent is secure. Can it be done in a way that keeps the data secure, so that the agent does not send it to, let’s say, OpenAI or Google or some other huge corporation like Microsoft?
Jakub Pruszyński: Here comes the good ethics of being a developer because, basically, we should be aware of how our data is processed at each step and how it is used by the LLM. So basically, if we have some kind of data which is very private, then we should probably use some kind of local LLM where we are sure that the requests are processed locally on our machine or our server. The other approach is to limit what we send to only the necessary data, not all of it. We shouldn’t send, I don’t know, all the financial reports, but maybe only some tables which include the data that is necessary to move the process further.
Wiktor Żołnowski: Yeah, and I also think that hosting your own model might be cheaper than actually paying for tokens in OpenAI or somewhere else. And by hosting your own model, I think we mean both the same: for example, using Llama from Meta or R1 from DeepSeek and deploying this kind of model on your own server.
Jakub Pruszyński: It’s great, for instance, also for cases like working offline, because a lot of software development right now is augmented by AI, and you are not always going to have access to great connectivity. In those scenarios, if you have good machines, something that you can start running with Ollama, then great, you can work whenever you want.
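As a small example of the local approach Jakub mentions, this sketch sends a prompt to a model served by Ollama on your own machine, so the data never leaves your infrastructure. It assumes Ollama is running locally with a model already pulled (for example, `ollama pull llama3`):

```python
# Minimal example of keeping data local: send the prompt to a model served
# by Ollama on your own machine, so nothing leaves your infrastructure.
# Assumes Ollama is running locally and a model has been pulled.
import requests

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",   # Ollama's default local endpoint
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

print(ask_local_llm("Summarize our internal security policy in three bullets."))
```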
Jakub Pruszyński: One more thing that is worth mentioning is that AI agents can utilize various models, and you can always choose which model is the best for the task, for the specific task that you want to perform. You can also make it cheaper by actually choosing the models that are cheaper than the most expensive ones, like GPT-4o or other reasoning models.
The future impact of AI agents
Wiktor Żołnowski: Okay, we already know what agents are, and we know what they can do. What do you think? Will agents change the landscape of the job market? Do you think that agents will replace some kinds of jobs, some kinds of people?
Transforming the job market
Jakub Pruszyński: They will probably transform the market because, as you can notice, everything is already somehow marked and touched by AI. So if you have enough skills and courage to explore this area, then for sure your skills are going to grow exponentially by using AI in a clever and creative way. So that’s one case. On the other hand, they can really bring peace and joy to some kinds of tasks because you can delegate repetitive tasks to them and they can fully solve them. And it’s awesome if you don’t want to write a tiring email and you can just generate it, pinpointing some important information that you would like to include in it.
Wiktor Żołnowski: And when someone brings up the example of writing a boring email, I always think that on the other side, there is someone who needs to read this boring email. Yeah. And most probably, that person also used some kind of AI to summarize it, as you just showed us. So I wonder when we will get to the point that we will finally realize that communication is not about sending boring emails, but rather, if we really need to communicate with others, we simply need to talk to them and just leave the email for AI.
Jakub Pruszyński: Yeah, I totally agree.
Wiktor Żołnowski: So, regarding this kind of replacement of some boring tasks and boring jobs, do you think that there are any specific sectors, job sectors, that will, let’s say, benefit the most from AI agents, or actually, the jobs that will be lost because of AI agents and automation?
Jakub Pruszyński: I truly believe that each job can be somehow augmented by AI, and thanks to that, the productivity and effectiveness of such a role may be exponentially increased. So as long as we want to learn and look for scenarios where we can apply AI and somehow delegate those repetitive tasks or tasks which are not pleasant for us, then nothing will change in this matter.
Wiktor Żołnowski: If you could give advice to someone who is facing this ‘AI is everywhere, AI is going to replace my job position, etc.’–so if you could give advice on how people can adapt to this, call it a revolution, because in my opinion, it’s the next revolution after the technological revolution and the information revolution. This is another revolution that we are facing right now. How can people adapt to it? And what I think is more important is that the previous revolutions took a lot of time, and this revolution is moving so fast that I’m pretty sure that most people haven’t even noticed it yet, that things have already changed a lot. So how could people who will at some point notice, ‘Oh my God, the world changed,’ how can they adapt?
Jakub Pruszyński: They should explore. That’s something crucial because playing with it, just testing for fun, for something super stupid like, I don’t know, ‘Here is my shopping list, generate a list of recipes for this week’ or something like that–it shows potential and leads to some conclusions and observations about AI. And most of all, it brings some kind of aspiration to explore it further and further. So from my point of view, we should really explore this matter in every possible way because not everything that is given on a plate by some kind of expert is going to apply in our case.
Wiktor Żołnowski: Okay, let’s try to dive into the future. How do you think this area of AI agents will develop in the next couple of–I would like to ask you ‘a couple of years’, but that would be too hard to answer–so maybe in the next couple of months?
Jakub Pruszyński: I would say that the current trend–that AI is an augmented part of our reality, so it increases our productivity, we can delegate some kind of task or make those tasks quicker and in a more effective way–this trend is going to keep up. On the other hand, probably a lot of AI agents are going to be created, but from my perspective, they are going to work behind the curtain. So they are going to be something that is not really visible to the end user. They are rather going to take actions and operate on data in the back office or something similar, and based on that, we’re just going to get results after two hours of some kind of research that was conducted at our request.
The evolving role of software developers
Wiktor Żołnowski: And last but not least, from your perspective as a developer, what do you think will happen with the developer role in the next months or years? Because from what I observe, creating this kind of workflow, creating this kind of AI agent, is actually pretty fast. And by that, I mean that creating an agent like the one for recruiting took a couple of days of one person’s work. Building such a system the traditional way–coding the rules by hand, automating things, connecting everything together–would probably take weeks, if not months. So how do you think the work of software developers will change, and what do software developers need to do to actually adapt to these changes?
Jakub Pruszyński: I believe that we should be more open-minded towards no-code and low-code solutions because they are great for quick proofs of concept, and we shouldn’t avoid them. We should think more in terms of delegating, in terms of creating, preparing, and transforming data for the AI, and act as a kind of orchestrator who delegates tasks to various agents or AI assistants, rather than as programmers who type out code letter by letter and create functions.
Wiktor Żołnowski: Okay, so I think that’s it. That’s all that I wanted to cover today with you. I think that we’ll be back to this topic in a couple of weeks or months when we will also make some progress with our discoveries and the utilization of AI tools. We will also share with our audience what we have learned and what we are learning. But for you, if you have any questions about AI agents or you have some ideas on how AI agents could help you in your day-to-day work or day-to-day life, and you would like to discuss it with someone who has some knowledge and experience, do not hesitate to contact us. And also, please let us know in the comments if there are any other topics around AI and agents that you would like us to cover in the next episodes of Pragmatic Talks. So thank you, Jakub, and thank you for your time.
Jakub Pruszyński: Thank you. It was a pleasure to be here. Thanks.
