Exploring the AI Frontier

Jan Argasiński’s background and work
- Diverse Expertise: Jan Argasiński has a background in media studies and philosophy but currently works in applied computer science, with a focus on computational neuroscience and game design.
- Role at Sano Center: At Sano, a center for computational medicine, he applies computer science methods to solve medical problems using patient data. He works on modeling neuronal activity and analyzing brain signals.
- Neuroscience and Computer Science: His work explores how to use computer science to understand the brain and, conversely, how to apply knowledge from neuroscience to improve computer science, for example by creating more energy-efficient systems.
Defining artificial intelligence and consciousness
- A Broad Definition: AI is a general term for anything that shows behavior resembling intelligence. Argasiński suggests a simple definition could be flexibility in solving problems, with a thermostat at the very basic end of the spectrum and a human being at the other.
- The Problem of Intelligence: Defining AI is difficult because we lack a clear, universal definition of intelligence itself. We often use humans as the prototype for intelligent behavior.
- ChatGPT and Consciousness: The discussion explores whether large language models like ChatGPT can be considered conscious. Argasiński notes that we tend to personalize systems that can communicate with us. However, he sees consciousness as a complex philosophical problem: we cannot truly know if even other humans are conscious; we just assume they are based on empathy.
- Knowledge vs. Statistics: A key question is whether ChatGPT has a true understanding or “world view.” It is built on statistical relationships between words, derived from huge amounts of text. It does not have an embodied experience of the world like humans do.
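The "statistical relationships between words" idea can be made concrete with a toy next-word model. This is a deliberately simplified sketch (a bigram frequency table over an invented mini-corpus), not how a real large language model is implemented; real models learn far richer statistics over vastly more text:

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for the "huge amounts of text"
# a real large language model is trained on.
corpus = "the pen is on the table . the pen writes on the paper .".split()

# Count how often each word follows each other word (bigram frequencies).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word):
    """Return the statistically most plausible continuation of `word`."""
    return following[word].most_common(1)[0][0]

print(continue_text("the"))  # → "pen" ("pen" follows "the" most often here)
```

The model "knows" that 'pen' co-occurs with 'the' only as a frequency count; nothing in the table corresponds to an embodied experience of a pen, which is exactly the distinction drawn above.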
Practical applications and future of AI
- A Powerful Tool: AI is described as a sophisticated tool for statistical inference from large datasets. It works very well for specific, domain-focused problems like OCR (text recognition) and medical image analysis.
- AI in Entertainment and Gaming: AI offers limitless possibilities for creating virtual worlds and personalized content. Argasiński mentions the idea of a movie or book that adapts to the consumer. In gaming, AI can generate dynamic content, like levels styled on demand, and create more interactive NPCs, as seen in mods for games like The Elder Scrolls.
- AI in Medicine: In computational medicine, AI is used to create “digital twins” of patients to analyze data, help with diagnosis, and personalize treatments. This moves medicine towards a more data-driven and targeted approach.
- Embodied AI is the Next Step: Argasiński believes a major game-changer will be embodied AI: robots that can physically perceive and interact with the world. He suggests that this embodiment is necessary for an AI to develop a more human-like, object-based understanding of reality.
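As a loose illustration of the "digital twin" idea, here is a hypothetical sketch. Every name, number, and the linear dose-response model are invented for illustration, and real computational medicine is far more sophisticated; the point is only that each patient exists as data, and a decision is personalized by fitting a model to that patient's own data:

```python
# Hypothetical "digital twin" sketch: a patient represented purely as data,
# with a per-patient model used to personalize a treatment decision.
# The linear dose-response model and all numbers are invented.

def fit_response(doses, effects):
    """Least-squares slope and intercept of effect vs. dose for one patient."""
    n = len(doses)
    mx, my = sum(doses) / n, sum(effects) / n
    slope = (sum((d - mx) * (e - my) for d, e in zip(doses, effects))
             / sum((d - mx) ** 2 for d in doses))
    return slope, my - slope * mx

def personalized_dose(twin, target_effect):
    """Invert the patient's own fitted model to reach a target effect."""
    slope, intercept = fit_response(twin["doses"], twin["effects"])
    return (target_effect - intercept) / slope

# Two "twins" share a data schema but respond differently to the same drug.
twin_a = {"doses": [1.0, 2.0, 3.0], "effects": [2.0, 4.0, 6.0]}  # strong responder
twin_b = {"doses": [1.0, 2.0, 3.0], "effects": [1.0, 2.0, 3.0]}  # weak responder

print(personalized_dose(twin_a, 6.0))  # → 3.0 (patient A needs a lower dose)
print(personalized_dose(twin_b, 6.0))  # → 6.0 (patient B needs more)
```

The same target outcome yields different recommendations per patient, which is the data-driven, targeted approach the bullet above describes.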
Obstacles and societal impact
- Hardware and Energy Limits: The current brute-force approach of using bigger models and more data is reaching its limits. Future progress will require more sophisticated algorithms and more energy-efficient hardware, possibly neuromorphic hardware inspired by the brain.
- AI is Political: The development of AI raises political questions about who benefits. High-tech is often corporate-driven, and tools like ChatGPT are trained on data created freely by the community, which then feeds a corporate product.
- Impact on Jobs and Education: AI will not replace people directly. Instead, people who use AI will replace those who do not. This progress requires people to adapt. In education, traditional evaluation methods like written exams are becoming obsolete, forcing a shift towards dynamic assessments like project work and conversations.
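The spiking neural networks mentioned above (the "neuromorphic" direction) are often introduced via the leaky integrate-and-fire neuron, a common textbook model. Below is a minimal sketch with purely illustrative parameters; real neuromorphic systems are far more elaborate:

```python
# Minimal leaky integrate-and-fire neuron: a textbook building block of
# spiking neural networks. The leak and threshold values are illustrative.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Integrate input currents over time; emit a spike (1) when the
    membrane potential crosses the threshold, then reset to zero.
    Between inputs the potential decays by the leak factor."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Inputs arrive sparsely; downstream work happens only when a spike fires,
# which is where the hoped-for energy savings over dense computation lie.
print(simulate_lif([0.5, 0.5, 0.0, 0.0, 0.6, 0.6]))  # → [0, 0, 0, 0, 1, 0]
```

Because the neuron communicates only through rare discrete spikes rather than continuous values, hardware built around this model can stay idle most of the time, echoing the brain's banana-and-coffee energy budget discussed in the transcript.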
Advice for startup founders and career seekers
- Find a Niche: Argasiński advises startup founders to find a niche where AI can be applied. The current period is like the “golden age of apps,” with many opportunities to build new products. Applied AI is something that can be done today without waiting for future breakthroughs.
- Just Do It: For those wanting a career in AI, his advice is to simply start. The resources to learn are widely available online. He encourages finding a project you are passionate about and diving in, as the technology is accessible even on a standard laptop.
Full Transcript
Wiktor Żołnowski: Welcome to Pragmatic Talks, a podcast and video series where we discuss startups, contemporary digital product development, modern technologies, and product management. I am Wiktor Żołnowski, CEO of Pragmatic Coders, a first-choice software development partner for startup founders. In this episode, we are joined by Jan Argasiński, a senior scientist at the Sano Center for Computational Medicine and a lecturer at the Jagiellonian University. Today, we’ll discuss a recently very hot topic: artificial intelligence and its role in building digital products. We’ll start by reviewing what AI is and explaining its subsets, such as machine learning, natural language processing, large language models, and OCR, together with computer vision. Then, we’ll discuss the use of AI in the gaming industry, entertainment, medicine, and other business domains. Lastly, we’ll talk about how AI may influence the future of business and product development. Welcome to the next episode of Pragmatic Talks. Today we are going to cover the topic that everyone is speaking about: artificial intelligence. So, there is no one better we could invite for this episode than Jan Argasiński, who is an expert in AI. Maybe I will start with the first question, the one that I love to ask all of our guests: who is Jan Argasiński, and what is your story?
Jan Argasiński: This one is hard to answer because I’m sort of a jack of all trades. I have a background in media studies. I am also a philosopher, but currently I work in the area of computer science, applied computer science, with a special emphasis on computational neuroscience, though I also did some stuff in the area of game design. So, a lot of things, but computation and computational things have always been interesting to me.
Wiktor Żołnowski: Perfect. So, I know that you work at SANO and you’re also a lecturer at Jagiellonian University. Could you tell us a little bit more about what you’re working on at SANO?
Jan Argasiński: Sano is a center for computational medicine, so we do stuff related to medical problems but apply methodology from computer science. So, no wet labs, no patients, only data. There are patients, but they are represented as zeros and ones in our computers, stuff like that. I work closely with the lab of Dr. Alessandro Crimi, who does things related to computational neuroscience, particularly computer vision and segmentation of MRIs, but I’m interested in computational neuroscience in general.
Wiktor Żołnowski: Can you tell us a bit more about other projects that you work on at SANO?
Jan Argasiński: I am what they call a senior post-doc, which means that I have sort of my own agenda; I need to create my own problems to solve. The interesting thing for me is applying artificial intelligence to the modeling of neuronal activity. So, I’m interested in how the brain works and how we can use methods from computer science to analyze neural data, and the other way around: how we can take knowledge from neuroscience, from neurobiology, and apply it in computer science.
Wiktor Żołnowski: It’s pretty interesting, especially the part about what we could apply from computer science to neuroscience and how the brain works. If you could tell us a bit more?
Jan Argasiński: The brain is probably the most complicated object in the universe that we know of. So, it gives you the scale of the problem. We know that the brain works, to some degree at least, as a computational device. So, it computes things; it processes information, that’s for sure. So, if we have in our direct reach an object which is that efficient in data processing, it is interesting to look into how it works, actually. The brain is incredibly complicated, so you need sophisticated methods when you want to analyze what is actually happening there, and a lot is happening. So, we have computational methods of analysis of signals recorded from the brain, and also we can try to model how the brain works, so particular parts of the brain. And neuroscience is incredibly fast in development, so in the last 20 years, it’s just incredible what happened.
Wiktor Żołnowski: Do you think it’s faster than computer science in comparison?
Jan Argasiński: I think it’s comparable, yeah. When you read papers from 30 years ago, the basics are there; but if you read neurobiological papers from 30-40 years ago and compare them with today, the methods available, the cost of the experiments, the amount of data we are dealing with, it’s incredible. It’s really incredible.
Wiktor Żołnowski: You mentioned that the brain is a very efficient computational machine, and I think that it’s very efficient, especially in terms of the energy that is used for the amount of computation that is done there, and the amount of data that is stored or processed in the brain. Is there anything that you can learn from that and already apply in computer science?
Jan Argasiński: Yeah, so if you think about it: you want to train artificial neural networks for face recognition, for example. You need GPUs, graphics cards, for that, and you run some algorithms, and it will consume a lot of electrical energy. And then I’m here. I had a banana and a coffee for breakfast today, and here I am, meeting new people, recognizing faces, learning new stuff, and all that with this relatively small device and with relatively low energy consumption. This is actually incredible, if you think about it. So, how can we learn from that? Modeling biological systems in artificial hardware and algorithms is notoriously hard to do, but we take more and more inspiration from biological systems. That is the basic idea behind artificial neural networks; we call them artificial neural networks because there are natural neural networks. So, this is a parallel, and we can look at how natural neural networks work and try to take more from there. The latest idea is probably something we call spiking neural nets, which are more energy-efficient and slightly more like natural neural networks. But we are far away from recreating or copying a brain, from having an actual brain on a chip, or anything like that.
Wiktor Żołnowski: I would say that many innovations in our history, I would even say that most of them, were based on observing nature and how nature works, and we tried to imitate it because evolution made it very efficient. So, what you are doing right now is maybe just the next step of imitating nature or trying to copy it and apply it to artificial systems?
Jan Argasiński: Yeah, but at the same time, it can sometimes be tricky, because the people who tried to create flying machines were trying to imitate how birds fly, and modern planes are nothing like that. So, sometimes you take inspiration from natural systems, but then you need to apply engineering and do things differently in order to get the result that you want. The physics of flying is the same for the plane and the bird, but the mechanics are different, so you cannot apply nature’s principles directly to modern, silicon-based hardware. You can take some ideas, you can emulate some functions, but it’s never one-to-one. This is what we should keep in mind when we think about it.
Wiktor Żołnowski: Okay, maybe let’s turn to artificial intelligence itself, AI. Let’s start with some basics. I know that many people are talking about artificial intelligence. They use various terms such as, I have a note here, machine learning, OCR, natural language processing (NLP), data mining, and many other things. They use these terms without necessarily knowing what they mean, and they also call many other things “artificial intelligence” when there is no artificial intelligence there at all. Could you explain what artificial intelligence is, as if to a 15-year-old?
Jan Argasiński: So, yeah, of course, there’s always this Feynman notion that if you can’t explain something to a child, then you don’t understand it.
Wiktor Żołnowski: You can try to explain it to our peers.
Jan Argasiński: So, the problem with artificial intelligence is that it’s a very general term. It can be applied to basically everything that acts in a way that resembles intelligent behavior, because to define artificial intelligence, you need a definition of intelligence, and this is a problem, actually, because we don’t have a very good definition of intelligence. We tend to point at things that seem to act in ways that require some kind of sophisticated decision-making or data processing, and we call them artificial intelligence. If you think about it, a very basic system of artificial intelligence is the thermostat that we have here, because it can sense the temperature in the room and adjust accordingly: it will turn the heating or cooling on or off. So, this is “intelligent” behavior. It’s not very intelligent; its intelligence is very specialized, but it is there, and it involves some flexibility. I mean, flexibility in solving some problem, not rigid programming, not just switching between states at a given threshold, but something more flexible. Maybe this would be a nice definition of intelligence. And then you have the whole spectrum from that. On the other end of the spectrum from the thermostat, you have a human being, capable of learning and so on. For us, we are the prototype for intelligence. So, when you think about an intelligent machine, you compare it to what you would do in some circumstances. You learn, you perceive; these are functions that we have. And, of course, in some respects, artificial intelligence can be and is better than we are. But then, non-intelligent objects are too. For example, your basic calculator is much better than you at doing arithmetic, but it’s not super intelligent. It’s just engineered that way. It’s a device that does stuff better than you. But when would you call a calculator intelligent? If it were able to solve problems in maybe a creative way?
Maybe in a flexible way? Maybe in an innovative way? Or if it were able to solve different classes of problems? The terms I’ve used are psychological terms. This is, again, the problem.
Wiktor Żołnowski: It’s the problem with the definition of intelligence. We as a human species have a problem defining what else or who else is intelligent. Some countries have already decided that, for example, dolphins are non-human intelligent beings that have some rights. Some other countries recognize chimpanzees and gorillas as well, so…
Jan Argasiński: There is a debate on that. Again, this is the notion of sentience. Are these sophisticated objects, organisms with what we call higher cognitive function, sentient? Or are they conscious? Consciousness is a huge problem. We usually use empathy for recognizing this. Chimpanzees are easy because we can have empathy for chimpanzees because they are basically our cousins and we can see ourselves in them.
Wiktor Żołnowski: We can interact with them physically.
Jan Argasiński: And with mammals, I can say that my cat or dog is, to some degree, conscious, because I can empathize with it. But, of course, the problem is, “Can I do it with my refrigerator?” That has some cognitive function, for example. Mine doesn’t have it, but some refrigerators will decide what to buy or will recognize your face and say, “hello” to a particular member of the family, so it has some kind of cognitive function. At which point will you decide that, “Okay, this is conscious”? This is a philosophical problem. Fortunately, you don’t have to solve this problem to do better artificial intelligence. Because, at this level, artificial intelligence is just a mechanism for a machine to serve us better. So, we don’t have to solve consciousness to create better working things.
Wiktor Żołnowski: That’s true. There is plenty of discussion about, for example, whether GPT has a personality or it’s conscious already or can be conscious, or if the next version can be conscious or not. Do you think that this kind of discussion doesn’t make any sense at this moment, or maybe we should speak about it and already prepare for what may happen, or may not?
Jan Argasiński: I think the discussion always makes sense, because it also leads to another problem. Notice that we use ourselves as a prototype, so ChatGPT is human-like, not in the sense of embodiment, but in the sense that it is capable of discussing things with us in a polite way. It can talk to you; it will respond to you. This is a very strong signal for us that something is similar to us. If you can exchange communication with something, then you will personalize it.
Wiktor Żołnowski: You can even say that, for example, these kinds of large language models, like ChatGPT, are smarter than dogs or chimpanzees or dolphins, because they can communicate with you. But… what is “they”?
Jan Argasiński: Yeah, what does it mean “smarter”? Because again, a calculator is smarter than me in solving some particular class of problems, and ChatGPT probably is smarter than we are in generating long discussions or nice-looking sentences. It writes better than most of us. Would you be able to write 20 pages of some kind of story in the style of, I don’t know, Hemingway? Would you?
Wiktor Żołnowski: After a lot of training, learning, etc., maybe I would. But it would take me a very, very long time. And ChatGPT can do this in a minute.
Jan Argasiński: So, ChatGPT is smarter than we are to some degree, but for me, the interesting part about ChatGPT is that everything is grounded in language, because this is a large language model, and it is basically a tool for creating plausible continuations of text that are grammatically correct, using statistics, frequency measurements from the really huge amounts of text that were fed into the system. For me, the interesting part is not whether it’s conscious or not, because that is another problem, and I will come back to it in a second. What’s interesting for me is: can we say that there is some kind of knowledge in ChatGPT? There is statistical knowledge about the frequencies of pairs of words and sentences and so on, but can we deduce, can we infer a world view, an ontology, from language? If I have this pen, it is an object, and with my embodied mind, I have my body and I perceive it: I can touch it, I know that this is an object, that it is inanimate, and so on. ChatGPT would know that the word ‘pen’ occurs in statistical relationships with other words, and those words describe the physics of the pen, the usage of the pen, and what the pen is. Is that sufficient to have a world view? Can we say that ChatGPT has knowledge about the world and its structure embedded in it, derived from language? This is a super interesting philosophical question.
Wiktor Żołnowski: I can imagine a comparison, though the next one won’t be a very good one, nor very pleasant to hear. Let’s imagine that we would like to teach a human this way. Take a child who doesn’t know anything yet. We constrain this child, put them in a dark room without any sounds or any other way to interact with the world, and we just provide words or text to this child and try to teach them to understand the world. This is actually how ChatGPT was taught about the world. Would this child develop consciousness? Maybe, but maybe not?
Jan Argasiński: Yeah, so from a philosophical point of view, the criterion of being conscious would be: “Is there a way to be me? A distinctive way to be me?” Again, the problem is that I can say about myself that there most certainly is a way to be me, but I can’t even decide that about other people. There is this famous thing they call the Zombie Thought Experiment. Imagine that everyone else besides you is a zombie or, I don’t know, an android or a replicant from Blade Runner; they are not conscious, but they act like humans, as usual. So, you have this information; you know that they are not conscious, but they act like people. And then you meet people, and I’m sitting here with you and I can ask: how do you know I am conscious?
Wiktor Żołnowski: A really good question.
Jan Argasiński: Yeah, how do you know that? How do you know I’m not a zombie, I’m not a replicant? A Blade Runner problem. How do you know that? The answer is that you use your empathy. So, you think there is a way of how I am.
Wiktor Żołnowski: I can ask a question about something provocative. The Voight-Kampff test…
Jan Argasiński: This is something, but actually what you do, you reverse Voight-Kampff. You know that you are conscious because you feel it internally. You know that you are conscious and then you think, “Okay, this is a person, it looks like me, it speaks like me, and so on, so it probably also is conscious.” Again, you are the prototype, and you refer to yourself all the time. So, here we have the same problem: ChatGPT behaves in a way that misleads us to think that it’s conscious, but how would you know? You can’t just ask it, because it’ll go, “Sure I am, why not?”
Wiktor Żołnowski: It’s trained to answer in a way that makes you believe it is. And the other way around, any person can say, “Okay, I’m not conscious.” I can say, “I’m not conscious”; what can you do with that? How do you verify it? So, consciousness is a problem that I wouldn’t put a lot of stress on. You already mentioned that AI tools, smart tools such as LLMs like ChatGPT, have been made to solve particular problems. And I believe this is the issue with the way people perceive AI right now, because ChatGPT is very popular. Everyone speaks about it; everyone knows how to use it, or at least most people have already played with it. Some people are using it for work, but many people say, “AI will never replace us; it will never replace our jobs,” or something like that, because they base these assumptions on their interaction with ChatGPT, which is, as we mentioned, only a tool for conversation, a tool that was created only for that. If you try to do other stuff with it, it sometimes works; for example, some creative work, or a lot of coding, since you can also program with it. But it wasn’t a tool made for that, so it won’t be, well, maybe not never, but at least for now it won’t be better than humans in areas it wasn’t trained for. So, maybe let’s talk about some artificial intelligence applications other than LLMs.
Jan Argasiński: Artificial intelligence is a tool. It’s just a fancy way of doing statistics, and when you have a sufficient amount of data, you can infer stuff from it. And you can do it in a very sophisticated way, so that it looks like human-like intelligence. And, of course, we can apply this tool to a particular domain, and it works very, very well, as we all know. For example, you mentioned OCR, the recognition of written text. It is a problem that is very hard to solve with rigid methods. If you write a program that tries to analyze some photos and find the letter S, it will try to seek a bend on one side, then on the other side, then a curve, and another curve, and then decide whether it’s the letter S. This is notoriously hard to do. With artificial intelligence, with deep learning, you can actually teach the network, based on examples, to recognize the text. And in this area, it works very well. The situation now is hugely different from 10 or 15 years ago, because we now have more sophisticated methods. And, of course, it is also applicable in medicine. You have segmentation of medical images, you have different helpers for diagnostics, so you can do a lot of that stuff. So, the problem we are now facing is: can we use these general programs, basically software, to solve particular problems? Can we create a version of ChatGPT that has very good, grounded medical knowledge, so it won’t just give you a plausible-sounding description of your illness? This is an important part: people, don’t diagnose yourself using ChatGPT! Please, no.
Wiktor Żołnowski: Try it, it’s not so bad.
Jan Argasiński: It is horrible. It’s horrible and it’s life-threatening, so don’t do that. So, the question is: can you create a version of ChatGPT that has real knowledge, like medical knowledge? And it touches on the problem that I told you about before: does it have an actual representation of ideas, of objects? The short answer is that it doesn’t. Artificial intelligence in general is very good at solving particular problems. So, if you create a domain-specific artificial intelligence, the applications are basically limitless… up until the moment you need very high flexibility. Or online learning; online learning is also super hard to do, for computational and energy reasons. This is the problem, because if you want a system that reacts in real time, you can’t depend on pre-learned systems all the time. This is why autonomous cars don’t work very well right now. But they’re still working well enough; as a support tool for drivers, they’re quite okay.
Wiktor Żołnowski: But full autonomy?
Jan Argasiński: Well, maybe not here in Poland, but in the US they are driving alone, and there aren’t so many problems with them.
Wiktor Żołnowski: Of course, there are some; nothing is perfect, but still. Yeah, they require a lot of work, a lot of data, a lot of learning.
Jan Argasiński: Again, another problem is with the hardware. Because there is a limit to how much you can calculate and how fast you can do it, and how much data you can feed into the system and so on. This limit is closer than we think, so this will be another thing to solve. How to process all this data for our intelligent systems.
Wiktor Żołnowski: So maybe let’s just try to target some topics where AI is already used. Recently, Apple announced their Apple Vision Pro headset, which is the AR or VR device that they are going to release pretty soon on the market. What is your opinion on that, and how do you think AI is used there or will be used there?
Jan Argasiński: Of course, I don’t know about the not-yet-released hardware by Apple, but I’m interested, because they are promising new stuff. When it comes to virtual worlds, I think the good solutions are already here, because there you don’t have to adjust for reality. In medicine, you need to be very careful what you do and stay grounded in your knowledge, your data, and your actual live patient. If you remove that limit and you have a virtual world that you just need to creatively fill with narrative structure and objects, imaginary objects, then the options are limitless. So, I’m still waiting for an actual metaverse: not Metaverse the company, but the metaverse as actual worlds that you can explore, like the one described by the writer who coined the term, Neal Stephenson. Yeah, I’m waiting for it. This would be interesting.
Wiktor Żołnowski: I have to ask that: don’t you think that we already live in a simulation?
Jan Argasiński: Yeah, why not? I’m fine with that.
Wiktor Żołnowski: Even if we do, we cannot do anything really.
Jan Argasiński: Yeah, so the counter-answer is, the counter-question is, even if so, what does it change?
Wiktor Żołnowski: And doesn’t the progress people have made in creating virtual reality, in imitating the real world so well, make us think that creating the kind of world we live in might be possible for someone who has developed more computing power and had more time to learn how to use it?
Jan Argasiński: In very, very, very high-level theory, in theory, it is possible, because why not? All elements are there, but from a practical point of view, how would you do that?
Wiktor Żołnowski: Also, why would you do that? What would be the reason? The idea is that this virtual world could be better, more equal, and so on, like Utopia basically.
Jan Argasiński: We have tried it a couple of times in the real world, so what makes us think that we wouldn’t f*** it up?
Wiktor Żołnowski: Yeah, but in the real world, we have limited resources, so you can’t give everything to everyone in there, and in the Matrix, you could.
Jan Argasiński: That’s true.
Wiktor Żołnowski: So, this was the problem with the first Matrix: why isn’t it perfect? The creators came up with the answer that when you build an actual Utopia, where everyone has everything, it becomes problematic for people; something bad happens to humanity. But we don’t know that; this is just a gimmick the creators used to explain why the Matrix is imperfect. We could try, or maybe someone else did it right and here we are. We will see, but hopefully, we will manage to be here at a time when virtual worlds, virtual realities, will be like you described.
Jan Argasiński: I’m fine with the idea that we live in a simulation, but if there are creators of the simulation, I would just kindly ask them, maybe skip cancer or children dying or hunger, because I can’t imagine how that is a good thing for anyone.
Wiktor Żołnowski: Exactly.
Jan Argasiński: And thinking that they are artificial creations without consciousness, just bots created for us to have a measurement of how good we are; it’s an ethically horrifying concept.
Wiktor Żołnowski: Knowing that would be even worse. Okay, so maybe that’s proof that…
Jan Argasiński: We went far with this.
Wiktor Żołnowski: The discussion became a bit philosophical, but that’s also good.
Jan Argasiński: Yeah, we can even stretch this and ask ourselves about medicine. Because when we apply virtual twins of patients, when we use big data to analyze cohorts of patients, what do we do exactly? We work on a virtual entity. This is another problem I have, something I’ve thought about at some point in this paradigm of computational medicine: they are not patients anymore; they are not people.
Wiktor Żołnowski: They’re just data.
Jan Argasiński: Data on my computer. I have some images and, of course, it’s super important for us in SANO to think that at the end of this is the well-being of actual people, but when you do this science part, you don’t deal with real people, like they do in hospitals because they have actual patients that they need to care for, they need to empathize with, and we deal with data. So, for me, the solution was to think that at the end of the day, we are doing something to help these people that are anonymous to us.
Wiktor Żołnowski: What about other AI applications, like for example in movies and CGI?
Jan Argasiński: Yeah, again, Neal Stephenson, one of my favorite writers: he had this idea of a book that adjusts to its reader. It was for teaching purposes in his novel, which is Victorian cyberpunk; I recommend it, it’s super interesting. But this idea of movies or books that adjust their content to the consumer is really interesting. CGI doesn’t bother me anymore; we can do basically anything. You need to be creative to think of something, but you can show anything. For me, it would be interesting to have my own version of a movie, made for me. The idea is that you could adjust it, personalize it. This is, again, a huge thing for us at SANO in terms of medicine, where the data-driven approach allows you not just to create algorithms for treating diseases in general, but to adjust treatment to particular patients or groups of patients, in a more targeted way. It would be interesting. And in creative content, it is already possible; you could actually do it. Right now, it’s only a problem of costs. You need a delivery platform for that. So, this is an idea for a startup, for you; if you do it, remember about me. You could create a personalized delivery platform for any type of creative content.
Wiktor Żołnowski: Actually, we have clients with whom we’re working on something similar, who see this potential there. And we also have clients who are working on medical projects or with whom we are working on medical projects, where we use this digital twin idea and we’re applying artificial intelligence for diagnosis and also for choosing the right treatments for these people or changes in their life and other stuff.
Jan Argasiński: Yeah, so this will be huge.
Wiktor Żołnowski: It already is. There are people who are already using it, so it’s just a matter of time before that’ll go to the mainstream and we all will be using it. Also, sorry doctors. Now, we will delete it, but you will need to adjust your methods to the modern world.
Jan Argasiński: Yeah, well, it was always that way.
Wiktor Żołnowski: Whenever I hear that AI will replace us, I say, “Maybe not all of us, maybe not even most of us, maybe not even us, but some types of work.” And I was also saying that in most cases, it won’t be AI that replaces us; it will be people who use AI who replace those who are not using AI, who are not adjusting. That’s the risk of progress, and people who are not adjusting their work and their life to how the world is moving forward will simply be left behind. So, later they will have to adjust, at some point.
Jan Argasiński: Yeah, so this is, of course, the huge problem about who benefits, because of course, it’s always when it comes to high-tech, it is corporate-driven. So a lot of people have this problem that ChatGPT is not community-driven; it’s not for the people, by the people; it’s corporate.
Wiktor Żołnowski: It was supposed to be, as “Open AI”.
Jan Argasiński: Yeah, so it’s called “OpenAI,” and this is funny. So, this is a problem and, of course, we should do something to adjust for this. It is a corporate product, but it was built on data that was freely available, given voluntarily by people. It’s not always in your license agreement. For example, this code-writing software that we have right now, which writes wonderful lines of Python and makes sarcastic comments about your code–I’ve seen it; a sarcastic code review from AI is something new in my life. These tools were trained on data that people created, that people gave to the community for free–for example, from GitHub or Stack Overflow. GitHub was based on Git, which was created by Linus Torvalds. He’s not very amused by that. It was meant to be used for creating a better Linux kernel, and now it’s used to feed a corporate product.
Wiktor Żołnowski: As I said, this is the question of who benefits the most from that. At the end of the day, it will be the corporations, but I’m still pretty optimistic, because whatever they do, actually everyone benefits from it. Of course, some benefit more, and that’s another problem–this disproportion in the world. But still, I believe there is something we can all benefit from.
Jan Argasiński: Yes, of course, but this separation between the people who benefit and the rest of the world is getting stretched.
Wiktor Żołnowski: Recently, we’ve seen the rise of populism, for example, among other things.
Jan Argasiński: AI is political. This is something that not a lot of people talk about. This is political.
Wiktor Żołnowski: I think it’s very interesting. Okay, let’s go back to AI itself and leave the politics to the politicians. You’re connected to the gaming industry.
Jan Argasiński: Yes.
Wiktor Żołnowski: And so, I’ve recently seen this YouTube video where someone showed how they actually connected ChatGPT to a game from the Elder Scrolls franchise, and they used ChatGPT as an NPC in the game. It told a story that was fully generated, and it remembered the context, so when you came back to this NPC–a non-player character, for those who are not gamers–this character actually remembered what it had talked about with you, so you could follow up on the conversation. It was taught some basic data about the game so it could actually help you play the game and improve the story. What do you think about it?
Jan Argasiński: I think it’s wonderful, and of course, I’m always happy when I see bottom-up innovation. This modder has this beloved game, which is about 20 years old.
Wiktor Żołnowski: I remember I was a kid when I was playing that.
Jan Argasiński: So, they revive it constantly using state-of-the-art technologies, and it’s getting better and better. So, it’s wonderful. Also, these generative tools for creating content are easier to build and easier to stabilize as a working product than strictly engineering or medical applications, and we will see a lot of that. I’m supervising two master’s degree theses that involve applying GANs–generative adversarial networks–to create content within a game.
Wiktor Żołnowski: This is almost like the books that are written for the reader.
Jan Argasiński: Yes, and style transfer also. You can say, “I want to play this level in the style of 19th-century Gothic horror,” and after like 10 minutes, you have a custom-made level with assets in that style. So, this is something you can do; single master’s students at the Jagiellonian can do it.
Wiktor Żołnowski: Wow. That will change the gaming we know. Okay, so, in terms of games, are there any ethical considerations we should take into consideration when we are using AI for gaming?
Jan Argasiński: Yeah, so, should you kill avatars? Should you kill NPCs in a game? If you think that ChatGPT is conscious, then think about these NPCs in Cyberpunk. So… no, I don’t consider it a real problem. Of course, the ethical issues here are: if you use a tool, it will influence you and your behavior. I’m not saying that if you like doing bad stuff in games you will do bad stuff in reality. But you can also do ethical training within a game. We had games like This War of Mine, which pose an actual ethical problem within the game, and they are super interesting because of that. This is maybe not AI-related in any way, because any type of good fiction could force you to reconsider your life choices.
Wiktor Żołnowski: I meant that this kind of automatically generated content is somehow out of control, and maybe it can be harmful at some point?
Jan Argasiński: The general problem with AI, this machine learning stuff and so on, is that we don’t know what exploits can occur in the system–what can go wrong with it, basically. Because if you have old-fashioned engineering, then of course it’s not ideal, but you have very particular ways to ensure that the technology is safe; there are different methods to do that. And with artificial intelligence, it’s not that easy, because if something doesn’t work very well, then the usual solution is to add more data until it works better, so we don’t know exactly what’s going on there. And, of course, there are methods to try to understand that–embedding different methods in the algorithm that will allow us to infer what is actually happening inside the program, the application–so we can do that to some degree, but usually, we don’t know. So, this is a problem. And, of course, you can be sure that if you give something to the general public, they will exploit it. We’ve seen spectacular examples with chatbots released by different companies, like Tay, the famous chatbot by Microsoft. It went very bad, very fast. People just exploited it, and they will do it with everything. Gamers consciously exploit; they look for exploits within game systems to do different stuff, and if they have very open, very flexible systems, then they will do a lot with that. We can expect the unexpected, basically.
Wiktor Żołnowski: So, maybe let’s talk about the history of AI. Do you know how that started in our history?
Jan Argasiński: I’m not a historian. But, basically, the idea of artificial intelligence has been present from the beginning of computer design, because Turing, the father of computer science, whose ideas were fundamental for the development of actual computers–computers in the modern sense–coined the Turing Test, which is a test for intelligence.
Wiktor Żołnowski: Actually, I remember that he assumed this progress would be much faster than it actually has been, and that during his lifetime he would have to use the Turing Test to check whether he was talking with a computer or…
Jan Argasiński: Yeah, the borders here move all the time because when there were the first computers, then the criterion for intelligence was: is it able to give you a coherent response to a textual command? And right now, it’s an easy thing to do. And so, the border moved, and we are always moving these borders, probably until we reach human-level intelligence. And this is a problem, what does it mean? And then, we will go somewhere else, and we won’t have the prototype to compare anymore, and this will be a problem because we will have an alien intelligence that is not human-like, and what will we do about it? So, this is a problem. But, from a historical point of view, artificial intelligence is parallel, basically, to computer science. Every time when there was a development in computer science, something interesting happened with this idea of intelligence because this was always a goal, I suppose, of computation.
Wiktor Żołnowski: So if you could, in a few sentences, describe where we are with artificial intelligence right now and what, in your opinion, will be the very next step; where will we be in the next couple of years, let’s say?
Jan Argasiński: Okay, so we’ve achieved a level where artificial intelligence is very spectacular and applicable to everyday life, so we see the boom of artificial intelligence. We can chat with it; we can generate images, movies. And this is something that is appealing to everyone, because it’s not a super-specialized, domain-specific application of so-called intelligent algorithms; this is spectacular in terms of being human-like. ChatGPT behaves like a human; you can chat with it. And from that, I suppose, the problem is that we are probably at the threshold of a hardware limit. Because there is only so much you can do with these classical CPUs and GPUs, and we discussed the problem of energy. So, it will become much more complicated to build ever bigger models, and so on. So, we will probably have to get more sophisticated in terms of algorithms and hardware. And one of the options is to mimic biological systems and to create something like a less artificial, more natural-like artificial network. I don’t know what to call it, but we will go that way.
Wiktor Żołnowski: So you’re assuming that the next progress will be more in hardware–without new hardware, it will be hard to actually progress.
Jan Argasiński: I think there is a lot to be gained here by just building better algorithms, because right now, ChatGPT is brute force. It’s not significantly better in terms of computer science, software engineering, and so on. It’s just hugely bigger; it’s much, much, much bigger than what we had 10 years ago. And there are things to be done here in terms of new algorithms–spiking neural nets or different stuff that people will come up with–but at the same time, we’ll probably need better hardware. More energy-efficient, neuromorphic hardware is one idea, but I suppose there will be other ideas also. Hardware is a topic for a separate episode.
Wiktor Żołnowski: Yeah, it would require discussing the rudiments of how the CPU works and how the neuron works, and what a perceptron is, and it should be more educational because it becomes very complicated very fast. You can’t just discuss it casually. What do you think will be, or maybe already is, the impact of AI–of all of these tools that we have–on things such as our society right now, or the economy? We already spoke a little bit about politics, but what is the impact of AI on our day-to-day life, what do you see right now, and what do you think will happen in the next couple of years?
Jan Argasiński: If I knew, I would invest in some areas. But artificial intelligence is present everywhere right now and there will be more of it everywhere, but I don’t know about the future, to be honest. There is a lot to be done here, so we can expect more of this. More ChatGPT, more applications, more robotics, conversational agents, content on demand, and so on. And, I think, it will be very soon. Right now, it’s in the freezer room, waiting to be released.
Wiktor Żołnowski: To some extent, it’s already there, but people are not aware of it yet, or all those tools are not as good as they could be. But for example, working on new software projects or products is much easier than it used to be, even a year or two ago. So, this progress is really moving pretty fast. I agree that we will see more and more AI tools in our day-to-day life, and we’ll have to use them, because if we don’t use them, we’ll be left behind.
Jan Argasiński: So, in terms of our everyday life, I think a huge thing would be an embodied AI. Not only software agents, like ChatGPT, that you can write text and it will respond to you, but maybe a robot that walks around and does things. It has perception, it has embodiment, and you will be able to interact with it physically. And this is hugely important for us, for the systems, but also there are theories of an embodied mind. They suggest that it will also be important for these systems to learn from their surroundings. So, that will probably be the game-changer that I can think of: semi-autonomous and then autonomous robots around.
Wiktor Żołnowski: If we are speaking about this type of AI–robots, interaction with AI–what’s your opinion about AGI (Artificial General Intelligence)? I mean the next breakthrough, the point where we as people, as a human species, will definitely be able to say, “Okay, we created an AI that is conscious,” and we can, to some extent, determine that. Do you think that this will happen in the foreseeable future, or do you think that it’s impossible or that there are some obstacles in the way?
Jan Argasiński: I expressed my doubts about the problem of consciousness, so you would have to define what you mean by “General.” If you mean ‘conscious,’ then I don’t know, because I have no good understanding of that term. I have an operational understanding, different definitions that I’ve read, but I don’t know if any of these definitions are better.
Wiktor Żołnowski: There are a couple of them, and they aren’t always consistent with each other.
Jan Argasiński: But if you think, in general, something that can generalize, so it can do everything, anything…
Wiktor Żołnowski: Let’s say, a human-like intelligence.
Jan Argasiński: Yeah. So, I think for that you will need to have this embodied cognition. And I can imagine that there is, for example, a robot that has some pre-trained knowledge, and it can even be ChatGPT-like; you can just create different associations, statistical connections, and so on. But then, if you give it perception–sight, touch, and so on–this is the situation where you could probably get something close to understanding the world in terms of objects. And this is the game-changer. Because you mentioned this thing about a person kept in a dark room. So, there was this thought experiment in philosophy (Mary’s Room), where a girl was raised in a totally black-and-white environment and she never experienced… This is a thought experiment, so let’s skip the problematic part; she was raised in circumstances where she only perceived black and white, but she was very interested in colors, so she read everything about colors that you could read. She knew everything about them: the physics of color perception, paintings and art, and so on. But she only read about it, never experienced it. And then one day the doors open and she leaves the room, and she sees the colors. And the question is: does she gain any new knowledge from that? And if you say “yes,” she gained some new qualia, some new qualities from perceiving the colors, then you can draw a parallel story with a robot. ChatGPT is an app that knows a lot of stuff, but only in terms of language–and not even that, but let’s not go into it, because ChatGPT is not that sophisticated. But it could be, let’s say. After that, if you give it a body and you release it into nature, into the wilderness, so it could experience stuff and build this theoretical knowledge, these statistical relationships, around actual experience, will you get consciousness then? I don’t know, because I don’t have a definition of consciousness, but you will get something very close to that. And it will be very interesting to see.
And I think we could see it, if we stay healthy enough.
Wiktor Żołnowski: We are living in a very interesting time.
Jan Argasiński: Oh yeah.
Wiktor Żołnowski: And more is to come. So, can we teach AI to feel?
Jan Argasiński: Yes. So, what are emotions? What do you think? What does it mean to have an emotion?
Wiktor Żołnowski: Well, from the human perspective, it’s a very complex problem, because an emotion is part of the state of our mind, but it’s also part of the state of our body, hormones, and other stuff. So, it’s very hard to define it in only one dimension, especially when we talk about artificial systems. Actually, we can measure emotion through MRI scans and other measurements of the brain, but still, it’s only measuring the electrical state of our mind–or our body, actually–and it doesn’t mean we are measuring the emotion as a whole.
Jan Argasiński: A very, very nice answer. I only don’t like the word “mind,” because I guess it’s…
Wiktor Żołnowski: Brain.
Jan Argasiński: Yes, brain is better. The important thing is that emotions have physiological correlates. So, this is something. When you feel something, you actually feel something in your body. And one of the theories that we used in our research to operationalize this affective stuff is the notion that emotions are appraised bodily changes. So, something is happening to you, but then you have some context to interpret it, and you frame it as something. For example, if you are stressed on a roller coaster and your heart rate, galvanic skin response, blood volume, tension, and so on skyrocket, it’s fine, because you are on a roller coaster, and it’s even pleasant for you. But if you have the same set of physiological reactions randomly during the day, then you will call an ambulance, because something very bad is happening. So, the context is super important here, and the problem is, when you ask, “Can artificial intelligence have emotions?”, you’re asking, “What would be the equivalent of this in such a system?” In an embodied AI, you would have an appraisal of the internal state of the hardware.
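This appraisal idea–the same bodily signal, framed differently by context–can be caricatured in a few lines of Python. The thresholds, signal names, and labels below are invented purely for illustration; they are not from any research mentioned in the episode:

```python
# A caricature of "emotions are appraised bodily changes": identical
# physiological readings receive a different label depending on context.
# Thresholds, signal names, and labels are invented for illustration.

def appraise(heart_rate, skin_response, context):
    """Label a bodily state given its situational context."""
    aroused = heart_rate > 120 and skin_response > 0.8
    if not aroused:
        return "calm"
    # High arousal is pleasant on a roller coaster, alarming otherwise.
    return "thrill" if context == "roller coaster" else "alarm"

print(appraise(140, 0.9, "roller coaster"))  # identical readings...
print(appraise(140, 0.9, "desk job"))        # ...different emotion label
```

The point of the sketch is that the readings alone never determine the label; only the appraisal step does.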
Wiktor Żołnowski: Like, for example, if something is broken in the robot. So, do you think that this robot or the AI that is in this robot can feel it, somehow? Or can we teach it to feel it?
Jan Argasiński: So, this is the problem. There is this book titled “Simple Experiments in Artificial Psychology,” I suppose. And you can build very simple systems. For example, I did it in my classes in affective computing; I created a robot–a very simple walking robot–with a light sensor, and I told the students that this robot hates light: if you put it in a light source, it will beep loudly and move into the shadow. So, I say, “Okay, it’s afraid of light.” Can we say that? Because the system is super simple. There’s no actual artificial intelligence of any kind there.
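The whole “light-fearing” behavior fits in a few lines. This is a simulated stand-in with invented sensor values and threshold, not the actual classroom robot:

```python
# The "light-fearing" robot as a bare stimulus-response loop: no model,
# no internal state, and certainly no feelings. The sensor scale and
# threshold are invented for illustration.

def robot_step(light_level, threshold=0.5):
    """Map one light-sensor reading (0.0 to 1.0) directly to an action."""
    if light_level > threshold:
        return "beep and retreat into shadow"
    return "keep walking"

for reading in [0.1, 0.7, 0.4]:  # a short walk past a lamp
    print(robot_step(reading))
```

Everything the students read as “fear” lives in that single `if` statement.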
Wiktor Żołnowski: It’s just a sense and response algorithm. So, does it actually feel? But, we can say that it doesn’t like light because it moves away from it.
Jan Argasiński: Yeah, it doesn’t “like” light; we use the emotion, the feeling of fear, but I believe this is just a name that we provide. You said that this robot fears the light, or doesn’t like the light, but the question is, from the robot’s perspective, would the AI name it the same way, or perceive it the same way? I think we don’t know.
Wiktor Żołnowski: Yeah, so we don’t know.
Jan Argasiński: But again, the theory says that this is how our emotions arose. We had these simple reactions, these simple correlations between stimuli and reactions, and they got sophisticated because our organisms got sophisticated; we have very complex systems for monitoring our bodily states, and affects are just systems that evolved to protect us, basically. Because there is this condition where people don’t feel pain. And usually, those unfortunate people die very early because they…
Wiktor Żołnowski: Don’t have an indication that something dangerous happened to them.
Jan Argasiński: They cut themselves, they break their bones, and they keep running, because there is no internal monitoring system. We, on the other hand, have this system that makes you feel pleasure or displeasure, happy or sad, and it’s all based on this function. So, the question is, of course, how would you model it in an artificial system? Of course, you can model it by creating an avatar in a video game that will do what you would do in a particular condition. So, if you scream at it or you push the avatar, this humanoid avatar, then it will cry, for example. But does it have an internal state of sadness? I wouldn’t say that.
Wiktor Żołnowski: I think we are coming back to the question about the consciousness of AI feeling.
Jan Argasiński: So, this is a problem. On this very simple layer, yes, avatars can behave as if they had emotions, and we can do it. We can build systems that will monitor our bodily states and try to infer what state we are in, with some probability. It’s doable, but from my experience–actual industry experience–it works much better when you can provide context. So, when we created an application, a virtual reality app for training, and we got some physiological readings, we knew that the person was in distress, in virtual distress, at that moment: it was being attacked or something. Then we said, okay, this is fear, and it usually was, because we asked afterwards, “What was your state?”. But people can also confuse their state… I have a nice anecdote about that, but I don’t know if you have time.
Wiktor Żołnowski: Of course, we have time.
Jan Argasiński: I saw Lisa Feldman Barrett, who wrote the book “How Emotions Are Made,” tell this wonderful anecdote about when she went for a dinner–basically a dinner, not a date–with a member of her lab. She was a PhD candidate then, and she didn’t even like the guy, but he persuaded her to go. And then, during the date, she felt funny; she got butterflies in her stomach, her heart rate was rising, and she thought, “Okay, maybe I’m starting to feel something for this guy. I mean, there is something.” After the dinner, she came back home, vomited, and ended up in bed with a fever, because she got sick–she had the flu or something. So, she had confused the first symptoms of flu with actual romantic engagement. Can you be more wrong about your emotions? This is stuff you shouldn’t be able to confuse, but it’s context-dependent. It was the context that gave her the idea that she might feel something for this guy. So, embodiment again–embodiment for robots–and we will see. Like Boston Dynamics.
Wiktor Żołnowski: Yeah, they’re on a good way to do that. But I believe that it doesn’t need to be a humanoid robot; it just needs to be something that is able to interact with the environment.
Jan Argasiński: Yeah, it has to have perception.
Wiktor Żołnowski: The more senses we provide, the better the feedback will be.
Jan Argasiński: Perception, cognition, and complexity. These three things, and then you will have it.
Wiktor Żołnowski: We already spoke a little bit about philosophy, and I know you’re passionate about philosophy. So, I wonder, from your perspective as a philosopher: how is AI impacting modern philosophy?
Jan Argasiński: The favorite tool of philosophers is usually the thought experiment–different kinds of thought experiments; the crazier, the better. Some of them just materialized. You can now test ideas that were just thought experiments some time ago. Applied ontology, for example–what exists and what are the modes of existence–and you basically have a playground for this now. You can test different stuff. Or epistemology: what is knowledge, what does it mean to know something. So, you can now ask the serious question of what it means if ChatGPT knows something. Does it have a representation of an object? This is philosophy of language. This is something that philosophers have done for a century. So, it greatly enhanced the experimental possibilities of philosophy. Philosophy, in some way, became an experimental science, which is unexpected.
Wiktor Żołnowski: I know people who don’t call philosophy a “science” at all, but still, right now we are critical.
Jan Argasiński: So, I don’t mind; if they don’t want to call it that, then don’t.
Jan Argasiński: But it was first philosophy; then you had natural philosophy, and from that physics, then chemistry and biology. Then you have epistemology, ontology–still in philosophy–and some parts of that became experimental, particular fields of knowledge. So yes, definitely. You can argue that computer science is an applied philosophy of language, in some way.
Wiktor Żołnowski: Since you’re a lecturer at the university–or maybe not only at the university; maybe you’re doing this in other areas as well–do you have a chance to apply AI for teaching or for training people?
Jan Argasiński: Not during the classes, because–I don’t know if I can say it out loud–universities are inherently medieval institutions.
Wiktor Żołnowski: I once heard that the university is this building that is surrounded by reality. Neal Stephenson actually wrote a whole book about this. It’s called Anathem (the Polish title is “Peanatema”). They put all the scientists into one giant building, closed it with gates, and released a few of them every 100 years. It’s a very interesting book–a scholastic model of what you’ve said.
Jan Argasiński: I don’t know; I just noted it for future reading. So, I didn’t apply it during my classes, but I was part of an industry project, an industry grant, where the goal was to create a training application in virtual reality using affective computing. It was for training so-called first responders–firefighters and so on–and we used something called Bayesian Knowledge Tracing. I don’t think it would be very beneficial to explain in detail how the model works, but basically, from the behavior of users in the virtual system–different data points you can collect, also about their physiological state–you can create an inference engine that will recognize the level of their knowledge, skills, and abilities, and map it to a particular chart of requirements.
Wiktor Żołnowski: No more exams, no more grades.
Jan Argasiński: And it’s Bayesian, so it’s statistical, and the more data you feed it, the more precise the results you get. So, the more they played the game, the better a model of their knowledge we had. If someone wants to Google it–Google Scholar it–I have a paper on that. The framework is described there.
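The core update behind Bayesian Knowledge Tracing is small enough to sketch. This is the generic textbook form of the model with invented parameter values, not the framework from the paper mentioned here:

```python
# Generic Bayesian Knowledge Tracing step: maintain a probability that the
# trainee has mastered a skill, and update it after each observed attempt.
# The slip/guess/learn parameter values below are invented for illustration.

def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """Return the updated mastery probability after one observed attempt."""
    if correct:
        # Bayes: a correct answer comes from mastery (no slip) or a lucky guess.
        posterior = p_know * (1 - slip) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        # An error comes from a slip despite mastery, or from non-mastery.
        posterior = p_know * slip / (
            p_know * slip + (1 - p_know) * (1 - guess))
    # The trainee may also have learned the skill during this opportunity.
    return posterior + (1 - posterior) * learn

p = 0.3  # prior belief in mastery before any observations
for outcome in [True, True, False, True]:  # observed task results
    p = bkt_update(p, outcome)
    print(round(p, 3))
```

More observations sharpen the estimate, which is exactly the “the more they played, the better the model” point above.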
Wiktor Żołnowski: We will link to the paper in the description of this video. That sounds amazing; this is something we should read about and be aware of, because I believe it will change the whole of education soon. Not only education, but also certification and that kind of thing, because right now acquiring some certificates is extremely easy, especially with the support of ChatGPT. But this way of testing, of assessing real knowledge–the real ability to use that knowledge in a real or virtual situation–this is something huge.
Jan Argasiński: For those who want to look it up, it’s sometimes called “stealth assessment,” because it’s hidden from them. They know that what we collect will be used for assessment, but they don’t know which part of the simulation is used for it. When you asked if I use AI in my classes: I didn’t and I don’t, but I had to change how I evaluate students, because exams and written forms don’t make sense anymore. So, I don’t do exams anymore; I don’t ask them to write stuff. Everything is assessed dynamically–projects and such.
Wiktor Żołnowski: Because all of this idea of exams and testing and other stuff was so boring for me then.
Jan Argasiński: The only acceptable form of an exam for me was a direct conversation, and it took a lot of time, because when you have 30 or 40 people and one hour for each… I did it in the past; it’s exhausting. I don’t do exams anymore. I don’t believe in them. If my Dean is watching this, I hope she will.
Wiktor Żołnowski: We already talked a little bit about this, but what do you think are the major obstacles on the way to the future development of AI right now?
Jan Argasiński: I think it is this efficiency problem, because we need to stop brute-forcing stuff; more data is not the answer anymore. Of course, I’m not claiming that all AI is just brute-forcing more data through a very simple algorithm–it is very sophisticated–but still, we can do better. I believe we can do better. So, some of the obstacles are of a hardware nature: energy efficiency, and the amount of data we can process, because even copying–I mean just transferring data from device to device–has now become a huge problem, because you have petabytes of stuff.
Wiktor Żołnowski: A problem of time, of bandwidth, of energy that you need to actually copy or transfer it.
Jan Argasiński: I recently found out that it’s faster for me to just go to the lab, copy the data onto an SSD drive, bring it to my office, and work on that, than to transfer it over the network. Of course, it’s a very minor problem, because I don’t deal with that high an amount of data, and you can get faster internet; but in terms of the whole industry, there are limits, and we are seeing them. Still, people are working on that, so we don’t know. I don’t see a clear limit–an end line, an endgame–in the sense of an inherent limit that we cannot cross. There are engineering issues, technical issues, algorithmic issues, but I don’t see the line where “okay, behind that wall, we can’t jump.”
Wiktor Żołnowski: We are actually limitless.
Jan Argasiński: The speed at which we move is huge, so I can’t see that, but maybe it’s there and maybe we will hit it very fast.
Wiktor Żołnowski: That’s true. Actually, the progress that we’ve made in the last, let’s say, four years is amazing. Four or five years ago, when I was talking about this kind of AI with some wise people, they were like, “No, we are not there. There is nothing close in the foreseeable future. Maybe someday in our lifetime we will see some AI tools in practice, but it’s still theoretical.” Four years later, we have ChatGPT and many other tools for generative AI and other stuff.
Jan Argasiński: So, the end line is beyond the horizon, but the horizon can be 10 minutes from here, because you don’t know that.
Wiktor Żołnowski: That’s true.
Jan Argasiński: We are moving fast.
Wiktor Żołnowski: So, what was the recent breakthrough in AI development that you were the most excited about, or that impressed you the most?
Jan Argasiński: I think it's not the quantitative changes but the qualitative ones. I'm more impressed by the neuromorphic stuff: the idea that we should change the very substrate of the whole thing. We need to redesign hardware and rethink how computers work, how memory works, and this is something where I think, "Okay, if we can push that, it will change everything." So I'm excited about that, and we will probably see. I'm also always perplexed by quantum computing, because I don't understand anything. I'm just kidding; I like to read about it in popular literature and I'm interested in it, but that's about it.
Wiktor Żołnowski: I’m planning an episode on that, as I know someone who is actually working on this.
Jan Argasiński: I will watch it. As for how the hardware works: I understand the principle of what it is, but how do you actually build a quantum processor? I understand that if we could do that, if we could scale it, it would be even more of a game-changer. It would change everything. So we have at least two technologies with the potential to redefine everything once again, and we will see which one comes first. I don't know.
Wiktor Żołnowski: We’ll see, and hopefully, we will see it pretty soon.
Jan Argasiński: Philosophers, and also some physicists, think that how the brain works at an extremely basic level is quantum, so Penrose and all that. So maybe we can merge the two.
Wiktor Żołnowski: Okay. Pragmatic Talks is mostly dedicated to startup founders, and this episode is mainly meant to help them better understand what artificial intelligence is and where we are with this technology, so they can figure out how to use it. I believe there are plenty of possible uses of AI in modern software products, some of them very innovative, but maybe you have some advice for startup founders regarding artificial intelligence?
Jan Argasiński: Advice? Don't be evil. I think this is something that is happening now, so you can jump on this train and find a niche for yourself, because there is a lot of uncovered ground. It's like the golden age of apps, when a new app for something appeared every day. Do you remember that? When mobile phones with touchscreens arrived, there was a huge explosion of applications for everything. You can basically do the same with AI: find a niche and try to colonize it with your idea. It's entirely doable right now, and of course, if you are brave enough, you can look into the new stuff, but be aware that we don't know when it will arrive. Being a very science-driven startup is not for everyone, but applied AI is something you can do right now, any day.
Wiktor Żołnowski: Of course, competitors are popping up every day, but that doesn't mean there is no room for new things.
Jan Argasiński: When there is huge competition, of course, the idea is to find a niche. As I see it, you can always find something that will also benefit you financially but that no one else is willing to do.
Wiktor Żołnowski: Okay, the last but not least question. I know that young people also watch these episodes, though the discussion is not only for them: do you have any advice for people who want to adjust or design their career so that they can utilize AI, or perhaps work on AI itself?
Jan Argasiński: My advice is: just do it. Do not overthink it, because it's all there; you can learn everything on the web. Just find yourself a project you're passionate about and jump in. I did it, and every day is bliss because I discover something new every day. I thought, "Oh, this computational neuroscience is interesting," and I think you can find something that interests you, that drives you, even if it's not AI itself. But if you can find an area of application of AI that you're passionate about, and that you perceive as a niche that could be colonized, then just do it. Don't think about it. The technology is here; you can do it on an everyday laptop. This is wonderful. You don't need any particular hardware, or even skills. Skills can be learned.
Wiktor Żołnowski: Perfect. So is there anything else that you would like to add for our listeners?
Jan Argasiński: Do more philosophy. Philosophy is interesting.
Wiktor Żołnowski: It is, especially nowadays, when you can test it, as we discussed.
Jan Argasiński: Exactly, everything is philosophy.
Wiktor Żołnowski: So, thank you very much. I hope you enjoyed this conversation as much as I did. It was really good, and I would like to invite everyone to watch the other episodes of Pragmatic Talks as well. Thank you very much.
Jan Argasiński: Thank you, it was a pleasure.
Wiktor Żołnowski: Pragmatic Talks is delivered to you by Pragmatic Coders, the first-choice software development partners for startup founders. Be sure to catch all the new episodes. Subscribe to our YouTube, Spotify, or Apple Podcast channels, and if you are thinking about building your own startup or struggling with product development, contact us and find out what we can do together.



