Digital Strategist

"A dynamic thought leader in digital strategy, Giulio Toscani, Professor, ESADE Business and Law School, Spain, has more than 20 years of global experience spanning academia and industry. A chemical engineer by training, he began his career as a patent attorney in Lugano, Switzerland.
He has earned a PhD at the KTH Royal Institute of Technology, Sweden. He is also a side flute player, yoga teacher, Vipassana meditator, and ultra-trail runner. Currently, Toscani shares his expertise at renowned institutions such as Stockholm University, Politecnico di Milano, and NUCB, Japan.
worked with multinationals such as Telefónica, General Electric, Nike and PwC, on their digital transformations. As an advisor and investor, he is actively involved with Navozyme in Singapore and Muns in Spain.
In an exclusive interview with Corporate Citizen, Giulio Toscani, talks on his career journey, human-AI interaction, AI-driven future, positive potential of AI and much more"
CC: Take us through your education and career journey.
Giulio Toscani: I am a chemical engineer by training; my father was also an engineer, so we had a shared family interest in technology. When I was ten years old, in 1982, there was this VIC-20 home computer, which I would use to create my own video games. The computer relied on cassette tapes for storing data and programs, including games. I have been interested in digital systems ever since, at a time when everything around us was analogue. But even then, I knew how digital technology worked.

I was born in Rome, the capital of Italy, and spent my early years in L’Aquila, a city about 100 kilometres east of Rome. While our family was tech-savvy, my father also had a love for art, which is why I developed an interest in Western classical music and opera.
I did my bachelor’s in chemical engineering in the UK and my master’s in biotechnology in Italy. I then did an MBA in Spain, followed by a PhD in Industrial Economics from KTH Royal Institute of Technology. I have also completed a programme in AI from the Massachusetts Institute of Technology.
I started my career working as a chemical engineer, but in patents, so my work was related to both legal and technical matters. Then I worked at Telefónica, which is one of the five largest telephone operators and mobile network providers in the world. My studies in human-computer interaction (HCI) happened while I was with Telefónica. Working there, I realised that everyone was talking about data and no one was talking about strategy and the human part; this element was missing.
Why is no one mentioning the human part in artificial intelligence? The reason is that you cannot talk about everything. People talk about AI, but they don’t talk about data that much. First comes data and then comes AI, but no one mentions it. And now there is this third element, which is the human factor. Why? AI started as a technical expert system; only those in the technical domain were talking about it, and they did not care about the human part. When AI started spreading and human managers got involved, that is when we started talking about the human part. That was when I got interested in human-AI interaction. I didn’t want to teach just AI; I wanted to teach AI together with the human factor. Then I came to know that there was this specialisation, human-computer interaction (HCI). I started reading about it and working on it, and found that HCI is not well defined. I wanted to understand HCI in a different way, not as it is officially defined as a disciplinary field.
I have authored a book titled “Augmented: prAIority to Enhance Human Judgment through Data and AI”, based on the prAIority concept, which explains the positive potential of AI for human augmentation, with a focus throughout on human-AI collaboration. PrAIority puts together data, AI and the human factor. This book, released on 1 May 2025, is the result of my in-depth research on the impact of cross-disciplinary knowledge, of sharing knowledge and of classic knowledge on problems in framing things. I have also done research on how data scientists use their time, meeting at different points in time, and on the quality creative time they spend. The book explains how we can make AI an augmenting intelligence.
Artificial Intelligence is going to replace tasks, not jobs. Once you integrate what AI does better than you into your own task, some part of the task will be done by AI and some part you will do yourself. This is the idea of augmenting; it’s not replacement
— Giulio Toscani

CC: AI is becoming good at many human jobs, which is raising reasonable fears that it will ultimately replace human workers throughout the economy. How true is it?
Artificial Intelligence is going to replace tasks, not jobs. For example, AI may be good at recording and transcribing the interview you are conducting, but you will take the final decision on how the interview will finally look. You will do the initial draft and then you will send it to me for my corrections and final approval. So, there is human control over what AI does. Once you integrate what AI does better than you into your own task, some part of the task will be done by AI and some part you will do yourself. This is the idea of augmenting; it’s not replacement. We will keep needing journalists, because they will have to do their part to complete the task. No one would want to read a book or article written by AI. It doesn’t understand the nuances and, more worryingly, it still makes mistakes.
CC: Will AI radically alter how work gets done and who will do it? Isn’t technology’s larger role to complement and augment human capabilities, not replace them?
For example, ChatGPT can write good articles. You can ask ChatGPT to write an introduction to this interview you are conducting. It may do a better job, but you will still have to revise it. It may save some of your time, but it may not have the completeness you want. Once the interview has been transcribed by the AI system, you can ask ChatGPT to integrate the interview with the introduction. ChatGPT can do it, but you have to control what it has written. So, there is always a need for human intervention. Earlier it may have taken two days to transcribe an interview; now you can do the same in less than an hour.
CC: Though AI may create new jobs, will it not replace more jobs than it creates?
Artificial Intelligence will create new jobs; there will be new jobs in personalisation. Think about the autonomous car: no one has to drive it, and maybe you will share it with other people. An autonomous car does not just take you from point ‘A’ to point ‘B’; maybe we can have a nail-polishing service in the car, or a music-teaching service. There may be a violin teacher who teaches you to play the violin while you are travelling in the car. So, new jobs may be created around the autonomous car.
Why is no one mentioning the human part in artificial intelligence? The reason is that you cannot talk about everything. People talk about AI, but they don’t talk about data that much. First comes data and then comes AI, but no one mentions it. And now there is this third element, which is the human factor
CC: AI should cause no harm to human beings, but if an autonomous car is involved in a fatal accident, will AI be responsible for that accident?
As this is all new, we have to decide who will be responsible in such a situation. We have to decide whether the software maker or the car maker will be responsible. When the car faces an unavoidable accident, the AI controlling the car will have to decide whether to save the passenger on board or the pedestrian on the road. But once the car chooses to kill the pedestrian, who is responsible for the death of the pedestrian? If it is the pedestrian’s fault, you can pin the fault on the pedestrian, but if the mistake is from the car, then we don’t know; this is the dilemma. I think the software has to make the first decision, for sure, but then the car manufacturer has the responsibility to buy software that is reliable and of good quality. But if no one knows that the problems are in the software, car companies would start buying cheap software, because they don’t care about the responsibility. So, the responsibility will always have to be shared.
CC: AI has speed, scalability and quantitative capabilities, but will it also replicate human leadership, teamwork, creativity and social skills?
Before thinking about how AI can do things, we have to ask whether we really understand leadership, teamwork and so on. All these psychological processes are not yet well understood. We don’t really know what leadership is. We have everything from transformational leadership to real leadership; everyone has their own understanding. So, if we don’t understand leadership ourselves, how can we teach leadership to AI? That’s a big thing. AI will learn what we tell it to learn. But if we don’t understand something ourselves, we cannot teach it to AI.
CC: AI is often perceived as conscious by users during human-AI interactions. Is this a cause for concern?
We assign consciousness to what relates to us. Take a dog, which is not as smart as a pig; the dog interacts with us, but the pig does not. We give consciousness to the dog but not to the pig, even though the pig is probably more conscious than the dog. The same will happen with AI. People will start getting attached to the telephone; we are already attached. But this always happens: if you love playing a musical instrument, you get attached to the instrument, and maybe you talk to your instrument and give consciousness to it. Unless it gets to a crazy level, we can accept it. Someone talking to a chatbot, and many are already doing it, often does so because they have no one else to talk to. Let them talk, unless they start doing something dangerous. We should try to create a system where they also have some human interaction, but everyone is free to do what they want to do.
CC: Can we ascribe human-like consciousness to AI?
The problem is that we still don’t really understand what consciousness is. We define what consciousness is, but there is no agreement. Some say consciousness is related to God, some will say it is related to your body, others will say it is related to your mind. We still don’t have one single definition of consciousness. What we do agree on is that creating pain is wrong. So, for whatever is experiencing pain, whether it is an insect or a human being, we know it is not right and we should try to avoid it. We probably don’t know if there is consciousness or not, but there is definitely pain, and pain is very real. AI does not know and cannot know what pain is, so there is no consciousness there. AI will only do what we tell it to do. For example, if we ask AI how we can cure cancer, it might say: just kill the humans and there will be no cancer anymore.
CC: How should we evaluate AI through a moral lens?
When we consider the moral elements in a society, for example the practice of arranged marriage, AI can create a match based on more variables, and maybe it can be better. Mothers can find a better match for their children. Maybe AI could be used in a positive way. But if we consider religious strata in a country where you have thousands of gods, groups and castes, then AI will probably start suggesting one way of doing things. AI will probably always draw on the majority, on mainstream things. In India nothing is really mainstream, because you always have variations. So, people may see this as going against their own minority group or against their beliefs. AI will somehow be proposing a Western model, as it is mainly trained on Western material; as of now it mainly represents a Western point of view.
In India you have lots of data and software engineers, but the data comes from other sources, mainly from the West. India does not contribute much of the data used for training LLMs like ChatGPT; it is collected from other sources, and that is why the Indian point of view is not that well represented by LLMs. I think someone more into Indian culture can answer this question better.
CC: Despite the many benefits of integrating AI into business practices, it is not without challenges. Can AI make things worse?
Yes, it can make things worse if you don’t have the data that you need. If you are crunching data that you don’t need, like the colour of a job candidate’s shirt to understand whether they fit the company culture, you will get insights that are of course wrong or useless. There are lots of examples where companies have tried to use technology and AI and it has gone completely wrong. They may not have handled privacy or cybersecurity properly; when you consider this, AI can go really wrong. So, to use AI rightly, you have to know what data you need, you have to collect the data you need, you have to analyse the data in a good way, and then you have to respond in a proper way. This is called the STAR process: to sense the data, to transmit the data, to analyse, and to react. For this STAR process, you must have data collection, AI processing and human judgement.
To use AI rightly, you have to know what data you need, you have to collect the data you need, you have to analyse the data in a good way, and then you have to respond in a proper way. This is called the STAR process: to sense the data, to transmit the data, to analyse, and to react. For this STAR process, you must have data collection, AI processing and human judgement
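To make the STAR process concrete, here is a minimal Python sketch of a sense-transmit-analyse-react loop with a human judgement step before the system acts. The function names, the threshold and the human_approves flag are illustrative assumptions for this sketch, not an implementation Toscani prescribes.

```python
# A minimal sketch of the STAR loop: Sense, Transmit, Analyse, React,
# with human judgement before any action. All names and values here are
# illustrative assumptions, not part of Toscani's framework.

def sense() -> dict:
    """Data collection: gather only the data you actually need."""
    return {"reading": 42}  # placeholder for a real sensor or data feed

def transmit(raw: dict) -> dict:
    """Move the data to where it can be analysed."""
    return raw  # in practice: send over a network, write to a queue, etc.

def analyse(data: dict) -> str:
    """AI processing: turn the data into a proposed action."""
    return "alert" if data["reading"] > 40 else "ignore"

def react(proposal: str, human_approves: bool) -> None:
    """Act only after human judgement confirms the proposal."""
    if human_approves:
        print(f"Acting on proposal: {proposal}")
    else:
        print("Proposal rejected by the human reviewer")

if __name__ == "__main__":
    proposal = analyse(transmit(sense()))
    react(proposal, human_approves=True)  # the human stays in the loop
```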

CC: AI can only be as good as the data you feed into it. So, how can we build a durable measure for AI, to tackle the privacy concerns surrounding AI and its potential impact?
You have to create an ongoing process. Remember that the AI you have now is the worst you will ever have; it keeps getting better every day. That means that to have better AI you probably need more data. But then you have another problem: even as we collect and use more data, AI is becoming less successful, because earlier it had free access to almost all the data, magazines, newspapers and so on, and now there are payment barriers for access to much of that data. So, suddenly a lot of data is not accessible anymore. We have this contradiction: we produce more data, but we have less data available.
CC: Is there a chance that, in using AI, we go the wrong way?
AI will know better ways to keep you stuck to your mobile phone. TikTok is an example: its recommendation algorithm can indeed create a loop where users get stuck watching videos, leading to excessive time spent on the platform. Even though they say you can use an anti-addiction filter, it only starts after two or three hours; for me it is like killing my brain (“brain rot” was the word of 2024). There are people who spend many hours on TikTok. We have people who consume junk TV every day, and even when they go to a bar to socialise and meet people, they have junk conversation. The level of knowledge, proficiency and literacy does not point to a higher level of culture.
CC: The main purpose of HCI is to create AI systems that are user-friendly, trustworthy, ethical and beneficial for humans. Are we on the right track regarding all these things?
The new systems being created are really created for commercial purposes. If you take the biggest and most successful apps, they are not apps to better manage taxes or schools around the world. Social media is all about video streaming, advertising, showing football and adult content; adult content is a huge business on the internet. So, the commercial side is where the power and the action are, and it sets the direction of these apps too.