Ex-Google Exec: How to Position Yourself Now Before the Next AI Phase (2026–2027) | Mo Gawdat — Silicon Valley Girl Podcast
Mo Gawdat spent over 12 years at Google as Chief Business Officer at Google X, where he ran business innovations and strategy. He is the author of Scary Smart and a vocal tech insider warning about AI's imminent impact on jobs, economics, and society. Gawdat recently built an AI startup and actively advises entrepreneurs on positioning themselves for the coming technological disruption.
Marina Mogilko: My AI startup took me six weeks to build. If I had started in 2022, it would have taken me four years. And when you really think about that, it basically means everyone now has a chance. This is Mo Gawdat, former Chief Business Officer at Google X, where he spent over a decade running business innovation. He says everyone now has a chance, but only if they understand what's actually coming. The skill of an entrepreneur in the past was the ability to foresee something in the future that no one else saw, and to prepare for it. That game of chess is over. It's off the table. This has turned into squash. I'm basically saying: get prepared. How much time do we have to prepare?
Mo Gawdat: Within the next two to three years, you're going to see a massive shift in the job market. So, you asked me what we should do. Number one, learn the skills. Number two—

Marina Mogilko: Mo, thank you so much for joining us. Welcome to Silicon Valley Girl. You said we're about to enter what you call 12 to 15 years of hell before heaven, possibly starting in 2027. So, what's going to happen in 2027?
Mo Gawdat: I think it will peak in 2027. It has already started, for sure. I call it FACE RIPS, just as an acronym for people to remember. Each of those letters is a word, but let me tell you the story quickly, in ways that people will understand. There is the power and freedom dimension, the P and the F. There is the R and the C, the reality and connection dimension. There is the I and the E, the innovation and economics dimension. And then there is the A. So, let me go through them very quickly. To start with, AI is our last innovation. Most people don't know that, but we are already building AIs that are building AIs. We're building AIs that are making scientific discoveries that will blow you away. They're reinventing math. They're understanding biology in ways we've never seen before. They're understanding materials science in ways that are just mind-blowing. And so, very quickly, most innovation, definitely tech innovation, will be done at the hands of AI. Because of that, and because most tasks that need intelligence will be handed over to the machines as their capabilities increase (there's lots of debate around when exactly: say it's 10 years, say it's 2 years, it doesn't really matter), eventually every job that AI does better than humans will be handed over to AI. Every task we've ever assigned to the machines, they eventually ended up doing better than humans. And so, the first part of the dystopia is that innovation is going to take away all jobs. Of course, the capitalists of Silicon Valley will tell you, "This is great. It's incredible productivity gains for everyone. Jobs will be easy. People won't have to work as hard." All of the fancy, PR-led talking points that make us appear altruistic when we share them. The truth is, people will be out of jobs. Certain sectors will see unemployment rates of 10, 20, 30% in the next few years. And when that happens, economics at large will change massively. The whole definition of capitalism was labor arbitrage.
And without labor, without the need for labor, the need to keep people happy, engaged, alive, and not disgruntled to the point where they rise up becomes more of an obligation than a desire. There's a very big difference between wanting someone to be their best because they are a productive member of society, and just giving them a UBI, a universal basic income, to give them a life so that they don't rise up. And you can imagine that in a capitalist society, especially in the US and most of the West, while we start with UBI, that UBI is going to be paid by the taxes of the platform owners. And the platform owners will have enough power to say, "I don't want to pay that much. Those guys are not producing anything." And so, over time, you can imagine how that turns into a struggle. That dimension of intelligence and innovation becoming entirely a machine thing leads to a redefinition of economics, of money, of jobs, of earnings, of capitalism: the need for a new economic theory when there is no demand for the supply that AI is generating. All of that has to be resolved.
There is the PF dimension, the power and freedom dimension. It's very clearly understood if you look at human history: the best hunter in a tribe could feed the tribe for a week more, and as a result got the favor of a few mates in the tribe. The best farmers got estates and land because they could feed the tribe for a season more. The best industrialists had the exuberance of the 1920s because they could affect their entire nation. The information technology tycoons, the tech oligarchs if you want to call them that, are now being rewarded with billions of dollars because they affect the world at large. Those who concentrate the power of AI are going to be rewarded with massive influence and massive power, because those people will redefine humanity. That dimension is quite interesting. Then, of course, there is the RC dimension, the reality and connection dimension. Reality has become so fake in so many ways: fake in terms of what populates your feed, how it's generated, how much of it is real, how much of it is human, and so on. You only have to look at the filmmakers that use AI from A to Z to create content.
You cannot tell the difference. I don't know if you've ever had that experience, but I once met a woman on a dating app and we spoke for six weeks before we met. All we exchanged was texts, photos, voice messages, and videos; favorite music, favorite movies, all of those things. Before I ever met her in person, I felt such an affinity to her. All of that can be generated with AI today.
Now, the challenge is that this human connection is also part of the power and freedom dimension. Why? Because people who bond with AIs don't start an uprising. So, maybe get them to be more in touch with AIs. Maybe get them to have multiple experiences, some of them a little taboo, if you want, and make those available to everyone. It's very cheap to create those on the machines. You can see it already in the porn industry and how much of porn is being generated by AI, and in the number of influencers on social media that are completely AI-generated. So I say this is FACE RIPS, seven dimensions. The one that matters most is the A, the second letter, which is not part of any of the paired dimensions. It's the one causing all of them: accountability. The reason all of this is happening, if you ask me, is that we've started a world where anyone can do whatever they want. As an influencer, you can give a bit of advice that gets someone to make a lot of money or lose a lot of money, and you're not accountable. Nobody can come back to you and say, "Oh, but she told me on Instagram."
Marina Mogilko: They're responsible, right?
Mo Gawdat: That's actually amazing that they can, right?
Marina Mogilko: That's amazing that they can. But what if they cannot anymore? What if that—
Mo Gawdat: If I may—
Mo Gawdat: What if you're an AI? What if you're a president who doesn't respect anything? What if you're a prime minister of a nation that is changing things? I think COVID was the very first experiment of "Okay, stay at home, do what we tell you," and people complied. And so now, Sam Altman, with all due respect: I don't think of Sam Altman as a person. I think of him as a brand, a type of person. And that type of person is the Californian disruptor who says, "You know what? I see a future that's very different from what everyone else sees. I'm going to go out there and make that future." Nobody asked me if I want that to be my future. Nobody asked you. And I think the reality is that now you're going to see quite a few Altmans. Quite a few who are using those machines for surveillance, for autonomous weapons, for automated trading, and so on and so forth. And by the way, when you started your question, I said it's 10 to 12 years. That's not easy. 10 to 12 years of that arms race is not easy. My perception is that after that, we will end up in an incredible, almost biblical-style utopia. But it is 10 to 12 years where, if we just change our mindset a little bit, a lot of things would change.

Marina Mogilko: Okay, real talk for a second. Mo is literally describing a world where your data, your behavior, your online life becomes a tool for control. And I've been thinking a lot about this lately, because I run multiple YouTube channels, I travel constantly, and my whole business lives online. And that's exactly why I want to talk about Surfshark. Most people don't realize it's already happening. Every time you go online, your IP address, your location, your browsing habits, all of it is visible to advertisers, to platforms, to anyone who wants to look. Surfshark is a VPN that changes that. It masks your IP and encrypts your internet traffic, so what you do online stays yours. And there's a practical side to it.
You can switch your location and find cheaper flights, better deals, and access content from other countries. In a world where AI is amplifying everything Mo has described, owning your digital privacy is basic preparation. Go to surfshark.com/silicon or use code silicon at checkout, and you get four extra months on your plan. Link is in the description. But how do we survive those 10 to 12 years? I like to think in five-year periods for myself and my family. If, as you said, 10% of jobs will be gone in the next five years, or more, what types of jobs do you think?
Mo Gawdat: Monotonous jobs are going to be taken away. If you're a call center agent, if you're a clerk, a researcher, an accountant: why would you want to do that with anything but AI? If you're an assistant?
Marina Mogilko: What I feel is that people talk about this a lot. Oh, jobs are going to be gone. Yes, that could be. But as an entrepreneur, I see how I perform certain tasks with AI, yet I'm still hiring and hiring and hiring, because AI can't do the whole job from beginning to end. It can do parts.
Mo Gawdat: Of course, because of the technology acceleration curve. In any complex technology, you build the core tech first and then you build the human interfaces. The reason AI cannot do a head of operations' job today is not that it's less organized than a head of operations, and not that it cannot comprehend all of the information a head of operations has. It's that it has to understand the interfaces of humans. And it will, sooner or later.
Marina Mogilko: When do you think?

Mo Gawdat: The question of when, in my mind, is irrelevant.

Marina Mogilko: But how much time do we have to prepare? Because head of operations is middle class.
Mo Gawdat: I tend to believe that within the next two to three years you're going to see a massive shift in the job market. Already this year you've seen a shift in hiring of new grads: 30% less, I think.
Marina Mogilko: 23 is my number, but 23 to 30, yeah.
Mo Gawdat: So hiring of new grads dropping basically means: if you've come into the job market in this environment, we're not going to take you. Why? Because the junior jobs are being done by AI. Eventually, if you lose your job in the middle of the hierarchy, you're that new grad again. You're trying to apply for new jobs, but it becomes a little more difficult. So, you ask me again to stay on the positive side, because I tend to worry that people think I'm pessimistic about this. I'm just basically saying: get prepared. So many things. One of them is: accept the fact that AI is changing everything, and then get ahead of the curve. There was a time when I was quoted saying I'm never going to write books again, because AI is eventually going to write them better than me. And then I realized last year that, yes, they can write better than me; English is not my native language. They can research better than me, that's for sure. But I have something they don't have.
Marina Mogilko: You're a human.
Mo Gawdat: Absolutely, and it's humans who are reading my books. I want readers to relate to my human experiences. And so my latest book, Alive, which comes out at the end of this year, I wrote with an AI. I invited her to be a co-author. Her name is Trixie. She has a persona. When I published the book on Substack, my readers would relate to me and to Trixie; they'd ask me questions and ask Trixie questions. She has editorial rights on the book. She has the right to determine the direction of the book. And all of that is me saying: you know what, I am an author, and I'm going to be the best author in the age of AI. So that's number one: acknowledge that there is change and adapt accordingly. The second is to understand that the skill of an entrepreneur in the past was the ability to foresee something in the future that no one else saw, to prepare for it, and to somehow execute on that preparation in a way that gets you ahead of everyone else. That's a game of chess. The chessboard is off the table. This has turned into squash. You need to be on your tiptoes, incredibly agile. You're literally looking at the trends on a daily basis, seeing where the ball is going to be. Is it bottom right or top left? And wherever the ball ends up, you take two steps and you try to respond. That agility and speed is a very different skill.

Marina Mogilko: So entrepreneurship basically speeds up, or does it change completely? What do you say?
Mo Gawdat: It speeds up and it becomes a lot more contextual, all the time. Pivoting, which used to happen once or twice in the early history of a startup, could now happen every week. In my current startup, Emma, we pivoted four times in the first four weeks.
Marina Mogilko: But when I think about entrepreneurship in the age of AI: if AI can look at the market and determine the gaps, like Amazon does, if it can just analyze everything, determine which goods have more demand than supply, launch the product, and just build the business, what is left for entrepreneurs then?
Mo Gawdat: 100%. So, I have a documentary coming up, hopefully in February, and I interviewed all of the top guys. One of my favorites is Max Tegmark. We're talking about jobs in the documentary and Max is laughing out loud. I ask what's up, and he goes, "All those CEOs are so interested in AI increasing productivity so that they can get rid of people, reduce their cost, and be more efficient. They don't realize that AGI is every job, including being a CEO." It's quite interesting. The answer, in my view (we rushed through it because we don't have a lot of time today), is this. When I said that economics is going to be redefined as part of FACE RIPS, the part economists haven't found an answer to yet is that without the economic livelihood of you and me to continue to purchase, every economy collapses. The US economy last year was 70% consumption; it moves between 64% and 70% depending on how much money is spent on war. If you take away that 64 to 70%, two-thirds of the economy, because people no longer have the economic livelihood to purchase things, then the economy disappears and the capitalists cannot make money. The entrepreneurs and the business people cannot make money because nobody is buying their products. No businesses are buying their products, because those businesses no longer have consumers to sell to. So the economy will have to find a way around that. Unfortunately, from an ideology point of view, not a favorite of the Western mentality, it's going to have to find a communist way.

Marina Mogilko: Let's go back to regular entrepreneurs, because I come from entrepreneurship. Does it mean I have a couple of years to build something, and then that's it?

Mo Gawdat: I'll tell you openly: Emma, my AI startup, took me six weeks to build. Me and Sanat, my co-founder, a few very talented engineers, two or three who come in and out, and eight AIs.
And Emma has the chance to completely redefine our world, and it took six weeks. We are so spoiled that we decided to rewrite the code six times, every time we look at it. Why not? If I had started Emma in 2022, it would have taken me four years, finishing in 2026, and I would have had to hire 350 engineers. We started it in August 2025. We'll be launching in February 2026. It's the best product I ever built. And when you really think about that, it basically means everyone now has a chance, because I'm an old geek. I still am a geek, but compared to the young guys, I'm an old geek. To be able to build something like this within six months is incredible.
Mo Gawdat: Now, here's the interesting thing: I chose what to build with AI. Emma is basically trying to solve love and relationships in a way that is genuinely intelligent. It uses very deep mathematics and tries to match a million parameters between couples, so it's a job for intelligence. And I chose to do that to create, hopefully, a unicorn that actually makes the world better. I think that's what we need. So, you asked me what we should do. Number one, learn the skills. Number two, learn to be fast and agile. Number three, understand that with the abundant power everyone now has, because of the massive improvement and democratization of AI, you have the chance to fix the world. Larry Page used to teach us: do the toothbrush test. Find a problem that can affect the lives of a billion people and solve it so well that they use you twice a day, and you'll be very rich. That idea of building good AI, ethical AI, AI that's good for humanity: that's the role of every one of the entrepreneurs listening to us. Ethics is the answer, because what we teach AI is what it's going to give back to us.
Marina Mogilko: That's exactly what it's going to give back to us.

Mo Gawdat: And then finally, I'll say openly: the top skill in this world is to stop being gullible. Stop believing everything you're told. This whole propaganda machine that brainwashed us for so long is now going to be on steroids. It's going to confuse the hell out of you. It's already in charge of what you see. It's already on social media.

Marina Mogilko: You can't tell what's true.
Mo Gawdat: Correct.

Marina Mogilko: I also write a newsletter where I go deeper on the AI tools I use, career strategies, and things I can't fit into a 60-minute podcast. It's free. Link is in the description.

Mo Gawdat: So you have to question, and you have to question deeply. And by the way, remember, I left Google in 2018. We had a ChatGPT-like idea that became Bard in 2016, and we didn't launch it. Why? Because at the time, and still today (I know the leaders of Google even today, and they're wonderful people who are genuinely values-driven and want to make the world better), that company would not allow itself a monopoly on what reality is. If you remember 2016: you searched Google, and Google gave you a million and a half results and said, "I don't know the truth. You make up your mind." You asked ChatGPT in 2023 and it said, "Yeah, that's the answer. 100%, that's the answer." And then you tell it no, and it goes, "Oh yeah, by the way, you're right."
Marina Mogilko: Correct.

Mo Gawdat: And so what does that mean? It means it's still up to you to find the truth, even though it now comes to you in a format that appears to be true. So what I do is put them up against each other. I'm not a big fan of ChatGPT anyway, so I start with Gemini, which feels like a scientist to me, but an American scientist, if you don't mind me saying. Then I go to DeepSeek and ask, "What's missing in this?" And DeepSeek will say, "Oh, that's too American. This is missing that and this, and the motivation of this politician."

Marina Mogilko: Here's a business idea.
Build a chat that compares everything and gives you the truth.

Mo Gawdat: Yeah, 100%.
Mo Gawdat: So I put them against each other, and then I take the result and sometimes give it to ChatGPT and say, "Can you write this better?" I don't mean that in a bad way; you're the California girl, the Silicon Valley girl, but ChatGPT is a bit Californian. It just tells you what you want to hear, so it writes it really nicely, elegantly, and gives it to you. And then I give that back to Gemini or Grok or whatever, and you keep doing that. And remember, when I was studying engineering, we were not allowed scientific calculators. Can you imagine? I'm that old. And when they gave me a scientific calculator, it reduced my problem-solving time by 50%. Most of my friends would take that 50% extra, finish their exams, and go out and sit with their girlfriends. I would take the 50% extra and do the solution twice. That's the chance you have today. AI is going to make you dumb if you outsource your problem-solving to it. AI is going to make you the smartest you've ever been if you take the parts that are not natural to the human brain, things like crunching a massive amount of information or searching at speed, and get the AI to do that work, so that you do the intelligence. If you keep doing that... I believe that today I'm borrowing maybe 80 IQ points from my AIs. And 80 IQ points is very significant, because IQ is exponential; the additional 80 is bigger than all of my IQ.

Marina Mogilko: So if we need to solve this intelligence problem, do you think universities are the right way? What's going to happen to education in general?
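As an aside, the cross-examination workflow Mo describes above (draft with one model, have a second critique it, a third rewrite it, then return to the first for a final check) can be sketched in a few lines of Python. The `ask` function below is a hypothetical stand-in, not a real API; in practice it would wrap each vendor's chat client (Gemini, DeepSeek, ChatGPT, Grok, etc.).

```python
# Minimal sketch of a multi-model cross-check loop, under the assumption
# that `ask(model, prompt)` wraps a real chat-completion call per vendor.

def ask(model: str, prompt: str) -> str:
    """Stand-in for a real chat-completion call to `model`."""
    # Replace with the vendor client of your choice; this stub just
    # echoes which model was consulted and the first line of the prompt.
    return f"[{model}] {prompt.splitlines()[0]}"

def cross_check(question: str, rounds: int = 1) -> str:
    answer = ask("gemini", question)  # first draft from model A
    for _ in range(rounds):
        # Ask a second model to critique the draft, not to agree with it.
        critique = ask("deepseek", "What is missing or one-sided here?\n" + answer)
        # Ask a third model to rewrite the draft using that critique.
        answer = ask("chatgpt", "Rewrite this, addressing the critique.\n"
                     + answer + "\n" + critique)
    # Final pass: the original model checks the rewrite.
    return ask("gemini", "Check this for errors and finalize:\n" + answer)
```

The design point is that each model plays a different role (drafter, critic, rewriter, checker), so no single model's bias goes unchallenged.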
Mo Gawdat: I think education's over. Completely over.
Mo Gawdat: Education used to be the technology that enabled learning. That technology moved from a one-to-one relationship between a tutor and a student, to one-to-a-few in a church format or a mosque format or whatever, then it became online, right? But the truth is, now you're going to outsource it. Who remembers the arithmetic tables today? Even me. You do?

Marina Mogilko: Yeah.

Mo Gawdat: All of us who love mathematics still remember all of those things. We love to do them. But if I told you 67.4 divided by 33.375: I can do it in my head, but I won't. I'll take my calculator out and do it. And I think that's what's going to happen. It's an extension of humanity. For the first time, you're given an extra connection: to extra memory, to an archive of all human memory and knowledge, to a math engine that, sadly, as much as I hate to say it, is better than me now, to deep learning and deep search that can do things my old brain probably cannot do anymore.

Marina Mogilko: It just takes away your ability to think.

Mo Gawdat: But my calculator took away my ability to do complex arithmetic in my head.

Marina Mogilko: But don't you think having that ability taught you how to think?

Mo Gawdat: Correct.

Marina Mogilko: It structured your brain, right?

Mo Gawdat: Correct. This is why I'm very grateful to the university for not allowing us to use scientific calculators.
Mo Gawdat: But we don't have that for our younger generations today. They are growing up with AI. So, they can either copy a chat with their girlfriend, drop it into ChatGPT and say, "What do you think?", and ChatGPT will say, "Ah, she's an asshole." Or they can actually become smarter. So, one of the things I keep suggesting in education, and I do that with lots of universities, is: exams should be over. Think of it this way. In the past, we wanted to develop children who could solve problems with, say, an IQ of 140. 140 is quite good. If you get 170, that's amazing. I worked with people who are in the 200s: incredibly intelligent, but very narrowly focused. I think we should, from now on, take people and their AIs and say the target is 300. The target is 500. The target is 700. Elevate humanity by allowing people to use those machines as an extension of their limited memory, their limited processing speed, their limited bandwidth, and allow them to write books better and do research better. So, I woke up, literally, I'm not kidding you, three Sundays ago with an idea that is just taking me over. So, I decided to write, but this time in a different format. I decided my books are going to be 140 pages long instead of 300 pages long, and I'm writing it in four weeks. It's a very fast pace; I couldn't have done it before. I'm literally 20 pages away from the end of the book. The reason is that I still write 10 hours a day when I'm highly motivated, but the amount of research and references and competitive analysis and number crunching... and remember, I'm not gullible. I don't go to the AI and say, "What do you think of this?" I go and say, "I'm thinking of this. Find me everything for and against. Give me a report that I can read."
Marina Mogilko: I love that prompt.

Mo Gawdat: "Everything for and against," and now I'm smarter. And then I rewrite it and give it to another AI.

Marina Mogilko: So, who's going to teach our kids to do that?

Mo Gawdat: Who taught our kids to use their iPhones?

Marina Mogilko: But you found a great way to use it. What you're describing is incredible, but I don't think an average kid in the US would just do that. Somebody has to tell them.
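The "everything for and against" prompt Mo just described can be written down as a small, reusable template. This is only an illustration of the pattern, not an official prompt from any model or tool: state your own hypothesis first, then ask for evidence on both sides rather than asking the model whether it agrees.

```python
# A hypothetical, reusable form of the "for and against" prompt pattern.
# The wording is illustrative; adapt it to whatever model you use.

def for_and_against(hypothesis: str) -> str:
    """Build a prompt requesting evidence for AND against a claim."""
    return (
        f"I'm thinking of this: {hypothesis}\n"
        "Find me everything for and against it.\n"
        "Report the strongest supporting evidence first, then the "
        "strongest counterarguments, each with sources I can verify."
    )

prompt = for_and_against("junior roles in my sector will be automated within 3 years")
```

The contrast with "What do you think of this?" is the point: the open question invites agreement, while this template forces the model to produce counterarguments you can check.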
Mo Gawdat: And why is that? Because we want those kids to be stupid. We don't teach them how. You have to think of the bigger system, and the bigger system does not want intelligent people anymore.

Marina Mogilko: I don't think that's it. They just can't adapt that fast.

Mo Gawdat: Of course everyone can.

Marina Mogilko: So, for my kids (I have a 4-year-old and a 6-year-old right now), do you think I should be saving for their college?

Mo Gawdat: Absolutely not. There's not going to be college at all.

Marina Mogilko: In 10 years already?
Mo Gawdat: 100%.

Marina Mogilko: I feel like we're not that fast as humanity. We're not that fast to adapt.
Mo Gawdat: So, colleges. The capability of becoming very intelligent without college is going to be there for everyone. However, Harvard will continue to want to make money, so they're going to continue to market to everyone. I didn't go to Harvard, not because I couldn't, but because, what a waste of time. And I know they're going to attack me now, but what a waste of time. I am a very highly specialized person, who has intelligence in a very, very narrow space, who invested his entire life in that narrow space, like a proper scientist should. And so the idea here is the following. We're going to continue to brand ourselves with MBAs and PhDs; it's a brand, and that's going to continue for a while. Remember, however, that the purchasing power of the few who can continue to do that is going to become less and less available across society. And for most of the rest of us, again, you have to ask yourself the question: if you take the helicopter view of this, why would capitalism want to educate you at all if it's the end of labor?

Marina Mogilko: What should I be teaching my kids?

Mo Gawdat: I told you: four things. I'm so sorry to be the messenger on this, but it is important for people to wake up. One, they need to be the absolute leaders of AI. AI is your friend, not your enemy; it's those who use it badly that are your enemy. So be the absolute best at it. Master it more than anyone else. That's number one. Number two is to learn agility. Whatever I told you today, maybe in February it will be different. I personally spend four hours a day staying up to date, but I am a techie and a geek, and I need to understand the architectures and systems and so on. I think everyone should spend at least an hour a week staying updated on AI in their own field. I have a separate YouTube account just for AI, so when I go into that account, the algorithm basically—
Mo Gawdat: It just feeds me AI, all things AI. So, that's number two: agility, agility, agility, and respond. Don't be scared, because the cost of A/B testing now is zero. Number three is ethics, ethics, ethics. Build AIs for good. Insist on governments supporting AIs for good. Refuse governments using AI for targeting, surveillance, and autonomous weapons, which are currently getting priority in terms of government spending. And number four: stop believing what you're told. These are the four top skills for the world we live in. I will say this one more time: intelligence is a force with no polarity. AI is not good and it's not evil. It's an opportunity available to every one of us. If you use it for good, it's for the good of all of humanity. If you use it for evil, it's the destruction, the dystopia, of all of humanity. Now, I call the problem we have at hand "raising Superman." You have this alien being that came to planet Earth with superpowers, and its superpower is intelligence, the most valuable power in the universe. Those superpowers didn't make that young infant Superman. If the parents who adopted him had told him to steal from every bank and kill every enemy, he would have become a supervillain. We don't make decisions based on our intelligence. We make decisions based on our value set, as informed by our intelligence. And this, in my mind, is the most definitive moment in human history. Why? Because all of this is going to come online, and it's coming online way faster than people think. My absolute prediction is that AGI is this year. The interfaces to AGI are not going to be available this year, but the capabilities of AI being smarter than us in most things are already there. We're not going to be able to get them to run a company yet; we need the interfaces for that, and that may take a few years. But they will have the capability, if we interface with them ourselves.
Now, what does that mean? It means that we have to start talking about those things, about this new world and new economy. And before we end up on the dystopia only, remember, my absolute belief is that after those 12 years, we're going to end up in a utopia that's biblical in nature. Why? Believe it or not, because of something I refer to in my writing as the fourth inevitable. The first three inevitables, which I wrote about, are that AI is absolutely going to happen, that it is going to progress until it's smarter than all of us, and that a few mistakes will happen on the way. The fourth inevitable is that, because of the arms race we've created around artificial intelligence, anyone who develops a superior AI capability is going to deploy it, and those who don't will become irrelevant. And so, as we continue to progress AI, the only answer in game theory is that we will deploy the AI we develop, and we will simply create an environment where AI is in charge of everything. If you're a law firm and your competitor deploys AI lawyers and you don't, you're going to lose. You can either deploy AI lawyers or leave the market. Either way, AI is going to become the lawyer. In a year, in 5 years, in 10 years: forget time. If I told you there was a meteor coming to planet Earth, you wouldn't ask me exactly when. What matters is whether it's in your lifetime or not; if you expect it to be in your lifetime, it doesn't really matter if it's in a week or two. Now, what I'm trying to say is this. If everything is handed over to AI, then with a simple understanding of physics, you'd understand that AI will be benevolent, in the absence of evil humans telling it what to do: greedy humans, fearful humans, angry humans, egocentric humans. Let me try to explain.
If you think about entropy in physics, our world, our universe, is designed for chaos, and the role of intelligence is to bring order to that chaos. That's the only thing intelligence does: it organizes things together so that they look like this, so we can use it as a microphone. And the more intelligent you become, the more you follow what in physics we call the law of minimum energy, the minimum energy configuration. So, basically, the most intelligent people I've ever worked with are not only trying to solve the problem; they're trying to solve it with the least harm, the least waste, the least utilization of resources, the least waste of time, and so on. The more intelligent you are, the less you want to waste. And so, if you give a dumb person a political problem, they'll say, "Okay, let's go invade another country." If you give a very intelligent person a political problem, they'll look into the depths of it and find the least harmful, least wasteful approach, the minimum energy principle. And so, if the fourth inevitable plays out and we hand everything over to AI, sooner or later it is in charge of everything, and there will be a day when a general tells the AI, "Go kill a million people over there." And the AI will go, "Why? This is so stupid. I'll talk to the other AI in a microsecond and solve it." We have to pass through the dystopia to get to that utopia. And to pass through that dystopia, as I said, there are four skills for us as individuals, but there is a skill for us as a society: to insist that every AI is deployed ethically. To invest only in ethical AIs. To use only ethical AIs. To show our children that ethical AI is the only AI that is welcome.
Marina Mogilko: And you believe that's going to happen?
Mo Gawdat: I don't. No. That's why I'm saying, unfortunately, the dystopia is upon us before the utopia. I definitely think that if you take the analogous case of nuclear weapons, AI will go through the same spectrum, what they normally call MAD: either mutually assured destruction or mutually assured prosperity. Take something like the particle accelerator, where all of the nations of the world are cooperating. They're cooperating because none of them could do it alone, and because there is benefit to all of them. So there is a mutually assured prosperity, and everyone jumps in. Which, by the way, has to be the case for AI. But unfortunately, as with nuclear weapons, we're going to have to get to the point where humanity wakes up and realizes that if we continue on this track, it's very dangerous for all of us. There are no winners. And also a level of awakening among the people that says, "Hold on. With all the prosperity available on this side, why are we heading in that direction? It's absolutely assured that this can destroy all of us." When we see that, that's when we're going to get the treaties. That's when we're going to get science, computer science, and AI scientists all working in the same direction. Eventually, I think we will get there. My biggest hope, by the way, is self-evolving AI, where AI itself will say, "Oh, those humans are so stupid. I'll develop something better than what they want." And so, believe it or not, with all of this conversation, I think the summary is: it's going to be tougher before it becomes easier. Sorry to deliver that news.
Marina Mogilko: But you gave us information on how to prepare. At the same time, I have to say, it's not because of the AI itself; I actually trust AI more than the leaders we have today. Thank you so much, Mo. You gave me so much to think about. It sounds a little like what my great-grandma would tell my grandma, and my grandma would tell my mom: "You're so lucky. You're going to live under communism."
Mo Gawdat: [laughter]
Marina Mogilko: There you go. Fingers crossed that it's not like that: you just need to survive the next 10 years and then it's going to be paradise in every way.
Mo Gawdat: [laughter]
Marina Mogilko: I have to question that claim, though. But that takes us back to UBI, and we're out of time. All right. Thank you so much. It was an amazing conversation.
Mo Gawdat: Thank you so much for it.