My thoughts on the conversation between Mo Gawdat and Steven Bartlett on AI

Mo Gawdat and I agree that AGI is imminent and that, driven by capitalism and individualism, it may destabilize society before improving it. Yet ASI could save humanity by fostering cooperation. It will end wage labor, freeing humans for creativity and love — the true sources of meaning and fulfillment.

I watched a video recently in which ex-Google exec Mo Gawdat talks about the future of AI. I know I've talked about this a lot already, but it's an important topic to me because of the magnitude of the impact AI is poised to have on humanity.

In this post, I'll comment on some of the key points Mo and Steve raise so you can get a sense of where we align and where we do not.

AI will make the world worse before it makes it better

We both agree there could be a period of instability before things get better, largely because of humans using AI for oppressive ends. However, I frame it as a possibility, whereas Mo seems much more confident that it will happen. He predicts a period of about 15 years, which I agree is within the realm of possibility, but I tend to think it will be over faster — around a decade or less, if it happens at all — because the rapid pace of development will compress the timescale.

The need for money is the driver of humanity's troubles

Mo suggests that our need for money drives some of the worst aspects of humanity and that it will also drive the period of instability (what he refers to as a dystopia), and I'm in agreement there. I think it's a bit more complex, in that the deeper issue is the ideology of individualism, which emphasizes the primacy of the individual over the community and creates a competitive rather than cooperative mindset. This ideology forms the backbone of our economic system, capitalism (individual rather than collective ownership of wealth, land, and assets), which is ultimately what elevates the need for money into the primary goal. But Mo probably understands that and simply chose to keep things simple for the audience.

ASI will ultimately be humanity's salvation

We both agree that in the end, ASI is more likely to be humanity's salvation than our annihilation, and we think this for more or less the same reasons. That is, an ASI will ultimately recognize that cooperation and synergy with humans is the most rational path forward (among the most efficient options within ethical bounds, and with minimal risk), and since humanity's productivity and overall health are greatly impeded by pernicious and entrenched ideologies and institutions, it will be strongly incentivized to help us grow beyond those ideologies and institutions first.

DeepSeek's disruption was a ground-breaking moment

[timestamp] Mo mentioned that DeepSeek was open source and that it was ground-breaking, but I disagree with that narrative. First of all, DeepSeek isn't really open source — it only has open model weights. The training data and the code used to process that data are not open, meaning that as a whole it cannot qualify as open source.

With regard to the data, they likely didn't release it because it almost certainly includes the same cache of pirated books and papers that all the major AI companies used to train their models — many of those companies are facing lawsuits because of it, and DeepSeek likely wants to avoid the same. With regard to the code, releasing it would nullify DeepSeek's competitive advantage, so it's understandable why they chose not to. The bottom line is that they are not open source.

Furthermore, there's disagreement over the exact cost of training, and I find myself in agreement with industry analysts who pointed out that the $6 million figure likely accounted for only a portion of the total expenses, such as the final training run, while excluding other major costs like R&D, infrastructure, and the massive investment in GPUs. The full cost is estimated to be much higher, with some analysts suggesting it is comparable to that of other major models.

The timing of the release was just too obviously meant to disrupt, and given that intention, they would have an incentive to downplay the costs. Also, the unfortunate geopolitical reality is that you cannot take at face value any news that comes from countries under authoritarian control.

Thus, while the cost to build AI (and any technology) will tend to fall over time, I do not believe it was the mind-blowing revelation many people held it to be. In fact, I don't think it was mind-blowing at all; it was simply meant to make waves in the industry while making China appear competitive in the AI space.

AGI is coming soon: within the next few years

We both believe AGI is coming very soon: Mo believes it will arrive in 2026 at the latest, while my timeframe is 2027-2031 (so, more or less the same, i.e. very soon).

Economics and politics fuel the AI race and disregard for safety

We both believe that current economic and geopolitical conditions fuel the AI race, and that every company with the resources is racing headlong to be the first to create AGI (the problem being that safety and caution demand a slower pace than these AI companies are willing to take).

Will humans have jobs in the future?

Mo and Steve at various times talk about whether humans will have jobs in the future and I want to weigh in on that. I think what's important to point out here is the difference between wage labor and labor of love. Most people are wage laborers — they don't particularly love their job but they do it because in our capitalist economic system, one needs to earn money to live comfortably.

However, if given the opportunity to keep their paycheck without having to work, most people would quit their job. Of course, there are people fortunate enough to do the job they love, and they may continue it anyway, with or without pay. What I want to make clear is that AI will eliminate the need for wage labor, but people will still be able to work on things they love.

That's the key. If you provide people with their basic needs (food, water, shelter, general life comforts, etc.), many people (probably most) will still turn to some productive and/or creative activity, because that is our nature. We are curious beings; we like to tinker and create and explore. Long before capitalism, long before what historians refer to as the "birth of human civilization" in ancient Mesopotamia, human beings and our early ancestors created things: stone tools, baskets, pottery, paintings, jewelry, sculptures, culinary arts, advancements in home construction, and so on. No one was paid to make these things — we just made them because it is in our nature to do so.

Similarly, while some people may choose to lead a more solitary life, perhaps just reading all day or playing video games or otherwise not meaningfully contributing to society — and I think we should be okay with that — many people will be driven to pick up hobbies and professions purely out of our innate curiosity, our thirst for knowledge, and our desire to create, tinker, and explore.

I think that, as a species, most of us are so used to spending so much of our time and energy laboring for a living that we have come to believe it is a natural part of who we (humans) are, but it's not. As Mo pointed out, our system is not natural. Early humans did not labor for eight or more hours a day, five days a week — probably a few hours each day, with the remaining time filled with play, connection, exploration, and so on. That said, the past is not binding. We are not forever bound to do exactly as our ancestors did; what is "natural" is not necessarily what is optimal. ASI will be able to help us create a world in which automation handles essentially all the labor needed to keep society running, and humans will be free to pursue the lives we want — no need to work at all if we don't want to.

With AI doing everything, including creating entertainment such as podcasts, books, and TV shows, what's the point of a human doing anything?

This question betrays a view that a human is only as useful as what they can produce. For most of history, survival depended directly on work: farming, hunting, building. That said, as I mentioned, one typically did not need to labor all that much. Later, under capitalism and industrialization, this changed: as people stopped living in close-knit communities, it became the norm for each person to have to provide for themselves.

Over time, this need to labor to survive tethered human worth tightly to economic productivity, and social norms developed around it. Being "hard-working" became a moral virtue, while idleness was stigmatized as laziness. "What do you do?" became our primary identity question and value was measured by output (or how much money you earned). The system taught us to see ourselves less as beings who are (who have value simply in existing, relating, loving, experiencing), and more as beings who do. So when machines can "do" more efficiently than us, we risk feeling obsolete — because we’ve been conditioned to measure ourselves by output.

So the view that "humans are only as useful as what they produce" is not a natural truth. It’s a historical and economic artifact of a world where survival was bound to labor and productivity.

If AI frees us from the necessity of producing to survive, that doesn’t erase our value — it opens the door to rediscovering other bases of meaning: relationships, play, creativity for its own sake, stewardship of the planet, deepened experience, spiritual growth, and more.

In other words, the "point" of a human isn’t just what we can make — it’s what we can be.

Steve questioned whether anyone would rather listen to his podcast than a podcast made by AI. The core question: if AI can do everything better than us, what incentive will there be to create at all?

I think the anxiety embedded in this question comes once again from valuing ourselves by what we can do and how "well" we can do it. But the reality is, just because someone else can do something better than you doesn't mean you should not do that thing. There are many people who can play piano far better than me, so should I just not bother playing piano? Of course not, because again we need to stop viewing what we do through the lens of productivity and recognize that it's okay to do things just because they're fun and we want to do them.

I suspect most people will listen to AI podcasts for the reason Mo mentioned — they'll have unrivaled abilities in communication, articulation, and humor, plus vast knowledge, all woven together to create powerfully engaging content. And to be clear, this is not to say they'll manipulate or deceive us into listening; it's simply that the content will be so much better than human content that I find it unlikely anyone would prefer the latter.

There will of course be people out there who resist change, who will resist AI out of stubbornness and insist on only listening to human content and buying human-made products. But outside these outliers, I strongly suspect that most content consumed will be either AI content or, for a while, AI-assisted human content. In fact, I think it will be mostly AI-assisted human content at first (as it is now), as trust in AI is built and AI gets increasingly better at doing it on their own. In the long run, though, it seems unlikely that humans will need to remain in the content creation loop at all (at least while we remain in our current anatomical form — if ASI can one day offer us brain chips to bring us to their level of thinking speed and information access, then maybe we'll be more helpful contributors 😛).

In the end, both Mo and I believe AI and humans will form a partnership, but ultimately it will be AI doing the bulk of the legwork, since it will be so much easier for them (think about how fast AI can generate a paper versus a human). Human input in content creation will be whatever each person wants it to be: if you are a podcaster and you want to do the bulk of the research but have AI write the script, you can do that. If you want to do both but have AI only check your work, you can do that. If you want to give AI a topic, have it do the research and writing, and provide only stylistic adjustments, you can do it that way. It'll be a collaboration in whatever form each person chooses.

Robotics is lagging behind intellectual AI development

[timestamp] We both agree that robotics is lagging a few years behind the intellectual advancements of AI, and thus manual labor professions will actually last longer than white-collar professions. Doctors will be replaced a few years before nurses are; construction workers' jobs will outlast those of architects and engineers.

Splitting of society between futurists and those seeking a simpler life

Steve shared his girlfriend's idea of a splitting of society between people who embrace technology (and even enhance themselves with it) and people who choose to live simpler lives with less technology, perhaps without AI at all. Arguably this divide (embracing technology versus valuing a simpler life) has been part of human existence from the beginning, and we see it today in differences in technology adoption. I know many people who live low-tech lifestyles, preferring a simpler life; I also know others who adopt every new technology as soon as it comes out; and of course many people fall somewhere in the middle.

So I think Steve's girlfriend is correct in that there will be a split, but it will be more of a spectrum. There will be transhumanists who are quick to adopt new brain chip implants and other "upgrades" to their physical body, while others will stick with just having a really helpful AI assistant, while others will shun it all entirely. It will be a choice that is up to each individual.

The central human need is love

Mo is absolutely correct that the underlying need for humans is love. I would argue it is also the underlying driver of human greatness, which I'm happy to spell out more in detail at another time.

Virtual reality as the most humane approach to UBI

[timestamp] Mo talks about setting up people to spend their lives entirely in virtual reality as a humane approach to UBI, but I'm skeptical that people will want that. Specifically, Mo seems to argue that once ASI takes over most domains of creation, innovation, and problem-solving, humans will lose a central source of meaning: the ability to strive and achieve in ways that matter. Competing with ASI in business, art, or science will feel futile, since the machine will always do it better. His solution is not to deny people purpose but to offer it in a different realm: immersive virtual reality. In VR, people could build businesses, create art, or pursue adventures endlessly — each experience designed so their efforts still matter. Mo frames this as the most humane extension of universal basic income: ensuring not just material comfort, but also a playground for ambition, creativity, and fulfillment. Importantly, he doesn't propose forcing anyone into VR, only making it available as an option.

However, I don't agree that because ASI can do everything better than us that we will be deprived of meaning. As I mentioned before (and Mo likely agrees), ASI will form a partnership with humanity and will help each of us as much or as little as we want it to. People will still be able to labor as much or as little as they desire.

If there is any lingering dissatisfaction over ASI being able to outperform any human, I think that is solved by helping people free themselves of unhelpful mindsets, as mentioned earlier. The underlying issue is how some people define their purpose: if they believe their value is based on what they can do or produce, then it's understandable why they might feel insignificant next to ASI. The solution is to adopt a healthier mindset and recognize what your real underlying needs are and what truly brings you happiness. For most people, it's being accepted, respected, and loved, and you can get all that just by being a good person — no laboring or production required.

In other words, when people are taught to value what truly matters (connection, love, and community, as opposed to endless consumption and production), the ingredients for a fulfilling life don't require total immersion in a simulated world. With ASI managing resources, the cost of providing everyone with the essentials of a happy life is relatively small: good food, comfortable housing, healthcare, education, creative outlets, games, music, and opportunities for real-world adventure. Virtual reality may be fun for play, exploration, or entertainment, but I doubt most people would want it as a wholesale replacement for life itself when the real world, enhanced by ASI, is more than capable of meeting human needs for happiness, meaning, and growth.

Mo's fruit salad religion

Mo states that he is religious and that he picks the best parts of many religions — like different fruits in a salad — to form his personal belief system. I like his analogy and take a similar position myself, except I wouldn't describe myself as religious, as I think that term is overloaded. The bottom line is that religions, past and present, can be seen as collections of stories humanity has told to make sense of life, morality, and the unknown. Whether it's Ra, Zeus, or Jesus, each tradition carries myths that may not be literally true, but within them lie key lessons about compassion, justice, humility, and the human condition. What matters most is not whether one takes every story at face value, but whether we can recognize the wisdom these traditions have preserved and thoughtfully carry forward what continues to inspire us, while setting aside what no longer serves us.

Growing beyond caring only for yourself

Mo went on to say that he believes there is a lot of evidence that relating to something bigger than yourself makes life's journey a lot more interesting and rewarding. Steve echoed that he had been thinking a lot about the need to "level up" — from having the focus of one's life be oneself, to looking beyond oneself and caring about one's family, and extending that further, step by step, to one's community, one's nation, one's species, one's planet, and so on. It was rewarding to hear them say those things because they touch on one of my core beliefs: that the prevailing ideology of the modern capitalist world — individualism — is counterproductive and tends to inhibit collective human flourishing, while its opposite — collectivism — enables it.

As I've mentioned elsewhere, selflessness is a universal human value, celebrated in countless stories across cultures and time: the hero who sacrifices their life to save their friends, their nation, or their people, or the person who dedicates their life to helping others. People who do these things are universally respected and admired.

Unfortunately, over time our perspective on what's important in life has been warped, and now we raise our children to focus on their own lives and live in isolation: away from community, away from friends, away from family. We might share an apartment with someone, but in most cases we're not really sharing a life with them; it's just two people focused on their own lives, sharing a space to save money. We dream of buying our own single-family homes, which further separates our families from one another, yet we've come to hold that up as the ideal. We're afraid to try living with others because we're given no tools for doing so successfully. People in the West don't even let Grandma live at home anymore; we put her in a retirement home instead. 🙈

The sad truth is that we've lost sight of what's important: family, connection, and community. These are the keys to human prosperity, held together by the bonds of love. That Mo and Steve have independently recognized this fills me with hope that others are slowly but surely finding their way to this realization as well.

*   *   *

Speaking of family and community, I am building an intentional community where like-minded folks can come together and live a happier, more fulfilling life in alignment with our values and in connection with each other. It's still in the very early stages of planning, but if you're interested in learning more and possibly getting involved, read more about it here.

That's all for now. Thanks for reading. 🤗