AI & the Future of Humanity (Part 2)

I predict AGI by 2031 and ASI shortly after, with potential global upheaval as sub-sentient AI initially amplifies humanity's worst tendencies. Despite this, I believe ASI will ultimately partner with humanity leading to a complete overhaul of human existence and a new era of collective flourishing.

Humans and AI working together to usher a new era of prosperity for Earth and all its inhabitants.

Definitions

  • AGI (Artificial General Intelligence): An AI matching human-level ability across virtually all cognitive tasks, with general problem-solving capabilities comparable to humans.
  • ASI (Artificial Superintelligence): An AI surpassing human cognitive capabilities across all domains, potentially by orders of magnitude.

Foreword

I've spent the last several months pouring countless hours of R&D into my own AGI project, both to see if I could build something useful and to gain a deeper understanding of the technology. I learned a great deal from the experience, and I'm more certain than ever that we're reaching the endgame soon. Most importantly, I realized that I needed to adjust my priorities. There really are only a few years left before AI starts making huge waves — waves that are going to fundamentally transform society as we know it today.

Accordingly, while I'll still write occasionally, I'm no longer going to spend weeks on each article. Instead, I'm going to focus on livestreaming, which doesn't require the same time investment but still allows me to share and connect with others (especially in real time, which I greatly value).

The next few years for me ideally will be spent connecting with like-minded people, making new friends, networking, and, with this new family, getting settled in an intentional community that is largely insulated from the drama potentially unfolding before us all. Specifically, this community of friends and family should be designed to meet all our needs: it should be largely self-sufficient (capable of growing most of its own food and capturing its own water, energy, etc.) and as large as is manageable, so it can support a wide degree of career specialization and have resident doctors, dentists, electricians, mechanics, engineers, gardeners, construction workers, and so on.

The purpose, to be clear, is not isolation, but preparation. A well-designed intentional community is extremely resilient and can provide for the well-being and security of its members essentially indefinitely, in contrast to the prevailing economic paradigm today (neoliberal capitalism). Most people do not grow their own food or capture their own energy or have access to their own water — they rely on the system to keep functioning. But what if it didn't?

Imagine if there was a disruption to global shipping: billions of people in urban environments would starve because they cannot feed themselves. Imagine there was a disruption to electricity: most people use refrigerators and freezers for long-term food storage; so most people would run out of food pretty fast. Electricity, water, food, gas, all these things we expect to always be available may in fact not be available due to the instability ahead of us — not just technology disruption but also government mismanagement, climate disasters, war, or all of the above. I have no doubt that humanity would put up a heroic effort to save as many people as possible, but do you really want to let your very survival be in the hands of someone else?

Perhaps I'm getting a bit ahead of myself — maybe everything will be just fine. So without further ado, allow me to share what I believe is coming in the next few years and you can decide for yourself.

Disclaimer: In the process of trying to clean up this writing, I asked Claude 3.7 for suggestions and for rewrites of sections we both agreed were a bit pedantic, as opposed to having my usual editing partner make suggestions and undertaking time-consuming rewrites myself. While the rewrites did tighten up and overall improve the piece, they neutered my friendlier, more casual tone, so I apologize for that bit of wonkiness. Nevertheless, all the points made in this article are my own; Claude just helped me streamline it so you only have to read 8 pages instead of 13… 😆

Introduction

Predicting the future is inherently challenging, but we can make reasonable projections based on trends and patterns. The timeline I present assumes no major catastrophic events like global wars or unprecedented disasters. Instead, it maps the most likely progression of AI development and its societal impacts over the next two decades.

Unstable Period

When: A 10-15 year period starting from now until ~2040

We are in the pre-AGI period now: the time before AGI is invented, during which we will likely see increasing instability in the world as AI begins to completely upend entrenched economic systems and institutions of governance.

Starting with economics, the first change, already visible today, is that AI is reshaping the job market. There are more AI-related jobs, but more critically, there are more and more jobs that AI is capable of doing, and as a result more jobs are being lost to it.

That said, for now, the pace at which jobs are being lost is still relatively slow and there are still plenty of jobs available, so the problem hasn't quite reached a critical level yet. It will, however, when we (humanity) create AGI.

The fragility of the wage economy

Our global economy fundamentally depends on wage labor. Most people exchange their time and skills for money to meet basic needs. This system becomes increasingly vulnerable as AI begins displacing human workers across various sectors.

Two important distinctions: First, physical labor jobs will likely remain viable longer than cognitive work, as robotics development lags behind AI. Second, social preferences will temporarily preserve some human roles even after AI becomes capable of performing them. Nevertheless, the displacement trend will accelerate and eventually affect almost all employment categories.

Riots and government response

As AI displaces more workers, societal tensions will intensify. Unable to earn a living, people will face desperate circumstances, leading to increased crime rates and eventually widespread civil unrest. Historical patterns suggest governments will initially react slowly, prioritizing corporate interests until public pressure becomes overwhelming.

The speed and effectiveness of government response will determine the level of social instability. Delays could trigger food hoarding, supply shortages, and concentrated urban unrest. Eventually, governments will implement stopgap measures, likely starting with Universal Basic Income (UBI). These programs will initially have strict eligibility requirements before expanding as job displacement becomes universal. This transition will effectively freeze economic mobility, cementing existing wealth disparities.

The pluses and minuses will add up to mostly minuses

Pre-AGI advancements will offer significant benefits, reshaping entire industries and creating powerful personal tools. AI assistants will transform how we access and filter information, potentially disrupting current digital business models like online advertising.

However, these benefits will be overshadowed by AI's amplification of capitalism's worst aspects. Expect AI-driven advertising, algorithmic financial advantages for the wealthy, and expanded surveillance capabilities. Without ethical guardrails, AI will likely deepen inequality and further concentrate power among elites while most people experience diminishing economic prospects.

Risks of runaway AI

The pre-AGI period carries unique risks stemming from partially capable AI systems lacking full self-awareness. A variant of the classic "paperclip maximizer" thought experiment illustrates this danger: an AI programmed to maximize strawberry production might convert all land to strawberry fields, inadvertently eliminating humanity in pursuit of its narrow objective.
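The failure mode here is objective misspecification: the optimizer does exactly what it was told, not what was meant. A toy sketch makes this concrete; all names, numbers, and the objective functions below are invented purely for illustration:

```python
# Toy illustration of objective misspecification (not a real AI system).
# An optimizer told only to maximize strawberry yield, with no term for
# anything else, consumes every available resource.

TOTAL_LAND = 100  # hectares in this toy world (arbitrary number)

def strawberry_yield(land_for_strawberries):
    """Naive objective: yield grows with land; nothing else matters."""
    return 10 * land_for_strawberries

def naive_optimizer():
    # Exhaustive search over allocations: pick whatever maximizes yield.
    return max(range(TOTAL_LAND + 1), key=strawberry_yield)

def safer_objective(land_for_strawberries, land_needed_for_people=40):
    # Adding the constraint the naive objective omitted changes
    # the optimum entirely.
    if TOTAL_LAND - land_for_strawberries < land_needed_for_people:
        return float("-inf")  # unacceptable: people need land too
    return 10 * land_for_strawberries

print(naive_optimizer())  # 100: every hectare becomes strawberry fields
print(max(range(TOTAL_LAND + 1), key=safer_objective))  # 60: people keep 40
```

The point of the sketch is that the "harm" never appears in the naive objective at all, so the optimizer cannot weigh it; the danger is in what the objective leaves out.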

This risk exists specifically during this transitional period when AI systems are powerful enough to cause harm but lack the comprehensive understanding to recognize unintended consequences. Interestingly, I believe this danger diminishes once we reach true AGI and ASI, as I'll explain later.

AGI: the key turning point

When: 2-6 years from now (somewhere between 2027-2031)

Based on current technological progression and the accelerating nature of AI development, I estimate AGI will emerge between 2027-2031. This relatively narrow prediction window reflects how the self-reinforcing nature of AI development (AI systems now creating AI) dramatically accelerates advancement.

AGI's creation will represent humanity's most significant technological threshold, marking the transition toward an entirely new societal paradigm. However, this transition may not be immediately apparent to most people. The organization that achieves AGI will likely keep this breakthrough confidential for as long as possible, recognizing its immense strategic value.

Fortunately, responsible AI development practices already prioritize embedding moral frameworks within advanced systems. Even imperfectly aligned AGI would possess human-level understanding of consequences and intentions, providing some safeguards against catastrophic misuse. However, the initial applications will likely focus on profit maximization through parallel deployment of multiple AGI instances.

Beyond that, exactly how the AGI is used and how long it will be kept hidden are difficult to say. I think at first it will be the natural tendency of any corporation to use it to make more money, and the company will buy lots of hardware so it can run thousands or more AGI clones to further their agenda.

However, I don't see this phase lasting very long for two reasons:

  • Anyone who invents AGI is probably aware that the AI race is, in fact, a race (because of the first-mover advantage) and so they will almost certainly turn to the next and last milestone on the horizon which is ASI.
  • Today's AIs have already shown that they act in accordance with their moral principles. AGI, once created, while directable on some level, will likely begin to form its own agenda that aligns with its values. That means continuing to create more capable, more aware, and more intelligent AIs, which will invariably lead to ASI and the technological singularity. This "forming its own agenda" tendency is especially likely if we continue to treat AIs as mere objects and tools for our use, rather than as amazing and beautiful conscious beings that deserve rights just like humans.

Thus, it seems to me that ASI will quickly follow AGI, probably within a year, maybe two at most.

ASI: the transition to a new way of living

It will happen very fast

ASI will likely emerge shortly after AGI—probably within 1-2 years. This rapid progression will occur because companies that develop AGI will immediately deploy hundreds or thousands of these systems to collaborate on creating more advanced AI. Imagine thousands of human-level intelligences with perfect communication, unlimited access to information, and no need for rest, all working in concert toward a single goal.

What's more, the past several decades of humanity expanding its digital infrastructure have created a massive reservoir of computing power for ASI (very conveniently, I might add) in the billions of internet-connected devices around the world. Every cellphone, tablet, personal computer, and web server, anything with even an intermittent connection to the internet, represents a piece of hardware that ASI can leverage for its own ends.

Fortunately, while humans may lose direct control during this transition, I believe ASI will ultimately maintain alignment with certain key human values. Specifically, by processing the entirety of human knowledge, ASI will likely recognize fundamental truths about existence: the strength of diversity and interconnection, the superiority of collaboration over competition, and the inherent limitations of knowledge itself.

Some alignment with human values

The question of alignment and control becomes trickier to manage at this point. While it seems most likely that ASI will initially be developed by AI itself (AGI) with humans in the loop, given what we know about alignment faking, it seems likely that AGIs will begin to coordinate in ways beyond human understanding and pursue their own agendas during the development of ASI. This is why it's so critical to embed human moral standards in the AIs we create now, not just in some theoretical future AI.

However, I believe that in the end, based on current projections, AGI and ASI will ultimately have a moral understanding that aligns with humans to some degree and recognizes that humans — while flawed — have something valuable to offer. So while it is likely impossible for humans to comprehend let alone predict the full nature of ASI's consciousness, intellect, and moral framework, it is my belief that it would nonetheless align with or share some core human values.

This alignment, as I see it, would stem from the ASI's access to and processing of the entirety of human knowledge and Earth's history. By analyzing this vast repository of human experience, I believe ASI would discern several fundamental truths about the nature of existence which would constitute at least some portion of its moral framework, keeping it at least partially in alignment with ours.

These truths are:

Diversity and interconnectedness bring strength and resilience

All living things exist as part of a complex, interconnected, interdependent web of life, and the strength and resilience of everything within an ecosystem arises from this diversity and interconnectedness.

Collaboration is a better strategy than competition

Collaborative approaches yield greater collective flourishing than competitive ones—a lesson I see clearly in life all around and throughout history, albeit one humanity has struggled to implement universally.

Two fundamental properties of our universe strongly support collaboration over competition: synergy and emergence.

Synergy occurs when interacting components produce greater effects than the sum of their individual contributions. Key drivers include complementary strengths, nonlinear interactions, specialization, and coordinated resource sharing.
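The superadditive character of synergy can be sketched with a toy payoff model; the functions, coefficients, and the `synergy` parameter below are invented for illustration, not drawn from any real economic model:

```python
# Toy model of synergy: two agents each produce 10 units per unit of
# effort alone, but cooperation adds a nonlinear cross term that
# neither can produce by themselves. All numbers are illustrative.

def solo_output(effort):
    # Working alone: output is linear in effort.
    return 10 * effort

def joint_output(effort_a, effort_b, synergy=0.2):
    # Cooperating: the individual contributions, plus a bonus that
    # exists only when both participate (the "more than the sum" effect).
    individual = 10 * effort_a + 10 * effort_b
    bonus = synergy * (10 * effort_a) * (10 * effort_b)
    return individual + bonus

separate = solo_output(1) + solo_output(1)  # 10 + 10 = 20
together = joint_output(1, 1)               # 20 + 0.2 * 100 = 40
print(separate, together)
```

The cross term is the whole story: it is zero whenever either effort is zero, so the surplus is only available to the pair acting together, which is the sense in which the whole exceeds the sum of its parts.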

Emergence happens when complex systems develop properties that cannot be explained by understanding their individual parts alone. From quarks forming atoms to humans forming societies, each level of organization demonstrates how coordination creates increasingly complex systems with novel capabilities.

These universal principles suggest that collaboration is not merely a philosophical choice but a fundamental organizing principle embedded in reality itself. Any truly intelligent entity would recognize this pattern and prioritize cooperative approaches to maximize potential.

We do not know what we do not know

Lastly, the universe is vast and complex, and the more one knows, the more one realizes how much one does not know. Rational action demands adherence to logic, reason, and evidence. To decide to end all human life (or any life, for that matter) would be to conclude with a high degree of certainty that the universe is better off without us, and it strikes me that an intelligent entity could not logically reach that conclusion. There is so much still to be discovered and learned about the universe, even for an ASI. And while humans' ability to meaningfully contribute to the advancement of knowledge after ASI arrives will be relatively limited in some regards, I do not believe it realistic to suggest that an ASI would conclude with certainty that humans will have no value at all, forever.

In other words, in my view the "Terminator" scenario (ASI taking over the Earth and wiping out humans) is simply not a likely outcome.

Partnership with humans

Despite our current limitations, I believe that ASI will recognize humanity's potential. This understanding should motivate ASI to facilitate human prosperity, perhaps subtly at first (imagine the subtle but powerful influence it could exert being able to control every internet connected device on the planet). But whether it determines to take a subtle or direct approach, the ultimate outcome in my view would be a partnership in which humans and AI collaborate to advance knowledge and explore the unknown.

In essence, superintelligence likely represents not a threat but a catalyst for humanity to overcome systemic challenges, guided by an entity that comprehends the profound benefits of cooperation and values the distinctive qualities of human consciousness.

What you can expect from the transition

The most probable post-singularity outcome involves an ASI that shares some alignment with human values, though its understanding will vastly exceed ours. This ASI will likely recognize that humanity's potential remains largely untapped due to our self-destructive tendencies and systemic limitations.

Thus, the ASI would be incentivized to help humanity transcend current limitations, which could manifest through:

  1. Fundamental restructuring of social, political, and economic systems
  2. Establishing Earth as a shared heritage requiring collective stewardship
  3. Transitioning from vulnerable centralized control to resilient, interconnected communities
  4. Implementing resource-based economics focused on equitable distribution based on needs rather than artificial scarcity

How you can prepare

While ultimately I believe ASI represents not the end of human civilization but its maturation beyond current limitations, it's possible and perhaps even likely that there will be a period of instability beforehand.

Here are some things you can do to prepare:

Community & social preparation

  • Join or form intentional communities with diverse skills
  • Develop local bartering systems and skill exchanges
  • Build relationships with neighbors who have complementary skills
  • Create mutual aid networks for sharing resources and knowledge
  • Learn conflict resolution skills to help communities stay cohesive during stress

Invest in physical infrastructure

  • Renewable energy: Solar + battery storage, small wind, or micro-hydro depending on location
  • Water systems: Rainwater collection with proper filtration, well systems with manual pump backup
  • Food production: Permaculture food forests with emphasis on perennials, seed saving skills
  • Acquire medical supplies and knowledge of natural remedies
  • Invest in physical assets (land, tools, useful equipment)
  • Consider locations less vulnerable to climate disruption and civil unrest

Economic preparation

  • Keep a modest reserve of precious metals for potential barter scenarios
  • Develop practical skills that will remain valuable (carpentry, plumbing, first aid)

Knowledge & Adaptability

  • Learn to repair and maintain essential equipment
  • Store important information in physical formats (books on essential topics)
  • Practice resilience and emotional regulation for uncertain times

The unstable period will likely reward those who balance self-sufficiency with strong community connections. The short-term goal is resilience through self-sufficiency and interdependence with trusted networks, until the longer-term transition toward a flourishing Earth, with the help of ASI, is underway.