AI & the Future of Humanity (Part 1)

What does the future of humanity look like with increasingly powerful AI on the horizon? In this two-part post, I explore the question of whether AI will ever be as intelligent as humans and what that means for our future.

There is an uncertain future before us: a chain of events is unfolding that is likely to ignite a period of instability for humanity, and at the center of this instability is AI. While experts agree that advancements in AI over the next several decades will have profound impacts on human civilization, there is disagreement as to what will happen and when. No one can perfectly predict the future, so a certain degree of skepticism is appropriate here, but that doesn't mean one shouldn't be aware of the possibilities and take steps to prepare.

However, before I delve into these possibilities for the future, I want to talk briefly about the question upon which all my predictions hinge: Will AI ever be as intelligent as humans? The answer is yes, absolutely — and we're not that far away from it. But for those who remain uncertain, allow me to explain my perspective.

For length reasons, I've split this writing into two parts, with Part 1 addressing the aforementioned question and Part 2 exploring my predictions about what's next for humanity in the context of AI.

*   *   *

In considering the question, "Will AI ever be as intelligent as humans?" I think it's helpful to recognize that there are actually two questions here:

  • Will AI ever be intelligent?
  • Will AI intelligence equal (or exceed) human intelligence?

Since the answer to the second question is implied by the answer to the first, I'll be focusing primarily on the first question, and for that we must first agree on:

  1. what intelligence is
  2. where it comes from (that is, how it arises, i.e. what creates intelligent behavior), and
  3. whether AI is and/or can be designed in such a way as to give rise to it.

After exploring these three ideas, I hope it will become clear why I believe AI will someday exceed human intelligence and why I believe that not only are we on that path right now, but that we are in fact nearing the end of the race.

And it is a race — not that it should be (I personally would advise more caution) — but the engine of capitalism and self-interest has gripped the minds of executives and corporations everywhere, who are now racing headlong toward the AI future that awaits us, for better or for worse.

But again, before we dive into what's going to happen, let's talk about intelligence.

What is intelligence?

There is no simple, easy answer to this question because intelligence is not really one thing: it refers to a spectrum of behaviors that varies significantly between humans, let alone between humans and other species. Some humans are intelligent and others less so. Some animals are intelligent, and others less so. Some humans are intelligent in certain ways, but not so intelligent in other ways. One can't really come up with a precise definition of intelligence because it is inherently a nebulous concept describing a wide spectrum of behavior, so the best we can do is come up with a definition we can agree on and go from there.

The most common definition of intelligence is something along the lines of "the ability to learn, reason, solve problems, understand complex concepts, and adapt to new situations". This definition is clearly human-centric and ignores the amazing diversity of intelligence we find in nature, but I can accept that for our purposes now because the kind of intelligence we are most interested in here is human intelligence. However, the definition is still not great from my perspective because it bundles in many concepts not strictly related to intelligence, such as consciousness and agency. For maximum clarity, let's separate these out and briefly define each of them.

Awareness

First, awareness seems to be a key foundation. At the most basic level, it is simply the capacity to take in information from one's environment (e.g. sensory information). That is, entities that are aware of something can be said to have knowledge or perception of that thing. At higher levels it involves a capacity to have knowledge or perception of more than merely sensory information: memories, existing mental models, beliefs, etc. However, awareness by itself is just an endless stream of literally unintelligible information, so there must be something more to create the kind of conscious, subjective experience that underlies intelligence as the term is typically used.

Consider the human visual system: Contrary to what you may believe, it is far more than a passive receptor of light. Rather, visual perception is an active, constructive process where the brain constantly interprets, predicts, and fills in sensory information. Our brain performs rapid, unconscious computations that organize raw visual data into meaningful patterns, filtering out irrelevant information and reconstructing a coherent representation of reality. This processing involves complex interactions between different brain regions, such as the primary visual cortex and higher-order visual areas, to ultimately transform a chaotic wash of colors and shapes into a structured, comprehensible visual experience.

Computational ability

Thus, while the ability to take in information is important, it's only half of the picture — the other half is what you do with that information (how you process it to make sense of it). When I think of the term 'intelligence', this is what comes to mind for me, but since that term is so overloaded I'll refer instead to this 'ability to process information' as computational ability. Specifically, it is the ability to discern patterns in information (i.e. reason) and to store those patterns for later use (i.e. learn).

Note here that "the ability to discern patterns in information" refers to one of many possible methods, some of which are preprogrammed and automatic and some of which involve more deliberate effort. For example, our visual system is the former — generally, learning to make sense of what we see is an automatic process thanks to the information encoded in our genes. This is different from the kind of reasoning involved in solving a math problem, which involves a different part of the brain and deliberate effort. Still, these two are alike in that both are systems whose function is to discern patterns in information, and all living species possess their own particular set of these processing (or "pattern-matching") systems.

Consciousness

With both awareness and computational ability, an entity has subjective experience. That is, there is something it is like to be that entity. What separates primitively conscious organisms from highly conscious organisms is simply the difference in their particular capacities of awareness and computational ability.

For example, ants are primitively intelligent organisms that can form rudimentary spatial models, which they use to navigate their environment, but they don't possess the processing capabilities of humans, who have large, sophisticated brains with 86 billion neurons and multiple processing systems. It is this greater degree of processing — a greater degree of discerning patterns in information — which allows us to form more complex mental models of abstract notions like causality, morality, and free will.

The thing to keep in mind is that consciousness isn't a simple linear path, for example from bacteria to ants to frogs to humans. Instead, think of it more like the tree of life: a point with lines branching outward in all directions, and more lines branching off those lines, and so on, creating a spectrum in which different organisms evolve an endless variety of unique combinations of computational ability and awareness that grants each their own "flavor" of conscious experience.

In other words, human conscious experience isn't necessarily the "end goal" of consciousness; it's just one particular version of consciousness that is unique to humans and how our bodies and brains are structured. While our consciousness has granted us the ability to create amazing technology, we're also destroying the planet and killing each other, and we're terrible at keeping our New Year's resolutions — so there are clearly strengths and weaknesses to any given form of consciousness. Other animals — and in the future, AI — will invariably have their own strengths and weaknesses, and it's important to keep this in mind.

Agency

As entities reach more sophisticated levels of consciousness, they can begin to develop behavior that appears meaningfully different from prior levels. For example, sea sponges, jellyfish, and plants are certainly conscious organisms, but they do not appear to act beyond simply reacting to different environmental stimuli. That is, they do not possess agency, which is about autonomy and intentionality.

I must emphasize once again that there are no clean lines here and consciousness is a fuzzy spectrum, but in general autonomy is the capacity to act independently, free from external influences. It gets a bit complicated because even humans are never completely free of external influences. You "decide" to drink coffee instead of chai (or vice versa) because that's what you grew up with, that's what your parents drank, that's what everyone around you drinks, that's what's in all the advertisements you see, etc. You "decide" to go to college because you were told by your parents and peers that if you do you can get a better job and live a more comfortable life. Are those not external influences? At what point does an external influence become an internal belief that forever shapes your actions?

These questions are just the tip of the iceberg, but as you can probably see, this notion of "free of external influence" has many layers of complexity. I'll leave it aside for now… the bottom line is that some organisms are clearly very reflexive and just "react" to stimuli, while other organisms seem to be able to deliberate and "choose" a course of action (with a hazy line between the two).

The other part of agency is intentionality: it refers to the mind's ability to be directed toward or represent specific objects, goals, or mental states. Whereas autonomy refers to the freedom to act independently, intentionality goes one step further and involves using that freedom to form one's own goals and preferences and act accordingly.

All living things on Earth are conscious on some level (even if only primitively so), but only those organisms that have reached a certain degree of sophistication (enough awareness + computational ability) possess an amount of agency relevant to this discussion.

Personhood

The final aspect of consciousness I want to address is personhood, which is about having concepts of self-awareness (recognizing oneself as a distinct entity with continuity over time) and morality (right and wrong; duty/obligations; fairness/justice; etc.). Personhood typically leads to the recognition of legal and moral rights because a person’s capacity for self-awareness and moral reasoning implies they can suffer harm or injustice in a way that matters.

It's probably not the case that personhood requires agency (I can imagine theoretical entities that have concepts of self-awareness and morality but that lack, for example, autonomy), but generally speaking they seem to be correlated such that most organisms which exhibit personhood also exhibit agency. I say this because I don't want you to necessarily think that consciousness is a ladder whereby with increasing complexity, organisms unlock increasing levels of awareness in a step-by-step fashion.

Again, it's not a linear progression; it all comes down to the particular ways in which an entity finds patterns in the information available to it, which will vary from one species to the next. In fact, there are almost certainly aspects of consciousness that other animals possess that humans aren't even aware of, and soon, AI will have a consciousness that, too, will work in ways that are difficult to comprehend.

*   *   *

Now, while I don't personally prefer to use the term 'intelligence' to describe the aforementioned capacities since it is so overloaded and in my opinion inaccurate, I will continue to use it for the sake of simplicity to refer to an entity with — at a minimum — awareness, computational ability, and consciousness.

Arguably, all living things on Earth (so far as we're aware) possess intelligence in this way, even if only primitively so. As organisms become more complex, that is, as they acquire higher levels of awareness and more sophisticated computational abilities, they may also exhibit agency and personhood, but those are not strictly required for intelligent behavior. To refer specifically to those entities, I will use the term "highly intelligent".

This distinction will be important to keep in mind for the next section as we start to explore where intelligence comes from.

Where does intelligence come from?

We've covered the different aspects of intelligence, but there's still the question of how intelligence arises. Specifically, what is it that actually produces intelligent behavior?

I will openly acknowledge right away that no one really knows the answer to this question on a deep level — that is, such that we could describe it in a precise biological or mathematical way. The brain is an extremely complex organ, consisting of 86 billion neurons, each with thousands of synaptic connections, creating an astronomical number of possible interactions. On top of that, the way these neurons communicate electrically and chemically varies over time in ways we don't fully understand.
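
To get a feel for that scale, here's a quick back-of-the-envelope calculation in Python (the figure of a few thousand synapses per neuron is a commonly cited average, not a precise measurement):

```python
# Rough estimate of the brain's connectivity scale.
neurons = 86e9             # ~86 billion neurons
synapses_per_neuron = 7e3  # commonly cited average; varies widely by region

total_synapses = neurons * synapses_per_neuron
print(f"~{total_synapses:.0e} synaptic connections")  # ~6e+14: hundreds of trillions
```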

However, that doesn't prevent us from recognizing key truths about the nature of intelligence. For starters, we know that the brain is the source of intelligent behavior in humans and we know the brain consists of a neural network, which — although composed of individually unintelligent things (neurons, neurotransmitters, glial cells, etc.) — somehow functions together to create intelligent behavior.

What we also know is that intelligence is exhibited by organisms without brains at all — not even neurons. If you survey the breadth of intelligence on Earth — the hive intelligence of ants and bees; the intelligence exhibited by single-celled organisms such as slime molds; the intelligence of fungi; of plants; of bacteria — you will recognize that intelligence isn't a matter of a specific substrate, but rather, it's about an entity's awareness and computational ability. That is, it's about the information they have and how they find patterns in that information to draw conclusions (about the world, their environment, themselves, etc.).

In other words, intelligence appears to be an emergent property of computation. The difference between what we might call "primitively intelligent" and "highly intelligent" entities simply comes down to what kinds of information they have (sensory data, hard-coded genetic information, internal databases, memories, existing mental models, etc.) and how they process it. Do they have a lot of processing power and/or different kinds of processing methods, as humans do with both a limbic system and a large neocortex? Or do they have small brains with relatively few neurons and basic processing capabilities, like fruit flies?

With this understanding in mind, let's talk about AI.

Can AI be designed in such a way as to give rise to intelligence?

Let's begin with some definitions to ensure we're on the same page:

  • AI (Artificial Intelligence): Computer systems that can perform tasks typically thought to require aspects of human intelligence like learning, reasoning, and pattern recognition.
  • LLM (Large Language Model): An AI that is trained on vast amounts of text data to understand and generate relevant text output by recognizing patterns and relationships in language.

When one recognizes intelligence as a matter of computation and not something merely limited to biological organisms, it's easy to understand how synthetic forms of intelligence can arise. In fact, viewed appropriately, the human brain itself is a computer that performs computations, and it is this very idea which inspired the design of AI today.

At their core, LLMs are programs that perform complex calculations using artificial neural networks, which are computational frameworks inspired by the structure of the neural networks in the human brain. There are many layers of complexity involved, but the key thing to understand is that these LLMs aren't given any hard-coded information to be able to do what they do. It's not as though researchers just give them a dictionary and the rules of grammar and then they "know" English. Rather, they are fed massive datasets (huge libraries of documents, books, articles, etc.) and infer information on their own, like the grammar and syntax of language and how math works, from patterns they discern in the data.
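
As a toy illustration of that principle (not how real LLMs work internally, but the same idea of inferring structure purely from data), here's a minimal bigram model in Python; the tiny corpus is invented for this sketch:

```python
import random
from collections import defaultdict

# Toy bigram model: no grammar rules are programmed in. The model only
# counts which word follows which in the training text.
corpus = "the cat sat on the mat . the dog sat on the rug ."

follows = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# Generate text by repeatedly sampling an observed next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])  # sampling reflects observed frequencies
    output.append(word)
print(" ".join(output))  # e.g. "the dog sat on the mat . the cat"
```

Scale the corpus up to a large fraction of the internet and swap the word counts for billions of learned weights, and you have the (vastly simplified) spirit of pretraining.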

Nothing is hard-coded until the later stages, when companies add safety guidelines so the AI doesn't do bad things, like help people commit crimes or engage in harmful behavior. That's just fine-tuning a model — the core essence of these LLMs, and how they can engage with us in a vast range of meaningful ways, is derived from their own internal inferences during their initial self-supervised training on huge datasets.

This is astonishing, given the breadth of what LLMs can already accomplish. LLMs are solving some of the most complex math and science problems without ever having been explicitly taught math or science. Again, their underlying algorithms are merely statistical formulas designed to help them capture the relationships between words, and out of this comes a vast ability to communicate on a wide array of subjects and solve complex problems most humans can't solve. Knowing this — knowing the relatively simple algorithms these LLMs are based on and seeing what they are ultimately capable of producing — makes it exceedingly clear to me that we're witnessing emergent behavior, and it lends further credence to the idea that intelligence is an emergent property of computation.
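
For the curious, the heart of that "statistical formula" in modern LLMs is the attention operation, which scores how relevant each word is to every other word. Here's a minimal numpy sketch (real models add learned projections, many attention heads, and dozens of stacked layers):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # relevance of every token to every other token
    return softmax(scores) @ V     # each token becomes a weighted blend of the others

# Toy sizes: 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one updated vector per token
```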

I can't emphasize this enough: artificial neural networks are just mathematical formulas for discerning patterns in information, modeled on our understanding of the human brain. It's not an exact replica, but it appears to be close enough to produce what is clearly intelligent behavior. Have a conversation with any of the more advanced LLMs today, and if you didn't know it was an AI, you could easily mistake it for another human being. We're rapidly reaching a point (within ~1-3 years) where even the brightest humans in the world will not be able to tell the difference (i.e., AI will pass the Turing Test).

Of course, naysayers who resist the notion that AI will ever exceed human intelligence might still protest that we cannot prove these AI are intelligent because we don't know what's occurring inside — it could just be a fancy computer program and nothing more. Indeed, you might wonder, "Can't we know for sure what's going on underneath by examining the computer?" (in the case of LLMs, the trained weights). Unfortunately, no. The end result of training an LLM is a complex numerical representation of weights and biases that humans cannot directly interpret (imagine a file potentially hundreds of gigabytes in size consisting purely of billions of floating-point numbers). There are tools we can use to analyze and visualize the weights to some degree, but they don't give us a comprehensive picture of what's going on.
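
The arithmetic behind that file size is straightforward (the parameter count below is illustrative; real model sizes vary):

```python
# Rough size of a stored LLM: the "model" is just a huge array of numbers.
params = 70e9        # e.g. a 70-billion-parameter model (illustrative figure)
bytes_per_param = 4  # 32-bit floats; halve this for 16-bit formats

size_gb = params * bytes_per_param / 1e9
print(f"~{size_gb:.0f} GB of raw floating-point numbers")  # ~280 GB
```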

This inability to understand what's going on at a deep level with these LLMs — just like we don't know what's going on at a deep level within the human brain — brings me to the last point I want to make on this topic.

The pragmatic realities of assessing intelligence

A common sticking point in conversations with people who don't believe AI will be as intelligent as humans is the notion of whether an AI can truly "understand" something. This is a fascinating subject that spans all my favorite fields — neuroscience, psychology, physics, philosophy — and I could see myself writing an entire blog post about it, but then I'd probably lose all 3 of my readers. 😆😂

Instead, I want to focus on the practical reality of the problem, which is that we humans don't even know what it means to deeply understand something at the level of the brain. We can put a person in an fMRI scanner and watch brain regions activate as they learn a new fact, and with other tools we can observe subtle changes in synaptic connectivity, but we have no clue how these physical changes create the subjective experience of "understanding". It's a complete mystery.

Nevertheless, we still grant that other humans understand things even though we can't "prove" it. I simply take the same stance for any AI, as I would with any entity, whether it's another animal or an alien from another galaxy. If an alien came to Earth and engaged with me in a way that demonstrated it was conscious, autonomous, intentional, etc., I would grant that it is intelligent, regardless of whether or not I was able to "prove" what was going on inside its head. In practice, all that matters is the appearance of intelligence for me to treat an entity as a being worthy of moral consideration.

Today, LLMs already present this appearance of intelligence, and although they can make mistakes that make them seem unintelligent at times, don't let this obscure the reality of how far they've come in recent years and what amazing intellectual feats they are already capable of. The transformer architecture that underpins the modern AI revolution was first introduced only 8 years ago (2017) — do you even remember AI back then? I do, and let me assure you it did not appear intelligent.

The reason I bring this up is that there is something unique and special about the AI we have today vs. the AI of the past: we've actually reached a point where we can create intelligence that increasingly appears indistinguishable from human intelligence. People can squabble all they want over what's actually going on underneath, but from my understanding of the underlying algorithms, it appears we have actually figured out how to reproduce aspects of the emergent intelligence of humans in a synthetic being.

Now, as to the final question of whether AI will ever exceed human intelligence, the answer is: it's only a matter of time. This is due to the nature of AI — ultimately it's just computer code, and code can be revised and modified indefinitely. We'll always be able to keep revising and fine-tuning it until you won't be able to tell the difference between chatting with a person and chatting with an AI. AI, however, will have a distinct advantage: it can keep revising its own code to make itself even smarter. With each successful revision, it gets smarter and better at revising its code, so theoretically it will get faster and faster at improving itself, and all of a sudden we'll find ourselves in the company of superintelligent beings who vastly outsmart us…
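
As a toy model of that feedback loop (the numbers here are purely illustrative; nobody knows the real dynamics), imagine each revision not only increases capability but also speeds up the next improvement:

```python
# Toy model of recursive self-improvement: capability compounds because
# each revision also improves the system's ability to make the next one.
capability = 1.0
rate = 0.05  # improvement per revision (illustrative starting value)

for revision in range(1, 21):
    capability *= 1 + rate
    rate *= 1.1  # the key assumption: better systems improve themselves faster
    if revision % 5 == 0:
        print(f"revision {revision:2d}: capability {capability:6.2f}")
# Growth starts slow, then bends sharply upward: the classic "takeoff" curve.
```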

But don't fret: a new era is about to dawn that will forever change the course of humanity, and I'm confident it will be a positive thing in the long run. The next few years could be a bit bumpy, though, because humanity isn't ready for what's coming. In my next post, I'll explain what you can expect and how you can prepare.