Diving into "The Age of AI: And Our Human Future", a 2021 book by Kissinger, Schmidt, and Huttenlocher, I couldn't help but think of Descartes and his timeless reflections. So, here's my take on the book, sprinkled with Descartes' wisdom from his "Meditations" and "Principles of Philosophy".
Before we start, here is an AI-generated image of Descartes writing his "Meditations," surrounded by computers running advanced artificial general intelligence (AGI), with a cyborg observing him. DALL-E, the image creator from OpenAI, is crazy cool!
For millennia, humanity has been deeply engaged in the exploration of reality and the relentless pursuit of knowledge. Descartes, writing in the 17th century, famously declared:
"Cogito ergo sum (– I think, therefore I am.)"
This raises a pivotal question in the age of AI: If AI can "think", or at least mimic thinking, what does that make us, and what is AI?
Let’s take the ancient Greeks, who firmly believed that their observations of the world were a direct reflection of reality. This confidence in the human mind's capacity to comprehend and articulate substantial aspects of the world around us has been a driving force in philosophical thought. However, this conviction also acknowledged the existence of unexplainable phenomena, often attributing them to divine entities.
Then let us turn to the parallel impact of monotheistic religions, which introduced the concept of a single, all-knowing, and controlling deity, thereby shifting the focus more towards spiritualism than reason. It wasn't until the 15th and 16th centuries that the Western world entered a new epoch marked by a wider diffusion of ideas, one that would culminate in the Enlightenment. This period was characterized by a break from dependence on the Church and Latin as the sole channels of knowledge, paving the way for a more diverse and inclusive understanding of the world.
The Renaissance furthered this evolution by celebrating art and the human side of things, highlighting the importance of human experience in shaping our understanding of reality. However, this period also revealed a Western-centric view that often underestimated or dismissed the value of non-Western cultures and knowledge systems. This fostered a superiority complex in the Western world, built on the belief that it could spread its viewpoints globally without learning anything of significance from other regions.
Enlightenment thinkers shifted the balance back towards reason, empowering the human capacity to think, understand, and judge independently. This period questioned even the existence of physical reality and moral truths, yet the pursuit of "eternal truth" remained a central theme. Immanuel Kant's "Critique of Pure Reason" argued that human reason should be applied to understand its own limitations. Kant posited that since human cognition relies on conceptual thinking and lived experience, it cannot be separated from observable phenomena, thus limiting our ability to achieve pure thought and understand the inner essence of things.
The acknowledgement of the inherently filtered nature of our experience spurred the development of comprehensive "catalogues" of information, as seen in the history of encyclopedias. This effort aimed at amassing multidisciplinary knowledge to capture a fuller picture of reality. The Enlightenment period is often described as having produced "armed reason," differentiating it from tradition, and fostering innovation, particularly through the scientific method.
In reaction to the Enlightenment, the Romantic movement emerged, emphasizing human feeling and folk tradition and treating them as virtues, rather than relying exclusively on the "mechanistic certainties" analyzed by Enlightenment thinkers. Later, Einstein's theory of relativity and, above all, the uncertainty principle in quantum mechanics further complicated our understanding of reality, the latter suggesting that the paths we observe come into being precisely because we observe them. Figures like Niels Bohr highlighted that observation not only affects but also orders reality, as exemplified by the double-slit experiment. I am immediately reminded of what Descartes said:
“Whatever I have accepted until now as most true has come to me through my senses. But occasionally I have found that they have deceived me, and it is unwise to trust completely those who have deceived us even once.”
The 20th century, shaped by the experiences of world wars, embraced the ambiguity and relativity of perception. Ludwig Wittgenstein's emphasis on generalizing across phenomena through family resemblances, patterns, types, and clusters laid the groundwork for modern approaches to understanding, including the machine learning that eventually produced AI in its present form.
As Kissinger, Schmidt and Huttenlocher state:
“When information is contextualized, it becomes knowledge. When knowledge compels convictions, it becomes wisdom. Yet the internet inundates users with the opinions of thousands, even millions, of other users, depriving them of the solitude required for sustained reflection that, historically, has led to the development of convictions. As solitude diminishes, so, too, does fortitude”.
Societies face a choice: adapt to AI piecemeal or engage in a deliberate dialogue to define AI's role. This involves balancing AI's integration and human oversight, particularly as we grow “habituated” to AI. AI brings new dimensions to human experience, posing challenges to traditional concepts of human identity. It necessitates a reevaluation of which aspects of life should remain under human control. Its opaque decision-making can diminish individuals' sense of autonomy, especially when the rationale behind AI's actions is not transparent. A divide may emerge between those who embrace AI (virtualists) and those who reject it, favoring a world of faith and reason alone (physicalists). This could lead to a societal split, with AI's integration becoming nearly inescapable.
Most importantly, the shift to AI-driven models marks a departure from traditional theoretical understanding: AI draws conclusions from clustering and pattern matching rather than from explanatory theory (I'll sketch this in code right after the quote below). This challenges our inherent pursuit of understanding the world. Are we really ready for it? Again, I am reminded of Descartes:
“Although one idea may perhaps originate from another, there can’t be an infinite regress of such ideas; eventually one must come back to an idea whose cause isn’t an idea, and this cause must be a kind of archetype containing intrinsically all the reality or perfection that the idea contains only representatively. So the natural light makes it clear to me that my ideas are like pictures or images that can easily fall short of the perfection of the things from which they are taken, but which can’t exceed it.”
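To make that clustering point concrete, here is a minimal sketch in Python (the data, the choice of scikit-learn, and every parameter are my own illustrative assumptions, not anything from the book). The algorithm recovers structure from raw points while offering no theory of what the points mean:

```python
# Minimal illustration: k-means groups points purely by statistical
# proximity. It yields usable structure without any causal "theory"
# of what the data represents -- the departure described above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
# Two synthetic "populations" the algorithm knows nothing about.
group_a = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
group_b = rng.normal(loc=5.0, scale=1.0, size=(100, 2))
points = np.vstack([group_a, group_b])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.cluster_centers_)  # the recovered pattern: two cluster centers
print(model.labels_[:5])       # conclusions, with no explanation of "why"
```

The model's "knowledge" here is purely statistical: it can tell us which points belong together, but not why they do, which is exactly the gap the authors worry about.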
Furthermore, the aforementioned "habituation" and the constant stream of AI-curated media diminish the space for deep, concentrated thought, impacting traditional forms of human reasoning and communication. AI's ability to navigate vast data sets can lead to both discoveries and distortions in knowledge, affecting our grasp of reality and potentially reaffirming our biases. Lastly, AI's exploration of reality may reveal aspects beyond human conception, prompting a redefinition of human roles and a reconsideration of what is knowable.
AI can simultaneously augment and diminish human reason by uncovering new objective facts while conforming information to our biases, thus impacting our access to and agreement upon objective truth – basically, will it even tell us the whole truth? Again, as Descartes said:
“The seeker after truth must once in his lifetime doubt everything that he can doubt.”
But with AI’s black-box workings, what will we doubt? The processes are hidden from us!
AI's exploration of reality might unveil patterns and structures beyond human comprehension or articulation, posing a fundamental challenge to our role as the sole perceivers of reality. This could lead to a redefinition of reality and our engagement with it.
When (not if) we become habituated to AI, will we even think on our own? Habituation could also alienate us from other human beings, who can be "disagreeable", while AI programs maximize personalization and give answers that suit our interests. But is that necessarily a good thing? What Frank Wilczek said could be applied to AlphaZero's and ChatGPT's success:
"There are ways of knowing that are not available to human consciousness".
There will be a clear divide between those who adopt or develop AI and those who do not, whether willingly or through a lack of means or access to knowledge. But AI right now does not "reflect": it has no self-awareness, and no intention, motivation, morality, or emotion. This could be a challenge going forward, as we contemplate below, and our current institutions leave us ill-prepared for an AI-based future, as we have not thought in terms of such possibilities before. This time, it really is different.
Descartes summed up a similar fear as follows:
“I am like a prisoner who dreams that he is free, starts to suspect that it is merely a dream, and wants to go on dreaming rather than waking up, so I am content to slide back into my old opinions; I fear being shaken out of them because I am afraid that my peaceful sleep may be followed by hard labour when I wake, and that I shall have to struggle not in the light but in the imprisoning darkness of the problems I have raised.”
Have we, too, raised a problem that could imprison our thinking? There's so much more to delve into - the wonders of human imagination, the quest for knowledge, and its ties to love, beauty, and the cosmos. All these elements have culminated in the current AI revolution, an epochal shift that happens once in many lifetimes. I'm considering a sequel to explore how our fascination with the stars connects to the creation and utilization of microchips, the backbone of today's AI technologies. I'm eagerly awaiting some free time to pursue this (I penned this piece during work hours, too excited to revert to routine tasks, especially following a lengthy end-of-year break).
As always, a big thank you for reading! 😊
Best,
Ahmad Mobeen