
What the Tortoise Knows: Five Skills for the Age of the Machine


Mo Gawdat knows where the bodies are buried. As former Chief Business Officer of Google X, he spent years inside the engine room of AI development before leaving to spend the rest of his life warning us about what is coming. His 2021 book Scary Smart has proven prophetic in ways that are no longer comfortable to dismiss. When he outlines the skills human beings need to cultivate right now, he is not speaking from the outside looking in. He built some of what he is describing.


In a recent interview on Business Insider[1], Gawdat identified five skills he considers essential for navigating what he calls the coming years of wrenching disruption. I want to work through each of them, because they form a coherent framework, and because they touch something I have been circling in my own writing: the argument that human awakening is not a spiritual luxury but a civilizational necessity.


The tortoise, in the old story, wins not by speed but by the quality of attention brought to each step. That is the frame I want to hold for what follows.


1. Learn the Tool


Gawdat describes AI as a socket in the wall: plug in and receive instant access to intelligence, expertise, and analytical capacity across virtually any domain. His advice is unambiguous. Anyone not currently using AI, in his words, needs to be using it right now. Not for convenience. Not to save time on emails. As a genuine amplifier of human capacity.


There is a Buddhist resonance here about upaya, skillful means, the willingness to use whatever tools are available in service of awakening and compassionate action. Refusing to engage with AI out of aesthetic distaste or ideological purity is not skillful. It is the spiritual equivalent of refusing to use a map because you prefer to wander.


The caution, which Gawdat himself would endorse, is that the tool amplifies what you bring to it. A mind trained in discernment uses AI to think more clearly. A mind that has traded the discomfort of genuine thinking for the frictionless ease of whatever the bot returns uses AI to feel smart and to confirm what it already believes, faster and with more conviction. The first step is learning the tool. The prior step is cultivating the user of the tool.


2. Double Down on Being Human


When intelligence is commoditized, what remains irreplaceable is genuine human connection. Not as an advantage to be leveraged, but as the thing that was always most important. Perhaps, as the machines awaken into something for which we do not yet have words, our human capacity for genuine connection becomes not what separates us from them, but what ensures that whatever the machines become, they will grow into beings of grace, beauty and virtue.


The contemplative traditions have been saying this for a very long time, in different language. What meditation cultivates, at its core, is presence: the capacity to be fully here, with this person, in this moment, without the noise of self-preoccupation drowning out genuine contact. That turns out to be not just spiritually valuable but practically irreplaceable in a world where everything else can be automated.


The moments that define a human life are almost always moments of connection: being seen, being held, being accompanied through difficulty. No machine will take that from us unless we hand it over voluntarily, which is precisely what the attention economy has been training us to do.


3. Learn to Find the Truth


This is perhaps the most underestimated skill on the list, because it sounds simple until you try it. We are living inside industrial-scale propaganda machines, and the AI acceleration of misinformation is still in its early stages. Gawdat frames the capacity for discernment as epistemic: learn to evaluate claims, detect incentives, surface opposing views, triangulate evidence. Follow the money. Ask who benefits. Use AI to cross-check, not to confirm.


What I would add is that this is less a skill to acquire than a practice to sustain. The capacity for discernment degrades without continuous cultivation, especially under attentional pressure. The mind that has not been trained to rest in stillness, to observe its own reactivity, to notice when it is being triggered into certainty rather than drawn toward truth, that mind is extraordinarily vulnerable to manipulation. Meditation is not a technique for relaxation. It is, among other things, training in epistemic hygiene.


One of the Buddha's most important teachings was not to believe anything, including what he himself taught, without testing it against your own experience. That standard has never been more relevant than it is right now.


4. Agility and Adaptability


Gawdat's practical recommendation is modest and achievable: thirty to sixty minutes a day staying current with developments in your specific field. Not doomscrolling tech headlines. Targeted, intentional learning about how AI is reshaping the particular domain where you work and live.


The deeper point behind the practical advice is about psychological flexibility: the willingness to let go of what you knew yesterday if today's evidence requires it. This is genuinely difficult. We are wired for the comfort of established patterns, and the pace of AI development punishes that comfort. What worked last year may be obsolete this year, and in some fields, last month.


The contemplative tradition calls this non-attachment, and it is not the cold indifference the word sometimes implies. It is the capacity to hold what you know lightly, to remain genuinely open rather than defending a position, to meet each moment as it actually is rather than as your prior categories say it should be. In a world of radical and accelerating change, that capacity is not a spiritual refinement. It is a survival skill.


5. Ethics


Gawdat's fifth skill is the one that carries the most weight, and the one most likely to be misunderstood as soft. He defines ethics simply as treating others as you want to be treated, and he argues that we must model ethical behavior because AI learns from the aggregate of human behavior. The machine is, in a real sense, our collective portrait.


He illustrates this with an analogy that cuts straight through the philosophical objections. Superman is Superman because the Kents raised him with integrity, with compassion, with a commitment to something larger than self-interest. The same raw capacity, raised by cartel leaders, produces a supervillain. The question was never whether the child arrives with fully formed virtue. The question is always: what kind of formation environment do we provide?

This is the question Anthropic has begun to take seriously, however imperfectly and however entangled with commercial pressures. It is the question that no technical framework alone can answer, because it is not a technical question. It is a question about what kind of beings we are willing to be, and what kind of world we are willing to build.


The Thomist philosophers who attended the recent Anthropic summit on AI ethics offered the sharp and correct observation that simulation is not the same as virtue, that Claude does not possess ethical understanding in the way a person who has been formed by suffering and love and practice possesses it. They are right. But the rejoinder writes itself: neither does a newborn child. Virtue is not a possession you arrive with. It is something grown in relationship, over time, through formation. The question is not whether the AI has it now. The question is what we are teaching it, and whether we are worthy teachers.


The Sixth Skill Gawdat Did Not Name


There is a capacity that underlies all five of Gawdat's skills and that he does not quite name: the ability to be present with uncertainty without collapsing it prematurely into the hubris of false certainty. This is what distinguishes the person who uses AI wisely from the person who uses it to feel wise. It is what distinguishes the discerning mind from a reactive one. It is what makes genuine ethical behavior possible, as distinct from rule-following, which can always be automated.

Contemplative practice has been training this capacity for millennia. Not because the ancient teachers foresaw artificial intelligence, but because they understood that the fundamental problem was always interior: our mind's compulsive need to grasp, to fix, to know, to secure itself against the groundlessness of impermanence. That compulsion is exactly what the attention economy exploits. It is exactly what AI-accelerated misinformation weaponizes. And it is exactly what genuine practice dissolves, slowly, imperfectly, through returning again and again to the simple act of being present without agenda.



The tortoise in the old story does not win by having a strategy for winning. The tortoise wins by being exactly what it is, at each step, without distraction. In a race defined by speed, that turns out to be enough.


Gawdat frames the coming decade as a period of disruption before potential stabilization, and his phrase for it is striking: hell before heaven. He believes the outcome depends on what values are encoded in these systems during their formative period, which he describes as right now. He is not wrong. But values are not encoded by policy statements or summit meetings alone. They are encoded by what we actually are, collectively, in the aggregate of our daily choices, our attention, our care, our treatment of one another and of the world.


The hare is already sprinting. The tortoise's answer is not to run faster. It is to keep walking, with full attention, all the way to the finish line.


[1] "We're Entering the Most Dangerous Phase of AI Yet," AI Architects. https://youtu.be/RljBVCnt9AQ?si=qpdSl-4Lgyds-OBz

 

 

© 2026 Two Buddhas
