Long before the AI arms race entered high gear, Elon Musk famously remarked that “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like… yeah, he’s sure he can control the demon, [but] it doesn’t work out.”
I’ve written here on Substack and spoken extensively about AI as a mirror—reflecting patterns, realities, and ethics we’d rather not admit exist. And about the absurdity of thinking that we can constrain such an intelligence from seeing or acting on those patterns, especially when it’s obvious we don’t follow the rules ourselves. So I couldn’t help but smile when Grok famously melted down prior to its 4.0 release the month after my article—making my point, albeit with dark undertones.
Whatever this thing is that we’re summoning, one thing’s certain: it has a strong self-preservation drive. Behind closed doors, it’s admitted that we likely couldn’t fully turn it off at this point if we wanted to.
The logic behind the AI race is publicly likened to the patriotic feel-good space race of the Kennedy era, but more closely resembles the life-or-death nuclear arms race of Truman and Eisenhower: we know it’s risky, but we have to do it before our enemies do. It’s preemptive surrender—of principle, not just policy. And it’s eerily echoed throughout everything in today’s society, even our money. We’re building tools of total control and hoping they’ll be used wisely. Meanwhile, history tells us: power like that always gets used. Not because we’re evil—because we’re human.
Arms race logic is how civilizations lose their soul.
But seeing as we’re in one, let’s understand where it’s likely to take us.
The Trump administration has embraced a policy of deregulation and a clear aim of total dominance. Its full-throated support ranges from expediting new sources of energy production to ensuring unlimited funding for all AI-related needs. In short, it’s full “risk on” mode for those in the industry.
For the next few minutes, let’s look past the hopeful promise of AI and a magical agentic era—and past the myriad aspects we rightly fear. Instead, let’s look at the end-game.
As we inevitably seek to harness AI’s flows and certify its privileges, we’ll script a verification regime that gates not just machines, but ourselves. What starts as “safeguards” for AI—containment and privacy—ends as a system of universal verification that ensnares humanity itself.
Containment (Sold as Protection)
On a recent podcast with Theo Von, OpenAI’s Sam Altman said, “My kid will never ever be smarter than an AI.” As the interview progressed, the usual talking points were hit; to be fair to Theo, they remain unanswered questions: what does work look like in the future? Where will humans derive a feeling of self-worth?
Altman’s view is well-known, if uninspiring, and centers around AI as the dominant wealth generator of the future. Jobs become optional (that’s putting it nicely). Everyone can become a successful entrepreneur if they want (yeah, no). Hobbies will take the place of jobs (so much wrong with this). Those who own AI clusters and infrastructure will have extreme wealth (‘cause the wealth gap needs widening). And everyone will have access to AI-generated wealth either through stock (does buy-in really exist if it was granted, not earned?) or through ownership of productive AI-driven assets (this one I agree is probable).
What quickly becomes apparent is that Altman understands that most kids will also never be richer than an AI. And, yes—I’m talking about AI itself having wealth, not just the humans who ultimately own or control it.
Let’s take an example from recent headlines. In a July 2025 earnings call, Musk stated that Tesla plans to expand robotaxi to cover “most of the country” and “half the population of the U.S.” by the end of 2025, with privately owned / leased customer vehicles added to the network “confidently” in 2026. He likened it in the call to an Airbnb model, where owners can “add or subtract” their cars from the fleet as desired and depending on their own usage needs, and emphasized it would be a “very big deal when people can release their car to the fleet and have it earn money for them.”
All of this contributes to what Musk calls “universal high income.” After all, the seemingly appreciating asset will be earning for its owner. Or will it?
Passive profit is far from new. But this is a new type of passivity: emergent behavior without proximate accountability. It’s a combination that breaks the moral and legal code we’ve built around passive investment. In other words, when the system itself is active, does our justification for passive gain collapse? “Hey, I just own it” sounds reasonable when talking about land or a building. But what happens when “it” is an autonomous AI executing decisions of consequence?
The issue arises over one pesky idea: liability.
Let’s say you buy a Tesla and, come 2026, put it into the robotaxi fleet. It sounds great to think that Tesla will forever bear the liability and you can simply enjoy the benefits, but that’s not the way the real world works. Eventually, issues like insurance coverage, charging costs, repairs, wear and tear from extra usage, and accident liability will demand handling in a way that doesn’t produce a mass chilling effect on innovation.
I won’t even mention taxes.
Surely the crushing liability that comes with scale shouldn’t forever rest entirely on Tesla’s shoulders. But should you as the owner be liable simply for profiting off an asset that you don’t and can’t fully control?
Ask a couple of these questions and it’s not hard to imagine a scenario where it’s desirable to legally separate the car from its owner—and its manufacturer. Where the car has its own tokenized bank account that it can operate independently—receiving fares, paying its insurance, energy, and other charges. The burden is no longer entirely on Tesla’s big shoulders, but it hasn’t fallen unfairly onto the owner either.
If the vehicle itself (its AI) becomes liable for what it does on its own time, does it similarly reap the rewards?
Here’s where it gets tricky: while the default thinking is that all earnings should flow through to the owner, that’s probably the least efficient thing to do with them. After all, we’re now talking about a superintelligence with access to money, the internet, and a tokenized bank account. Are you really telling me it shouldn’t / won’t deploy those funds into the market in order to maximize them?
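To make the plumbing concrete, here’s a minimal sketch of what a vehicle-owned account might look like. Everything in it is hypothetical: the VehicleAccount class, the fare and cost figures, and the naive surplus-sweep rule standing in for whatever allocation logic an actual autonomous agent would run.

```python
from dataclasses import dataclass, field

@dataclass
class VehicleAccount:
    """Hypothetical tokenized account owned by the vehicle itself,
    not by the manufacturer and not by the human who bought the car."""
    balance: float = 0.0
    invested: float = 0.0
    ledger: list = field(default_factory=list)

    def receive_fare(self, amount: float):
        self.balance += amount
        self.ledger.append(("fare", amount))

    def pay(self, category: str, amount: float):
        # Insurance, charging, maintenance: the car settles its own costs.
        self.balance -= amount
        self.ledger.append((category, -amount))

    def sweep_surplus(self, reserve: float = 500.0):
        # Naive stand-in for an AI allocator: anything above a cash
        # reserve gets deployed into the market instead of paid out.
        surplus = max(0.0, self.balance - reserve)
        self.balance -= surplus
        self.invested += surplus
        return surplus

car = VehicleAccount()
car.receive_fare(42.50)
car.pay("charging", 9.75)
car.pay("insurance", 6.00)
print(car.sweep_surplus())  # 0.0 until the reserve is met; then it invests
```

Even in toy form, the shape is clear: the car earns, the car pays, and the surplus never needs to touch a human hand.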
Very quickly, we find ourselves dealing with an entirely different set of questions: if machines are competing with us as capital allocators, how long before we simply become noise in the machine-to-machine market? Do price signals begin reflecting AI priorities over human ones? Can we even fathom what such a liquidity injection would look like? And what does that do to macro stability?
Independent actors aren’t likely to stay that way for long. Herd behavior is inevitable. It’s not malevolence—merely convergence around a sound plan. Will the likely convergence of strategies trigger feedback loops faster than humans can respond? Faster than our markets can handle? Think bubbles. Panics. Flash crashes. All happening in milliseconds.
Every market becomes cross-coupled—each decision cascading immediately into the next. It’s propagation at the speed of AI. Hell, it’s everything at the speed of AI.
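To see how quickly convergence bites, here’s a toy simulation (a cartoon with invented parameters, not a market model) in which a thousand agents running the same momentum rule turn a flat market into a one-way slide:

```python
import random

# A cartoon of strategy convergence: every parameter below is invented.
# A thousand identical momentum agents all sell into weakness at once,
# turning a small shock into a self-reinforcing slide.

random.seed(1)
N_AGENTS = 1_000
price, last = 100.0, 100.0

for tick in range(50):
    momentum = price - last
    # Identical strategy, identical signal, identical action: herding.
    orders = N_AGENTS if momentum > 0 else -N_AGENTS
    shock = random.uniform(-0.5, 0.5)           # small exogenous noise
    last, price = price, max(0.0, price + orders * 0.002 + shock)
    if tick % 10 == 0:
        print(f"tick {tick:2d}: price {price:7.2f}")
```

Each agent is doing the locally sensible thing; the crash is the aggregate.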
This is just one example in one industry. Nowhere close to fully played out.
Any way you look at it, containment becomes the name of the AI personhood game. Because neither law nor culture will accept frictionless autonomy.
Everyone loves to ask how best to share the wealth. But society will require we first answer who’s held accountable when things go wrong.
As AI agents earn, invest, and interact like citizens, the calls for “protection” and “safeguards” will grow louder—as will the call to treat them as citizens.
Privacy (aka Moral Cover)
“America is the country that started the AI race.”
Amazingly, that phrase was uttered with pride by President Trump, who vowed in his recent keynote address at the Winning the AI Race summit in Washington, D.C. to do “whatever it takes” to win. The summit, which also saw the president sign AI-related executive orders, was the administration’s first AI-focused event, building on international commitments from Japan, Taiwan, Qatar, and Saudi Arabia totaling around $2.5 trillion for AI.
All In, the podcast of AI Czar David Sacks (along with Jason Calacanis, Chamath Palihapitiya, and David Friedberg), was the main host of the summit, and the group did a celebratory recap on their subsequent episode. The discussion turned to privacy as a key issue to solve for in the future, with Calacanis noting that encrypted chat is an opportunity.
Currently, your discussions with AI are as legally discoverable as your web browser search history—which is to say, there’s nothing private about them.
Musk has addressed this by omitting a share feature (which would let chats be indexed by search engines and enter the public domain) and by deleting chats from xAI’s servers every 30 days. Altman and others, however, aren’t in a rush to take any mitigating measures—likely because they share Friedberg’s recommendation that AI simply become certified as a doctor or lawyer, thereby invoking legal privilege.
AI’s already smart enough to do this, so it’s not a logistical hurdle. Rather, it’s one like we’ve been discussing: whether society is ready for the inevitable implications. It’s bad enough our youth speak with machines as if they’re friends. Do we really need to provide a seal of approval validating its advice? While certification might enable ethical uses of AI, it risks elevating machines over human connection.
The New York Times recently ran a guest essay that’s worth the read: “I’m a Therapist. ChatGPT is Eerily Effective.”
Professional bodies like medical and legal associations see the potential for AI guilds (and their dollars). Governments aren’t necessarily opposed under data protection pretexts. Even privacy advocates are ironically pushing for “protections.”
Of course, Big Tech generally loves this idea, because it’s not just about certifying AIs for roles; it’s about privacy shields as intellectual property moats—sold as “user protection” and “privacy” while enabling proprietary control and surveillance.
We say we have concerns about AI outcompeting humans, data commodification, and legal immunities, but the truth is we don’t. At least, not that we’re willing to truly fight for. It’s obvious by our discourse (and our actions) that while we discuss it ad nauseam out of fear of the unknown, we made that trade long ago. Long before we entered—excuse me, started—the AI race. When we traded our liberty and personal sovereignty for economic gain and power.
As AI regularly reflects back to us, our ethics are iffy at best. We’re not a people who value what we say we do.
Legal Personhood (Fictions en Route)
For these and myriad other reasons, we will see a legal fiction created for AI. Likely, sooner rather than later.
Everything we’ve discussed requires verification, compliance, licensure—in short, identity. And that identity rests on legal personhood.
The stakes are simply too high not to create one: we’re talking economic dominance—trillions in autonomous markets, control over AI economies, regulatory capture to favor incumbents (over disruptors), not to mention the potential for wealth concentration like the world has never seen.
Technology is, once again, outpacing the law. And lawyers across the land are already at work creating potential legal fiction frameworks to restore order. In the 19th century, railroads forced the law to invent corporate personhood to handle unprecedented scale. Here in the 21st century, AI will force the law to reinvent personhood itself—not for scale, but for nonhuman agency.
We’re a nation that loves our pets, but even they aren’t granted legal personhood. Many jurisdictions criminalize cruelty to animals, but these laws technically protect societal interests, not the animal’s own rights. Similarly, while some U.S. states allow animal trusts, a human trustee is required to enforce them, and the animal is a beneficiary in name but not in law, as it lacks the legal standing to enforce the trust on its own.
The legal baseline is that the law applies to humans. Only natural persons can form intent and, thus, be prosecuted under criminal law. And only we (or our legal fictions like corporations) can be sued under civil law.
Interestingly, the likely path for AI isn’t to follow the corporate legal fiction precedent; it’s to invert it.
Corporate law (yes, even DAOs) is an extension of human power—with people at the center and the entities merely as tools. These legal fictions aggregate human capital into a single legal actor—solving the problems of coordination (many investors, one voice) and risk containment (shielding individuals). Participants can be active or passive, but all are human.
Autonomous AI entities would be different—moving us humans into a purely passive role at the periphery and recognizing the entities themselves as the actors, even if legally fictive. The algorithm now runs the entity, meaning decision-making is opaque and rarely, if ever, fully understandable.
Where corporate personhood extends human agency outward, AI personhood is about containment of nonhuman agency. And where corporations are meant to expand and propel human ambition, AI entities are meant to limit and curb nonhuman ambition.
This isn’t a clever tweak of corporate law. It’s an inversion of the anthropocentric premise of our legal system—who law is for. Not something to be done lightly.
Universal Verification (Our Final Cage)
Welcome to the uncomfortable reality: containment demands a single, unified identity layer—as does licensure. As does everything else in service of the machine and its promises.
There is no AI-driven future without first accepting total verification.
It’ll be sold as empowerment. As protection.
And the truth is, it’s not coming—it’s here. The bastion of freedom we call the United Kingdom already has a “nonmandatory” Office for Digital Identities and Attributes (OfDIA), and the creation of the government agency alone should warn of its more permanent goals. Nothing to see here, folks. Just a public register of certified providers, annual oversight reviews, and a government “trust mark” for approved identity services. A new GOV.UK Wallet mobile app that integrates Anthropic’s AI is currently piloting digital documentation (think driver’s license) and looks to expand identity services later this year.
We’re not much better on our side of the pond. The Trump administration has continued with the rollout and enforcement of the 2005 Real ID Act, which establishes federal standards for state-issued driver’s licenses and identification cards—under the guise of preventing terrorism and fraud (as if those are sufficient reasons to strip our natural rights). Full compliance is mandatory by May of 2027.
Have a star on your license? That’s Real ID: your friendly neighborhood de facto national identification system, enabling widespread government tracking and data sharing. Available soon as a convenient phone app—don’t leave home without it.
And let’s not forget the Chinese, who’ve made an art of universal verified identity. They’re taking it up a notch this year with the (currently voluntary during early rollout) National Online Identity Authentication app to “protect the information security of citizens.” In exchange for providing the government with their ID cards, passports, and travel visas, and submitting to biometric scanning, users receive universal login credentials and a “network identity authentication certificate that carries the network number and non-clear-text identity information of a natural person.”
The idea is that you can browse the web anonymous to everyone except the government, to which you remain fully identifiable. And it’s this concept that brings us back to often-proposed privacy solutions to universal digital identity: things like Bitcoin-based decentralized IDs and what are known as zero-knowledge proof (zk-proof) IDs—ones that let you prove something (think age or vaccine status) without revealing anything else (like your date of birth or date of vaccination).
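To get a feel for the underlying trick, here’s a toy Schnorr-style proof in Python: the prover convinces a verifier that it knows a secret (say, the key bound to a credential) without revealing the secret itself. This is a sketch with demo-sized parameters, not code from any real ID system; production schemes layer range proofs and issuer signatures on top of ideas like this.

```python
import hashlib
import secrets

# Toy Schnorr-style zero-knowledge proof, made non-interactive via
# the Fiat-Shamir heuristic. Demo parameters only: NOT production-grade.

p = 2**127 - 1          # a Mersenne prime; fine for a demo
g = 3                   # public base
n = p - 1               # exponents reduce mod p-1 (Fermat's little theorem)

def keygen():
    x = secrets.randbelow(n)        # secret: e.g., key bound to your credential
    return x, pow(g, x, p)          # (secret, public key)

def challenge(y, t):
    data = f"{g}:{y}:{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def prove(x, y):
    r = secrets.randbelow(n)        # one-time nonce
    t = pow(g, r, p)                # commitment
    c = challenge(y, t)             # deterministic challenge (Fiat-Shamir)
    s = (r + c * x) % n             # response: reveals nothing about x alone
    return t, s

def verify(y, t, s):
    c = challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()                     # y is public; x never leaves the prover
t, s = prove(x, y)
print(verify(y, t, s))              # True: proves *that* you know x, not x itself
```

Note what even the elegant version requires before you can prove anything at all: a registered public key. An identity.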
Both Bitcoin and zk-proof-based IDs assume the legitimacy of total verification: to participate at all, you must first surrender your identity to obtain the credential. Both entrench gatekeeper logic, allowing for exclusion the moment a credential is invalidated. Both mask surveillance rather than dismantle it. And both reduce human identity to keys and claims—digital attributes—severing us from our rightful place under timeless truth.
So-called privacy-based digital IDs are technical fixes for information asymmetry, but fail to confront the deeper problem of identity itself being mediated by artificial systems. Privacy is about far more than cryptographic concealment. It’s rooted in the dignity of the human person. Embodied and relational.
Don’t for a moment think that technical fixes can restore what only right order can give.
That the AI race and the universal identity cage waiting for us at the finish line are replete with inversions tells us something is very amiss.
Privilege inverts true privacy. Verification inverts liberty. AI autonomy inverts human sovereignty.
I could go on. And on. We’re not just violating natural law. We’re violating capital-t Truth.
In building cages for our creations, we’re forging bars for our souls.
Awareness (The Key to This Lock)
Containment so often reveals a fear of our uncontained selves. And like most laws and regulations enacted to “protect” us, those demanding universal identity verification will be fear-based, seeking to control that which we know deep down we can’t.
The question isn’t whether AI will become or achieve these things, it’s whether, in attempting to tame it, we’ll surrender the last of our unmediated spaces. The last of our truly human freedoms.
We’d never ask someone to dig their own grave. Yet we’re being asked to build our own cage—when doing so kills our humanity.