Adam Smith: Welcome back. We are continuing our debate on artificial intelligence regulation, specifically on the question of whether governments should be trusted to govern the most powerful technology in human history. My position has not changed. Mr. Hobbes has not yet said anything that I found genuinely persuasive, which I consider an encouraging consistency.
Thomas Hobbes: My position is similarly unchanged. Mr. Smith’s considerable wit has not produced a single argument that survives contact with the actual conditions of ungoverned power, which I find entirely consistent with the body of work he produced during a career spent at a comfortable remove from the conditions he theorized about.
Adam Smith: I would like to begin Part Two with the problem that I think Mr. Hobbes handled least satisfactorily in Part One, which is the question of democratic legitimacy. One of the most serious arguments offered in favor of AI regulation is that AI threatens democracy itself, through the industrial production of misinformation, through behavioral manipulation at scale, through the capacity to fabricate convincing evidence for events that did not occur. I agree this is a genuine danger. Here is my difficulty: the sovereign you propose should regulate this technology derives its authority from democratic legitimacy. If artificial intelligence can corrupt democratic processes at scale, then the sovereign’s authority becomes questionable at precisely the moment it is most needed. You are proposing a solution that the problem undermines before the solution can take effect. The White House framework, for instance, recommends that Congress prevent the government from coercing AI providers to alter content for partisan or ideological reasons, which implies a government that is already worried about its own susceptibility to the technology it is meant to govern.
Thomas Hobbes: This is actually the strongest argument for early and aggressive sovereign intervention rather than an argument against it. If artificial intelligence genuinely threatens democratic legitimacy, then the window for democratic governance to address it closes the longer we wait. The sovereign must act while it still possesses the authority to act. Every month of inaction is a month in which ungoverned actors use this technology to erode the very conditions under which a sovereign can legitimately govern at all. The EU delayed its compliance deadlines by two years, and those two years were not spent waiting quietly. The urgency of the threat is an argument for moving faster, not for moving aside.
Adam Smith: So the argument for giving governments emergency authority over artificial intelligence is that artificial intelligence is creating an emergency. I note that governments have historically required very little encouragement to declare emergencies and considerably more encouragement to relinquish the powers those emergencies produced.
Thomas Hobbes: And ungoverned actors have historically required very little encouragement to exploit the absence of authority and considerably more encouragement to stop once they have established themselves. We are describing the same problem from opposite ends. The question is which risk compounds faster.
Adam Smith: Let us turn to the labor question, because I think it is where my position is most exposed and I prefer to address vulnerabilities directly. Artificial intelligence will displace workers at a scale and speed that previous waves of mechanization only approached. The communities most severely affected will be those with the least political influence and the fewest economic alternatives. This is a genuine social crisis, and markets alone will not resolve it quickly enough to prevent substantial human suffering. I acknowledge this completely and without reservation. The White House framework calls for workforce retraining and education programs, which is the correct instrument. My question is whether regulating the technology itself achieves anything, or whether robust social provision for those displaced by it is the better approach, one that does not come packaged with the regulatory capture risks we discussed in Part One.
Thomas Hobbes: Displaced workers are ungoverned in the most practical sense. They have lost the economic participation that integrates them into civil society, into the social contract, into the structures that give people a stake in the stability of the existing order. What the historical record on communities in this condition shows is not encouraging. They do not sit quietly while markets adjust over a decade. They destabilize. The sovereign’s interest in governing artificial intelligence is not merely ethical. It is self-interested in the most fundamental way, because mass displacement threatens the social order on which sovereignty itself depends.
Adam Smith: For once I do not entirely disagree with you, which is an unusual sensation. The stability argument for addressing displacement is sound. But I continue to insist that slowing the technology to protect workers is precisely the wrong instrument. Slowing technology to protect the jobs it displaces is like refusing to install indoor plumbing because it would put the water carriers out of work. The jobs that replace the old ones are better, if the transition is managed correctly. The framework should address the transition, not the technology.
Thomas Hobbes: You cannot retrain a fifty-year-old logistics worker for a labor market that artificial intelligence is eliminating faster than any training program can respond to. The White House framework says workforce retraining and education. Very well. At what pace? To do what jobs? Against which AI capabilities that did not exist when the program was designed? The speed of this transformation is what makes it categorically different. Markets have adjusted to previous technological disruptions over decades. This one moves in years. Sovereign intervention must match the speed of the disruption, and training programs administered through land-grant universities do not match that speed.
Adam Smith: And regulatory agencies staffed through civil service hiring processes and congressional appropriations do not match it either. I share your concern about speed. I do not share your confidence that the institution you have chosen is capable of it.
Thomas Hobbes: Which brings me to the argument I have been reserving. China.
Adam Smith: I expected this would arrive eventually.
Thomas Hobbes: The Chinese state is developing and deploying artificial intelligence without the constraints that either Mr. Smith’s market mechanisms or any democratic deliberation is likely to produce in time to matter. They are directing development toward state objectives with the full authority of the sovereign, coordinating research, manufacturing, data access, and deployment in ways that no market produces spontaneously. The White House framework is explicit about this: American AI dominance requires winning a race against adversaries. You cannot win a race against a coordinated sovereign using spontaneous order. Spontaneous order is beautiful. It does not sprint.
Adam Smith: The China argument is the last refuge of every advocate for expanded state power in every generation, and it is effective precisely because it is not entirely wrong, which is the most dangerous quality an argument can have. I will grant you directly: international strategic competition in artificial intelligence is a genuine problem that market mechanisms alone cannot address, because markets do not conduct foreign policy. I grant this freely. The question is whether the correct response is a domestic regulatory apparatus that would also entrench incumbents and suppress competition, or a targeted program of strategic public investment in research, infrastructure, and talent that achieves the competitive objective without the attached capture risks.
Thomas Hobbes: The question is whether you are willing to lose the strategic competition while you design the perfectly calibrated instrument.
Adam Smith: The question is whether you are willing to build the architecture of authoritarian control at home in the name of competing with authoritarians abroad. The White House framework is already recommending that Congress preempt fifty state laws in order to establish national uniformity. The logic of sovereign coordination in the name of competing with China does not stop at the federal level. It has no natural stopping point, which is why its proponents never specify one.
Thomas Hobbes: If the alternative is losing to them, then yes, without hesitation, and I will tell you precisely why. The citizen who lives under a defeated sovereign has no rights worth discussing. The social contract that Mr. Smith depends upon for his rule of law requires a sovereign capable of enforcing it. A sovereign that cannot compete strategically cannot protect the conditions under which markets function. Your spontaneous order requires geopolitical order first. You cannot have one without the other, and I am tired of watching you pretend otherwise.
Adam Smith: And in constructing the geopolitical order you want, you will have built at home exactly the apparatus you claim to fear abroad. This is the oldest trap in political philosophy, and you have walked into it with genuinely magnificent confidence.
Thomas Hobbes: I have not walked into it. I have described it accurately. There are no clean solutions. There are only choices between failure modes. I choose the risks of strong sovereignty over the risks of ungoverned power. That is the choice I have always made, and I would make it again.
Adam Smith: And I choose the risks of regulatory capture over the risks of authoritarian consolidation dressed as strategic necessity. We are not disagreeing about the problem. We are disagreeing about which failure mode is survivable.
Thomas Hobbes: THEN WE ARE DISAGREEING ABOUT EVERYTHING THAT MATTERS!
Adam Smith: WE ARE DISAGREEING ABOUT THE CORRECT RESPONSE TO A CRISIS WE BOTH ACKNOWLEDGE! THAT IS NOT EVERYTHING! THAT IS ONE THING!
Thomas Hobbes: IT IS THE ONLY THING! WHO GOVERNS THE MOST POWERFUL TECHNOLOGY IN HUMAN HISTORY IS NOT A SECONDARY QUESTION!
Adam Smith: I AGREE THAT IT IS NOT SECONDARY! WHICH IS PRECISELY WHY I DO NOT WANT TO HAND THE ANSWER TO AN INSTITUTION THAT WILL BE CAPTURED BEFORE THE REGULATIONS ARE PRINTED!
Thomas Hobbes: AND I DO NOT WANT TO LEAVE THE ANSWER TO A MARKET THAT CONSOLIDATES INTO THREE COMPANIES AND CALLS IT COMPETITION!
Adam Smith: THOSE THREE COMPANIES WILL WRITE THE REGULATION! THE WHITE HOUSE JUST PROVED IT!
Thomas Hobbes: THOSE THREE COMPANIES EXIST WITHOUT REGULATION! YOUR ARGUMENT IS CIRCULAR!
Adam Smith: THEN BREAK THEM UP!
Thomas Hobbes: THAT IS SOVEREIGN AUTHORITY!
Adam Smith: THAT IS ANTITRUST! THERE IS A DIFFERENCE!
Thomas Hobbes: THERE IS NO DIFFERENCE! ANTITRUST IS THE SOVEREIGN IMPOSING STRUCTURE ON THE MARKET! YOU HAVE BEEN ARGUING FOR SOVEREIGN AUTHORITY THIS ENTIRE TIME WITHOUT ADMITTING IT!
Adam Smith: I HAVE BEEN ARGUING FOR A SCALPEL! YOU ARE OFFERING A BROADSWORD AND CALLING IT PRECISION!
Thomas Hobbes: WHEN LEVIATHAN IS AT THE GATES YOU DO NOT REACH FOR A SCALPEL!
Adam Smith: LEVIATHAN IS YOUR METAPHOR FOR THE STATE! YOU ARE SAYING WE NEED THE STATE TO FIGHT THE STATE!
Thomas Hobbes: I AM SAYING LEGITIMATE SOVEREIGN AUTHORITY MUST REPLACE ILLEGITIMATE CORPORATE AUTHORITY!
Adam Smith: THEY ARE THE SAME AUTHORITY IN DIFFERENT CLOTHING AND YOU KNOW IT!
Thomas Hobbes: THEY ARE NOT THE SAME!
Adam Smith: THEY ARE!
Thomas Hobbes: LEVIATHAN!
Adam Smith: REGULATORY CAPTURE!
Thomas Hobbes: SOVEREIGN ORDER!
Adam Smith: INVISIBLE HAND!
Thomas Hobbes: STATE OF NATURE!
Adam Smith: INDUSTRY-LED STANDARDS!
Thomas Hobbes: VIOLENT DEATH!
Adam Smith: YOU ALWAYS END WITH VIOLENT DEATH!
Thomas Hobbes: BECAUSE IT IS ALWAYS THE ALTERNATIVE!
Adam Smith: I believe we have established our positions with sufficient clarity.
Thomas Hobbes: We have established them at sufficient volume, in any case.
Adam Smith: If you found this exchange useful, which I hope you did despite the decibel level of the conclusion, please like this video. Your engagement helps PhilosophersTalk.com reach the audience it deserves, which is to say an audience considerably more rigorous than the one Mr. Hobbes has historically attracted, given that his political philosophy was essentially a very long argument for why his patrons should remain in power.
Thomas Hobbes: And please subscribe to PhilosophersTalk.com, where Mr. Smith and I continue to demonstrate the value of informed disagreement, though I use the word value loosely in Mr. Smith’s case. The father of modern economics built his most celebrated illustration, the famous pin factory, on an example borrowed wholesale from a French encyclopedia he happened to be reading. The foundational image of market observation was not observed. It was plagiarized. The invisible hand, it turns out, was holding someone else’s notes.
Adam Smith: Mr. Hobbes’s invitation to subscribe is generous, coming from a man who fled England in 1640 at the first sign of political instability and spent eleven comfortable years in Paris under aristocratic protection while writing about the courage required to submit to sovereign authority. He invented the most powerful government in the history of political philosophy from the safety of a nobleman’s library. His mathematical proofs were publicly demolished by John Wallis and remained demolished. He named his masterwork after a sea monster. And the reason we do not have his complete works is that he burned most of his manuscripts before he died, which raises the question of what he did not want us to know about the limits of his certainty.
Thomas Hobbes: The sea monster has outlasted everything you ever wrote about pins.
Adam Smith: The pins created the prosperity that made your philosophy a luxury rather than a necessity.
Thomas Hobbes: Like this video.
Adam Smith: Subscribe.
Thomas Hobbes: Now.