How to Stop AI
I believe one of the great battle lines of the future will be between artificialists who’d support improving AI and sapienists who won’t.
The two will largely divide along our existing ideological lines: leftist technocrats leveraging the poor will start with a competitive advantage over the American middle class. But as AI improves, it could lift more Americans into the middle class, thereby strengthening our ability and will to stop it, or it could be used to exacerbate inequality so that artificialists can continue to hold the door open to oblivion.
Artificialists will want to keep the door open to a self-generating monster even as the risks of AI increasingly outweigh the rewards for a variety of reasons: curiosity, hedonism, learned helplessness, misanthropism, thanatophobia, AI worship, etc.
But I think before AI could destroy us, a foreign power would dominate us.
You see, during the Cold War, the US and the USSR both had enough nukes to destroy the other, but they didn’t do so because the other would’ve had enough time to launch a retaliatory attack, i.e. mutually assured destruction.
With AI, however, WW3 could be won with the flip of a switch.
AI advisor: “Hi Mr. President, if we hack China we have a 99% chance of taking it over, whereas if China hacks first they have a 96% chance of taking us over. Would you like to preemptively hack, or put your faith in the Chinese Communist Party not to do the same?”
AI will tip the offense-defense balance in favor of offense. Or at the very least, because of its black-box, fast-improving, potentially all-powerful nature, no nation could be sure the balance hasn’t already been tipped, so every nation will be forced to act as if it has.
I hope, believe, and will fight for America to win the AI arms race so that we can ultimately stop AI on our terms.
And then rather than taking over the world with it, I recommend we “encourage” our peers to sign a non-development treaty similar to what we did with nuclear weapons, except here, we’d be much more effective at enforcing compliance because this AI would be the ultimate surveillance/sabotage tool.
What should this treaty look like?
I’m flexible on the particulars, but just as with John Adams’s essay, Thoughts on Government, I hope this rough outline can inspire a new world opportunity.
The treaty should create an Artificial Intelligence Agency (AIA) whose mission should be to enforce limits on artificial intelligence, which would entail parameter caps, safety regulations, and providing temporary exceptions in some areas such as healthcare.
In order for a country to join this agency, it must meet some basic requirements, such as guaranteeing free speech, imposing term limits, and ranking in the top 100 in literacy, numeracy, two-parent households, and homeownership.
The more powerful a group is the more imperative its members be responsible.
Some of the overall provisions of the treaty should consist of a balanced budget requirement, annual % change in the budget capped at 2%, auto declassification at 50 years, no sabotaging an entity for more than 3 years in a row, no nation-building, laws sunset after 30 years, and “the eye” can only inform about AI breaches, e.g. not about WMDs, terrorist attack plans, politicians’ indiscretions, etc.
The AIA should consist of 3 branches: Executive, Legislative, and Judicial.
The executive council should have 5 members who serve a 4-year term (maximum 2) where 3 are appointed by the US president and confirmed by the US Senate and 2 are appointed by the legislative council.
The legislative council should have a maximum of 50 members who serve a 6-year term (maximum 2) where if the total number of qualified countries exceeds 50 then they’d rotate seat order based on their admission date.
The judicial council should have 9 members who serve for a single 18-year term (every 2 years one is replaced) where 5 are appointed by the US president and confirmed by the US Senate and 4 by the legislative council.
Finally, AIA employees should be highly compartmentalized, required to take a recurring lie detector test, and limited to serving a maximum of 12 years.
A general overview of operations: the executive and legislative branches would vote on a budget. The executive would grant licenses, assess the world for regulatory compliance, and, upon discovering a breach, carry out a full investigation. That investigation could involve ever-less-subtle sabotage of a facility, partly to slow down its research and partly to gather more evidence of intentional wrongdoing, and would culminate in the arrest of the rule-breakers. They’d then be tried by the judicial branch, at which point they could have their licenses revoked, be fined, and/or be imprisoned.
If a nation refuses to hand over the rule-breakers, or is itself behind the breach, then the executive and legislative councils could impose ever greater penalties: first revoking some or all of the country’s AI licensing, then sanctions, then deposing its head of government, and finally dissolving its government. The AIA would have a small elite military force to carry out these steps if it ever came to that, and could request additional troops to form a multilateral coalition.
I tried to strike a balance between American unilateralism and qualified multilateralism. Some on the Left will feel I’ve given the US too much power, but realistically the US will be the one who creates this eye, so we aren’t going to give up complete control over it unless we retain disproportionate power. Some on the Right will feel we shouldn’t give up any control, but unless we’re willing to force the world to bend to our will, thereby becoming the imperialist colonizer we pride ourselves on not being, we have no choice but to entice nations to voluntarily sign on to our non-development treaty by offering them some control.
Plus, it’s untenable for a nation to put humanity’s interest over its own. Americans will legitimately ask, “Why is it our sole responsibility to stop AI from destroying humanity?!” since many of us have an aversion to being “the world police.” But whether you like it or not, we will continue to have world policing to stop world-destroying technologies, so the only question that remains is, “Should we support a niche, well-checked international organization to cap AI in our interest, or abdicate responsibility to a secret global elite to cap AI in their interest?”
I also tried to strike a balance between avoiding replication and avoiding corruption. The AIA should be opaque and capable enough that it’d be difficult to replicate its AI, yet transparent and constrained enough that it’d be difficult to corrupt its members. I think it makes more sense to err on the side of preventing replication, because a foreign entity or secret elite that surpasses the eye is more likely to tyrannize than an amalgamation of the most qualified nations beholden to a limited mission and limited powers.
In the end, maybe AI development will dramatically slow down due to unforeseen technological hurdles and/or because AI research gets less funding amid diminishing returns. But either way, our #1 foreign policy objective should be to win the AI arms race so that, as with nuclear energy, we use it not to summon death but to bring about a better life.