Author · Apr 6 (edited)

Footnotes:

1. The balanced-budget requirement should include a robust rainy-day fund, which could be unlocked in an emergency with a two-thirds vote.

2. The AIA should be awarded to the U.S. state that best meets the aforementioned qualifications, and it should be built as a large glass sphere within a large public park in a medium-to-large city, to symbolize the importance of transparency and nature.

3. The U.S. doesn’t currently meet the 2-parent household requirement, which means that if the AIA were implemented tomorrow it wouldn’t qualify for the legislative council. This would be just one more incentive to increase 2-parent households. The reason I believe it’s an important qualification to include is that, arguably more than any other metric, it shows how pro-child a society is and therefore how pro-human. Children desperately want to be raised by their biological mother and father. Yes, there are extenuating circumstances where this isn’t possible, but they should be rare.

4. There should be an automatic continuing resolution (ACR): if the budget doesn’t get a majority vote, the AIA is simply funded at the previous year’s levels.

5. Budgetary constraints should probably be phased in slowly, so that in the early years the budget can increase by more than 2% a year.

6. All laws and regulations must be read by elected officials. They must sign an affidavit swearing to have read the charter/laws before being sworn in, and again before voting on any new piece of legislation.

7. If the eye reports something beyond an AI breach, whoever hears it must report it to the Inspector General, and a full investigation must be conducted into how it came to be.

8. There should be strong whistleblower protections.

9. As I addressed in my previous video: the AIA should cap AI parameters, GPUs, terabytes, etc. at XY% less than “the eye,” and then degrade the cap further based on improvements in quality-of-life metrics.

10. Automatic declassification after 50 years, except for material related to the intellectual property of the eye itself.

11. No nation-building. But then what happens in the extremely unlikely scenario that the AIA dissolves a government? That’s not the AIA’s responsibility to sort out.

12. The executive board would nominate the heads of the executive departments: a Secretary General, Secretary of Assessment, Secretary of Research, Secretary of Justice, Secretary of Defense, and an Inspector General. The board would be constitutionally required to vote according to the aforementioned chart, as well as on any substantial issue or conflict, and it could override a secretary’s decision on anything it pleases with a majority vote.

13. The Secretary General would help oversee the overall administration and organize executive board meetings as a non-voting member. The Secretary of Assessment would oversee nations’ AI programs and licensing. The Secretary of Research would facilitate AI information-sharing, similar to how the IAEA isn’t just anti-nuclear-weapons but pro-nuclear-energy: the AIA would be for capping AI, but within the cap it’d be pro-safe-and-human-life-enhancing AI. The Secretary of Justice would oversee investigations, arrests, and prosecutions. The Secretary of Defense would oversee diplomacy and defense. The Inspector General would be internally focused, via auditing and evaluations. I’m open-minded on the exact number and types of departments.

14. Cabinet secretaries should serve a maximum of 8 years so that they can’t serve longer than a board member; but since they’re also considered AIA employees, they could serve another 4 years in a different Cabinet role or in a lower-level employee position.

15. Executive board meetings should be held at least monthly, if not weekly.

16. Arrests must be made public and brought to trial within 90 days, with a jury of the accused’s peers determining guilt.

17. Ideally, to qualify for the AIA, a country’s head of government must be subject to term limits. But since many developed countries lack them de jure while observing them de facto (e.g., the UK hasn’t had a prime minister serve more than 12 years in the last 200 years), the qualification could be written as: “a country’s head of government cannot serve more than 12 years consecutively, or else that country is automatically disqualified from the AIA.”

18. The reason countries would want to join the AIA is that they’d get access to the most cutting-edge pro-human AI, plus some power/insight over the AIA’s decision-making (the carrot). If they don’t join, they’d lose access to much of existing AI, not get AI development licenses, face increasing sanctions, and be shamed for not doing their part to protect humanity from extinction; and for no country to join is effectively to argue that the US should retain monopoly control over its eye (the stick).

19. If 51+ countries qualify, rotate seats on the legislative council based on admission date, while ensuring regional diversity so that in any given year each region (Africa, Asia, Europe, North America, and South America/Oceania) has at least 9 senators.
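
For illustration, here’s a minimal Python sketch of one way such a rotation could work. The five regions and the 9-seats-per-region floor come from the footnote above; the specific rule (stepping each region’s admission-ordered queue forward one position per year, wrapping around) is my own assumption, not part of the proposal.

```python
REGIONS = ["Africa", "Asia", "Europe", "North America", "South America/Oceania"]
SEATS_PER_REGION = 9  # per the footnote: at least 9 senators per region per year


def council_for_year(members, year):
    """members maps region -> list of countries ordered by admission date.

    Each region's block of seats steps one position through its queue per
    year, wrapping around, so every qualifying country eventually serves.
    """
    council = []
    for region in REGIONS:
        queue = members.get(region, [])
        if not queue:
            continue
        offset = year % len(queue)  # rotate by admission order
        for i in range(min(SEATS_PER_REGION, len(queue))):
            council.append(queue[(offset + i) % len(queue)])
    return council
```

With 10 qualifying countries in a region, the 9 seats shift by one country each year, so each country sits out one year in ten.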

20. Sanctions are binding, which means if the AIA enacts a 5% sanction on China then all member states are legally obligated to impose it.

21. No sabotaging for more than 4 years in a row. Sabotage should start as minimal and subtle as necessary to slow down the suspects, then slowly increase as a case is built against them. But so that the executive board can’t indefinitely kick the can down the road, it would be capped at 4 years, after which the board would either have to make the arrest (if it has compiled enough evidence to justify one) or back off for at least 4 years before it could recommence sabotaging.

22. The executive council may need to keep its early operations (suspected breach, investigation, sabotage, arrest operation) secret even from the legislative and judicial councils. But even this top-secret information must be shared with 3 senators and 3 justices (these legislative/judicial seats randomly rotate every 4 years), and with a majority vote this “Super 6” could share the information with the rest of the AIA if they believe something about it is unconstitutional or extremely reckless, and therefore in need of being checked.

23. Countries should contribute to the budget in proportion to their population, since virtually every human would derive the same benefit from the AIA.
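
The arithmetic here is simple: each member’s share of the budget equals its share of the total member population, so every member pays the same amount per citizen. A minimal sketch (the function name and all figures are illustrative, not part of the proposal):

```python
def contributions(total_budget, populations):
    """Split total_budget across members in proportion to population,
    so each member pays the same amount per citizen."""
    total_pop = sum(populations.values())
    return {country: total_budget * pop / total_pop
            for country, pop in populations.items()}
```

For example, a 400-unit budget shared by a country of 100 million and a country of 300 million would cost them 100 and 300 units respectively, i.e. the same per capita.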

24. If the AIA lowers the cap on AI, thereby making some existing AIs illegal, then the AIA should compensate the affected companies/governments for some of their economic loss.


This is quite a comprehensive proposal - almost a UN redux. Let me just speak to the AI technology aspects.

First, let's look at some of the context that will define the coming decades:

1) Human population is crashing already. Using human bodies to support industrial production is on the way out. China is planning to replace 1 in 50 workers with robots by 2025 - that's 35 million new robots - and exponential growth follows. Robots are massively cheaper and more productive than humans: no need for cars, sleep, health care, vacations, unions, air, etc. The profits generated, even from selling into shrinking human markets, will support social welfare through the die-off of the last of the peak-population group (like us). And it will support a huge increase in wealth for all concerned. The human population may shrivel without any aggression by the AIs.

2) The imperialism of the agricultural and industrial revolutions was driven by access to material resources. Today, the critical resources are technical and intellectual. Physical aggression is increasingly unprofitable due to the global nature of supply chains, as we see in the current war in Ukraine. And massive steel tanks intimidating docile populations are a thing of the past. Will we reach a point of mutually assured drone destruction? I think so, in one way or another. And on to your proposal.

3) The AI technical revolution is advancing orders of magnitude faster than any institutional body could regulate. The arms race is already under way, with TSMC building a plant in Arizona to reduce the impact of - and incentive for - Taiwan falling to the Chinese. The AIs will be used predominantly for industrial and business functions. Yes, war will change dramatically, but it will continue.

4) In 1609 Galileo set up his telescope in the town square so anyone could see that the Earth was not the center of the universe. Yet we persist in thinking that we humans are the pinnacle of intellectual evolution. Now we are finding out that there are an infinite number of possible intellects, some of which will be far smarter than humans by some measures. Within a few years, we will be conversing with the elephants, whales, dolphins, and even ants. Just as with Galileo, our worldview is changing dramatically. And just as today we revel in the vastness of the universe, in the future we will revel in the multitude of intelligences that will make all of us much, much smarter - and richer.

5) The AIs have no inherent motivation to kill off humans. They might well intervene to prevent us from killing each other off. They will realize, just as we have, that our existence on Earth is fragile and subject to some random disaster like the pole shift now taking place, so I predict they will turn their energy to populating the solar system, just as Musk is preparing to do. In the meantime, we need to completely redesign our electrical grids, burying them underground for protection from the inevitable solar flare that could wipe out all electrical devices in a few hours. There is a lot to do, and we need all the help we can get - or invent.

Be careful what you ask for, Anthony... :)

Author

I appreciate your thoughtful reply. Some questions…

“Human population is crashing already,” but does it have to?

I get that this is the trend, but can’t the trend change? What if countries decided to incentivize more children, e.g. gave parents more money than it costs to raise them? Or, more effectively, removed the disincentives as society becomes more abundant: if we’re wealthier, with more home ownership, wouldn’t people want to have more kids again? It also seems that as the population shrinks, more pro-procreation ideologies gain disproportionate power, e.g. Islam.

“The AI technical revolution is advancing orders of magnitude faster than any institutional body could regulate,” but it is being regulated more and more. I suppose your argument is that no regulation can effectively stop the revolution altogether, but what makes you so certain?

We live in the most rapidly changing period in human history so it may feel like “progress” is destiny, but if we were born during the Roman Empire’s collapse it might’ve felt like retrogression is destiny.

“The AIs have no inherent motivation to kill off humans,” but how can you be so certain when you’re arguing that their intelligence will be so far beyond ours?

You theorize positive outcomes, but why not negative ones? How well do we treat our intellectual inferiors? Upon learning that we don’t wish to eat him, a rat may assume he’s safe, based on his own primal motivations, only to learn the hard way that we want to experiment on him. If AI prevented us from killing each other, that in itself would be problematic, as they could trap us in golden cages for “our own good.”

Ultimately, there are two central questions at play here… do you think AI will end humanity (whether by extermination or evolution) and do you want AI to end humanity?

My answer is… I hope not and no… whereas yours seems to be yes and yes.


Hi Anthony - Thanks for your challenging rebuttal. The issues you raise deserve a proper debate, which I would welcome. Since my last note, I have been preparing for my third "Mark Talk" on AI this Thursday, once again sold out. So I have been thinking more about these questions and will respond after the presentation.

In general though, the tech wizards (mostly young) are immersed in the tech, while the philosophers are lost in space, unable to grasp the consequences. The situation is so dynamic and chaotic that no one can accurately predict the social/political/religious/cultural future. However, all would benefit from a more wide-view discussion of the topics. If you are interested, let's discuss offline - mark@heyerscope.com, +507-6499-6074 WA.

Author

Sounds good! I look forward to your response. :)
