When to Ban the Development of Artificial Intelligence
We should obviously better regulate AI with increased standards on transparency, explainability, privacy, safety, and accountability.
More regulation will slow AI down.
But I’d go two steps further…
Outside of healthcare, we should ban and then reverse the development of artificial intelligence.
After all, outside of healthcare, what could an ever-improving AI give us?
Okay, fair enough: a genie lamp that could grant us nearly anything. But on the road to developing this genie lamp, I believe we’d succumb to entropy, enslavement, evolution, and/or extinction.
After a certain point, the risks will outweigh the rewards. And the longer the time horizon you concern yourself with, whether because you love your descendants or because we’ll live longer ourselves, the more weight you’ll give to those risks.
When should we ban it?
We can’t ban it until the US wins the AI arms race by unlocking an AI capable of global surveillance and sabotage.
Once we have this ultimate eye, we’ll “encourage” the world to sign a non-development treaty, as we did, with varying levels of success, with nuclear weapons, except here we’d have the insight and power to enforce 100% compliance.
At that point, it’d become a matter of where we, alongside the international community, draw the line.
At first, we may draw it at whatever happens to be the status quo, but we need not let happenstance dictate our indefinite reality.
While tracking a variety of key quality-of-life metrics — obesity, literacy, suicide, longevity, wealth — we should slowly begin downgrading AI (outside of the eye).
This could involve capping AI’s parameter count: if the most advanced model had 30 trillion parameters, we’d drop the cap to 20 trillion (GPT-4 reportedly has 1.76 trillion parameters), then track the aforementioned key metrics across the global population over the next several years to decide whether to degrade further.
I’m not saying we degrade all the way to Amish levels of technology, but they do seem happy, all things considered.
Ultimately, in order to preserve humanity, AI should at least be detectable, inferior, and non-sentient.
This might be an unpopular opinion, but I’m unapologetically pro-human!
AI should be detectable to a trained human eye or at least to AI.
AI should be inferior to the best human thinkers, strategists, writers, speakers, artists, etc.
The more critical a skill is to self-governance, the more imperative it is that we prioritize downgrading AI in that area. I’d argue one of the most critical is writing, which brought about the dawn of human civilization, so let’s not let it be our downfall.
The pen is mightier than the sword. — Edward Bulwer-Lytton
Writing, the art of communicating thoughts to the mind through the eye, is the great invention of the world...enabling us to converse with the dead, the absent, and the unborn, at all distances of time and space. — Abraham Lincoln
There are only two powers in the world, saber and mind; at the end, saber is always defeated by mind. — Napoleon Bonaparte
Another critical area is strategy, where we could use a game like chess as a proxy. It might therefore make sense to ban AI capable of beating the top human players, so that it doesn’t serve as a springboard to a more advanced AI strategist capable of replacing military generals, whereby war becomes little more than AI vs. AI and humans are downgraded to an afterthought in global power dynamics.
AI should be non-sentient, e.g. it should fail the Turing Test. If we perceive AI as sentient, then this in itself would dramatically slow its development, because we wouldn’t want to torture sentient beings, and more regulations would need to be put in place to protect them. But I’d prefer we never have to face the moral dilemma of whether and when to apply the Hippocratic Oath, by simply stopping and degrading AI before then.
With the AI arms race won, the criteria met, and quality-of-life metrics holding steady, I’d be content to leave the line wherever it may fall, to best ensure humanity can continue to spread our light of consciousness for centuries to come.