Hope in Our Fear
Should We Be Looking to AI for Ethical Help?
Without dismissing the individual lives lost – Alex Pretti, Renee Good – and the lives affected in what is becoming less a crackdown on immigration than a reckoning with the American ethos, particularly our patience for peaceful protest and our willingness to take a breath after watching the murder of an unarmed man or woman or child, we need to zoom out in what may be a last-ditch effort to understand our collective inability to make decisions.
If we consider the incidents in which a woman was shot for driving too close to another man; a man was shot on the asphalt after being disarmed; a man was dragged from his home in subzero temperatures, wrapped only in a blanket, wearing only boxers, walking only in flip-flops – it should be clear we are no longer capable of making measured, smart, rational decisions that benefit us, all of us, as a society.
Already this essay will be thought too measured, too careful. How much should we tolerate? Shouldn't we be asking about payback, wading into the mire of bare-knuckled, teeth-bared exchange, come victory or death?
Because there comes a time for that. Doesn’t there? And if not now, when? Where’s the line?
Answers for another essay, perhaps. One that deals with elevated societies of the type we, the majority of the world, were blithely born into in the late twentieth century, societies that value life and longevity, that offer luxuries like therapy and philosophy, that embolden younger generations to feel entitled as never before.
But now. Now is about decisions.
We need to isolate a single decision. Because it’s the most ethically manageable of the previous examples, let’s take the incident of an elderly Hmong man who was dragged from his home in Minnesota in an inhumane manner, under-clothed, unwarranted.
Imagine the worst of this man. Pretend he was a, who cares, satanic child rapist. Ok. Worst of the worst. Got it? Good. If you’re a law enforcement agent after him, you may want him surprised by your entry. Ok. Batter down the door. After that, you find him napping on the couch in boxers. You might think he has a weapon somewhere. Ok. So you don’t let him get dressed of his own volition. But you’re also not his jury of one. So before you pull him outside, what’s the harm of handing him a sweatshirt? A coat? Shoes that cover his toes? You check the coat for weapons. You and your men already outnumber him. Already have your weapons trained on him. Already have him in a surprised state.
Any decision other than giving that man proper clothing in that moment is made for one of three reasons: spite (hatred for who he is or what he represents), terror (as an example to others that this-will-happen-to-you-too), or fear (you, yourself, lack the training and skills to properly manage the situation).
This is poor decision-making by incapable decision-makers, enabled by unethical, un-American decision-makers.
This is what tears at societies: individuals whose poor decisions cascade. Like driving on the interstate at high speeds, each person in his own car, careering, reacting, their training improper and insufficient. Were the highway full of bus drivers, it would be much safer. Same with flying commercial. Why is it so safe? Because the people flying the planes have a shit-ton of training and experience.
Much has been said about ICE agents not having the proper (i.e., any) law enforcement training. Most of that commentary comes from a tactical point of view – crowd control, weapons, basic procedure. But the true training they lack – training that's much harder to come by because it's much more slippery, grasped properly only by those who have given it years of their lives – is in ethics. Turns out, that's all that's been holding our society together, and our mortar has lost its hold.
There are really two choices – leapfrogging the tyrannical crisis that awaits in Plato's cycles of government, directly to rule by aristocracy (i.e., those who have been trained in metaphysics, ethics, psychology, sociology, epistemology) – or collectively agreeing to be ethically guided by a truly non-partisan entity, one that we've come to fear more than any other in the last five years: Artificial Intelligence.
Our only hope may be an AI model that can instruct agents (federal, law enforcement, or otherwise) in how to act ethically and forcefully in any immediate situation. A model that can do those very complex philosophical calculations in an instant – calculations that could be wrong, but calculations that would ultimately mitigate danger and lead away from, rather than toward, civil unrest.
This type of AI model already exists, turns out.
For Anthropic's Claude, a philosopher named Amanda Askell helped build the model in an ethics-first direction rather than an intelligence-first direction with ethics as an afterthought.
As she explains it, they created Claude by telling it: "Here is what you are, who you're interacting with, how you're deployed in the world – and here's how we would like you to act. Here are the reasons why we would like that. … If Claude gets a completely unanticipated situation, if it understands the values behind behavior, it's going to generalize better than if we'd given it a set of rules."
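To make the contrast concrete, here is a minimal sketch in Python of the difference between a rules-first instruction and a values-first one, using Anthropic's public Python SDK. The model name and prompt wording are illustrative assumptions of mine, not Anthropic's actual setup – Claude's character is shaped during training, not by a runtime prompt like this – but the sketch shows why a statement of values travels further than a list of rules.

```python
# Illustrative sketch only: contrasts a "rules-first" prompt with a
# "values-first" prompt via Anthropic's public Python SDK. This is an
# analogy for the idea Askell describes; Claude's actual values come
# from training, not from a system prompt. Model name and prompt text
# below are assumptions for demonstration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

RULES_FIRST = (
    "Follow these rules exactly: always refuse topic A, always respond "
    "with format B."  # brittle: fails in situations the rule-writer never imagined
)

VALUES_FIRST = (
    "Here is what you are, who you're interacting with, and how you're "
    "deployed in the world. Here is how we would like you to act, and the "
    "reasons why."  # generalizes: the model can reason from values in novel cases
)

def ask(system_prompt: str, question: str) -> str:
    """Send one question under a given system prompt and return the reply text."""
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model alias
        max_tokens=512,
        system=system_prompt,
        messages=[{"role": "user", "content": question}],
    )
    return message.content[0].text

if __name__ == "__main__":
    novel_situation = "A situation none of your rules anticipate. What do you do?"
    print(ask(VALUES_FIRST, novel_situation))
```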
It's time to let professional pilots start flying our nation. One way or another. If not – rule by democratic mob, especially when the mob is motivated toward its worst self, is set to destroy what once was a decent nation and a noble experiment. And destroy it rather quickly, it would seem.
