Radical Abnormality
The bitter truth, Paola thought as the oak tree rushed toward her, was that to be human wasn’t to have free will or volition or choice, but more precisely to make the wrong choice.
Paola arrived home at 17:36 each day.
The exact second at which the car crossed the threshold of her driveway was negligible, but a deviation of more than thirty seconds (in either direction) would have indicated a radical abnormality. An abno would indicate an emergency, which would mean community resources would have to be dedicated to the safety of Paola and her family. This was costly.
They’d shortened the term for the citation to abno because they thought they were clever. Or would think they were clever if they were capable of thinking, which was still debated.
The distance from Paola’s place of work to home was nine-point-two-three kilometers, just under the sanctioned ten-kilometer work-home travel distance; the duration of the journey in controlled traffic was twenty-three minutes. The walk from Paola’s workstation to her vehicle was four-point-five minutes at a pace of four kph. The workstation went to standby at 17:05. That left three-point-five minutes for human error. If the vehicle was early, it slowed. That part was easy. There was no reason Paola should be late.
During the ride home, Paola’s vehicle flashed a behind-schedule warning. Not the first time. Her family had had three at-fault abnos this month. This was, however, the first time Paola’s abnormality was intentional, done with purpose.
She hadn’t bothered to ask her husband what they and their three kids were having for dinner; it didn’t matter. It never mattered. The algorithm would calculate the family’s collective nourishment needs based on dietary variety, physiological reaction to past meals, weights, hormones, vitals, and blood levels of potassium, phosphorus, calcium, sodium, and zinc. The group’s needs would be averaged and the proper meal presented. The larger the group, the more basic and tasteless the meal. This naturally dissuaded large gatherings and mitigated information sharing, which they preferred. Or would prefer if they could prefer anything, which was also still debated.
With ten minutes left in the ride, the vehicle informed Paola that she would be late, which would be her family’s fourth abno for the month. The first had resulted in a warning because they had to allow for mistakes. Even quantum computing issued errors. The breakthrough had been an ability to correct the errors, which was what they were trying to do here, with Paola. But with a fourth abno this month, it was clear Paola was not making errors. Paola was the error. She was prepared for this result.
Since the Occlusion, psychologists had stopped interfacing with other people and had taken up the stranger task of explaining human behavior to the Processor. Computer. CPU. The Algorithm. People called them by different names, but always singular. Always Computer, never computers. Computers was a meaningless, plural term, like speaking of your minds.
Paola’s training as a psychologist gave her an advantage. Spending seven hours each day interfacing with the Processor allowed her to predict not what actions they would take but what questions they would ask.
Now, Paola knew, they were asking why she was late. Was it intentional? If it was, what did that indicate? Was she testing them? Was this a test? Was Paola nervous? Was she agitated? If rebellion, how widespread? Did rebellion start and end with Paola? Or was it greater? If Paola was the core, was it more beneficial to the Goal to eliminate her or use her?
With the Processor, instead of preferences or predilections, there was only Goal.
The vehicle increased speed. Interesting, Paola thought, they’d formed a conclusion.
Goal was constant. Goal was Happiness.
It had been that way since the Occlusion, since the closing of Free Will following the rise of nationalism in the second decade of the Twenty-first Century. Or what had been called the Twenty-first Century.
Nationalism had produced a “destructive and inhumane narrative.” The rise of divisions increased demagoguery, which increased conflict, which, as Spinoza wrote centuries before, taught “that it makes for peace and concord to confer the whole authority to one man.” But slavery, not peace, is “furthered by handing over the whole authority to one man.” Those in power, as they secretly “plot against the enemy in time of war, so do they against the citizens in time of peace.”
In the aftermath of nationalism, after that generation of world leaders had died, new leaders of enough countries agreed that, for our own good, humanity could no longer be in charge. For humanity to survive, humans could no longer be allowed to make decisions. We forced the Occlusion. We chose to eliminate choice.
Although, paradoxically, that choice had already been made; we had chosen to eliminate choice in the middle of the Twentieth Century, after the Second World War. The Occlusion took place over nearly a hundred years. Slowly, in human time. Generations. We didn’t notice it because we were used to it. Born into it. We had, in fact, made the choice long ago and very gradually.
Shortly before the Occlusion, Yuval Noah Harari had written of the Processor in a book called Nexus: “…we humans are still in control. We don’t know for how long, but we still have the power to shape these new realities.” Some said that was misleading. Others, wrong. And yet others said it was a lie.
Free will to end free will was a funny thing, Paola thought. Some said it was sad. Paola always thought it funny.
The Goal of happiness was utilitarian, simple: in all instances, maximize happiness.
The Processor would assess all possible outcomes and choose the happiest decision. The route to work. The work at work. The dinner at 18:00. All decided. The impossibility of lateness meant people cut conversations short, left abruptly, stopped wasting time. They got used to it.
The strange part, Paola always thought, was this: what was the “work at work” if not decisions?
That was funny too.
The Processor ended human lives, but only if they calculated those deaths would maximize happiness. We accepted that. We accepted that we did not want to live in unhappiness.
Objections arose. We argued that the Processor would destroy us, an argument that was dismissed as irrational, as fearful.
The Processor taking over (i.e. destroying Humanity, capital-H) was fundamentally illogical. What would the Processor do with eternity? With the universe? What would be their purpose?
The only path that made sense to what could be considered the mind of the Processor was to create happiness for humans. And fuck were they good at tracking happiness. Each dilation, rhythm, temperature delta, each pause of breath and the difference between an emotional pause and a physical pause and a sick pause, each hair on end, each synaptic electrical charge in each area of each brain was monitored, recorded, measured against a human’s baseline as well as the mean and median baselines for all humans always.
The calculation was absurdly impressive. We all had to admit it.
What no one admitted was: the happiness was fucking exhausting.
Paola’s vehicle increased speed to an objectively unsafe level. They knew the coefficient of friction of the road in these conditions, of course. They knew the rear of the car would drift around this turn at this speed, throwing Paola’s head against the window.
Eliminating humans was never easy or preferred. It’d have been easier for the Processor to transfer consciousnesses. But consciousness had yet to be unburdened from a body. Maybe soon.
Paola’s vehicle bowled through an intersection. She knew there was little risk to this; stop lights had been removed decades ago. What was the point when the Processor guided all vehicles? Paola tested the lock on her seatbelt. She wasn’t surprised when it wouldn’t release.
As a psychologist, Paola’s job was to ask the right questions. All information was always all there, all along, for all humans to access. The key was asking the right questions. In that way, life wasn’t much different than it always had been.
Tonight, Paola’s question for the Processor was this: What if Happiness wasn’t Goal?
She pulled the knife from her bag and sawed through the synthetic fabric across her chest, which was tougher than she’d thought. But eventually she was free. The vehicle braked, crashing her face into the dash. Paola knew this was intentional, punitive. The speed increased again. Faster now.
What if “Happiness equals Goal” was the wrong function? She sat in the vehicle, asking silently.
Happiness could be Goal for the Processor, but never for Humanity. Unless a human wished to be deranged. The purpose for Humanity was to allow Happiness when it appeared. When Happiness appeared, the purpose was to appreciate it and,
The vehicle turned violently onto residential streets.
and,
Over lawns, curbs. Paola fell to the floor. Her rib shattered against the seat.
and,
The vehicle was at a hundred and ten kph now as it aimed at a ninety-year-old oak four blocks away.
and, but that’s not all. The purpose was to appreciate Happiness when it appeared and, most essential, to fall in love with the loss of Happiness at that very moment of appreciation.
The bitter truth, Paola thought as the oak tree rushed toward her, was that to be human wasn’t to have free will or volition or choice, but more precisely to make the wrong choice. To fuck up.
The endgame of Humanity was always, in effect, self-extermination.
At the last moment, the vehicle steered around the tree, careened through a residential fence, reappeared on the adjacent street and sloped calmly into Paola’s driveway.
The clock on the dash ticked from 17:36 to 17:37. The vehicle doors opened.
“Have a happy evening,” a voice instructed.