A seven-year-old girl and a six-year-old boy — best friends who live in the same apartment building — set out together for their neighborhood bodega to buy some ice cream. In the building’s elevator, a man suddenly and brutally attacks them with a knife, killing the boy and wounding the girl. He flees. In the aftermath, as bloodstains are photographed, police assure the gathered crowds of neighbors that they are diligent and inexorable. At a nearby hospital, the boy’s mother falls to her knees in the street and cries aloud to God.
This crime is neither abstract nor hypothetical; it occurs on Schenk Avenue in Brownsville on the sunny first of June, 2014. The victims are PJ Avitto and Mikayla Capers (Celona and Wilson, 2014).
Within a week, the NYPD arrest the man responsible: Daniel St. Hubert, whom tabloids dub the Brooklyn Ripper. He is quickly implicated in two more stabbings, of a homeless man and a teenage girl. His history is long, violent, and marginal, and his family say they have tried without success to find him psychiatric help, even as he attacks them and others. As courts debate his fitness to stand trial, he displays no remorse, and menaces personnel at every facility in which he is confined. He claims to hear the voice of the devil. His only apparent motive was quieting the children down (Santora, 2014).
What are we to make of Daniel St. Hubert? What are we to do with men like him? The general question is, perhaps, the more tractable, but the particular drives home the stakes. Legal and medical systems intervened in his life at several points, yet their failure is carved in the bodies of children. Definitive confinement has come at enormous risk and expense, compounded by ambiguity about what constitutes an appropriate setting. What assumptions underlie this state of affairs?
The phrase mens rea — guilty mind — is at least as old as Augustine, and the concept is older still: as far back as Hammurabi’s laws, many punitive legal traditions have maintained distinct punishments for similar crimes, conditioned on the criminal’s mental state. The shades of capacity and intent considered relevant are shaped by prevailing local philosophy, but also inevitably by the practical limits of what it is possible to know about another person’s mind. Thus we face the unenviable prospect of engaging with the interior life of a child killer.
It’s worth noting that other traditions have simply sidestepped these questions — it is entirely possible to run a legal system that makes no such distinctions at all. What might that look like?
On that night in 2014, the police were not the only ones who spoke to the neighborhood. A block down Schenk Avenue, another group of men in matching jackets gave a remarkably similar series of assurances — this is our home, this will not stand, we will not stop until we find the man who did this. They were called the Tomahawks: a gang with deep roots, and many friends, in the area. Their exact words were not committed to print, but it does not take an enormous amount of imagination to predict what they would have done to Daniel St. Hubert if they had identified and apprehended him before the police did.
Their brand of justice is the human norm. That doesn’t quite make it a default — the null case in any individual act of violence is simply escape, and the general result a monster-haunted world. This is the problem that the notion of licit punishment exists to solve, and even the most elaborate civilizations ultimately rely on the same basic threat as any Dunbar-sized tribe: if you transgress, a group too big for you to fight alone will find you and compel what they deem proper. Mob violence isn’t a fundamentally different system from policing in this sense — just a simpler one. In one primally satisfying punishment, it provides vengeance, prevents the target from offending again, and deters similar acts in the future. Courts, cops, and corrections facilities exist to accomplish the same things, if somewhat less efficiently.
This is, of course, a patently unfair comparison on a number of levels. The American justice system that captured and tried St. Hubert is laden with centuries of hard-earned lessons, safeguards against the vagaries of hasty retaliation. Yet scratch the patina of 21st-century social attitudes a bit and it’s quite recognizable as a hand-me-down from Britain, which was not at all shy about hangings in the era when America forked its common law; as late as the 19th century it prescribed death for such crimes as treason, pickpocketing, bridge defacement, and poaching. Dig further and that system, before it matured and hypostatized to the point it could execute kings, was a hodgepodge of antique Roman and tribal Germanic traditions. The latter, interestingly, were pointedly unsympathetic to mens rea distinctions, mandating identical punishment not only for the mad and the sane but for the intentional and the accidental — like the Tomahawks, they regarded the question of prior intent as beside the point.
Are the courts of Brooklyn today historical accidents? They attempt to distinguish murder from manslaughter from negligent homicide, and to tease apart deliberate malice from passionate lapses from the incapacity to deliberate at all. One can imagine history going differently, or point to parts of the world where different paradigms prevailed — in many, we may imagine, Daniel St. Hubert would have been executed years before for far pettier crimes, in others confined more decisively, in others even somehow rehabilitated before his atrocity. Was the modern form entirely latent in the ancient ones, or did it break away somewhere along the line? One may as well ask when St. Hubert himself passed the point of no return — it is the same fundamental question. Did he decide to stab children on the spot, or plan it? Did voices in his head compel him, and if so where did they come from? When, exactly, did tragedy become inevitable?
The hard truth, of course, is that it always was. There is only one world, and the Brooklyn Ripper is an inextricable part of it; any counterfactuals we construct are maps of a territory that does not exist.
For 18th-century philosophers dreaming of just republics, it was not outside the realm of reasonable speculation to suppose that free will really meant Free Will — the poetic ideal of freedom, the freedom of an immaterial soul intervening divinely in base matter, formally causeless and utterly unfettered by any force save conscience. We no longer have this luxury; wishful substance dualism died an ignominious death even as Descartes gestured vainly at pineal glands. Every scrap of evidence accumulated since has led us inexorably to conclude that nature is rule-governed in ways that absolutely do not allow for causal chains to begin ex nihilo with a human mind.
Yet the mirage dies hard. Despite the conspicuous absence of Free Will, most of life seems to run perfectly well on prosaic, everyday free will. The concept is inherently useful; attributing most actions to conscious choice is pragmatically sufficient to explain most of what the people around us do. Pursue their decisions further upstream and you run into infinite regress, as well as the dehumanizing prospect of total irresponsibility — why punish the killer if he could not possibly have done otherwise? If the degree of backward extrapolation is arbitrary, we might just as easily condemn the jailers who freed him, the parents who raised him, or the entire past history of the universe in aggregate, since none of those could have done otherwise either. As we do not exist in a vacuum, we draw those lines in different places for different reasons, depending on the circumstances; they are indeed historical accidents, not objective universals, and it bears remembering that they change over time.
The strain of thought known as compatibilism aims to rescue colloquial free will without challenging material determinism, pointing out that our apparent freedom to act is more a matter of pragmatics than metaphysics, and thus not much affected by scientific revelation. On reflection, though, it is somewhat suspicious that compatibilists so often salvage exactly the consequences of free will that they already happened to enjoy, without having to revise existing attitudes about anything really surprising. Epictetus argued for compatibilism (cf. Bobzien 2001), and wound up arriving at roughly the same virtue ethics his pagan forebears already endorsed on more mystical terms; Augustine did the same (cf. Couenhoven 2007), and derived from first principles the same Christianity he had already practiced. Is it any wonder, then, that more modern apologia steer unerringly for the same ideas — rights, agency, responsibility — that so fascinated modernity’s immediate predecessors, and on which they founded institutions we still use?
The impulse to defend free will is instinctively understandable. As modern compatibilist philosopher Daniel Dennett puts it, “the distinction between responsible moral agents and beings with diminished or no responsibility is coherent, real, and important.” It’s also at least partially inborn — infants begin to distinguish their own movements from those of others before they are a year old (Sodian 2016). We all know what it feels like to plan and act, and what it feels like to be constrained. It is not intuitive to reconcile the apparent independence of human action with the truth that we are, fundamentally, machines. It behooves us to examine what that apparent independence is really made of. In what sense, if any, was Daniel St. Hubert responsible for his actions? Did he plan them rationally, acting as though free, or did he act as though compelled? Do these questions even make sense?
Dennett, to his credit, does delve into the areas where evidence diverges from praxis. Where libertarian incompatibilists tend to fall back on an atomic, indivisible consciousness in the general humanist mien of the past several centuries, he contends that modern neuroscience shows us consciousness is not unitary. In his own words (1984):
“The model of decision making I am proposing has the following feature: when we are faced with an important decision, a consideration-generator whose output is to some degree undetermined, produces a series of considerations, some of which may of course be immediately rejected as irrelevant by the agent (consciously or unconsciously). Those considerations that are selected by the agent as having a more than negligible bearing on the decision then figure in a reasoning process, and if the agent is in the main reasonable, those considerations ultimately serve as predictors and explicators of the agent’s final decision.”
These are not the terms with which we usually debate intent. They are cumbersome and unintuitive, but have the singular advantage of being entirely descriptive — an excellent habit. They are also implicitly relative to the modeler — if something about the human mind is “to some degree undetermined,” the immediate question that comes to mind is “undetermined by whom?” By abolishing subjunctive teleology, we are forced to recognize that when we talk about free will, we are often really talking about prediction.
Much of what a brain accomplishes is fundamentally predictive. We know we tend to initiate actions before any conscious decision is made, and justify them after the fact (Libet et al. 1983). We know that our perceptions rely on internal models, and have some idea how this is implemented chemically (Corlett et al. 2009). We know that we make snap judgements of our own agency based on how well our sensory feedback matches our predictions — and that sometimes those judgements are wrong, as with the famous rubber hand illusion. We know that mechanisms to minimize predictive error over time are built into our brains at every conceivable level (Clark 2013), and that we share most but not all of them with other species.
Without a metaphysics of the soul to fall back on, it’s harder to pin down exactly what makes us unique (the general ability to adapt to a recursive reward schedule, perhaps?) but certainly we are better able to perceive and exploit subtle patterns in nature than any other species we’ve met, hence our monopoly on things like agriculture and medicine and atomic bombs. Humans are, in fact, so good at prediction that they become very difficult to predict — we can improvise, lie, even suss out another human’s predictions about our own behavior and deliberately violate them.
The same faculties underlie our ability to trace causes backwards — mental time travel runs both ways. However well we may know that causal chains stretch back indefinitely, the evidence for them is never perfect, and the trail usually runs cold in a human mind. PJ Avitto died because he was stuck with an eight-inch knife; the knife was propelled by an arm muscle, triggered by a motor neuron, triggered by… what, exactly? Other neurons, of course, but that doesn’t really tell us much. We can speculate about the wielder’s desires, his self-control, his upbringing, but it’s difficult to say for sure. In practice we cannot read his mind, or trust his explanation — whether or not he really hears command hallucinations, “the devil made me do it” was not a get-out-of-jail-free card even in places and times when demonic possession was taken quite seriously. The predictive value of any explanation we can come up with falls off a cliff at the point where we have to speculate about his invisible, internal state of mind — not because that state of mind came from nowhere, but because we can’t reliably trace causation further upstream yet.
Can we articulate a prediction-based account of human violence in general? The proverbial Martian anthropologist might note that, although we cannot seem to eliminate it, we have tended, over time, to put a lot of effort into making it less surprising. In situations where we are suddenly attacked, we agree on counterattacks: that guy keeps mugging people, so let’s get the whole village together and beat him up. Where the counterattacks themselves become unpredictable, we agree on standards: we sometimes beat up the wrong guy, so let’s write down the rules and demand explicit evidence. Now the muggers can accurately predict punishment and avoid it by not mugging people, and the rest of us can walk down the street with a justifiable expectation of safety. So far so good.
The problem is that some violence is inherently unpredictable. We can understand the mugger — he wants money. We can put ourselves in the mugger’s shoes, model the mugger’s incentives, and act accordingly. But what is the child killer’s incentive? How could anyone know ahead of time not to get into an elevator with him?
We declare some violent behavior insane, then, because we cannot predict it — equivalently, we cannot assign any narrative in retrospect that makes sense to us. Worse still, we also cannot predict whether we will suddenly become insane ourselves, and so in recent centuries we have set up a system to treat us as we would wish to be treated if we did: more as patients than as prisoners. Much as criminal courts have gradually accumulated methods of establishing motive, mental hospitals have painstakingly taxonomised dysfunction and tried to establish common etiologies.
Daniel St. Hubert was originally diagnosed with paranoid schizophrenia, but beyond this wide and potentially outdated categorization little information is publicly available. If anyone wants to trace the roots of his crime in Brownsville deeper than the moment of its commission, they will have to do it the hard way — get in a room with him, learn what they can about his life, and add a grain of evidence to the common pile. Taking the time to study motivation and pathology in such matters, rather than operating solely on deterrence and restraint, is what lets us intervene in novel ways in similar future cases. This occasionally leads to spectacularly effective society-wide heuristics, such as “don’t hit your kids” and “use lithium to dampen mood swings.”
Such a project is, of course, hard to reconcile with the fact that brutal retribution is enormously, eternally popular. When St. Hubert was frog-marched out of the 75th Precinct, a crowd was there waiting for him, chanting the word “Death!” He did not appear perturbed by this in the least.
Shakespeare wrote in a world where criminals were publicly tortured to death. Would the prospect of drawing and quartering have stopped the Brooklyn Ripper? It’s impossible to say, but the existing threat of incarceration clearly wasn’t enough. We could speculate that he simply couldn’t think far enough ahead for any kind of threat to make a difference, or lacked the executive function to restrain himself if he did. It’s also possible he thinks more clearly than he lets on, acted pragmatically in service of a monstrous goal, and simply didn’t judge the consequences to be worth worrying about. Humans do not have an easily-readable utility function, which limits how useful a model utilitarianism is for real human behavior.
If we are to operate on a consequential basis, then we can only judge our violence by its outcome. If we, as a culture, are using confinement effectively, it should make violence more predictable, and that predictability should make it rarer. If we are using it ineffectively, the expected result is more unforeseen attacks. It is tempting to think some evidence-based technocracy could craft policy on this basis alone, judging the results dispassionately and making adjustments.
The problem is, consequentialism is a black hole. If, for example, we admit that unpredictable violence is even a partially heritable tendency, we cannot ignore the prospect that our own society owes what humaneness it has directly to the millennia its predecessors spent executing the most impulsively violent portion of every generation — it would be hard to argue this had no effect on population genetics. Game-theoretically the picture is even darker: endemically high trust may inevitably incentivize enough defection to bring the institutions that foster that trust crashing down.
Put another way, what would you think of a justice system that forcibly sterilized the families of criminals, regardless of whether they participated in any crime? How about one that pre-emptively imprisoned children they could prove had a high risk of violent behavior in the future, or one that doped the public water supply with psychoactive drugs?
If you want to make consequential arguments against those, you have to appeal to something like relative suffering — you have to predict such systems would result in more misery than they cured. Are you sure, though? Have you measured? Would you precommit to supporting one such regime if someone presented you sufficient evidence (cf. Blüml et al. 2013) that it would, in fact, lead to a more eudaimonic future, one that would outweigh any measurable costs? Or are there means no end can justify? Deontology is out of fashion, but in practical terms most of us operate on virtue ethics — if we didn’t, those potentially dystopian scenarios would not give us pause before we saw hard numbers.
If there is a criticism of our justice system inherent in the gruesome story of Daniel St. Hubert, it is that it sometimes errs on the side of ignoring available evidence. His victims did not have enough information to conclude with any confidence that he was dangerous — but civil authorities surely did. His pathology was known and his pattern of psychotic violence very well-established when he was released from prison nine days before the elevator murder, with no medication and no psychiatric referral. His arrest record was a litany of brutality; he had strangled his own mother with an electrical cord. His parole officer recommended he be committed, but was ignored. Neighborhood cops knew about him, but could not intervene until the deed was done. Were some reasonable individual confronted with the same evidence, and somehow given sole and unanswerable responsibility for his disposition by fiat, they might well have concluded that he was overwhelmingly likely to do harm, and that this far outweighed the injustice of confinement or exile or even execution, due process be damned.
Our modern legal system is not built this way, precisely because of the problems arbitrary case-by-case penalties once created. Individuals are capricious and corruptible, so we have gradually separated the roles of judge, jury, and executioner, not to mention lawyer and bailiff and peace officer and forensic psychologist. That they hew collectively to any standards makes the personal violence of justice more predictable in most situations, but those standards can conflict and backfire on edge cases.
It is famously impossible to derive ‘ought’ from ‘is’ — oughts must be plucked from the evolutionary winnowing of received tradition, from pure aesthetic preference, or from the dreams of conscience. Few would not prefer to inhabit a world where PJ Avitto and Mikayla Capers had lived their childhoods, but such a world does not exist — it is a dream. That doesn’t make it useless; contemplating unreality may lead us to dwell on a past we cannot change, but it also helps us predict the future. How likely is it that somewhere downstream of here we will resolve a few institutional scleroses, and a similar tragedy will be averted by a person with enough information to evaluate the stakes, enough authority to act decisively, and enough proper incentive to exercise it? We cannot know for certain — all we can say is that, unlike any map of a world where PJ had a seventh birthday, we cannot yet rule it out.
Cold comfort indeed. Is there no firm ground anywhere in this nightmare? To name one is to editorialize, beyond the bounds of any detachment or objectivity; let that be explicit.
Mikayla survived the attack, and eventually recovered from her wounds. She survived because PJ died first — because, in the last seconds of his life, he interposed his body between hers and the knife. It is unseemly to quibble over how and whether he ‘chose’ in the scant time he had — such abstractions are for his killer, and for us, not for him. They are mechanical blueprints, operating diagrams for practical safety, utterly insufficient to describe the profundity of his ἀρετή. His fate, and all fates, were sealed at the moment the universe began — but what an incomparable honor it is to live in a universe where a six-year-old boy laid down his life for his friend.
Blüml, V. et al. (2013). Lithium in the public water supply and suicide mortality in Texas. Journal of Psychiatric Research, 47(3), 407–411.
Bobzien, S. (2001). Freedom and That Which Depends on Us: Epictetus and Early Stoics. Determinism and Freedom in Stoic Philosophy, 330–358.
Celona, L. and Wilson, T. (2014, June 1). Maniac wielding butcher knife kills child in elevator. The New York Post, 1/5.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36, 181–253.
Corlett, P. et al. (2009). From drugs to deprivation: a Bayesian framework for understanding models of psychosis. Psychopharmacology, 206(4), 515–530.
Couenhoven, J. (2007). Augustine’s rejection of the free-will defence: an overview of the late Augustine’s theodicy. Religious Studies, 43, 279–298.
Dennett, D. (1984). Elbow Room: The Varieties of Free Will Worth Wanting. Cambridge, Massachusetts: MIT Press.
Libet, B. et al. (1983). Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (Readiness-Potential) – The Unconscious Initiation of a Freely Voluntary Act. Brain, 106, 623–642.
Santora, M. (2014, June 6). Before Arrest, a Long String of Violent Acts: Daniel St. Hubert Served Prison Time for Attacking His Mother. The New York Times, A17.
Sodian, B. (2016). Understanding of Goals, Beliefs, and Desires Predicts Morally Relevant Theory of Mind: A Longitudinal Investigation. Child Development, 87(4), 1221–1232.