On Salience, Semiotics, and Schizophrenia

You’re lying in your bed, drifting off to sleep, when suddenly you realize there’s someone in the room with you. “Surely not,” you think to yourself in semisomnolent confusion — “I’m alone in the house!” But there in the corner of the room, shrouded in shadow, is a tall thin figure, and it is reaching towards you.

Suddenly you’re sitting bolt upright in bed, heart pounding, wide awake, poised to sprint for the door — or, if you’re brave, to fight off this mysterious home invader. That’s when you hear the noise of a passing car and realize that what you’re looking at is not in the room with you at all: it’s only the shadow of a tree, moving as the headlights beyond pass by. You mutter an obscenity or two about Plato’s cave, close the curtains, and drift off back to sleep just as soon as your heart slows down.

You probably remember an incident or two like this, an adrenaline rush followed hard upon by chagrin at your own mistake. What exactly happened to you? You thought you were seeing one thing, and subsequently realized you were really seeing another — but what does that process actually entail?

You saw a shadow moving, and the longer you watched it, the more you knew about it. When you had a little information, you concluded it was Freddy Krueger coming to get you, but when you had more information you concluded it was a tree. Simple! All we have to do is define ‘concluding.’ As it happens, this is not as simple as it sounds.

Let’s start again. You see the shadow moving — this is pure perception, straight from the eyeballs to the brain. As time goes on, all that changes outside your head is that you see more of the shadow moving. If you were some sort of an idealized empirical thinking machine, you’d start out completely agnostic, gather data from your senses, form a hypothesis, and test it against new data as it comes in.

On reflection, though, that’s not how it feels: you start out convinced there’s someone in the room with you, and a moment later, with barely any more information, you are suddenly convinced you’re looking at photons beamed out of a moving vehicle as partially blocked by the limbs of a tree. It doesn’t happen gradually at all, it’s as sudden as when the old lady’s nose becomes the young woman’s chin in the famous optical illusion.

The missing piece here is the notion of a schema. If sensory input were all you had to work with, you wouldn’t see Freddy Krueger or a tree; you would only see arbitrary patterns of light and shadow. But you already have a model in your head for two different sorts of spindly half-lit limbs whose shadows you might see on your wall — instead of building a theory on top of your sensory impressions, you’re matching your perception to the models that you’ve already built up.

So we’ve got two different systems in play: a bottom-up system that’s looking at black shapes on a white background right now, and a top-down system that’s recognizing letters and words you already have models for — dpseite teh fcat taht semotmies tehy aner’t a preefct mtach ot teh mdoels.

In the 1970s, psychologist Richard Gregory recognized the importance of top-down judgements to perception, challenging the then-dominant paradigm of direct realism — the notion that we experience the world pretty much as it is — with the idea that what we actually experience is hypotheses.

After a close study of optical illusions, he concluded that what we see is about 10% actual visual stimulus and about 90% deductions made from memory; one could haggle over the precision of those numbers, but subsequent research has generally borne out the basic idea.

Thus, when you saw the shadows moving on your wall in low light, you didn’t have much information to work with, so you filled in the gaps with your memories of what human figures and trees, in general, look like. Mystery solved.

But wait: why both schemas? Nothing in that story accounts for the fact that you managed to make the switch. The top-down and bottom-up systems had already gotten their accounts close enough to “shake hands” and agree on a map of reality with Freddy Krueger in it; how did that transmogrify into a map with a tree and a passing car in it?

It’s not just that you learned, it’s that you judged your own learning — otherwise you would remember seeing Freddy Krueger turn into a tree. The second schema is nothing like the first, so our story is incomplete — we’ve accounted for perception and cognition, but not metacognition. How did you manage to conclude the prior map was just a tree-related illusion (dendrapophenia, perhaps) and update it?

If you felt your schema-sense tingling at the word ‘prior’ you’re probably familiar with Bayesian analysis, that keystone of conditional probability and scourge of undergraduate statistics students everywhere. If not, content yourself with knowing it’s a mathematical formalization of the idea that to get an accurate model of the world you have to take into account both prior knowledge and current experience, adjusting the former as you go (as with nearly everything here, this is a gross oversimplification).

Even if you see a shadow on your wall that looks about halfway between a tree and a bogeyman, that doesn’t imply there’s a 50/50 chance there’s a bogeyman in your room, because you have prior knowledge that there are lots of trees and very few bogeymen. This is the metacognitive judgement you made when you realized the likeliest explanation for what you were seeing was that it was a tree you mistook for something else.
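For concreteness, here is a minimal sketch of that judgement as a Bayes’ rule calculation. The numbers are made up for illustration; the only thing taken from the scenario is that the ambiguous shadow is equally consistent with either explanation.

```python
# Minimal Bayes' rule sketch with illustrative numbers (not from the text):
# the shadow is assumed equally likely under either hypothesis, but the
# prior says shadows on bedroom walls are overwhelmingly cast by trees.

prior_tree, prior_bogeyman = 0.999, 0.001   # assumed prior probabilities
likelihood_tree = 0.5                       # P(this shadow | tree)
likelihood_bogeyman = 0.5                   # P(this shadow | bogeyman)

evidence = likelihood_tree * prior_tree + likelihood_bogeyman * prior_bogeyman
posterior_bogeyman = likelihood_bogeyman * prior_bogeyman / evidence

print(f"P(bogeyman | ambiguous shadow) = {posterior_bogeyman:.3f}")  # ~0.001, nowhere near 0.5
```

Ambiguous evidence leaves the posterior pinned almost exactly where the prior was, which is why the “about halfway between” shadow should not feel like a coin flip.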

It turns out human brains are superb at matching what they see to existing schemas, but hilariously terrible at judging the prior likelihoods of those schemas and adjusting them when they aren’t making sense, especially when they are asleep.

Yale researcher Philip Corlett thinks the human brain implements Bayesian reasoning on perception in a fairly direct chemical fashion. In his model, bottom-up processing depends on AMPA glutamate receptor activity, top-down processing depends on NMDA receptor activity, and dopamine codes for the level of prediction error — the amount of difference between the NMDA-modulated information about the map and the AMPA-modulated information about the territory. He makes a convincing case that the cognitive effects of several psychoactive drugs fit this paradigm, noting for instance that PCP, which blocks NMDA receptor transmission, gives you exactly the sort of delusions and perceptual weirdness you might expect under such a model.
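To make the prediction-error idea concrete, here is a toy sketch of a generic error-driven update loop. This is not Corlett’s model, just the textbook predictive-coding recipe it gestures at; the learning rate, the numbers, and the function name are all invented for illustration.

```python
# Toy predictive-coding loop: a top-down prediction is nudged toward the
# bottom-up signal in proportion to the prediction error (the quantity
# dopamine is said to track in Corlett's framework). Illustrative only.

def settle_expectation(signal, learning_rate=0.3):
    """Return the top-down estimate after error-driven updates on a sensory stream."""
    prediction = 0.0                               # top-down, schema-driven estimate
    for observation in signal:                     # bottom-up sensory samples
        prediction_error = observation - prediction
        prediction += learning_rate * prediction_error
    return prediction

print(settle_expectation([1.0] * 20))                      # converges near 1.0
print(settle_expectation([1.0] * 20, learning_rate=0.0))   # error signal silenced: stays at 0.0
```

The second call is the cartoon version of a suppressed error signal: the schema never budges, no matter what the senses keep reporting.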

This is an elegant framework, if it holds up — time will tell. Yet it still doesn’t do much to explain how we consciously update our priors. The “how” is a tricky question, but there’s a fairly good bet as to the “where.”

The anterior cingulate, which collars the front of the corpus callosum, seems to be key in making such judgements about perceptual model fit and adjusting accordingly. It also happens to dampen some of its activities during sleep.

This could account for your tendency, when dreaming, to perceive an immersive environment as real despite it having features whose prior likelihood should constitute a dead giveaway that the world in which you find yourself is not, in fact, real — sudden ability to fly, extra rooms in your house you’ve never noticed, highly improbable sex, &c. We can navigate our dreams, and cogitate a bit about what’s happening, but we never seem to evaluate how strange it all is until we wake up.

And there’s your bedroom monster: you hadn’t quite woken up yet, so the prior-adjusting part of your Bayes loop is out of whack, but the rest of your chain of reasoning is intact. You’re awake enough to identify a complicated visual pattern as something that might be a home invader and might be a tree, but not yet awake enough to realize that the former is so much less likely it’s not worth getting bent out of shape about.

So, if all this holds water, why should our reason be organized in this particular way? Why have a distinct bit of brain for evaluating models if it’s so easy to turn off? Why the separation of function?

One explanation might lie in the classic parable explaining the prevalence of anxiety. Three ancestral hominids are walking across the Serengeti when they spy a beige rock in the distance. 99 times out of 100 the beige rock is just that, but the 100th time it’s actually a lion waiting to pounce.

The first hominid is a Panglossian optimist, and always assumes it’s a rock; he’s right 99% of the time. The second is a perfectly calibrated Bayesian, and judges correctly that there’s a 1% chance it’s a lion and a 99% chance it’s a rock. The third is a nervous wreck, and assumes it’s a lion every time — he’s wrong 99% of the time. Every single human being alive is descended from the third hominid, the others having been eaten by lions, so we have inherited a tendency to spook easily.
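If you want to put rough numbers on why the third hominid’s lineage wins, here is a back-of-the-envelope sketch. The lifetime encounter count, the assumption that fleeing always saves you, and the assumption that the calibrated Bayesian acts on whichever hypothesis is more probable are mine, not the parable’s; the 1-in-100 lion rate is the only figure taken from it.

```python
# Toy survival arithmetic for the parable (illustrative assumptions noted above).

P_LION = 0.01        # chance any given beige rock is actually a lion
ENCOUNTERS = 500     # assumed number of ambiguous rocks seen in a lifetime

def p_survive(always_flees: bool) -> float:
    """Probability of surviving every encounter under a fixed policy."""
    p_one = 1.0 if always_flees else 1.0 - P_LION   # ignoring the rock is fatal iff it's a lion
    return p_one ** ENCOUNTERS

print(f"Optimist, never flees:                          {p_survive(False):.3f}")  # ~0.007
print(f"Calibrated Bayesian, bets on 'rock' every time: {p_survive(False):.3f}")  # ~0.007
print(f"Nervous wreck, always flees:                    {p_survive(True):.3f}")   # 1.000
```

On those assumptions the optimist and the coolly calibrated Bayesian both end up as lion food well within a lifetime of rock-sightings, which is the parable’s whole point.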

Pat though that may be, it bears considering that it’s actually quite a feat for the third hominid to maintain their ability to perceive the rock bottom-up, make a snap top-down judgement that it conforms to their model of a lion, and yet never revise that model to let their guard down despite the fact that 99 out of every 100 memories they have of similar incidents involve them panicking over nothing at all. They would have to notice the rock and find it salient every time — implying a notable distance between their top-down schema and their bottom-up perception — and yet somehow avoid letting their sky-high error rate decondition their response to it over time. That sounds like a little more than a simple leophobia.

As it happens, there is a whole class of human beings notorious for their inability to update their prior models of the world: schizophrenics. Paranoid schizophrenics in particular famously suffer from intractable delusions of reference, believing strange things despite overwhelming evidence to the contrary. They seem partially unable to distinguish between symbols of reality and reality itself — they tend to confuse the thought of a voice with the sound of one, and will often fixate on seemingly irrelevant objects or phenomena and impute profound meaning to them. They also tend to have too much dopamine, hypofunctioning NMDA receptors, and abnormalities in their anterior cingulate cortex: adjust your prior models accordingly.

Could the cluster of symptoms in schizophrenia represent an archaic or atavistic form of consciousness? Psychologist Julian Jaynes explored this idea in depth, and put forth a theory too fascinating not to mention.

In his account, humans were schizophrenic by default until the late Bronze Age, with societies generally organized either as small hunting bands or as literate theocracies through which people moved as though in a waking dream, their actions in daily life dictated and coordinated by shared command hallucinations that they attributed to the voices of their ancestors and their gods — heady stuff.

Among many other points, he cites as evidence the (apparently) universal lack of metacognition in ancient literature, the privileged role accorded (apparently) schizophrenic prophets and sibyls in subsequent centuries, and the (apparent) pattern of Norman-Bates-esque corpse-hoarding in ancient Mesopotamia evolving via the veneration of dead kings to the worship of deities.

Several of his predictions have not worn well — he was convinced, for instance, that metacognition was predominantly an innovation in inter-hemisphere communication in the corpus callosum, and conceived of schizophrenia as something more reminiscent of split-brain epileptics — but it’s interesting enough just to think of the possibility there was a time before and after which the capacity to be un-schizophrenic existed.

A deeper evolutionary timescale might make more sense than positing that self-reflective consciousness came from some kind of speech-catalyzed plague of hypertrophied cingulates (The Cingularity, or, Buscard’s Murrain) as recently as Jaynes argued, but the idea that we’re evolving away from apophenia both culturally and genetically deserves close examination.

Although their order and overlap are open questions, there likely exist a point before which animism was still the universal norm, a point before which no vocabulary to describe consciousness existed, and a point before which the neurological capacity for consciously correcting a false belief was simply not physically present, in whatever form.

The beings that lived without those things were either outright human or close enough for government work, and we catch tantalizing glimpses of how they must have experienced the world when our capacity for reflectivity is occluded in illness, on the edge of sleep, in mystic states, and in our childhoods.
