Bayes’d and Con-fused

On Salience, Semiotics, and Schizophrenia

You’re lying in your bed, drifting off to sleep, when suddenly you realize there’s someone in the room with you. “Surely not,” you think to yourself in semisomnolent confusion — “I’m alone in the house!” But there in the corner of the room, shrouded in shadow, is a tall thin figure, and it is reaching towards you.

Suddenly you’re sitting bolt upright in bed, heart pounding, wide awake, poised to sprint for the door — or, if you’re brave, to fight off this mysterious home invader. That’s when you hear the noise of a passing car and realize that what you’re looking at is not in the room with you at all: it’s only the shadow of a tree, moving as the headlights beyond pass by. You mutter an obscenity or two about Plato’s cave, close the curtains, and drift off back to sleep just as soon as your heart slows down.

You probably remember an incident or two like this, an adrenaline rush followed hard upon by chagrin at your own mistake. What exactly happened to you? You thought you were seeing one thing, and subsequently realized you were really seeing another — but what does that process actually entail?

You saw a shadow moving, and the longer you watched it, the more you knew about it. When you had a little information, you concluded it was Freddy Krueger coming to get you, but when you had more information you concluded it was a tree. Simple! All we have to do is define ‘concluding.’ As it happens, this is not as simple as it sounds.

Let’s start again. You see the shadow moving — this is pure perception, straight from the eyeballs to the brain. As time goes on, all that changes outside your head is that you see more of the shadow moving. If you were some sort of an idealized empirical thinking machine, you’d start out completely agnostic, gather data from your senses, form a hypothesis, and test it against new data as it comes in.

On reflection, though, that’s not how it feels: you start out convinced there’s someone in the room with you, and a moment later, with barely any more information, you are suddenly convinced you’re looking at photons beamed out of a moving vehicle as partially blocked by the limbs of a tree. It doesn’t happen gradually at all, it’s as sudden as when the old lady’s nose becomes the young woman’s chin in the famous optical illusion.

The missing piece here is the notion of a schema. If sensory input were all you had to work with, you wouldn’t see Freddy Krueger or a tree; you would only see arbitrary patterns of light and shadow. But you already have a model in your head for two different sorts of spindly half-lit limbs whose shadows you might see on your wall — instead of building a theory on top of your sensory impressions, you’re matching your perception to the models that you’ve already built up.

So we’ve got two different systems in play: a bottom-up system that’s looking at black shapes on a white background right now, and a top-down system that’s recognizing letters and words you already have models for — dpseite teh fcat taht semotmies tehy aner’t a preefct mtach ot teh mdoels.

In the 1970s, psychologist Richard Gregory recognized that top-down judgements were important to perception, challenging the then-dominant paradigm of direct realism — the notion that we experience the world pretty much as it is — with the notion that we’re actually experiencing hypotheses.

After a close study of optical illusions, he concluded that what we see is about 10% actual visual stimulus and about 90% deductions made from memory; one could haggle over the precision of those numbers, but subsequent research has generally borne out the basic idea.

Thus, when you saw the shadows moving on your wall in low light, you didn’t have much information to work with, so you filled in the gaps with your memories of what human figures and trees, in general, look like. Mystery solved.

But wait. Why both? Nothing in that story accounts for the fact that you managed to make the switch. The top-down and bottom-up systems already got their accounts close enough to “shake hands” and agreed on a map of reality with Freddy Krueger in it; how did that transmogrify into a map with a tree and a passing car in it?

It’s not just that you learned, it’s that you judged your own learning — otherwise you would remember seeing Freddy Krueger turn into a tree. The second schema is nothing like the first, so our story is incomplete — we’ve accounted for perception and cognition, but not metacognition. How did you manage to conclude the prior map was just a tree-related illusion (dendrapophenia, perhaps) and update it?

If you felt your schema-sense tingling at the word ‘prior’ you’re probably familiar with Bayesian analysis, that keystone of conditional probability and scourge of undergraduate statistics students everywhere. If not, content yourself with knowing it’s a mathematical formalization of the idea that to get an accurate model of the world you have to take into account both prior knowledge and current experience, adjusting the former as you go (as with nearly everything here, this is a gross oversimplification).

Even if you see a shadow on your wall that looks about halfway between a tree and a bogeyman, that doesn’t imply there’s a 50/50 chance there’s a bogeyman in your room, because you have prior knowledge that there are lots of trees and very few bogeymen. This is the metacognitive judgement you made when you realized the likeliest explanation for what you were seeing was that it was a tree you mistook for something else.
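For the numerically inclined, the arithmetic behind that judgement is easy to sketch. This is only a toy rendering of the prior-weighting argument above, with numbers invented purely for illustration:

```python
# A minimal sketch of the prior-weighting argument above. All numbers are
# invented for illustration; they measure nothing.

def posterior_bogeyman(prior_bogeyman, p_shadow_if_bogeyman, p_shadow_if_tree):
    """Bayes' theorem for two mutually exclusive hypotheses."""
    prior_tree = 1.0 - prior_bogeyman
    evidence = (p_shadow_if_bogeyman * prior_bogeyman
                + p_shadow_if_tree * prior_tree)
    return p_shadow_if_bogeyman * prior_bogeyman / evidence

# The shadow is genuinely ambiguous: equally likely under either hypothesis.
# But bogeymen are vastly rarer than trees outside bedroom windows.
print(posterior_bogeyman(prior_bogeyman=1e-6,
                         p_shadow_if_bogeyman=0.5,
                         p_shadow_if_tree=0.5))
# ~1e-6: the halfway-looking shadow still gives almost no reason to panic.
```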

It turns out human brains are superb at matching what they see to existing schemas, but hilariously terrible at judging the prior likelihoods of those schemas and adjusting them when they aren’t making sense, especially when they are asleep.

Yale researcher Philip Corlett thinks the human brain implements Bayesian reasoning on perception in a fairly direct chemical fashion. In his model, bottom-up processing depends on AMPA glutamate receptor activity, top-down processing depends on NMDA receptor activity, and dopamine codes for the level of prediction error — the amount of difference between the NMDA-modulated information about the map and the AMPA-modulated information about the territory. He makes a convincing case that the cognitive effects of several psychoactive drugs fit this paradigm, noting for instance that PCP, which blocks NMDA receptor transmission, gives you exactly the sort of delusions and perceptual weirdness you might expect under such a paradigm.

This is an elegant framework, if it holds up — time will tell. Yet it still doesn’t do much to explain how we consciously update our priors. How is a tricky question, but there’s a fairly good bet as to where.

The anterior cingulate, which collars the front of the corpus callosum, seems to be key in making such judgements about perceptual model fit and adjusting accordingly. It also happens to dampen some of its activities during sleep.

This could account for your tendency, when dreaming, to perceive an immersive environment as real despite it having features whose prior likelihood should constitute a dead giveaway that the world in which you find yourself is not, in fact, real — sudden ability to fly, extra rooms in your house you’ve never noticed, highly improbable sex, &c. We can navigate our dreams, and cogitate a bit about what’s happening, but we never seem to evaluate how strange it all is until we wake up.

And there’s your bedroom monster: you hadn’t quite woken up yet, so the prior-adjusting part of your Bayes loop is out of whack, but the rest of your chain of reasoning is intact. You’re awake enough to identify a complicated visual pattern as something that might be a home invader and might be a tree, but not yet awake enough to realize that the former is so much less likely it’s not worth getting bent out of shape about.

So, if all this holds water, why should our reason be organized in this particular way? Why have a distinct bit of brain for evaluating models if it’s so easy to turn off? Why the separation of function?

One explanation might lie in the classic parable explaining the prevalence of anxiety. Three ancestral hominids are walking across the Serengeti when they spy a beige rock in the distance. 99 times out of 100 the beige rock is just that, but the 100th time it’s actually a lion waiting to pounce.

The first hominid is a Panglossian optimist, and always assumes it’s a rock; he’s right 99% of the time. The second is a perfectly calibrated Bayesian, and judges correctly that there’s a 1% chance it’s a lion and a 99% chance it’s a rock. The third is a nervous wreck, and assumes it’s a lion every time — he’s wrong 99% of the time. Every single human being alive is descended from the third hominid, the others having been eaten by lions, so we have inherited a tendency to spook easily.
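A quick back-of-the-envelope comparison, with payoffs invented solely for the sake of the parable, shows why the nervous wreck’s policy can still win on average:

```python
# Toy expected-cost comparison for the parable above, with invented payoffs:
# fleeing a rock wastes a little energy; ignoring a lion is fatal.

P_LION = 0.01
COST_FLEE = 1           # calories wasted sprinting from a rock (made up)
COST_EATEN = 1_000_000  # stands in for "no descendants at all"

always_calm    = P_LION * COST_EATEN   # never flees; pays dearly 1% of the time
always_nervous = COST_FLEE             # flees every time, lion or not

print(always_calm, always_nervous)  # 10000.0 vs 1
# With any sufficiently lopsided payoff, spooking at every beige rock is the
# cheaper policy on average, even though it is "wrong" 99% of the time.
```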

Pat though that may be, it bears considering that it’s actually quite a feat for the third hominid to maintain their ability to perceive the rock bottom-up, make a snap top-down judgement that it conforms to their model of a lion, and yet never revise that model to let their guard down despite the fact that 99 out of every 100 memories they have of similar incidents involve them panicking over nothing at all. They would have to notice the rock and find it salient every time — implying a notable distance between their top-down schema and their bottom-up perception — and yet somehow avoid letting their sky-high error rate decondition their response to it over time. That sounds like a little more than a simple leophobia.

As it happens, there is a whole class of human beings notorious for their inability to update their prior models of the world: schizophrenics. Paranoid schizophrenics in particular famously suffer from intractable delusions of reference, believing strange things despite overwhelming evidence to the contrary. They seem partially unable to distinguish between symbols of reality and reality itself — they tend to confuse the thought of a voice with the sound of one, and will often fixate on seemingly irrelevant objects or phenomena and impute profound meaning to them. They also tend to have too much dopamine, hypofunctioning NMDA receptors, and abnormalities in their anterior cingulate cortex: adjust your prior models accordingly.

Could the cluster of symptoms in schizophrenia represent an archaic or atavistic form of consciousness? Psychologist Julian Jaynes explored this idea in depth, and put forth a theory too fascinating not to mention.

In his account, humans were schizophrenic by default until the late Bronze Age, with societies generally organized either as small hunting bands or into literate theocracies through which they moved as though in a waking dream, their actions in daily life dictated and coordinated by shared command hallucinations that they attributed to the voices of their ancestors and their gods — heady stuff.

Among many other points, he cites as evidence the (apparently) universal lack of metacognition in ancient literature, the privileged role accorded (apparently) schizophrenic prophets and sibyls in subsequent centuries, and the (apparent) pattern of Norman-Bates-esque corpse-hoarding in ancient Mesopotamia evolving via the veneration of dead kings to the worship of deities.

Several of his predictions have not worn well — he was convinced, for instance, that metacognition was predominantly an innovation in inter-hemisphere communication in the corpus callosum, and conceived of schizophrenia as something more reminiscent of split-brain epileptics — but it’s interesting enough just to think of the possibility there was a time before and after which the capacity to be un-schizophrenic existed.

A deeper evolutionary timescale might make more sense than positing self-reflective consciousness came from some kind of speech-catalyzed plague of hypertrophied cingulates (The Cingularity, or, Buscard’s Murrain) so recently as Jaynes argued, but the idea that we’re evolving away from apophenia both culturally and genetically deserves close examination.

Although their order and overlap are open questions, there likely exist a point before which animism was still the universal norm, a point before which no vocabulary to describe consciousness existed, and a point before which the neurological capacity for consciously correcting a false belief was simply not physically present, in whatever form.

The beings that lived without those things were either outright human or close enough for government work, and we catch tantalizing glimpses of how they must have experienced the world when our capacity for reflectivity is occluded in illness, on the edge of sleep, in mystic states, and in our childhoods.

Miscellany

How should we categorize intelligence?

A useful high-level division of species by category would be one that reflected both evolutionary and behavioral reality well enough to make valid predictions. Since behavior is immediately observable and evolutionary history generally involves more indirect inference, it makes sense to categorize behavior first and then look for evolutionary conditions necessary to produce it.

The first and most obvious line that may be drawn is between species with and without intra-generational learning, which is to say with and without neurons. The behavior of species without neurons depends on genome and circumstance — two (e.g.) sea cucumbers with identical genomes in identical circumstances will behave identically, and large changes in behavior can only be produced over multiple generations by natural selection. In contrast, species with neurons are capable of learning — their behavior is mediated by long-term potentiation of neurons in response to past events, such that two (e.g.) dogs with identical genomes in identical circumstances may respond differently to the same stimulus if they have received different conditioning.

Although creatures with more developed brains have more nuanced heuristics available, this capacity for learning is broadly evident even in species with extremely simple nervous systems, like cockroaches (Watanabe and Mizunami, 2007). This suggests two categories, or more properly a category and a subcategory: life, and neuronal life.

Within the neuronal subcategory, adult modern humans use complex language that can direct and influence the behavior of other humans, including those not immediately present. They are capable not just of associating an arbitrary symbol with an object, but of distinguishing symbols as a category from objects as a category. This requires a theory of mind — for a human to understand that a novel series of symbols will be interpreted correctly by another mind, it is necessary that they understand both that other humans are similar enough to them to interpret the same symbols in the same way, and also that other humans are different enough from them to lack information they have or have information they lack. These abstract linguistic capacities appear to be unique to humans, and so humans can be placed in a third subcategory within neuronal life: conscious linguistic life, a set which currently contains only the human species.

Although complex language and theories of mind appear to be unique to adult humans, they do not develop immediately. Children fail to verbally identify differences in objects present in their own visual fields versus those of other people until they are around 6 years old (Piaget, 1969), do not begin to use complex elaborative syntax until they are around 2 years old, do not use simple word labeling until they are around 1 year old, and do not engage in communicative coordination of regard with another person and an external object until they are about 6 months old (Striano and Reid, 2006).

However, even before 6 months they are capable of protoconversations, mirroring the expressions on other human faces at a delay and coordinating the length of pauses between facial shifts (Beebe, 2014). This behavior implies both that the infant must be storing some kind of representation of another person’s face for the length of the delay and also that they can map this representation to their own face in order to mimic it. Do these pre-linguistic capacities exist in any other species?

Great apes become mobile much more quickly than humans do, and so infant great apes do not spend much time on the face-to-face protoconversations that immobile human infants engage in. However, they are able to pass mirror tests, which involve looking at their reflection and deducing the presence of a mark on their own forehead, about as well as human infants under the same circumstances (Bard et al., 2006). This strongly implies that they must also possess enough of a self-representation to map their own movements to observed movements over time, since they must determine that the movements of the ape in the mirror correspond exactly to their own and are not simply produced by another ape behind glass.

Great apes can also follow gaze and understand opacity (Povinelli and Eddy, 1996) in a manner reminiscent of human infants, and can use this to preferentially steal food that they can tell another ape is unable to see (Hare et al., 2000). Other primates can preserve abstract representations of sequence that simple stimulus-response chaining is inadequate to explain (Terrace 2005). The vast majority of animal species do not display these capacities.

For this reason it makes sense to posit a fourth behavioral category, within neuronal life and containing adult humans, which also contains human infants and arguably contains some other primates and hominids — a preconscious or semiconscious category, with great apes on the low end, human infants on the high end, and extinct hominids in the middle. In this category, organisms can store persistent representations and map their perceptions to internal models, but are unable to produce language or model differences between the states of knowledge of multiple individuals: they have some heuristics available for primary intersubjectivity, but none for secondary intersubjectivity (Beebe, 2003).

Do these four putative nested categories — organisms, neuronal organisms, semiconscious organisms, and fully linguistic organisms — correspond well to the evolutionary record? They appear to map to known clades; all species share a common ancestor, all species with brains share a more recent common ancestor, primates one more recent still, and humans one more recent than all of the others.

To the extent that ontogeny recapitulates phylogeny, therefore, the putative semiconscious category should predict a long period of time in which the human lineage developed and elaborated on preconscious representational abilities already partially present in apes, but did not display the abilities of modern humans to use complex language or elaborate theories of mind. It should also predict that the appearance in non-primate clades of any behaviors that appear to imply stored representation beyond simple behavioral conditioning but do not produce complex language will be produced by broadly similar evolutionary conditions. Moreover, if the structural capacity for speech and theory of mind evolved separately and significantly later than the capacity for symbolic representation, it should be possible to disrupt the former in adult humans while leaving the latter intact.

Dyadic interactions can be described conservatively as imitation at a delay. Infants are capable of initiating complex protoconversations with their mothers by around 6 months, of engaging in protoconversations initiated by the mother by around 3 months (Striano and Reid, 2006), and of making subtle but measurable matching adjustments of facial expressions over time almost immediately after birth (Meltzoff and Moore, 1994). Protoconversations are characterized by two-way coordination of facial expressions and vocalizations, with infants first responding to and soon after initiating exchanges of mimicry that involve both partners matching not only their movements but their pace (Beebe).

This implies the capacity to store an abstract representation of a face in working memory, such that observed movements and timing can be recapitulated without an immediate stimulus. Unresponsive faces trigger distress behaviors in infants, which implies the ability to predict a mirrored movement on the part of another and register deviations from that prediction. Dyadic interactions require an ability to detect differences between sensory data and a stored model, for which simple behavioral conditioning cannot account.

As early as 6 months, infants are capable of sustaining mother-initiated interactions involving a third object, alternating attention between the mother and the object; by around 9 months they are initiating such interactions (Striano and Reid, 2006). They not only follow gaze — projecting a ray through three-dimensional space based on the mother’s eye movements and fixating on an object in that direction — but also check back after looking in the same direction, refocusing their attention if they have picked the wrong object as indicated by the mother’s use of indicative gestures or simple verbal labels (Baldwin, 1991). Multiple acts of gaze-following over time require not just a persistent representation of a face or body plan that maps to their own, but of a mind whose state may differ from the state of their own.

Triadic interactions require both the ability to note differences between sensory data and a stored model and the ability to adjust existing models on the fly to match a different model in someone else’s mind, for which simple model storage based on past experience cannot account. The latter capacity is a necessary prerequisite for complex language, as opposed to simple labels, because novel messages are ineffective if the other party cannot be relied upon to understand them; unsurprisingly, triadic proficiencies correlate with later language proficiencies (Brooks and Meltzoff, 2008). Infant-initiated triadic interactions may involve gaze redirection with deliberate communicative intent (Stern, 1971), implying that triadic infants have some capacity to model other humans as lacking information they have or having information they lack. The ability to compare a model of self with a substantially different model of another does not appear to be present in any other species.

Alfred Russel Wallace questioned how Darwin’s theory of natural selection could account for human language and consciousness, given that only humans possess these features and that human minds seemed to him much more powerful than could be accounted for by simple selective pressure. After posing this question, he became a spiritualist and concluded that providence had intervened in evolution three times: once to produce multicellularity, once to produce brains, and once to produce human consciousness.

It is no longer considered prudent to speculate on divine intervention in evolutionary history, and so Wallace’s Problem boils down not to whether but to how the physical capacity for language evolved. Our current picture is incomplete, but seems to involve two major leaps in cognitive capacity — one from mirrored representations to differential representations, and one from differential representations to full-fledged language. To address Wallace’s Problem in substance requires us to explain what specific selective pressures produced these developments, what accounts for their apparent sudden appearance, and why they did not occur elsewhere in nature.

The environment of humanity and its immediate antecessors went through several major ecological changes in relatively short succession. The first of these, the deforestation of East Africa around 4 million years ago, produced bipedalism; this major anatomical shift can be explained in the traditional model of gradual change, as incrementally more bipedal individuals would gain incremental advantages in food-gathering by increasing their range and decreasing their energy expenditure (Rodman and McHenry, 1980). This incremental shift is well-attested in the fossil record, and occurs around the same time as the split between the Pan and Australopithecus lineages.

In addition to opening up new frontiers in foraging, bipedalism produces narrow pelvises through which it is difficult to pass an infant. Increased bipedalism therefore tends to produce infants born incrementally earlier in development, which require longer periods of care before being able to feed themselves. This means more pressure to find novel foraging strategies in order to feed infants, which in turn advantages infants with even larger brains, born even more helpless.

This feed-forward loop is enough, all on its own, to eventually produce the most premature possible infant with the largest possible brain; every time a population with slightly larger brains managed to secure more food, that would remove some of the metabolic pressure to keep brain size low, resulting in a population with even larger brains, resulting in pressure to find even more novel methods of securing food. The development of abstract representation more advanced than that displayed by the great apes and the subsequent development of language both occurred within the context of this ongoing process, and accelerated it by temporarily removing some food pressure, allowing smarter and more premature infants to be born and drive food pressure back up again, necessitating further development of novel scavenging and later hunting strategies.
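The shape of that feed-forward loop can be rendered as a toy iteration. Every quantity below is invented for illustration; it is a cartoon of a positive feedback loop, not a model of hominid evolution:

```python
# A toy rendering of the feed-forward loop described above. Every quantity
# is invented for illustration; this is a cartoon, not a model of hominid
# evolution.

brain_size = 1.0   # arbitrary units
food_supply = 1.0

for generation in range(5):
    foraging_skill = brain_size          # bigger brains -> better foraging
    food_supply = foraging_skill * 1.1   # novel strategies ease food pressure
    # With food pressure eased, selection tolerates costlier, larger brains
    # and more helpless (earlier-born) infants, which then need more food...
    growth = min(food_supply, 1.2)       # cap stands in for anatomical limits
    brain_size *= growth
    print(f"gen {generation}: brain={brain_size:.2f}, food={food_supply:.2f}")
```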

One objection Wallace might have raised to this model is that it posits the sudden emergence of new and complex behaviors without a correspondingly sudden anatomical change — skull size increase was gradual, but the emergence of alloparenting and long-distance hunting were not. Where are the sudden anatomical changes to match the sudden behavioral changes? There are two answers to this, the simplest being that such anatomical changes did occur, but in soft tissue, which does not show up in fossils and for which the only preserved proxy available is skull size.

On reflection, however, there is a more basic explanation: the principal evolutionary advantage of having neurons at all is that they allow an organism to adapt to change faster than trial-and-error by reproduction can allow. The range of potential behaviors that a particular critical mass of neurons can allow for is necessarily much, much wider than the range of behavior it has produced to date — the capacity must evolve before the behavior can emerge. Canids existed for a long time before anyone taught them tricks, and humans were anatomically capable of building steam engines long before it became common behavior for them.

By the time Homo erectus developed alloparenting, the species had already been around for roughly 800,000 years; the innovation then produced a marked increase in foraging efficiency, and therefore in the calories available to further maximize brain size and prematurity. If the model is correct, the rate of change in skull size between Australopithecus afarensis and Homo erectus should be less than the rate of change in skull size between Homo erectus and Homo sapiens. The fossil record supports this: from afarensis to erectus, cranial capacity increased from an average of 430 to 850 cubic centimetres over roughly 2 million years, and from erectus to sapiens average cranial capacity increased from 850 to 1400 cubic centimetres over about the same span of time.
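Taking the quoted figures at face value, along with the rough two-million-year spans given above, the implied rates of change can be checked directly:

```python
# Rate-of-change check for the cranial capacity figures quoted above,
# using the rough ~2-million-year spans given in the text.

rate_afarensis_to_erectus = (850 - 430) / 2.0   # cubic centimetres per million years
rate_erectus_to_sapiens   = (1400 - 850) / 2.0

print(rate_afarensis_to_erectus, rate_erectus_to_sapiens)  # 210.0 vs 275.0
# The later interval shows the faster growth, as the model predicts.
```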

So the two answers to Wallace are first that selective pressures do, in fact, account for the development of capacities known to underlie language once anatomical feed-forward loops are taken into account, and second that large brains are physically capable of implementing new behaviors long before those behaviors actually appear, such that they may emerge spontaneously and reproductively privilege the individual in which they occur.

Moreover, the ability to imitate observed behaviors (which likely emerged with alloparenting) and the ability to communicate novel ideas by combining existing words (which possibly emerged with big game hunting) both enable a given technique to spread to other individuals with the same cognitive capacities immediately, rather than privileging only the offspring of the individual who invented them, further accounting for the sudden emergence and spread of things like tool cultures on sub-evolutionary timescales.

The principal similarity between computer memory as currently implemented and biological memory is that information and the methods of processing that information are stored in the same medium. In a computer, any string of bytes could represent a program or data or both, depending on context; in brains, memories seem to be stored and retrieved in a fashion inextricable from processing context.

In most other respects, computer memory is more reminiscent of the operation of individual cells than of any inter-cellular process like a brain. In a computer, a program composed of a pattern of binary bits — 1s and 0s — is copied from storage into working memory, interpreted by a processor, and outputs data that in turn can sometimes affect the program’s own future execution somewhere down the line; in a cell, a gene composed of a pattern of quaternary nucleotides — A’s, C’s, T’s, and G’s — is copied from DNA to RNA, interpreted by a ribosome, and outputs a protein that can in turn sometimes affect the DNA’s own future structure somewhere down the line. The original abstract conception of computation (Turing, 1936) — an interpreter which iterates along an infinitely long two-symbol tape — bears more than a passing resemblance to the operation of ribosomes reading four-symbol sequences from a 3-billion base-pair long genome.
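The “interpreter iterating along a tape” picture is easy to make concrete. Here is a minimal sketch of such a machine, a toy rule table that flips every 1 to 0 and halts at the first blank; it illustrates only the abstraction, not anything about ribosomes:

```python
# A minimal sketch of the "interpreter iterating along a symbol tape" picture:
# a toy two-symbol machine that flips every 1 to 0 and halts at the first
# blank. Purely illustrative of the analogy, not of biology.

def run(tape, rules, state="scan", head=0, blank="_"):
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1, "N": 0}[move]
    return tape

rules = {
    ("scan", "1"): ("0", "R", "scan"),   # flip and keep moving right
    ("scan", "0"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "N", "halt"),   # blank: stop
}

print(run({0: "1", 1: "0", 2: "1"}, rules))  # {0: '0', 1: '0', 2: '0', 3: '_'}
```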

Biological memory as implemented in neurons differs in that there appear to be no atomic engrams — no one has isolated a quantum of change in brains equivalent to a single-base-pair mutation or a single-bit flip. The simplest form of neuronal memory is behavioral conditioning, which is mediated by long-term potentiation in response to repeated stimuli and is demonstrable even in extremely simple nervous systems. This preconscious neuronal learning is entirely nonsymbolic, and the behaviors produced are generally predictable from the conditioning stimuli, but every retrieval of a response via a stimulus changes the action potentials involved — computer memory becomes ‘sticky’ like this only in cases of extreme malfunction.
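That contrast can be sketched as a cartoon in which reading a stored association is never neutral. The update rule and its constants below are invented for illustration; this is not a model of long-term potentiation:

```python
# A cartoon of 'sticky' retrieval: unlike a RAM read, every recall here
# nudges the stored association itself. Numbers are invented for illustration.

class ConditionedResponse:
    def __init__(self):
        self.strength = 0.0

    def condition(self, pairings):
        # Repeated pairing of stimulus and response strengthens the link
        # (a loose stand-in for long-term potentiation).
        for _ in range(pairings):
            self.strength += 0.1 * (1.0 - self.strength)

    def retrieve(self):
        # Reading the memory is not neutral: each recall re-potentiates it.
        self.strength += 0.02 * (1.0 - self.strength)
        return self.strength

memory = ConditionedResponse()
memory.condition(pairings=5)
print([round(memory.retrieve(), 3) for _ in range(3)])  # drifts upward on read
```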

In human memory, there is a second system in play, one that maps stored representations onto perceptual input. It operates in a way that bears some resemblance to hypothesis testing, in that low levels of difference between the internal model and the sensory data result in the model being projected onto the data to fill in any gaps, and high levels of difference result in behaviors associated with salience and surprise. Bottom-up processing appears to depend on AMPA glutamate receptor activity, and top-down processing on NMDA receptor activity; dopamine codes for the level of predictive error (Corlett, Frith, and Fletcher, 2009).

The cognitive effects of several psychoactive drugs fit this paradigm — for instance, PCP, which blocks NMDA receptor transmission, gives you exactly the sort of delusions and perceptual apophenia you might expect under such a paradigm. Dyadic human infants are already capable of this two-system comparison between reality and stored representations, and the fact that some primates can be shown to store representations (Terrace, 2005) suggests they have some glimmering of the same capacity. This is not semiotics in the sense the term is normally used, because it does not require explicit communication between organisms, but it does allow for a feedback loop that can generate novel behaviors by projecting abstract representations onto perceived reality in a manner more complex than conditioned memories in simpler animals can manage.
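A crude sketch of that two-system comparison, with thresholds and feature vectors invented purely for illustration, might look like the following:

```python
# Sketch of the two-system comparison described above: match sensory data to a
# stored schema; if the mismatch is small, project the schema onto the gaps,
# and if it is large, flag the input as salient. All values are invented.

def perceive(schema, sensory, salience_threshold=0.25):
    # Mean absolute mismatch stands in for 'prediction error'.
    error = sum(abs(s - x) for s, x in zip(schema, sensory)) / len(schema)
    if error < salience_threshold:
        # Low error: top-down wins; noisy or missing detail is filled from the schema.
        return schema, error, "schema projected"
    # High error: bottom-up wins; attention is drawn to the raw input.
    return sensory, error, "salient surprise"

tree_schema = [0.9, 0.1, 0.8, 0.2]
dim_shadow  = [0.8, 0.2, 0.7, 0.3]   # close to the schema
weird_blob  = [0.1, 0.9, 0.1, 0.9]   # nothing like it

print(perceive(tree_schema, dim_shadow))   # schema projected, error 0.1
print(perceive(tree_schema, weird_blob))   # salient surprise, error 0.75
```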

Computer memory does not behave like this on a small scale, but large networks of computers can implement somewhat analogous processes whose deficits can point to the necessity of other systems to explain adult human memories. Google’s DeepDream, an experimental computer vision project, was built to recognize objects in images by projecting pre-existing internal models based on a very large dataset of categorized images it trained on. The system quickly became famous for its apophenia — for instance, after associating millions of pictures of dogs from various angles with the general shape of a dog, it started seeing dogs everywhere, mapping them onto vaguely dog-shaped objects in the scenes with which it was presented.

Missing from this two-system picture, in a Bayesian sense, is the capacity to update prior models reliably. That capacity is fundamental to triadic interactions in human infants — they are constantly checking back with their mother to see if they are schematizing the external object to her satisfaction. It is also notably impaired in adult humans with damage to their anterior cingulates — they retain the ability to judge whether what they are seeing matches their internal schema, but they have trouble updating the schema (Mars, Sallet, and Rushworth, 2011). If this function was not present in early hominids, patients with this kind of brain damage may be engaging in essentially atavistic cognition, the cognition of the semiconscious category. Updating priors based on input from another mind requires storing an abstract representation of minds sufficiently complex to account for differences in knowledge between them.

Schizophrenics, and paranoid schizophrenics in particular, famously suffer from intractable delusions of reference, believing strange things despite overwhelming evidence to the contrary. They seem partially unable to distinguish between symbols of reality and reality itself — they tend to confuse the thought of a voice with the sound of one, and will often fixate on seemingly irrelevant objects or phenomena and impute profound meaning to them, or hallucinate things they have schemas for onto sensory data that doesn’t really match it very well. They also tend to have too much dopamine, hypofunctioning NMDA receptors, and abnormalities in their anterior cingulate cortex (Coyle, 2006), all in line with the interrupted Bayesian model.

Schizophrenics have arguably lost secondary intersubjectivity and retained primary intersubjectivity — they can no longer verify their perceptions of an external object with another person, but can still manage to store persistent representations of abstract schemas (often to the point where it takes years to convince them their schemas are wrong). They also retain language, which might seem to imply that secondary intersubjectivity actually developed long after language did (Jaynes, 1976) — however, schizophrenia develops late in life after secondary intersubjectivity and language have already been present for years, so it is impossible to tell whether they could have developed language if they had developed without secondary intersubjectivity from birth. It is not necessary to posit a speech-catalyzed plague of hypertrophied cingulates to explain the leap from primary to secondary intersubjectivity in primates.

If there were some other clade whose history included habitual bipedalism, rapid adaptation to major ecological change, a reproductive bottleneck leading to helpless infants, complex social behaviors, and the dyadic ability to mimic complex sequences at a delay, it would be possible to argue that parallel evolution puts them in the same preconscious category as human infants and early hominids — that is, to present a case that they may possess some potential for primary but no potential for secondary intersubjectivity.

Certain avians fit these criteria: they are descended from dinosaurs which independently developed bipedalism in the Triassic, survived major climatic change and migrated to many novel climes, lay eggs which are necessarily small enough not to preclude flight, hatch unable to fly or feed themselves, and are capable of mimicking complex songs and sometimes human speech at a delay and with conversational pacing. Some even alloparent (Anctil and Franke, 2013).

Corvids in particular are capable of solving very complex puzzles — they also pass the mirror test, use simple tools, and will re-hide cached food when they notice another bird watching them if and only if they themselves have stolen a cache in the past (Clayton and Dally, 2007). This strongly implies they are capable of storing models of the world and comparing them with current sensory input in a way analogous to semiconscious primates.

There are, broadly speaking, two forms of language: single-word associations, and complex recursive syntax. The former is already present in great apes, which can readily be conditioned to associate a hand sign or computer symbol with a particular object. However, these symbols are always imposed from the outside — apes do not generate new symbols or sequences of symbols. The particular advantage of language is not in the ability to use labels, which apes and dogs do in ways that can be explained by simple behavioral conditioning, but in the ability to generate arbitrary new labels.

Human words are different from, for example, the variegated predator-specific warning calls of Campbell’s monkeys (Schlenker et al., 2014), in that new ‘calls’ for new referents can be generated at will and spread among a group, rather than standardizing over evolutionary timescales. This capacity is a prerequisite for more complex grammatical language, and requires secondary intersubjectivity. Secondary intersubjectivity involving the representation of multiple differing states of mind is thought to have emerged with Homo erectus roughly 1.2 million years ago, on the basis that a sudden increase in the efficacy of scavenging could be attributed to alloparenting, which would at once allow more adults to engage in foraging unencumbered by infants and privilege infants capable of making distinctions between caregivers (Hrdy, 2009).

Alloparenting exists in many species, including some primate species, but not in any of the great apes — for it to emerge in the human lineage so quickly suggests that the behavior in this case was a neurological innovation and not a genetic one, an innovation made possible by the relentless feed-forward loop of bipedalism and extra cranial capacity. Somewhat contra Jaynes, the triadic capacity likely preceded and was necessary for language to begin to develop beyond simple labels — sentences with recursive grammar communicate novel ideas, and to transmit a novel idea by a series of symbols implies a persistent model of another mind with a notably different state of knowledge. To make an argument about when such nontrivial language emerged along the same lines as the argument for alloparenting would require describing a behavior that could not be accomplished without nontrivial language.

Speculatively, long-distance persistence hunting, which emerged later than simple group scavenging, might be a candidate: bipedalism is great for endurance running, but extends the range so much that the hunters might wind up very far away from the band they intend to feed, and it would make much more sense to send someone back to fetch the band than to drag a large kill back to them. That would require communicating novel information in a relay. Sending a messenger would arguably require at least enough language to relate something like “the others sent me to come bring you to [a place you have never been]” — the messenger would need to convey the state of mind of the hunters still with the kill to the remainder of the band, and hold that message in a form persistent enough to survive a lengthy journey reliably.

This is somewhat analogous to the difference between triadic but preverbal infants and verbal children — the triadic infant can distinguish between two minds, but has little means available to convey a thirdhand message about an absent party. Once present, language would also allow tighter social coordination of hunting behaviors and enable less primitive forms of hunting to emerge.

Anctil, A., & Franke, A. (2013). Intraspecific Adoption and Double Nest Switching in Peregrine Falcons (Falco peregrinus). Arctic, 66(2), 222-225.

Baldwin, D. (1991). Infants’ Contribution to the Achievement of Joint Reference. Child Development, 62(5), 875-890. doi:10.2307/1131140

Bard, K. A., Todd, B. K., Bernier, C., Love, J., & Leavens, D. A. (2006). Self-Awareness in Human and Chimpanzee Infants: What Is Measured and What Is Meant by the Mark and Mirror Test?. Infancy, 9(2), 191-219. doi:10.1207/s15327078in0902_6

Beebe, B. (2003). A Comparison of Meltzoff, Trevarthen, and Stern. Psychoanalytic Dialogues, 13(6), 777-804.

Beebe, B. (2014). My journey in infant research and psychoanalysis: Microanalysis, a social microscope. Psychoanalytic Psychology, 31(1), 4-25. doi:10.1037/a0035575

Brooks, R., & Meltzoff, A. (2008). Infant gaze following and pointing predict accelerated vocabulary growth through two years of age: A longitudinal, growth curve modeling study. Journal of Child Language, 35(1), 207-220.

Clayton, N. S., Dally, J. M., & Emery, N. J. (2007). Social cognition by food-caching corvids. The western scrub-jay as a natural psychologist. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 507–522. doi:10.1098/rstb.2006.1992

Corlett, P. R., Frith, C. D., & Fletcher, P. C. (2009). From drugs to deprivation: a Bayesian framework for understanding models of psychosis. Psychopharmacology, 206(4), 515–530. doi:10.1007/s00213-009-1561-0

Coyle, J. T. (2006). Glutamate and Schizophrenia: Beyond the Dopamine Hypothesis. Cellular and Molecular Neurobiology, 26, 363-382. doi:10.1007/s10571-006-9062-8

Hare, B., Call, J., Agnetta, B., & Tomasello, M. (2000). Chimpanzees know what conspecifics do and do not see. Animal Behaviour, 59(4), 771-785.

Hrdy, S. (2009). Mothers and Others: The Evolutionary Origins of Mutual Understanding. Boston: Harvard University Press.

Jaynes, J. (1976). The Origin of Consciousness in the Breakdown of the Bicameral Mind. Boston: Houghton Mifflin.

Mars, R. B., Sallet, J., & Rushworth, M. F. S. (2011). Neural Basis of Motivational and Cognitive Control. Cambridge: The MIT Press.

Meltzoff, A., & Moore, M. (1994). Imitation, memory, and the representation of persons. Infant Behavior and Development, 17(1), 83-99. doi:10.1016/0163-6383(94)90024-8

Povinelli, D., & Eddy, T. (1996). Chimpanzees: Joint visual attention. Psychological Science, 7, 129-135.

Piaget, J. (1969). The psychology of the child. Basic Books.

Rodman, P. S., & McHenry, H. M. (1980). Bioenergetics and the origin of hominid bipedalism. American Journal of Physical Anthropology, 52, 103-106. doi:10.1002/ajpa.1330520113
Schlenker, P., Chemla, E., Arnold, K., Lemasson, A., Ouattara, K., Keenan, S., . . . Zuberbühler, K. (2014). Monkey semantics: Two ‘dialects’ of Campbell’s monkey alarm calls. Linguistics and Philosophy, 37(6), 439-501. doi:10.1007/s10988-014-9155-7

Stern, D. (1971). A microanalysis of mother-infant interaction. Journal of the American Academy of Child Psychology, 19, 501-517.

Striano, T., & Reid, V. M. (2006). Social cognition in the first year. Trends in Cognitive Sciences, 10(10), 471-476. doi:10.1016/j.tics.2006.08.006

Terrace, H. S. (2005). The Simultaneous Chain: A New Approach to Serial Learning. Trends in Cognitive Sciences, 9(4), 202-210.

Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(42), 230-265.

Watanabe, H., & Mizunami, M. (2007). Pavlov’s cockroach: Classical conditioning of salivation in an insect. PLoS ONE, 2(6). doi:10.1371/journal.pone.0000529

Our Chi-val

We have a pathetically tiny corpus of texts that predate the Roman collapse, and fewer still from before the Bronze Age Collapse. The ones we do know generally survived because they were recopied more often than the rest. Most of the works we have from classical antiquity derive from copies made in Charlemagne’s era, and countless more are referenced that have never been found. Aristotle’s Poetics are a good example — the volume on tragedy survived by blind luck via an Arabic translation, and the volume on comedy is lost forever (ecce Eco). Confucius’ Analects only survived the Qin dynasty because someone hid a copy behind a wall, most of his contemporaries’ work having been burned and buried along with said contemporaries their own selves.

Human history is rife with examples of literary canons largely destroyed by the simple attrition of civilizations rising and falling in their usual messy ways. The things that survive various Yugas Kali are obsessively copied and recopied, like the Masoretic Text. Modern technology means many, many orders of magnitude more copies of modern data, which one might think bodes well for their survival. The new problem is encoding.

Linear A documents are indecipherable: the Mycenaeans adapted the script into Linear B to write their own archaic Greek, the Minoan language behind Linear A was lost, and the Greek dark ages that followed forgot writing altogether. So when you pick up a Linear A tablet, you lack the techniques required to read it, even though the medium has survived for millennia. Minoan decoder rings are not common these days.

Modern information storage does not survive for millennia, as Brewster Kahle would affirm. But say it did. If you find a thousand-year-old hard drive in the 31st century (unwittingly used as a brick in a 24th-century temple, say), how do you decipher it? Assuming you have reinvented the scanning electron microscope, I mean. Once you’ve destroyed a few hundred figuring out just how we stored data back at the dawn of the American Empire, all you have is a sequence of bits. You might, if you’re clever, figure out that ASCII corresponds to the Latin alphabet. You’d be much less likely to figure out, say, Unicode from first principles.

Now how many old DVDs would it take you to derive the DVD video format standard? Won’t be a problem for long; they’ll only last a couple hundred years in perfect storage, and you’d have to keep 200-year-old machinery in working order to recopy them… I’m sure you could build a DVD player from the specs, but they’re probably stored in whatever CAD format was in vogue at the time.

Lots of contemporary data, lots of the accumulating history of our time, is stored in ways that require special programs to decipher — proprietary file formats, faddish databases… Urbit, heaven forfend. These programs are stored on the same essentially ephemeral media as the data itself. Losing a text is not a matter of forgetting an alphabet over a thousand years, it’s a matter of forgetting an obsolete program over a decade. Ever tried to read a WriteNow file from 1987 you stored on floppy? Even better, the programs you need might run on architectures that haven’t existed for a long time; can you read TERNAC assembly?

Perhaps you can find an intact binary for that CAD program to read the specs to build a DVD player on its… original DVD installation media! Good luck finding the source code, that was a trade secret and there weren’t many copies. And then you need a computer that will emulate a computer old enough to emulate a computer old enough to run it, of course.

Continual recopying takes effort and energy. Even if there is no collapse — and I challenge anyone to find a Holocene millennium in which there was nothing that deserves the name collapse — much falls by the wayside. Most early silent films are already lost forever. Without Alan Lomax most early American folk music would be lost, and without the Internet Archive much of the early web… Empirically, most information anywhere gets discarded.

(A book that builds its own translator is called a genome.)

Class Rules

Americans: supplemental

American PCs get +3 charisma, -5 wisdom.

Characters start with an extra 1000gp, but must pay 1gp for every 1hp healed.

Mounts will consume 3x more feed than normal and may explode.

American rangers’ weapons do 2x damage, and cannot be stolen while wielder remains alive or before their corpse cools to <60° (Fahrenheit, of course).

American fighters gain +3 to-hit versus enemies associated with the color red (skins, coats, ideologies, &c.).

American mages may spend 2e9 gp to summon a legendary fire demon, who will raze target city or fortress and poison the surrounding land, but can only do so twice, as the third summoning will trigger Ragnarok.

American clerics may sacrifice freedom at any lawful shrine to obtain a security bonus.

American bards gain reputation at 4x the normal rate, and may roll a skill check to steal songs from other races.

American rogues past level 10 are considered too big to fail any skill checks.

The net is dank, and full of errors.