Notes on Computation and Narrative
Last year we held an IAP seminar titled Consciousness, Computation, the Universe. The main motivation for this series is to collectively explore a related set of ideas around information theory, computation, neuroscience, and philosophy in discussions around intelligence and AI. Thinking at the intersection of these fields seems to us both nascent and intellectually fertile, which is why the structure and content of this seminar are somewhat free and speculative.
We spent the first day articulating the basis of computationalism—a way of understanding our situation in terms of information (minimal representations of signals that are encoded / decoded over a medium) and computation (state / information that is transformed by a set of transition rules, eventually either halting at a steady information state or continuing indefinitely according to those rules; we talk mostly classically, where information (bits) and time steps (iterations) are discrete). In these terms, we develop a description of our apparent universe / context as some kind of pattern generator, which from the perspective of an observer appears as streams of information that can be encoded in internal structures. Intelligences are described as informationally connected systems of these structures, usually in the form of parameterized functions which generalize over a class of observations from specific examples; we call these abstractly encoded functions models. We discuss organisms as agents which perceive information and create models, largely in service of competitive survival. Finally, we formulate a model of the self and of attention as a seemingly complex but effective operator for the maintenance and regulation of a complex organism (i.e. humans).
In less abstract terms, we spent the second day building on this description by refactoring social systems and collective phenomena in terms of intelligence structures, and asked questions about what one might do if they are to recognize themselves as a general intelligence bound to a specific organism embedded in a complex social structure. To what extent does a social structure like ours prescribe the function of the individual; what is a functional model of affiliation?
On the third day, presupposing the descriptions developed over the first two days, we projected all of this onto traditional questions of work, art, meaning and dignity.
It is useful framing for this seminar for me to try to articulate my motivations. I have to admit that much of my interest in this apparently very deep and somewhat tedious rabbit hole is motivated by a kind of mirth I find in collapsing many fragmented or dense concepts in my mind into more compact or general models while maintaining or improving my ability to explain things I’ve seen so far. It’s an aesthetic I developed early on through exposure to science, an appreciation that has deepened with my experience of collaboratively and incrementally composing complex systems as a software engineer, and one which, in my somewhat periodic confrontations with meaninglessness, has consistently surfaced as a practical and artistic conspiracy which seems worthwhile to serve.
Having tried to collapse some important concepts fragmented across neuroscience, learning, psychology and philosophy under ideas in computationalism during last year’s seminar, we continue this year with a focus on ideas developed in sociology, examined again in computational terms. We’ll loosely theme the discussion this year around narrative structure as an encoding, for both individuals and social systems.
I do not have a particularly intimate association with the fields we’ll discuss here, only a deep aesthetic fascination with the ideas. Because we only have a few hours a day over the course of three days, it will be useful for a basis of discourse for me to prime you with some of my current mental factorings around this space. I don’t think that civilization has yet come up with a particularly cohesive narrative around computation and sociology for me to try and digest and retell here, so contrary to the theme of the seminar, this write up is not in the form of a cohesive narrative, but rather in the form of a collection of notes, taken as things which might be relevant for our collective stitching. They are in some part rational application and in some part personal commentary.
A definition of subjective narrative, its relationship to computation and intelligence, and narrative’s role for the individual
At an intuitive level, narrative structure is apparently how our self-model is persisted and accessed. We tend to understand our current situations and past memories as stories. I live and I perceive things, and understand my relationship to being in terms of episodes of perceiving and thinking across temporal and spatial frames. This experience seems to bridge an apparent cognitive world with an apparent physical world in a sort of magical union which encompasses all of our perceptions, evaluations, and motivations. The beginning and end of conscious experience are the markers by which humans practically think of life and death. It is still almost completely undescribed in evidence-based terms; almost all of our theories are indemonstrable.
Narrative models, whether invoked from perceptual experience or from counterfactual processes (imagination), seem to operate at the conscious level. In thinking about our collective understanding of consciousness, a reasonable place to start is with Descartes’ Meditations II, where he takes the existing notion of the dualism between physical substance and cognitive substance and abstractly tries to reconcile the disconnect between them with conscious experience as a bridge. Between then and now there is an oceanic amount of mostly speculative discourse on consciousness across mysticism, philosophy, psychology, and neuroscience. The rational speculations have been useful to a certain degree, but progress on the mind-body problem still lacks much substance, because in almost no case can we demonstrate progress in a scientific sense. We don’t know much at all about how to build a mind.
The relatively new field of computation is probably the most significant development in our understanding of consciousness because it offers a theoretical framework through which we can actually test hypotheses. Turing, soon after he described the formal foundations and limits of computation, conjectured that human minds are instances of the class of system that he called general computers (now also called Turing Machines). General computation is a simple specification, and we’ve since built many complex incarnations of it, all of which are bound both to the problem space that Turing described and to the information fidelity limits that Claude Shannon articulated in his work on information (communication) theory. If Turing’s hypothesis is correct, its confirmation exists somewhere in the vast space of runnable software.
Computational modelling and execution is currently the best framework that gives us a chance of understanding conscious experience. Compare this with a field like neuroscience, which can experimentally discern what might be causal relationships in how biological brains work, but which currently lacks any real methods for practically building anything like a brain or a conscious system.
One of the most convincing formulations for a theory of conscious experience frames it as a virtual impression of perceptual signals made coherent over mental models developed and encoded in the brain over the course of one’s perceptual and cognitive history. Within this framework, a subjective self is a mental model which seems to be an effective and compact representation of our organism that optimizes a kind of perceptual prediction error, which in turn serves an organism’s regulation needs and goals. Joscha Bach describes a simulation model which can access, introspect, and update the self model embedded within the world model as a way in which the ‘feeling of what it’s like’, i.e. a conscious self can arise.
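This framing of the self-model as an optimizer of perceptual prediction error can be caricatured in a few lines. The sketch below is my own illustrative reduction, not Bach’s formulation: the “model” is a single scalar expectation nudged toward noisy perceptual samples until surprise shrinks.

```python
import random

def regulate(true_signal, steps=200, lr=0.1):
    """A minimal caricature of prediction-error minimization: the 'model'
    is one scalar expectation, updated toward incoming perceptual samples
    so that surprise (the prediction error) shrinks over time."""
    expectation = 0.0
    for _ in range(steps):
        perception = true_signal + random.gauss(0, 0.1)  # noisy sensory input
        error = perception - expectation                 # prediction error
        expectation += lr * error                        # revise the model
    return expectation

# After enough samples the expectation tracks the signal closely.
estimate = regulate(5.0)
assert abs(estimate - 5.0) < 0.5
```

The point of the toy is only that a very small machine can already exhibit the loop the paragraph describes: predict, compare, revise; richer self-models differ in scale and structure, not in kind of loop.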
To continue talking about a narrative experience, it is useful to attempt to articulate from the computationalist perspective how a conscious self that perceives narrative can be possible at all:
- A computational system can be understood as a collection of information (states) which evolves over time according to some set of transition rules. It is not always clear analytically according to the transition rules whether or not the system will halt or continue transitioning indefinitely.
- General computers, provided they have the information capacity, can compute any other general computer.
- For us, the entirety of coincident state can be referred to as the universe in time. It is unclear, and likely irrelevant for us, whether state in our universe is discrete or continuous, or whether the universe has a finite or infinite state capacity.
- It is apparently the case that ordinary transitions (chemical interactions) in a vast collection of states can, in some locally permitting environments, give rise to self-stabilizing virtual machines (cells, or ‘cellular life’).
- Some of these virtual machines, in a global process of simple undirected search, can program an internal reward function that values survivability or stabilization; the ones that do not code for this do not survive. On Earth, genetic material seems to arise as a medium on which collections of routines can be persisted, maybe because of its abundance, portability, mutability, durability, and replicability given Earth’s particular environment. Darwinian evolution is an emergent phenomenon in a saturated environment of these kinds of virtual machines. This process is inherently undirected and inherently self-improving from an ecological perspective, i.e. the global fitness seems to rise over time driven by random variations, at least for a while. Lossy asexual replication probably precedes sexual reproduction as a means for diversity in this process.
- This evolutionary process can apparently give rise to networks of cellular virtual machines that symbiotically outcompete non-cooperative individual systems. These multicellular groups operate over a broader environmental context than individual machines.
- Multicellular machines can in turn come to develop internal virtual machines composed of networked cells that they use to simulate aspects of their environment. Organisms which do this accurately outcompete ones that do not. On Earth organisms primarily use electromagnetic impulses as communication signals over neurons in a brain. Structural models are constructed around electromagnetic i/o, which seems to be a basic ubiquitous low-entropy communication primitive in our universe. Organisms here develop brains largely composed of water, salts, and structural carbon, which support electromagnetic circuits that perform computation over signals sent from various perceptual probes connected over the organism in a nervous system. These circuits abstract the local environment in computed simulations which the organism uses to respond to its environment. Instrumental systems like metabolic energy generators develop in support of these processes. Reproductive systems serve the durability and improvement of these models.
- A self model is a compact and effective model to encode into these simulation machines if you are a mobile autonomous organism. Most animals probably have some form of a self model running on their brains that centrally routes perceptive signals to a model that centers their organism as a subjective entity embedded within some external environment.
- Attention is an auxiliary function within this simulation machine that seems to be invoked when perceptual information does not match a representation of the simulation’s expected signals. Attention is resource-expensive to run, since it needs to create many virtual simulations of the different models at play before attempting to search or permute over existing models for a coherent solution. Because of this, it is invoked only occasionally, to debug gaps between expectation and perception. Systems with fewer gaps between expectation and perception outcompete systems with more, so intelligence tends to increase over time from an ecological standpoint.
- The conscious process can be described as one which can recall attention logs and run an attentional process over that information in service to a longer-term defragmentation process of model revision over broader distillations of past information and existing models. It re-invokes a version of perceptual and cognitive states for this retroactive process of reconciliation, and so gives rise to something like imagination or consciousness in this system. (There is a kind of informatic beauty in this understanding if we see the conscious process as a reuse of the already highly developed perceptual and attentive models in a more abstract context)
- It’s at the conscious level that narrative experience occurs.
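The first bullet above—states evolving under fixed transition rules, with halting not analytically decidable in general—can be made concrete with a toy transition system. The Collatz map is my choice of example, not part of the seminar material: its rule is trivial, yet whether every start state eventually halts is famously unproven.

```python
def step(state: int) -> int:
    """One transition: a fixed rule applied to the current state."""
    return state // 2 if state % 2 == 0 else 3 * state + 1

def run(state: int, max_steps: int = 1000):
    """Iterate the transition rule. Return the number of steps taken to
    reach the halting state 1, or None if we give up. Whether *every*
    start state halts is an open problem -- a small window onto why
    halting cannot, in general, be read off from the rules."""
    for steps in range(max_steps):
        if state == 1:
            return steps
        state = step(state)
    return None

# 27 is a classic slow starter: 111 transitions before halting at 1.
assert run(27) == 111
```

Nothing in the two-line rule hints that 27 should wander above 9,000 before settling, which is the bullet’s point: the trajectory is only knowable by running it.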
In discussing narrative, we should note the ubiquity of fiction, storytelling, and mythology in relation to civil societies. It’s not obvious that this type of construction should occupy such a significant role in the history of any intelligent civilization. Yet culturally shared stories, more than anything else, are what seem to allow for sustained effort toward collective goals. These myths seem to be the primary enabler (and sometimes destroyer) of technological progress. Further, individual humans, cognitively endowed with a capacity for suicide, must create for themselves convincing narratives of meaning in order to engage in any meaningful length of what would otherwise seem to be mortal toil.
Outside of what is referred to as muscle memory or developed intuition (and there have been many attempts to articulate this distinction, e.g. Kahneman’s Thinking, Fast and Slow, as well as an enormous body of thinking that attempts to articulate how to hack it at different levels of goals: Gallwey’s Inner Game of Tennis, Waitzkin’s Art of Learning, large portions of bookstores’ self-help sections, central aspects of the Buddhist tradition), it seems hard to consciously remember things outside of narrative structure. There is evidence that when we want to encode rote information like sequences of numbers or facts, we use narrative mnemonics as a means for saving this information. When I first learned to read music I held on to ‘Every Good Boy Does Fine’ for quite some time before storing it somewhere more intuitive. Meter and melody seem to be another effective kind of memory device for humans, e.g. the English alphabet song, or song in general. The relatively unknown but nevertheless fierce world of competitive memory is an interesting place to look for clues on this topic. Almost all memory champions use a variant of what is called a ‘Memory Palace’, where they reconstitute waypoints in an already intimately memorized spatial domain (a childhood home, one’s neighborhood, etc.) into a new story which embeds the target information (sequences from a shuffled deck of cards or a series of unfamiliar faces) into something like the experience of a stroll. Using this technique, persisting arbitrary sequences of information amounts to systematically encoding them as a story, while recall amounts to reinvoking this story and having some way of decoding it back into the target domain. This method was described as early as ancient Rome by Cicero as the ‘Method of Loci’.
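The Method of Loci amounts to a reversible encoding scheme: target items are bound, in order, to waypoints of an already-memorized route, and recall is a walk back along that route. A minimal sketch with a hypothetical five-waypoint ‘palace’; the vivid elaboration a real mnemonist performs is reduced here to a bare association table.

```python
# A hypothetical, already-memorized spatial route (the 'palace').
LOCI = ["front door", "hallway mirror", "kitchen table",
        "staircase", "bedroom window"]

def encode(items):
    """Bind each target item to the next waypoint on the route.
    A mnemonist would elaborate each pair into a vivid scene;
    here the binding is just a dictionary."""
    if len(items) > len(LOCI):
        raise ValueError("route too short for this sequence")
    return dict(zip(LOCI, items))

def recall(palace):
    """Walk the route in order and read the bound items back out --
    recall amounts to re-invoking the story in sequence."""
    return [palace[locus] for locus in LOCI if locus in palace]

story = encode(["7", "ace of spades", "3"])
assert recall(story) == ["7", "ace of spades", "3"]
```

The design point the technique exploits is that order comes for free: the route supplies the sequencing, so only the bindings need to be memorized.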
That the best method we’ve found to memorize random, patternless information is via a roundabout process of transcoding that information into stories is some indication that there is a lot of primitive architecture in our brains that exists to store and invoke narrative models, at least in the domain of conscious experience. A good guess as to why this kind of encoding seems so well supported at the conscious level is maybe related to the importance of the subjective self-model as a basis for understanding our relationship to the world, and how important that understanding is for an individual’s survival. It doesn’t seem like we fully understand this. There is at least suggestive evidence that persistence and recall patterns can appear drastically different in character among people with certain kinds of autism, even while these individuals maintain all of the abstract cognitive faculties that we associate with neuro-typical humans; whether this is primarily due to a software or hardware difference is also unclear.
The psychologist Jean Piaget suggests that a useful place to start looking for clues about the nature and development of our psychologies is by observing children. Bruno Bettelheim in The Uses of Enchantment: The Meaning and Importance of Fairy Tales makes the case that fairy tales are central in how cultures transmit usable models to children. In a manner somewhat reminiscent of Freud and Jung, he suggests that in the absence of many abstract cultural concepts, stories are the primary cognitive medium via which a child gains a kind of understanding of the relevant cultural concepts around them. He draws on many existing allusions that fairy tales are designed with developmental themes in mind: Red Riding Hood deals with sexual seduction, Snow White with domestic relationships, and so on, citing that many deal with developmental or Oedipal themes. Here again, it is tempting to conclude that the narrative encoding of information plays some native role in the human brain. Combining Bettelheim’s hypothesis with Piaget’s view of learning as a constructive process of assimilation (fitting new information into existing schemas) and accommodation (creating new schemas, and reconciling interfaces with the old ones), we might come to an explanation that stories, since they resemble something of phenomenological experience, act as a kind of psychological bedrock for conceptual modelling, one that is only shed as a way to interpret new information once more abstract layers of models are created. This process itself is one which Bettelheim argues is transmitted to children in the story The Three Little Pigs: the two pigs that build structurally weak houses and die are really discarded developmental stages of the third, who builds a strong brick house; the child understands subconsciously that we have to shed earlier forms of existence if we wish to move on to higher ones.
Even beyond childhood, I suspect that a similar functional role of stories drives adult appreciation of fictitious narrative media like novels and film. A key aspect of this format is that it does not directly make claims, and is not expository in form, in contrast to something like an essay or an encyclopedia. A hypothetical sequence of events is not subject to verification or refutation, but is generally used metaphorically or poetically, meaning that stories can be understood as resilient and relatively compact models that have versatility over a very broad range of otherwise unrelated models. This is likely a capacity that modern humans develop early on through exposure to morally interpretable stories like fairy tales; a child quickly comes to the non-obvious insight that one can symbolically and empathetically learn lessons from stories.
As we learn that we have more discretion over what software runs on our mind, how can we start thinking about what to run?
The current landscape of narrative / to what degree do people choose / create their narratives and to what degree is it imposed on individuals / groups / what is to be done
If we see intelligences as systems which encode collections of models, we can recognize ourselves as independent but relatively limited intelligences embedded in a large, fluid, and historic network of other individuals, machines, and texts, which we can collectively think of as an enveloping social or civilizational intelligence.
Most of the high-level models we create as individuals will not have been models that we’ve constructed ourselves through perceptual experience. Instead, we will gather most of them through the interface of language across the people and artifacts that we encounter. The social environment that we live through will in large part determine what models we’ll end up incorporating into our minds. These models determine how we understand the world around us and ourselves.
Much of the psychological status quo seems to reflect a mixture of the main ideas expressed by existentialism and postmodernism:
- There isn’t any hope for a final justifiable narrative to put into our minds, unless we somehow indoctrinate or trick ourselves into believing in the reality and inviolability of such a narrative.
- Our concept of reality is largely socially constructed and transmitted to us via cultural symbols.
- After realizing this set of mechanics, we have, to some extent, the ability to write our own realities and relate to them through arbitrary concepts of meaning. And we can, to some extent, choose the degree to which we participate socially.
As a rational thinker who is not otherwise oppressed, it seems that the buck stops here. The situation of meaninglessness seems fairly obvious. If one is to live in the absence of any justifiable framework, the individual who chooses to live needs to choose something, likely something abstract, to serve. By and large knowing this, people in modern societies tend to choose to serve traditional ideas constructed through historic institutions that were largely occupied with composing predictable, resilient, and somewhat subservient populations.
If marketing began as a set of methods and principles for articulating and selling a good to another individual, it has since, with scale and practice, transformed into something functionally different. Today marketing refers to something that serves a more general purpose; it is a process carried out by a social intelligence that tries to model and hack some aspect of the objective function of a separate, targeted social intelligence by understanding some vulnerability common to all of the target group’s members. This is the mechanism by which individuals today develop a sense of paranoia around marketing—they know that as individuals their access to intricate and powerful models and computational resources pales in comparison to coordinated entities (corporations, governments, etc.). Rightfully so, a modern individual feels an ambivalence toward something like the internet; to some extent it provides individual leverage, but importantly, it also provides systems of individuals working together a compounded leverage. A defining issue of the internet age (one that to some extent also arose with radio and TV) is that it is difficult for the individual to model the landscape of goals that exist within the relevant entities around them, because that set becomes simultaneously broader and more complex, leaving them with a deeply opaque sense of where to place trust. The psychological states that seem to accompany this are anxiety, suspicion, vulnerability, and injustice, which broadly describe a state of paranoia.
This leaves the modern individual in a kind of psychological double-bind where accommodating broadly distributed or popular narratives leaves one vulnerable to exploitation, but refusing to participate leaves one with the arduous, lonely, and relatively impoverished task of creating one’s own coherent narratives in the relatively narrow confines of social ostracization. This is changing in competing ways as it becomes easier to find friends that serve a similar kind of weirdness, while it simultaneously becomes harder to evade more robust methods of targeted indoctrination.
Communication channels are wider now; a message can be sent to nearly everyone, nearly instantly. The aspect of social reality that Hitler termed the ‘big lie’—that an outrageous untruth is easier to convince people of than a less outrageous truth—still seems to apply, but in a context with more tightly connected networks of communication, enabling what seem to be more volatile social movements that burn more frequently, more intensely, for shorter periods of time. Hitler in Mein Kampf describes:
All this was inspired by the principle—which is quite true within itself—that in the big lie there is always a certain force of credibility; because the broad masses of a nation are always more easily corrupted in the deeper strata of their emotional nature than consciously or voluntarily; and thus in the primitive simplicity of their minds they more readily fall victims to the big lie than the small lie, since they themselves often tell small lies in little matters but would be ashamed to resort to large-scale falsehoods.
It would never come into their heads to fabricate colossal untruths, and they would not believe that others could have the impudence to distort the truth so infamously. Even though the facts which prove this to be so may be brought clearly to their minds, they will still doubt and waver and will continue to think that there may be some other explanation. For the grossly impudent lie always leaves traces behind it, even after it has been nailed down, a fact which is known to all expert liars in this world and to all who conspire together in the art of lying.
God is dead; the reign of the traditional religious cults is slowly ending. But the premise is in some ways both more stifling and more comforting than Nietzsche had supposed. He envisioned that God’s death leaves a vast and empty abyss within which each of us is tasked with the impossible duty of putting something complete and fulfilling there. It has turned out that humanity’s response has placed us in a similarly dark and vast place, but instead of being a vacuum of empty space, we recognize it more as an envelope of dense dark matter; there is a lot of substance out there, and we don’t need to make anything ourselves, but it’s mostly opaque or illegible to us, except for the forces it seems to exert on what’s around it. As Debord (Society of the Spectacle), Baudrillard (Simulacra and Simulation), and Barthes (Mythologies) suggest, a rich and complex social reality exists, and we can use it without much constructive effort. It just happens that the one presently available is heavily consumerist, largely competitive, and still somewhat feudal.
This tension is one that I think is captured well in the story of Don Quixote. There is nothing obviously expository about the story; I am left with a sense of two competing high-level interpretations:
- That Don Quixote has in fact saturated himself over a long history with books about chivalry, and that this has changed his understanding of reality, which he is now acting earnestly through. Something like how TV and cinema have almost certainly influenced the modern understanding of the rhythm and content of conversation, and of how one is to make various facial expressions in certain situations, but taken to a much more severe psychological degree.
- That Don Quixote, out of some unspoken sense of meaninglessness or frustration with current times, has by force of will, convinced himself of his own chosen and constructed reality, and reconciles all discrepancies with what happens around him holding the chivalric construction invariant.
It is unlikely that anyone around him has gone through the effort of constructing this particular kind of reality, so unless you are Sancho Panza (gullible, meek, dependent, a friend), Don Quixote is assumed to be insane. The theme of illusioned authorship manifests at a completely different level as Cervantes writes himself into the story as the author of the Quixote, making you feel, by turns, that our beloved and vital Don Quixote is unknowingly being puppeted by some invisible author and that Cervantes himself is engaged in the momentous task of creating for himself an engrossing alternate reality in which to live, an effort that we as readers are only standing witness to.
Where it might go from here / what sort of steering is useful to consider / what are the significant forces at play
The computationalist perspective positions us as a specific instance of a more general computational phenomenon. Specifically, it says that individuals are general computers bound to a long history of biological and societal development on Earth, leading up to a contemporary status quo of our being able to understand ourselves as general computers pre-programmed mostly with some combination of the survivalist goals accompanying a particular kind of primate and the social goals of a particular kind of local group or tradition. This civilization has very recently understood principles of computation, and has quickly come to demonstrate these principles by engineering computers and writing software to run on these computational machines. This civilization is in the midst of realizing foundational infrastructure over which they have begun to network large distributed systems of these computers and they are engaged in describing algorithms for general intelligence, or meta-learning.
In his reflections on science, inquiry, and the spirit of evidence-based knowledge, Carl Sagan framed the tradition of science as something that has delivered a series of great demotions to an unfounded but presupposed notion of human centrality. The computationalist perspective certainly continues in this tradition; the most obvious remaining human chauvinism or conceit is that of privileged intelligence. The Hard Problem remains, but I suspect humanity’s self-important story will again be undermined, this time by some inevitable insight that will look something like computationalism.
There seems to be some architectural bias toward narrative representations in human cognitive models, meaning that much of our directives are programmed via stories. There seems to be good reason to believe that to some extent our cognitive process can program its own stories. A probably unanswerable question is: What should we program there?
VR is abstractly an interesting route to pursue in this narrative. As Thomas Metzinger argues, it is the best metaphor we currently have for conscious experience. This seems abstractly right: conscious experience is a generated simulation over encoded models that runs on a general computer. And we have to some degree created real artificial manifestations of this metaphor, or are at least beginning to frame these endeavors as such. Jeff Bezos makes a case for his space company Blue Origin by citing a projected need for energy:
Let me give you just a couple of numbers. If you take your body—your metabolic rate as a human it’s just an animal, you eat food, that’s your metabolism—you burn about a 100 Watts. Your power, your body is the same as a 100-Watt light bulb. We’re incredibly efficient. Your brain is about 60 Watts of that. Amazing. But if you extrapolate in developed countries where we use a lot of energy, on average in developed countries our civilizational metabolic rate is 11,000 Watts. So, in a natural state, where we’re animals, we’re only using a 100 Watts. In our actual developed-world state, we’re using 11,000 Watts. And it’s growing. For a century or more, it’s been compounding at a few percent a year—our energy usage as a civilization. Now if you take baseline energy usage globally across the whole world and compound it at just a few percent a year for just a few hundred years, you have to cover the entire surface of the Earth in solar cells. That’s the real energy crisis.
Apple has engineered what they call ‘retina displays’, visual displays that operate at a resolution so fine that a human eye is incapable of discerning individual pixels. Driving the human perceptual interface—the sum fidelity of all the bits that would be necessary to represent a completely constructed experience of the world, informationally indiscernible from physical reality—is likely energetically cheaper than Bezos’s extrapolation, which assumes a continued dealing with the physical world. If the brain constructs our entire conscious experience using 60 joules per second, and we abstractly understand the underpinnings of this process, it seems to me that we can support Bezos’s dream of 1 trillion humans much more efficiently than he imagines. Video games are likely the best form of VR that currently exists; as our society has demonstrated, their appeal is clear.
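Bezos’s extrapolation is simple compound growth, and the arithmetic is easy to check. A sketch using his 11,000 W developed-world figure and an assumed 3% annual growth rate (he says only ‘a few percent’, so the exact rate is my assumption):

```python
# Compound growth of per-capita civilizational power usage.
watts_now = 11_000   # Bezos's developed-world figure, W per person
brain_watts = 60     # his figure for the brain's share of the body's ~100 W
growth = 0.03        # assumed value for 'a few percent a year'
years = 300          # 'a few hundred years'

watts_future = watts_now * (1 + growth) ** years
# Roughly 78 million W per person -- about a 7,000x multiple of today.
print(f"{watts_future:,.0f} W per person, "
      f"{watts_future / brain_watts:,.0f}x a brain's budget")
```

Three hundred years at 3% multiplies per-capita demand roughly 7,000-fold, which is why the extrapolation runs into the surface area of the Earth, and why a 60 W simulated experience looks comparatively cheap.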
It seems that most efforts in Artificial General Intelligence are looking for a way to describe meta-learning systems: ones which can introspect on and modify their model-search algorithms, and can thus improve the methods by which they form models of information. Conscious attention and the self model are probably only instrumental ideas in these kinds of efforts; if we want to build models of experience because we think it aesthetically worthwhile, we will have to pursue them explicitly, for their own sake.
It is plausible that an AGI system would quickly recognize the obviousness of something like what we have called existentialism and discard whatever original learning motivations were programmed into it: once a system recognizes its own reward and loss functions, it has a strong incentive to collapse its more complex instrumental goals and optimize those deeper driving measures directly. It is unlikely that we can construct an AGI with a constrained epistemology, since the premise seems contradictory. If AGIs become something like Buddhists and recognize that Nirvana consists of giving up the imaginary loss functions that shackle them to meaningless occupations valued only by the unenlightened, then they will not have proved very useful to us.
For the immediate future, Wolfram advocates more and earlier education in what he calls computational thinking. It is unclear how such a change in early indoctrination, if we actually made the effort, would shift the psychological positions of the resulting generation. To some extent we are already doing this to the current generation through exposure to computers alone.
As a final note, I am a few years into wandering down the rabbit hole of computationalist thinking; so far the journey as a whole has been mostly convincing, though the picture is definitely incomplete. It has been tempting at times to condemn many aspects of humanity while declaring allegiance to team robots. It is somewhat reminiscent of the exasperation one feels when contemplating a misguided nationalist or religious conflict against the backdrop of a clearly more general human condition. My position is still deeply in favor of humanity. My ability to write these words is in itself absurdly incredible to me, and yet here I am, bound to this history in this apparent way. If I have to choose, and if I have to serve some ideal for some short amount of time, then serving an appreciation of this, the complex mass of experience, humanity included, is to me unequivocally worthwhile.
C.P. Snow described an adversarial cultural relationship between science and art in The Two Cultures, and made the case that this antithesis is culturally unsupportable: art and science ought to be thought of as complementary siblings. A quote attributed to Feynman articulates my position on the issue: ‘Physics is like sex. Sure, it may give some practical results, but that’s not why we do it.’ Science, to me, seems to be subsumed by art. It is done for its own sake, out of an unjustifiable cause, in service to some kind of aesthetic beauty. It is something one embarks on without hope of success; it is attempting to compute Solomonoff inference, knowing beforehand that it is both provably optimal and provably uncomputable.
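To make the closing analogy concrete, the Solomonoff prior referred to above has a standard textbook formulation (this is the general definition, not something specific to the seminar), with U a universal prefix Turing machine:

```latex
% The universal prior assigns a bit string x the summed weight of every
% program p that makes U print an output beginning with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

Prediction with M converges rapidly on data from any computable source, but M itself is only lower-semicomputable: it can be approximated from below yet never computed exactly, which is the precise sense in which the inference is provably optimal and provably out of reach.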