Introduction

Antikythera: For a new speculative philosophy of computation (which is to say of life, intelligence, automation, and the compositional evolution of planets)

Sciences are born when philosophy learns to ask the right questions; their potential is suppressed when it does not. Today the relationship between the Humanities and Science is one of critical suspicion, a state of affairs that retards not only the development of philosophy but also that of new sciences to come. And so, for Antikythera, the most important philosophical project of the next century is based on understanding the profound implications of new scientific and technological accomplishments. This means not just applying concepts but inventing them. It puts our work in a slightly heretical position in relation to the current orientations of the Humanities, but one well placed to develop the school of thought for the speculative philosophy of computation that will frame new and fertile lines of inquiry for a future where science, technology and philosophy convene within whatever supersedes the Humanities as we know them.

New things outrun the nouns available to contain them.

There are historical moments in which humanity’s speculative imagination far outpaces its real technological capacities. Those times overflow with utopias. At others, however, “our” technologies’ capabilities and implications outpace the concepts we have to describe them, let alone guide them. The present is more the latter than the former. At this moment, technology, and particularly planetary-scale computation, has outpaced our theory. We face something like a civilization-scale computational overhang. Human agency exceeds human wisdom. For philosophy, it should be a time of invention.

Too often, however, the response is to force comfortable and settled ideas about ethics, scale, polity, and meaning onto a situation that not only calls for a different framework, but is already generating a different framework.

The needed response goes well beyond applying inherited philosophical concepts to the present day. The goal, as we joke, is not to ask “What would Kant make of driverless cars?” or “What would Heidegger lament about Large Language Models?” but rather to allow for the appearance and cultivation of a new school of philosophical/technological thought that can both account for the qualitative implications of what is here and now and contribute to its compositional orientation. The present alternatives are steeped in sluggish scholasticism: asking if AI can genuinely “think” according to the standards set forth by Kant in Critique of Pure Reason is like asking if this creature discovered in the New World is actually an “animal” as defined by Aristotle. It’s obvious that the real question is how the new evidence must update the category, not how the received category can judge reality.

A better way to “do Philosophy” is to experiment actively with the technologies that make contemporary thought possible and to explore the fullest space of that potential. Instead of simply applying philosophy to the topic of computation, Antikythera starts from the other direction and produces theoretical and practical conceptual tools (the speculative) from living computational media. For the 21st century, the instrumental and existential implications of planetary computation challenge how planetary intelligence comes to comprehend its own evolution, its predicament and its possible futures, both bright and dark.

Computation is born of cosmology

The closely bound relationship between computation and planetarity is not new. It begins with the very first artificial computers (we hold that computation was discovered as much as it was invented, and that the computational technologies humans produce are artifacts that make use of a natural phenomenon).

Antikythera takes its name from the Antikythera mechanism, discovered in 1901 in a shipwreck off the Greek island of Antikythera and dated to roughly 200 BCE. This primordial computer was more than a calculator; it was an astronomical machine: mapping, tracking and predicting the movements of stars and planets, marking annual events, and guiding its users on the surface of the globe.

The mechanism not only calculated interlocking variables, it provided an orientation of thought in relation to the astronomic predicament of its users. Using the mechanism enabled its user to think and to act in relation to what was revealed through the mechanism’s perspective.

This is an augmentation of intelligence, but intelligence is not just something that a particular species or machine can do. In the long term it evolves through the scaffolding interactions between multiple systems: genetic, biological, technological, linguistic, and more. Intelligence is a planetary phenomenon.

The name Antikythera refers more generally to computational technology that discloses and accelerates the planetary condition of intelligence. It names not one particular mechanism but a growing genealogy of technologies, some of which, like planetary computation infrastructures, we not only use but also inhabit.

Computation is calculation as world ordering; it is a medium for the complexification of social intelligence.

Computation takes the form of planetary infrastructure that remakes philosophy, science, and society in its image.

How does Antikythera define computation? For Turing it was a process defined by a mathematical limit of the incalculable, but as the decades since his foundational papers have shown, there is little in life that cannot be modeled and represented computationally. That process, like all models, is reductive. A map reduces territory to an image but that is how it becomes useful as a navigational tool. Similarly, computational models and simulations synthesize data in ways that demonstrate forms and patterns that would be otherwise inconceivable.

As a rules-based, output-generating operation, computation has general and specific definitions, some of which include biological, analogical processing of very local information, others of which include Universal Turing Machines, general recursive functions and the defined calculations of almost anything at all.
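To make “rules-based, output-generating operation” concrete, here is a minimal sketch, in Python, of a Turing-style machine: a finite rule table applied to a tape, one of the specific definitions named above. The rule table (a unary increment) and every name in it are illustrative inventions, not something drawn from the text.

```python
# Minimal sketch of a rules-based, output-generating operation: a tiny Turing-style
# machine. The rule table below (unary increment: append one '1') is illustrative only.
def run_turing_machine(tape, rules, state="start", head=0, blank="_", halt="halt"):
    cells = dict(enumerate(tape))                    # sparse tape: position -> symbol
    while state != halt:
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]  # look up the applicable rule
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

rules = {
    ("start", "1"): ("1", "R", "start"),  # scan right over the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}

print(run_turing_machine("111", rules))   # -> "1111"
```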

Antikythera presumes that computation was discovered as much as it was invented. It is less that natural computation works like modern computing devices than that modern computing devices and formulations are quickly evolving approximations of natural computation: genetic, molecular, neuronal, and so on.

Computation as a principle may be near universal, but computation as a societal medium is highly malleable. Its everyday affordances are seemingly endless. Computational technologies evolve, however, and societies evolve in turn. For example, in the decades to come, what is called “AI” may be not simply a novel application of computation but its primary societal-scale form. Computation would then be not just an instrumentally focused calculation but the basis of widespread non-biological intelligence.

Through computational models we perceive existential truths about so many things: human genomic drift through history, the visual profile of astronomical objects millions of light years away, the extent of anthropogenic agency and its climatic effects, the neurological foundations of thought itself. The qualitative profundity of these begins with a quantitative representation. The math discloses reality, and reality demands new philosophical scrutiny.

Allocentrism in philosophy and engineering

Computation is not just a tool, any more than language is just a tool. Both language and computation are constitutive of thought and of the encoding and communication of symbolic reasoning. Both evolve in relation to how they affect and are affected by the world, and yet both retain something formally unique. That machine intelligence would evolve through language as much as language will, in the foreseeable future, evolve through machines suggests a sort of artificial convergent evolution. More on that below.

What does Antikythera mean by “computation,” and what is its slice of that spectrum? Our approach is slightly off-kilter from how philosophy of computation is, at present, usually practiced. Philosophy, in its Analytic mode, interrogates the mathematical procedure of computation and seeks to qualify those formal properties. It bridges Logic and the philosophy of mathematics in often exquisitely productive but sometimes arid ways. Our focus, however, is less on that formal uniqueness than on the mutually affecting evolution of computation and world, and how each becomes a scaffold for the development of the other.

For its part, Continental Philosophy is suspicious of, dismissive of, and even hostile to computation as a figure of power, reductive thought, and instrumental rationality. It devotes considerable time to often obscure, prosy criticism of all that it imagines computation to be and do. As a rule, both spoken and unspoken, it valorizes the incomputable over the computable, the ineffable over the effable, the analogue over the digital, the poetic over the explanatory, and so on.

Our approach is also qualitative and speculative, but instead of seeing philosophy as a form of resistance to computation, it sees computation as a challenge to thought itself. Computation is not that which obscures our view of the real but that which has, over the past century, been the primary enabler of confrontations with the real that are sometimes startling and even disturbing but always precious. This makes us neither optimists nor pessimists, but rather deeply curious and committed to building and rebuilding from first principles rather than to commentary on passing epiphenomena.

Our philosophical standpoint is allocentric more than egocentric, “Copernican” more than phenomenological. The presumption is that we will, as always, learn more about ourselves by getting outside our own heads and perspectives, almost always through technological mediation, than we will by private rumination on the experience of interiority or by mistaking culture for history. That said, even from an outside perspective looking back upon ourselves, we (“humans”) are not necessarily the most important thing for philosophy to examine. The vistas are open.

Most sciences grew out of philosophy and did so by stepping outdoors and playfully experimenting with the world as it is. Instead of science composing new technologies to verify its curiosity, the inverse is perhaps even more often the case: new technologies devised for one purpose end up changing what is perceivable and thus what is hypothesized and thus what science seeks. The allocentric turn does not imply that human sapience is not magnificent, but it does locate it differently than it may be used to. It is true that homo sapiens is the species that wrote this and presumably is reading this (the most important reader may be a future LLM), but we are merely the present seat of the intensification of abstract intelligence, which is the real force and actor. We are the medium not the message. If Antikythera might eventually contribute to the philosophical explorations of what in due time becomes a science of understanding the relationship between intelligence and technology (and life) as intertwined planetary phenomena –to ask the questions that can only be answered by such a science– then we will have truly succeeded.

Planetary Computation

The Anthropocene is a second order concept derived from computation

I have told this story many times. Imagine the famous Blue Marble image as a movie, one spanning all 4.5 billion years of Earth’s development. Watching this movie on super fast-forward, one would see the planet turn from red to blue and green, see continents form and break apart, see the emergence of life and an atmosphere, and in the last few seconds you would see something else remarkable. The planet would almost instantaneously grow an external layer of satellites, cities and various physical networks, all of which constitute a kind of sensory epidermis or exoskeleton. In the last century, Earth has grown this artificial crust through which it has realized incipient forms of animal-machinic cognition with terraforming-scale agency. This is planetary computation. It is not just a tool, it is a geological cognitive phenomenon.

In short, it is this phenomenon –planetary computation defined in this way– that is Antikythera’s core interest. To be more precise, the term has at least two essential connotations: first, as a global technological apparatus and, second, as all the ways that apparatus reveals planetary conditions otherwise unthinkable. For the former, computation is an instrumental technology that allows new perceptions of and interactions with the world; for the latter, it is an epistemological technology that shifts fundamental presumptions about what is possible to know about the world at all.

For example, the scientific idea of “climate change” is an epistemological accomplishment of planetary-scale computation. The multiscalar and accelerating rate of change is knowable because of data gleaned from satellites, surface and ocean temperatures, and, most of all, the models derived from supercomputing simulations of the planetary past, present and future. As such, computation has made the contemporary notion of the Planetary and the ‘Anthropocene’ conceivable, accountable, and actionable. These ideas, in turn, established that over the past centuries anthropogenic agency has had terraforming-scale effects. Every discipline is reckoning in its own way with the implications, some better than others.

As the Planetary is now accepted as a “Humanist category,” it is worth emphasizing that the actual planets, including Earth, are rendered as stable objects of knowledge that have been made legible largely through first order insights gleaned from computational perceptual technologies. It becomes a Humanist category both as a motivating idea that puts the assembly of those technologies in motion and later as a (precious) second order abstraction derived from what they show us.

The Planetary is a term with considerable potential philosophical weight but also a lot of gestural emptiness. It is, as suggested, both a cause and effect of the recognition of “the Anthropocene.” But what is that? I say recognition because the Anthropocene was occurring long before it was deduced to be happening. Whether you start at the beginning of agriculture 10,000 years ago or the industrial revolution a few hundred years ago or the pervasive scattering of radioactive elements more recently, the anthropogenic transformation of the planet was an “accidental terraforming.” It was not the plan.

After years of debate as to whether the term deserves the status of a proper geologic epoch, the most recent decision is to identify the Anthropocene as an event, like the Great Oxidation Event or the Chicxulub impact event. This introduces more plasticity into the concept. Events are unsettled, transformative, but not necessarily final. Anthropogenic agency can and likely will orient this event to a more deliberate conclusion. For its part, computation will surely make that orientation possible just as it made legible the situation in which it moves.

Computation is now the primary technology for the artificialization of functional intelligence, symbolic thought and of life itself.

Computation is, for us, not only a formal, substrate-agnostic, recursive calculative process; it is also a means of world-ordering. From the earliest marks of symbolic notation, computation was a foundation of what would become complex culture. The signifiers on clay in Sumerian cuneiform are known as a first form of writing; in fact they are indexes of transactions, an inscriptive technique that would become pictograms and, over time, alphanumeric writing, including base-10 mathematics and formal binary notation. There and then in Mesopotamia, the first writing is “accounting”: a kind of database representing and ordering real-world communication. This artifact of computation already prefigures the expressive semiotics, even literary writing, that ensues in the centuries to come.

Over recent centuries, and accelerating during the mid-20th century, technologies for the artificialization of computation have become more powerful, more efficient, more microscopic and more globally pervasive, changing the world in their image. “Artificialization” in this context doesn’t mean fake or unnatural, but rather that the intricate complexity of modern computing chips, hardware and software did not evolve blindly; it is the result of deliberate conceptual prefiguration and composition, even if by accident. The evolutionary importance of that general capacity for artificialization will become clearer below.


Planetary Computation reveals and constructs the Planetary as a “humanist category”

Some of the most essential and timeless philosophical questions revolve around the qualities of perception, representation, and time. Together and separately, these have all been radicalized by planetary computational technologies, in no domain more dramatically than in Astronomy.

The James Webb Space Telescope scans the depths of the universe, producing data that we make into images showing us, among other wonders, light from a distant star bending all the way around the gravitational mass of a cluster of galaxies. From such perceptions we, the little creatures who built this machine, learn a bit more about where, when and how we are. Computation is not only a topic for philosophy to pass judgment upon; computation is itself a philosophical technology. It reveals conditions that have made human thought possible.

Antikythera is a philosophy of technology program that diverges in vision and purpose from the mainstream of philosophy of technology, particularly from the intransigent tradition growing from the work of Martin Heidegger, whose near-mystical suspicion of scientific disenchantment, denigration of technology as that which distances us from Being and reduces the world to numeric profanity, and, most of all, outrage at innovations of perception beyond the comfort of grounded phenomenology have confused generations of young minds. They have been misled. The question concerning technology is not how it alienates us from the misty mystery of Being but how every Copernican turn we have taken, from heliocentrism to Darwinism to neuroscience to machine intelligence, has only been possible by getting outside our own skin to see ourselves as we are and the world as it is. This is closeness to being.


Computation reveals the planetary condition of intelligence to itself

To look up into the night sky with an unaided eye is to gaze into a time machine showing us an incomprehensibly distant past. It is to perceive light emitted before most of us were born and even before modern humans existed at all. It took until well into the 18th century for the scientific realization that stripes of geologic sedimentary layers mark not just an orderly spatial arrangement but the depths of planetary time. The same principle, that space is time and time is space, holds as you look out at the stars, but on a vastly larger scale. To calculate those distances in space and time is only possible once their scales are philosophically, mathematically and then technologically abstractable. Such is the case with Black Holes, first described mathematically and then, in 2019, directly perceived by Earth itself, having been turned into a giant computer vision machine.

The Event Horizon Telescope was an array of multiple terrestrial telescopes all aimed at a single point in the night sky. Their respective scans were timed with and by the Earth’s rotation, and thus the planet itself was incorporated into this optical mechanism. Event Horizon connected the views of telescopes on the Earth’s surface into the ommatidia of a vast compound eye, a sensory organ that captured 50-million-year-old light from the center of the M87 galaxy as digital information. This data was then rendered in the form of a ghostly orange disc that primates, such as ourselves, recognize as an “image.” Upon contemplating it, we can also identify our place within a universe that exceeds our immediate comprehension but not entirely our technologies of representation. With computational technologies such as the Event Horizon Telescope, it’s possible to imagine our planet not only as a lonely blue spot in the void of space but as a singular form that finally opens its eye to perceive its distant environment.
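As a rough illustration of the principle at work (and emphatically not the Event Horizon Telescope’s actual pipeline), the core step of an interferometric correlator can be sketched as recovering the relative delay between two stations recording the same source. The signal, noise level, and 37-sample delay below are invented for the example.

```python
# Toy illustration of interferometric correlation: two stations record the same noisy
# signal, offset by a geometric delay set by their baseline and the Earth's rotation.
# Cross-correlating the recordings recovers that delay. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 4096
source = rng.normal(size=n)                  # stand-in for radio noise from the source

true_delay = 37                              # hypothetical geometric delay, in samples
station_a = source + 0.3 * rng.normal(size=n)
station_b = np.roll(source, true_delay) + 0.3 * rng.normal(size=n)

corr = np.correlate(station_b, station_a, mode="full")
estimated_delay = np.argmax(corr) - (n - 1)
print(estimated_delay)                       # expected to be ~37
```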

For Antikythera, this is what is meant by “computational technology disclosing and accelerating planetary intelligence.” Feats such as this demonstrate what planetary computation is for.

Research Program

Having hopefully drawn a compelling image of the purpose of Antikythera as a generative theoretical project, I will now put that image in motion and describe how the program does its work. As you might expect, it is not done in the usual way; it is deliberately tuned for the messy process of concept generation, articulation, prototyping, narrativization, and ultimately, convergence.

The link between Philosophy and Engineering is a more fertile ground than that between the Humanities and Design

Antikythera is a philosophy of technology research program that uses studio-based speculative design methodologies to provoke, conceive, explore, and experiment with new ideas. At the same time, it is often characterized as a speculative design research program that is driven by a focused line of inquiry in the philosophy of technology. Yet neither framing is precisely right.

As I alluded to, within the academic sub-field of philosophy of technology, Antikythera is positioned in opposition to the deeply rooted legacy of Heideggerian critique that sees technology as a source of existential estrangement, and so perhaps our approach is the opposite of “philosophy of technology”? Technology of philosophy? Maybe. At the same time, despite the longstanding crucial role of thought experiments in advancing technological exploration, the term “speculative design” has unfortunate connotations of whimsical utopian/dystopian pedantic design gestures. While Antikythera is appreciative of the inspiration the Humanities provides to Design, that must be more than simply injecting the latest cultural theoretical trend into the portfolios of art students.

A more precise framing may be a renewed conjunction of philosophy and engineering. “Engineering” is often seen as barren terrain for the high-minded abstractions of philosophy, but that’s exactly the problem. Functionalism is not the enemy of creativity but, as a constraint, perhaps its most fertile niche. By this I don’t mean a philosophy of engineering, but a speculative philosophy drawn from a curious, informed and provocative encounter with the technical means by which anthropogenic agency remakes the world in its image and, in turn, with the uncertain subjectivities that emerge, or fail to emerge, from the difficult recognition of its dynamics. Obviously, today those technical means are largely computational, and hence our program for the assembly of a new school of thought for the speculative philosophy of technology focuses its attention specifically on computation.

This may not locate Antikythera in the mainstream of the Humanities, Philosophy, or Science and Technology Studies, and perhaps rightly so, but it positions the program to accomplish things it otherwise could not. Many sciences began as subject matter in philosophy: from physics to linguistics, from economics to neuroscience. This is certainly true of Computer Science as it congealed from the philosophy of mathematics, logic, astronomy, cybernetics and more. Of all computational technologies, AI in particular emerged through a non-linear genealogy of thought experiments spanning centuries before it became anything like the functional technologies realized today, which, in turn, redirected those thought experiments. This is also what is meant by the conjunction of philosophy and engineering.

Furthermore, this also suggests that the ongoing “Science Wars” –which the Humanities absolutely did not win– are all the more unfortunate. The orthodox project is to debunk, resist and explain away the ontologically challenging assignments that technoscience puts forth with the comforting languages of social reductionism and cultural determinism. This not only retards the development of the Humanities, a self-banishment to increasingly shrill irrelevance; it also conceals rather than reveals the extent to which philosophy is where new sciences come from, and can be the co-creation of those sciences to come.

It need not be so. There are many ways to reinvent the links between speculative and qualitative reason and functional qualitative creativity. Antikythera’s approach is just one.

Conceive, Convene, Communicate

So what is the method by which we attempt to build this school of thought? The approach is multifaceted but comes down to three things: (1) the tireless development of a synthetic and hopefully precise conceptual vocabulary, posed both as definitional statements and as generative framing questions with which to expand and hone our research, (2) the convening of a select group of active minds intrigued by our provocation and eager to collaborate with those from other disciplines and backgrounds, and (3) the investment in the design of the communication of this work, such that each bit adds to an increasingly high-resolution mosaic that constitutes the Antikythera school of thought. The ideas and implications of those outcomes feed back into the conceptual generative framing of the next cycle. With each go-around, the school of thought gets bigger, leaner and more cutting.

This means a division of labor spread across a growing network. We work with existing institutions in ways that they may not be able to do on their own. The institutional affiliations of our partners include Google/DeepMind. Our affiliate researchers come from Cambridge, MIT, the Santa Fe Institute, Caltech, SCI-Arc, Beijing University, Harvard, UC San Diego, Oxford, Central Saint Martins, UCLA, Yale, Eindhoven, Penn, New York University Shanghai, Berkeley, Stanford, the University of London, and many more. More important than the brand names on their uniforms is their disciplinary range: computer science and philosophy obviously, but also architecture, digital media, literature, filmmaking, robotics, mathematics, astrophysics, biology, history, and cognitive neuroscience.

At least once a year, Antikythera hosts an interdisciplinary design research studio in which we invite and support 15 early- and mid-career researchers to work with us on new questions and problems and to generate works of speculative engineering, worldbuilding cinematic works, and formal papers. We have hosted studios in Los Angeles, Mexico City, London, and Beijing, and in 2025 the studio will be based in Cambridge, Massachusetts, on the campus of MIT. Studios draw applicants from around the world, from computer scientists to science-fiction authors, mathematicians to game designers, and of course philosophers.

At our Planetary Sapience symposium at the MIT Media Lab, we recently announced a collaboration with MIT Press: a book series and a peer-reviewed digital journal that will serve as the primary platform for publishing the work of the program, as well as intellectually related work from a range of disciplines. The first “issue” of the digital journal will go live in concert with a launch event at the Venice Architecture Biennale next spring. The first title in the book series, What Is Intelligence? by Blaise Agüera y Arcas, will hit shelves in Fall 2025. The digital journal will publish original and classic texts as imagined and designed by some of the top digital designers working today. The journal is a showcase for both cutting-edge ideas in the speculative philosophy of computation and cutting-edge digital design, together establishing a communications platform most appropriate to the ambitions of the work. Each of the articles in the first issues is discussed below in the context of the Antikythera research track to which it most directly contributes.

Antikythera is made possible by the generosity and far-sighted support of the Berggruen Institute, based in Los Angeles, Beijing and Venice, under the leadership of Nicolas Berggruen, Nathan Gardels, Nils Gilman, and Dawn Nakagawa.


Research Areas

Antikythera’s research is roughly divided into four key tracks, each building on the core theme of Planetary Computation. Each can be defined in relation to the others.

As mentioned, Planetary Computation refers both to the emergence of artificial computational infrastructures at global scale and to the revelation and disclosure of planetary systems, both as topics of empirical scientific interest and as “The Planetary,” a qualitative conceptual category. This is considered through four non-exclusive and non-exhaustive lenses.

Synthetic Intelligence refers to the emergence of artificial machine intelligence in both anthropomorphic and automorphic forms, as well as in complex and evolving distributed systems. In contrast with many other approaches to AI, we emphasize (1) the importance of productive misalignment and the epistemological and practical necessity of avoiding alignment overfitted to dubiously defined human values, and (2) the eventual open-world artificial evolution of synthetic hybrids of biological and non-biological intelligences, including infrastructure-scale systems capable of advanced cognition.

Recursive Simulations refers to the process by which computational simulations reflexively or recursively affect the phenomena that they represent. In different forms, predictive processing underpins diverse forms of evolved and artificial intelligence. At a collective scale, this allows complex societies to sense, model and govern their development. In this way, as simulations compress time, they become essential epistemological technologies for the understanding of phenomena otherwise imperceptible.

Hemispherical Stacks examines the implications of multipolar computation and multipolar geopolitics, one in terms of the other. It considers the competitive and cooperative dynamics of computational supply chains and both adversarial and reciprocal relations between States, Platforms, and regional bodies. Multiple scenarios are composed about diverse areas of focus, including chip wars, foundational models, data sovereignty and astropolitics.

Planetary Sapience attempts to locate the artificial evolution of substrate-agnostic forms of computational intelligence within the longer historical arc of the natural evolution of complex intelligence as a planetary phenomenon. The apparent rift between scientific and cultural cosmologies, between what is known scientifically and cultural worldviews, is posited as an existential problem, one that cannot be solved by computation as a medium but by the renewal of a speculative philosophy that addresses life, intelligence and technology as fundamentally integrated processes. More on this below.

Synthetic Intelligence

The eventual implication of artificialization of intelligence is less humans teaching machines how to think than machines demonstrating that thinking is a much wider and weirder spectrum.

Synthetic intelligence refers to the wider field of artificially-composed intelligent systems that do and do not correspond to Humanism’s traditions. These systems, however, can complement and combine with human cognition, intuition, creativity, abstraction and discovery. Inevitably, both are forever altered by such diverse amalgamations.

The history of AI and the history of the philosophy of AI are intertwined, from Leibniz to Turing to Dreyfus to today. Thought experiments drive technologies, which drive a shift in the understanding of what intelligence itself is and might become, and back and forth. This extends well beyond the European philosophical tradition. In our work, important touchstones include Deng-era China’s invocation of cybernetics as the basis of industrial mass mobilization and the Eastern European connotations of AI, which include what Stanislaw Lem called an ‘existential’ technology. Many of these touchpoints contrast with the Western individualized, individualistic and anthropomorphic models that dominate contemporary debates on so-called AI ethics and safety.

Historically, AI and the Philosophy of AI have evolved in a tight coupling, informing and delimiting one another. But as the artificial evolution of AI accelerates, the conceptual vocabulary that has helped to bring it about may not be sufficient to articulate what it is and what it might become. Asking if AI can genuinely “think” according to the standards set forth by Kant in Critique of Pure Reason is like asking if this creature discovered in the New World is actually an “animal” as defined by Aristotle. It’s obvious the real question is how the new evidence must update the category, not how the received category can judge reality.

Now as before, not only is AI defined in contrast with the strangely protean image of the human, but the human is defined in contrast with the machine. By habit it is taken almost for granted that we are all that it is not and it is all that we are not. Like two characters sitting across from one another, deciding if the other is a mirror reflection or true opposite, each is supposedly the measure and limit of the other.

People see themselves and their society in the reflection AI provides, and are thrilled and horrified by what it portends. But this reflection is also preventing people from understanding AI, its potential, and its relationship to human and non-human societies. A new framework is needed to understand the possible implications.

What is reflected back is not necessarily human-like. Our view looks beyond anthropomorphic notions of AI and toward a fundamental concern with machine intelligence. What Turing proposed in his famous test as a sufficient condition of intelligence has become instead decades of solipsistic demands and misrecognitions. Idealizing what appears and performs as most “human” in AI –either as praise or as criticism– is to willfully constrain our understanding of existing forms of machine intelligence as they are.

To ponder seriously the planetary pasts and futures of AI not only extends but alters our notions of “artificiality” and “intelligence”; it draws from the range of such connotations but will also, inevitably, leave them behind.


The Weirdness Right in Front of Us

This weirdness includes the new unfamiliarity of language itself. If language was, as the structuralists would have it, the house that man lives in, then, as machines spin out coherent ideas at rates just as inhuman as their mathematical answers, the home once provided by language has become quite uncanny.

Large Language Models’ eerily convincing text prediction and production capabilities have been used to write novels and screenplays, to make images, movies, songs, voices and symphonies, and even, by some biotech researchers, to predict gene sequences for drug discovery; here at least, the language of genetics really is a language. LLMs also form the basis of generalist models capable of mixing inputs and outputs from one modality to another (e.g. interpreting what is in an image so as to instruct the movement of a robot arm). Such foundational models may become a new kind of general-purpose public utility around which industrial sectors organize: cognitive infrastructures.
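For readers who want the mechanism made concrete, next-token prediction can be sketched in miniature as a loop that repeatedly samples a likely continuation. The toy bigram “model” below, trained on an invented ten-word corpus, is only a stand-in for the learned networks and long contexts of an actual LLM; it shows the shape of the loop, nothing more.

```python
# Toy sketch of autoregressive next-token prediction, the loop at the heart of LLM
# text production. A bigram count table stands in for the model; the corpus is invented.
import random
from collections import Counter, defaultdict

corpus = "the planet computes the planet perceives the planet thinks".split()

# "Train": count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# "Generate": repeatedly sample the next token given the last one.
random.seed(0)
token, output = "the", ["the"]
for _ in range(8):
    candidates = following.get(token)
    if not candidates:
        break
    words, counts = zip(*candidates.items())
    token = random.choices(words, weights=counts)[0]
    output.append(token)

print(" ".join(output))
```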

Whither speculative philosophy then? As a co-author and I wrote recently, “reality overstepping the boundaries of comfortable vocabulary is the start, not the end, of the conversation. Instead of groundhog-day debates about whether machines have souls, or can think like people imagine themselves to think, the ongoing double-helix relationship between AI and the philosophy of AI needs to do less projection of its own maxims and instead construct more nuanced vocabularies of analysis, critique, and speculation based on the Weirdness right in front of us.” And that is really the focus of our work: the weirdness right in front of us and the clumsiness of our languages to engage with it.


The fire apes figured out how to make the rocks think

To zoom out and try to locate such developments in the longer arc of the evolution of intelligence, what has recently been accomplished is truly mind-bending. One way to think about it, going back to our blue marble movie mentioned above, is that we’ve had many millions of years of animal intelligence, which became homo sapiens’ intelligence. We’ve had many millions of years of vegetal intelligence. And now we have mineral intelligence. The fire apes, that’s us, have managed to fold little bits of rock and metal into particularly intricate shapes and run electricity through them, and now the lithosphere is able to perform feats that, until very recently, only primates had been able to do. This is big news. The substrate of complex intelligence now includes both the biosphere and the lithosphere. And it’s not a zero-sum situation. The question we are beginning to be able to ask is how those integrate in such a way that they become mutually reinforcing rather than mutually antagonistic.


Alignment of AI with human wants and needs is a necessary short-term tactic and an insufficient, even dangerous, long-term norm

What does it mean to ask machine intelligence to “align” to human wishes and self-image? Is this a useful tactic for design, or a dubious metaphysics that obfuscates how intelligence as a whole might evolve? How should we rethink this framework in both theory and practice?

The emergence of machine intelligence must be steered toward planetary sapience in the service of viable long-term futures. Instead of strong alignment with human values and superficial anthropocentrism, the steerage of AI means treating these humanisms with nuanced suspicion and recognizing AI’s broader potential. At stake is not only what AI is, but what a society is, and what AI is for. What should align with what?

At stake is not only how AI must evolve to suit the shape of human culture but also how human societies will evolve in relationship with this fundamental technology. AI overhang –the unused or unrealized capacity of AI, not yet, if ever, acclimated into sociotechnical norms– affects not only narrow domains but also, arguably, civilizations, and how they understand and register their own organization: past, present, and future. As a macroscopic goal, simple “alignment” of AI to existing human values is inadequate and even dangerous. The history of technology suggests that the positive impacts of AI will not arise through its subordination to or mimicry of human intuition. The telescope did not only magnify what it was possible to see; it changed how we see and how we see ourselves. Productive disalignment, dragging society toward the fundamental insights of AI, is just as essential.

Always remember that everything you do, from the moment you wake up to the moment you fall asleep, is training data for the futures model of today.

Cognitive Infrastructures: Open World Evolution

Any point of alignment or misalignment between human and machine intelligence, between evolved and artificial intelligence, will converge at the crucial interfaces between each. HCI gives way to HAIID (Human-AI Interaction Design). HAIID is an emerging field, one that contemplates the evolution of Human-Computer Interaction in a world where AI can process complex psychosocial phenomena. Anthropomorphization of AI often leads to weird "folk ontologies" of what AI is and what it wants. Drawing on perspectives from a global span of cultures, mapping the odd and outlier cases of HAIID gives designers a wider view of possible interaction models. But as opposed to single-user relations with chatbot agents, we turn our attention to the great outdoors and the evolution of synthetic intelligence in the wild.

Natural intelligence evolved in open worlds in the past, and so the presumption is that we should look for ways in which machine intelligence will evolve in the present and future through open worlds as well. This also means that the substrates of intelligence may be quite diverse: they don’t necessarily need to be human brain tissue or silicon; they may take many different forms. Another way of putting this is that, instead of the model of AI as a kind of brain in a box, we prefer to start with something more like AI in the wild: something that is interacting with the world in lots of different, strange and unpredictable ways.

Natural Intelligence also emerges at environmental scale and in the interactions of multiple agents. It is located not only in brains but in active landscapes. Similarly, artificial intelligence is not contained within single artificial minds but extends throughout the networks of planetary computation: it is baked into industrial processes; it generates images and text; it coordinates circulation in cities; it senses, models and acts in the wild.

As artificial intelligence becomes infrastructural, and as societal infrastructures concurrently become more cognitive, the relation between AI theory and practice needs realignment. Across scales—from world-datafication and data visualization to users and UI, and back again—many of the most interesting problems in AI design are still embryonic.

This represents an infrastructuralization of AI, but also a ‘making cognitive’ of both new and legacy infrastructures. These are capable of responding to us, to the world and to each other in ways we recognize as embedded and networked cognition. AI is physicalized, from user interfaces on the surface of handheld devices to deep below the built environment.

Individual users will not only interact with big models, but multiple combinations of models will interact with groups of people in overlapping configurations. Perhaps the most critical and unfamiliar interactions will unfold between different AIs without human interference. Cognitive Infrastructures are forming, framing, and evolving a new ecology of planetary intelligence.

How might this frame human-AI interaction design? What happens when the production and curation of data is for increasingly generalized, multimodal and foundational models? How might the collective intelligence of generative AI make the world not only queryable, but re-composable in new ways? How will simulations collapse the distances between the virtual and the real? How will human societies align toward the insights and affordances of artificial intelligence, rather than AI bending to human constructs? Ultimately, how will the inclusion of a fuller range of planetary information, beyond traces of individual human users, expand what counts as intelligence?

Recursive Simulations

Simulation, Computation, and Philosophy

Foundations of Western philosophy are based on a deep suspicion of simulations. In Plato’s allegorical cave, the differentiation of the world from its doubles, its form and its shadows, takes priority for the pursuit of knowledge. Today, however, the real comes to comprehend itself through its doubles: the simulation is the path toward knowledge, not away from it.

From Anthropology to Zoology, every discipline produces, models and validates knowledge through simulations. Simulations are technologies to think with, and in this sense they are fundamental epistemological technologies, and yet they are deeply underexamined. They are a practice without a theory.

Some computational simulations are designed as immersive virtual environments where experience is artificialized. At the same time, scientific simulations do the opposite of creating deceptive illusions; they are the means by which otherwise inconceivable underlying realities are accessible to thought. From the infinitesimally small in the quantum realm to the inconceivably large in the astro-cosmological realm, computational simulations are not just a tool; they are a technology for knowing what is otherwise unthinkable.

Simulations do more than represent; they are also active and interactive. “Recursive” Simulations refers to simulations that not only depict the world, but act back upon what they simulate, completing a cybernetic cycle of sensing and governing. They not only represent the world, they organize the world in relation to how they summarize and rationalize it. Recursive Simulations include everything from financial models to digital twins, user interfaces to prophetic stories. They cannot help but transform the thing they model, which in turn transforms the model and the modeled in a cyclical loop.
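A minimal sketch of that cybernetic cycle, with invented quantities: a model forecasts the system it belongs to, a policy responds to the forecast, and the response alters the very trajectory being modeled.

```python
# Minimal sketch of a recursive simulation: a forecast triggers an intervention that
# changes the trajectory being forecast. All quantities and thresholds are illustrative.
def forecast(level, growth, horizon=10):
    """Naive model: project the current growth rate forward unchanged."""
    return level * (1 + growth) ** horizon

level, growth = 1.0, 0.05
for year in range(30):
    predicted = forecast(level, growth)  # sense and simulate
    if predicted > 2.0:                  # the forecast implies an unwanted future...
        growth *= 0.8                    # ...so governance dampens the driver
    level *= 1 + growth                  # the world then evolves under the new policy
    print(f"year {year:2d}  level {level:.3f}  projected {predicted:.3f}")
```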


The Politics of Simulation and Reality

We live in an era of highly politicized simulations, for good and ill. The role of climate simulations for planetary governance is only the tip of the proverbial iceberg. Antikythera considers computational simulations as experiential, epistemological, scientific and political forms and develops a framework to understand these in relation to one another.

The politics of simulation, more specifically, is based on recursion. This extends from political simulations to logistical simulations to financial simulations to experiential simulations: the model affects the modeled.

Antikythera’s research in this area draws on different forms of simulation and simulation technologies. These include machine sensing technologies (vision, sound, touch, etc.), synthetic experiences (including VR/AR), strategic scenario modeling (gaming, agent-based systems), active simulations of complex architectures (digital twins), and computational simulations of natural systems enabling scientific inquiry and foresight (climate models and cellular/genomic simulations). All of these pose fundamental questions about sensing and sensibility, world-knowing and worldmaking.

They all have different relations to the Real. While scientific simulations posit meaningful correspondence with the natural world and provide access to ground truths that would be otherwise inconceivable, virtual and augmented reality produce embodied experiences of simulated environments that purposefully take leave of ground truth. These two forms of simulation have inverse epistemological implications: one makes an otherwise inaccessible reality perceivable, while the other bends reality to suit what one wants to see. In between is where we live.


Existential Implications of the Simulations of the Future

Recursion can be direct or indirect. It can be a literal sensing/actuation cycle, or the indirect negotiation of interpretation and response. The most nuanced recursions are reflexive. They mobilize action to fulfill or prevent a future that is implied by a simulation. Climate politics exemplifies the reflexivity of recursive simulations: through planetary computation, climate science produces simulations of near-term planetary futures, the implications of which may be devastating. In turn, climate politics attempts to build planetary politics and planetary technologies in response to those implications, and thereby extraordinary political agency is assigned to computational simulations. The future not only depends on them, it is defined by them.

Scientific simulation, however, not only has deep epistemological value; it also makes possible the most profound existential reckonings. Climate science is born of the era of planetary computation. Without the planetary sensing mechanisms, satellites, surface and air sensors, and ice core samples, all aggregated into models, and most importantly the supercomputing simulations of climate past, present and future, the scientific image of climate change as we know it does not happen. The idea of the Anthropocene, and all that it means for how humans understand their agency, is an indirect accomplishment of computational simulations of planetary systems over time.

In turn, the relay from the idea of the Anthropocene to climate politics is also based on the geopolitics of simulation. The implications of simulations of the year 2050 are dire, and so climate politics seeks to mobilize a planetary politics in reflexive response to those predicted implications. That politics is recursive. Deliberate actions are consciously taken now to prevent a predicted future. This is an extraordinary agency to give simulations. Many climate activists may not feel warmly about the idea, but climate politics is one of the important ways in which massive computational simulations are driving how human societies understand and organize themselves. It’s why the activists are in the streets to begin with.


Pre-Perception: Simulation as Predictive Epistemology

Quite often, though, the simulation comes first. Its predictive ability may imply that there must be something we should look for because the model suggests it has to be there.

Thus the prediction makes the description possible as much as the other way around. Such is the case with Black Holes, which were hypothesized and described mathematically long before they were detected, let alone observed. For the design of the Black Hole in Christopher Nolan’s film Interstellar, scientific simulation software was used to give form to the mysterious entity based on consultation with Kip Thorne at Caltech and others. The math had described the physics of black holes, and that math was used to build a computational model, which in turn produced a dynamic visual simulation of something no one had ever seen.

Of course, a few years later we did see one. The Black Hole at the center of the M87 galaxy was “observed” by the Event Horizon Telescope and a team at Harvard that included Shep Doeleman, Katie Bouman and Peter Galison. It turns out we, the humans, were right. Black Holes look like what the math says they must look like. The simulation was a way of inferring what must be true: where to look and how to see it. Only then did the terabytes of data from Event Horizon finally discover a picture.


Toy Worlds & Embodied Simulations

Friends from Neuroscience (and Artificial Intelligence) may raise the point that simulation is not only a kind of external technology with which intelligence figures out the world, but that simulations are how minds have intelligence at all. The cortical columns of animal brains are constantly predicting what will come next, running through little simulations of the world and the immediate future, resolving them against new inputs and even competing with each other to organize perception and action.
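A minimal sketch of that predictive loop, with an invented signal and learning rate: an internal estimate predicts the next input, is compared with what actually arrives, and corrects itself in proportion to the error.

```python
# Minimal sketch of predictive processing: predict, compare, correct. The sinusoidal
# "world" and the learning rate are illustrative, not a model of any actual cortex.
import math

estimate, learning_rate = 0.0, 0.3
for t in range(20):
    observation = math.sin(t / 3.0)    # the world's actual input at time t
    prediction = estimate              # the running "simulation" of what comes next
    error = observation - prediction   # prediction error
    estimate += learning_rate * error  # update the internal model toward the input
    print(f"t={t:2d}  predicted {prediction:+.3f}  observed {observation:+.3f}")
```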

For many computational simulations, the purpose is to serve as a model that reflects reality (as in climate science or astrophysics). For others, the back and forth is not just mirroring. Some simulations not only model the world, but also feed back upon what they model, both directly and indirectly; these recursive simulations not only model an external reality, they act back upon that reality in a decisive feedback loop. “Digital twins” express this dynamic. In the recursive relation between simulation and the real, the real is the baseline model for simulations and simulations are a baseline model for the real.

Many AIs, especially those embodied in the world, such as driverless cars, are trained in Toy World simulations, where they can explore more freely, bumping into the walls, until they, like us, learn the best ways to perceive, model, and predict the real world.
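To give that a concrete, if cartoonish, form: the sketch below trains an agent by trial and error in an invented one-dimensional “toy world,” using a bare tabular Q-learning loop. It is not drawn from any particular system named above; it only shows how bumping into walls becomes a learned policy.

```python
# Toy-world training sketch: tabular Q-learning on an invented 1-D corridor. The agent
# explores by trial and error until it learns that moving right reaches the goal.
import random

random.seed(0)
corridor_length, goal = 6, 5                 # states 0..5, reward at state 5
actions = [-1, +1]                           # step left or step right
q = {(s, a): 0.0 for s in range(corridor_length) for a in actions}

for episode in range(200):
    state = 0
    while state != goal:
        # epsilon-greedy: mostly exploit what has been learned, sometimes explore
        if random.random() < 0.2:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), corridor_length - 1)  # walls clamp movement
        reward = 1.0 if nxt == goal else -0.01                  # each bump costs a little
        best_next = max(q[(nxt, a)] for a in actions)
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = nxt

# The learned policy: for each state, the action with the higher learned value.
print({s: max(actions, key=lambda a: q[(s, a)]) for s in range(corridor_length)})
```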


Simulation as Model / Model as Simulation

Far from merely deceiving us, scientific simulations are, arguably, the essential mechanism by which otherwise inconceivable underlying realities become accessible to thought. From the very, very small in the quantum realm to the very, very large in the astro-cosmological realm, computational simulations are essential not just as a tool, but as a way of thinking with models, a fundament of induction, deduction and abduction.

At the same time, simulations are based on models of reality, and the status of the model has been a preoccupying concern in the philosophy of science, even if simulations as such are more presumed than philosophized. Models are a way of coalescing disparate bits of data into a composite structure whose whole gives shape to its parts, suggesting their interactions and general comparisons with other structures. A model is a tool to think with. Its value is in its descriptive correspondence with reality, but this correspondence is determined by its predictive value. If a scientific simulation can predict a phenomenon, its descriptive quality is implied. A model is also, by definition, a radical reduction in variables, i.e. a map reduces a territory. A geocentric or heliocentric model of the solar system can be constructed with styrofoam balls, and one is definitely “less wrong” than the other, but both are infinitely less complex than what they model.

This is especially important when what is simulated is as complex as the universe itself. Astrophysics is based almost entirely on rigorous computational simulations of phenomena that produce difficult-to-observe data, assembled into computationally expensive models, which ultimately provide for degrees of confident predictability about the astronomic realities that situate us all. This is what we call cosmology, the meta-model of all models of reality in which humans and other intelligences conceive of their place in space-time. Today, cosmology in the anthropological sense is achieved through cosmology in the computational sense.

Hemispherical Stacks

The Stack: Planetary Computation as Global System

Planetary computation refers to the interlocking cognitive infrastructures that structure knowledge, geopolitics and ecologies. Its touch extends from the global to the intimate, from the nanoscale to the edge of the atmosphere and back again. It is not a single totality demanding transparency, but a highly uneven, long-emerging blending of biosphere and technosphere.

As you stare at the glass slab in your hand, you are, as a user, connected to a planetary technology that both evolved and was planned in irregular steps over time, each component making use of others: an accidental, discontiguous megastructure. Instead of a single megamachine, planetary computation can be understood as being composed of modular, interdependent, functionally-defined layers, not unlike a network protocol stack. These layers compose The Stack: the Earth layer, Cloud layer, City layer, Address layer, Interface layer, and User layer.

Earthly ecological flows become sites of intensive sensing, quantification and governance. Cloud computing spurs platform economics and creates virtual geographies in its own image. Cities form vast discontiguous networks as they weave their borders into enclaves or escape routes. Virtual addressing systems locate billions of entities and events on unfamiliar maps. Interfaces present vibrant augmentations of reality, standing in for extended cognition. Users, both human and non-human, populate this tangled apparatus. Every time you click on an icon, you send a relay all the way down the paths of connection and back again; you activate (and are activated by) the entire planetary infrastructure hundreds of times a day.
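A schematic sketch of that relay, using the six layer names from the text; the traversal is only an illustration of the down-and-back path a single interaction takes.

```python
# Schematic sketch of The Stack's layers: a user action relays down through the six
# layers and a response returns back up. Purely illustrative; no real routing here.
STACK_LAYERS = ["User", "Interface", "Address", "City", "Cloud", "Earth"]

def relay(action: str) -> None:
    for layer in STACK_LAYERS:                # down: from the tap on glass...
        print(f"{action!r} passes down through the {layer} layer")
    for layer in reversed(STACK_LAYERS):      # ...and back up again
        print(f"response returns up through the {layer} layer")

relay("click on an icon")
```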

The Emergence of Multipolar Geopolitics through Multipolar Computation

The emergence of planetary computation in the late 20th century shifted not only the lines on the map but the maps themselves. It distorted and reformed Westphalian political geography and created new territories in its own image. Large cloud platforms took on roles traditionally assumed by nation-states (identity, maps, commerce, etc.) now based on a transnational network geography, while nation-states increasingly evolved into large cloud platforms (state services, surveillance, smart cities, etc.). The division of the Earth into jurisdictions defined by land and sea has given way during the last few decades to a more irregular, unstable and contradictory amalgam of overlapping sovereign claims to data, people, processes, and places defined instead by bandwidth, simulation, and hardware and software chokepoints.

Over the past decade, these stacks have been decisively fragmenting into multipolar hemispherical stacks defined by geopolitical competition and confrontation. A North Atlantic-Pacific stack based on American platforms is delinking from a Chinese stack based on Chinese platforms, while India, the Gulf, Russia, and Europe chart courses based on citizenship identification, protection and information filtering.

From Chip Wars to EU AI Decrees, this marks a shift toward a more multipolar architecture, hemispheres of influence, and the multipolarization of planetary scale computation into Hemispherical Stacks. These segment and divide the planet into sovereign computational systems extending from energy and mineral sourcing, intercontinental transmission, and cloud platforms to addressing systems, interface cultures and different politics of the “user.”

A New Map

This is both exciting and dangerous. It implies both Galapagos effects of regional cultural diversity and artificially encapsulated information cultures. For geotechnology just as for geopolitics, “digital sovereignty” is an idea beloved by both democracies and authoritarians.

The ascendance of high-end chip manufacturing to the pinnacle of strategic plans — in the US and across the Taiwan Strait — is exemplary, and corresponds with the removal of Chinese equipment from Western networks, the removal of Western platforms from Chinese mobile phones, and so on. Economies are defined by interoperability and delinking. But the situation extends further up the stack. The militarization of financial networks in the form of sanctions, the data-driven weaponization of populism, and the reformulation of “citizen” as a “private user with personal data” all testify to deeper shifts. In some ways these parallel historical shifts in how new technologies alter societal institutions in their image, and yet the near-term and long-term future of planetary computation as a political technology is uncertain. Antikythera seeks to model these futures pre-emptively, drawing maps of otherwise uncharted waters.

Hemispherical Stacks describes how the shift toward a more multipolar geopolitics over the last five years and the shift toward a more multipolar planetary computation not only track one another; in many respects, they are the same thing.


The AI Stack

It is likely that the last half century during which “The Stack” evolved and was composed was really just a warmup for what is about to come: from computation as planetary infrastructure to computation as planetary cognitive infrastructure; from a largely State-funded “accidental megastructure” to multiple privately-funded, overlapping, strategically composed, discontiguous systems; from the gathering, storage and sorting of information flows to the training of large and small models and the serving of generative applications on a niche-by-niche scale; from Von Neumann architectures and procedural programming to neuromorphic systems and the collapse of the user vs. programmer distinction; and from light, inexpensive information sent to heavy hardware to heavy information loads accessed by light hardware. Despite how unprepared mainstream international relations may be for this evolution, this is not science fiction; this is last week.

Chip Race: Adversarial Computational Supply Chains

Computation is, in the abstract, a mathematical process, but it is also one that uses physical forces and matter to perform real calculations. Math may not be tangible, but computation as we know it very much is, since electricity moves in tiny pathways on a base made of silicon. It is also worth remembering that the tiny etchings in the smooth surface of a chip, with spaces between measured in nanometers, are put there by a lithographic process. The intricate pathways through which a charge moves in order to compute are, in a way, a very special kind of image.

The machines that make the machines are the precarious perch on which fewer than a dozen companies hold together the self-replication of planetary computation. The next decade is dedicated to the replication of this replication supply chain itself: the race to build better stacks. If society runs on computation, the ability to make computational hardware is the ability to make a society. This means that the ability to design and manufacture cutting-edge chips, shrinking every year toward perhaps absolute physical limits of manipulable scale, is now a matter of national security.

Chips are emblematic of all the ways that computational supply chains have shifted and consolidated the axes of power around which economies rotate. One Antikythera project, Cloud Megaregionalization, observes that a new kind of regional planning has emerged–from Arizona to Malaysia to Bangalore–that concentrates Cloud manufacturing in strategic locations that balance many factors: access to international trade, energy and water sourcing, access to educated labor, physical security. These are the new criteria for how and where to build the cloud. Ultimately, the chip race is not just a race to build chips but to build the urban regions that build the chips.


Astropolitics: Extraplanetary Sensing and Computation

Another closely related race is the re-emergence of outer “space” as a contested zone of exploration and intrigue, from satellites to the Moon and Mars and back again. It is driven by advances in planetary computation and in turn drives those advances, spreading them beyond terrestrial grounding.

Planetary computation becomes extraplanetary computation and back again. If geopolitics is now driven by the governance of model simulations, then the seat of power is the view from anywhere. That is, if geopolitics is defined by the organization of terrestrial States, Astropolitics is and will be defined by the organization of Earth’s atmosphere, its orbital layers, and what lies just beyond. The high ground is now beyond the Kármán Line, the territory dotted with satellites looking inward and outward.

In the 1960s much was made of how basic research for the space race benefitted everyday technologies, but today this economy of influence is reversed. Many of the technologies that are making the new space race possible—machine vision and sensing, chip miniaturization, information relays and other standards—were first developed for consumer products. As planetary computation matured, the space race turned inward toward miniaturization, but today those benefits move outward again.

The domain of space law, once obscure, will in the coming decades come to define international law, as it is the primary body of law that takes as its jurisdiction an entire astronomic body, of which Earth and all those things in its orbit are also exemplary.

What do we learn from this? How is this an existential technology? There is no Planetarity without the Extraplanetarity: to truly grasp the figure of our astronomic perch is a Copernican trauma by which the ground is not “grounding” but a gravitational plane, and the night sky is not the heavens but a time machine showing us light from years before the evolution of humans. For this, the archaic distinction between ‘down here’ and ‘up there’ also fractures.


The Technical Milieu of Governance

The apparently technologically determined qualities of these tectonic shifts may undermine some of mainstream political theory’s epistemological habits: social reductionism and cultural determinism. While the fault lines by which hemispheres split trace the boundaries of much older socioeconomic geographic blocs, each bloc is increasingly built upon a similar platform structure, one that puts them in direct competition for the ability to build the more advanced computational stack and thereby to build the more advanced economy and, through this, to compose society.

The “Political” and “governance” are not the same thing. Both always exist in some technical milieu that both causes and limits their development. If the Political refers to how the symbols of power are agonistically contested, then governance (inclusive of the more cybernetic sense of the term) refers to how any complex system (including a society) is able to sense, model and recursively act back upon itself. Some of the confusion within forms of political analysis born of a pre-computational social substrate seems to stem from the closely held axiom that planetary computation is something to be governed by political institutions from an external and supervisory position. The reality is that planetary computation is governance in its most direct, immanent sense; it is what senses, models and governs and what, at present, is reforming geopolitical regimes in its image.
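To make the cybernetic sense of governance invoked above concrete, here is a minimal sketch (not from the essay) of a system that senses its own state, models the gap between that state and a goal, and recursively acts back on itself. The state variable, target and proportional correction are illustrative assumptions, not a claim about how planetary computation actually operates.

```python
# Minimal sketch of governance in the cybernetic sense: sense, model, act, repeat.
# The state variable, target, and gain are hypothetical illustrations only.

def govern(state: float, target: float, gain: float = 0.3, steps: int = 20) -> float:
    for _ in range(steps):
        sensed = state              # sense: observe the system's own condition
        error = target - sensed     # model: compare the observation to a goal
        state += gain * error       # act: feed the correction back into the system
    return state

if __name__ == "__main__":
    # Starting far from the target, the recursive loop steers itself toward it.
    print(round(govern(state=5.0, target=1.0), 4))
```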


Beyond Cultural Determinism

Not surprisingly, cultural determinism enjoys an even deeper commitment in the Humanities, and there, even when planetary computation is recognized as fundamental, the sovereignty of cultural difference is defended not only as a cause of computation’s global emergence (as it may be for political science) but also as the remedy for it. “Cosmotechnics” and other gestures toward pluralism as both means and end confront the global isomorphic qualities of planetary computation not as an instance of artificial convergent evolution but as the expressive domination of a particular culture. To contest that domination is thus to contest the expression. In its most extreme forms, pluralism is framed as a clash of reified civilizations, each possessing essential qualities and ‘ways of being technological’ – one Western, one Chinese, etc. Beyond the gross anthropological reduction, this approach evades the real project for the humanities: not how culture can assert itself against global technology but how planetary computation is the material basis not only of new “societies” and “economies” but of different relations between human populations bound to planetary conditions.

As I put it in the original Hemispherical Stacks essay, “Despite the integrity of mutual integration, planetarity cannot be imagined in opposition to plurality, especially as the latter term is now over-associated with the local, the vernacular, and with unique experiences of historical past(s). That is, while we may look back on separate pasts that may also set our relations, we will inhabit conjoined futures. That binding includes a universal history, but not one formulated by the local idioms of Europe, or China, or America, or Russia, nor by a viewpoint collage of reified traditions and perspectives, but by the difficult coordination of a common planetary interior. It is not that planetary-scale computation brought the disappearance of the outside; it helped reveal that there never was an outside to begin with.”

Planetary Sapience

What is the relationship between the planetary and intelligence? What must it be now and in the future?

These questions are equally philosophical and technological. The relationship is one of disclosure: over millions of years, intelligence has emerged from a planetary condition which, quite recently, has been disclosed to that intelligence through technological perception. The relationship is also one of composition: for the present and the future, how can complex intelligence—both evolved and artificialized—conceive a more viable long-term coupling of biosphere and technosphere?


Planetary Intelligence

Over billions of years, Earth has folded its matter to produce biological and non-biological creatures capable of not only craft and cunning but also feats of artistic and scientific abstraction. Machines now behave and communicate intelligently in ways once reserved for precocious primates. Intelligence is planetary in both origin and scope. It emerges from the evolution of complex life, a stable biosphere, and intricate perceptual-cognitive organs. Both contingent and convergent, intelligence has taken many forms, passing through forking stages of embodiment, informational complexity and eventually even (partial) self-awareness.


Planetary Computation and Sapience

Planetary-scale computation has allowed for the sensing and modeling of climate change and thus informed the conception of an Anthropocene and all of its existential reckonings. Among the many lessons for philosophy of technology is that, in profound ways, agency preceded subjectivity. Humans (and the species and systems that they cultivated and were cultivated by) terraformed the planet in the image of their industry for centuries before really comprehending the scale of these effects. Planetary systems both large and small, and inclusive of human societies and technologies, have evolved the capacity to self-monitor, self-model, and hopefully deliberately self-correct. Through these artificial organs for monitoring its own dynamic processes, the planetary structure of intelligence scales and complexifies. Sentience becomes sapience: sensory awareness becomes judgment and governance.


Modes of Intelligence

The provocation of Planetary Sapience is not based in an anthropomorphic vision of Earth constituted by a single ‘noosphere.’ Modes of intelligence are identified at multiple scales and in multiple types, some ancient and some very new. These include mapping the extension and externalization of sensory perception; redefining computer science as an epistemological discipline based not only on algorithmic operations but on computational planetary systems; comparing stochastic predictive processing in both neural networks and artificial intelligence; embracing the deep time of the planetary past and future as a foundation for a less anthropomorphic view of history; modeling life by the transduction of energy and/or the transmission of information; exploring the substrate dependence or independence of general intelligence; embracing astronautics and space exploration as techno-philosophical pursuits that define the limit of humanity’s tethering to Earth and extend beyond it; exploring how astroimaging–such as Earth seen from space and distant cosmic events seen by Earth–has contributed to the planetary as a model orientation; theorizing simulations as epistemological technologies that allow for prediction, speculation and ultimately a synthetic phenomenology; measuring evolutionary time in the complexity of the material objects that surround us and constitute us; and recomposing definitions of “life,” of “intelligence” and of “technology” in light of what is revealed by the increasing artificialization and recombination of each. Together these modes of intelligence lead us to construct a technological philosophy that might synthesize them into one path toward greater planetary sapience: a capacity for complex intelligence to cohere its own future.


The Evolution of Artificialization, Intelligence and the Artificialized Intelligence

Evolution of Autopoiesis

To properly ask the questions posed requires us to locate the emergence of artificial computational intelligence within the longer arc of the evolution of intelligence as such and its relationship to artificialization as such. The two have, I argue, always been intertwined, as they are now and as they will be for both foreseeable and unforeseeable futures.

Our thinking on this is also influenced by Sara Walker and Lee Cronin’s provocative Assembly Theory, which posits that evolutionary natural selection begins not with Biology but (at least) with Chemistry. The space of possible molecules is radically constrained and filtered through selection toward those which are most stable and most conducive to becoming scaffolding components for more complex forms. Those forms which are able to internalize and process energy, information and matter with regular efficiency, and to autopoietically replicate themselves, become what we can call “life” (also by Agüera y Arcas’s computational definition). The process is best defined not by how a single organism or entity internalizes the environment to replicate itself (as a cell does) but by how a given population distributed within an environment evolves through the cooperative capacity to increase energy, matter and information capture for collective replication. I invite the reader to look around and consider all the things upon which they depend to survive the day.
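For intuition only, the sketch below illustrates the flavor of Assembly Theory’s central measure, the assembly index: how many joining steps an object needs when previously built parts can be reused. This is a heavily simplified, hypothetical string version that only recognizes reuse when a split yields two identical halves; it is not Walker and Cronin’s formalism, which is defined over molecules and shared assembly pools.

```python
# Toy illustration of the assembly-index idea: minimum joins to build a string
# from single characters, reusing a part only when two halves are identical.
# A simplified upper-bound sketch, not the formal Assembly Theory measure.

from functools import lru_cache

@lru_cache(maxsize=None)
def toy_assembly_index(s: str) -> int:
    if len(s) <= 1:
        return 0                          # single characters are basic blocks
    best = len(s) - 1                     # worst case: add one character per join
    for i in range(1, len(s)):
        left, right = s[:i], s[i:]
        if left == right:
            cost = toy_assembly_index(left) + 1          # reuse the identical half
        else:
            cost = toy_assembly_index(left) + toy_assembly_index(right) + 1
        best = min(best, cost)
    return best

if __name__ == "__main__":
    print(toy_assembly_index("ABABABAB"))  # 3 joins, not 7: repetition is cheap
    print(toy_assembly_index("ABCDEFGH"))  # 7 joins: no structure to reuse
```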

For autopoietic forms to succeed in folding more of the environment into themselves to replicate, evolution, arguably, selects for their capacity to transform their niche in ways that make it more suitable for this process. For example, by reducing the amount of energy expenditure necessary for energy capture, a given population is able to accelerate and expand its size, complexity and robustness. More of the world is transformed into that population because it is capable of allopoiesis, the process of transforming the world into systems that are external to the agent itself. That is, evolution seems to select for forms of life capable of artificialization. Perhaps those species most capable of artificialization are the greatest beneficiaries of this tendency.

Complexity begets complexity. Simple allopoiesis and environmental artificialization may be an all but autonomic process, but greater cooperation between agents allows for more complex, efficient and impactful forms of artificialization. Here selection pressure enables the evolution of more nuanced forms of individual and collective intelligence as well as more powerful forms of technology. We might define “technology” very generally as a durable scaffolding apparatus that is leveraged by intelligent agents to transform and internalize matter, energy or information at scales and with a regularity and precision otherwise impossible. In this regard, “technology” occupies a parallel symbiotic evolutionary track, one that determines and is determined by the ongoing evolution of intelligent life. What emerges are technologically-enabled conceptual abstractions about how the world is and, perhaps more importantly, counterfactual models about how it might be otherwise. For a form of autopoietic life (including humans), to get really good at intelligence means to instantiate counterfactual models and to communicate them. This requires something like formally coded symbolic language, which eventually evolved as an aggregation of all the preceding biosocial and sociotechnical scaffolds.

Simple evolutionary processes are what enable autopoietic forms to emerge, which become scaffolds for yet more complex forms, which become scaffolds for yet more complex forms capable of allopoietic accomplishments, which become scaffolds for complex intelligence and technologies, which in turn become scaffolds for durable cultural and scientific abstractions as mediated by symbolic language and inscription. The accumulation and transgenerational transmission of conceptual and technical abstractions through linguistic notation in turn amplifies not only the aggregate intelligence of the allopoietically sophisticated population but also its real capacity for transforming its world for autopoietic replication. Language began a great acceleration, and another threshold was passed with symbolic forms, another with coded notation, another with the mechanical capture of condensed energy, and another with the artificialization of computation.

The very earliest forms of artificialization are driven by primordial forms of intelligence and vice versa. Each evolves in relation to the other to such a degree that from certain perspectives they could be seen as the same planetary phenomenon: autopoietic matter capable of allopoiesis (and technology) because it is intelligent enough, and capable of devoting energy to intelligence because it is allopoietic. Regardless, intelligence is at least a driving cause for the technological and planetary complexification of artificialization as an effective process. The question that demands to be asked is then: what happens when intelligence, the driving force of artificialization for millions of years, is itself artificialized? What is foreseeable through the artificialization of artificialization itself?

It is perhaps not altogether surprising that language would be (for now) the primary scaffold upon which artificialized intelligence is built. It is also assured that the artificialization of language will recursively transform the scaffold of language upon which it depends, just as the emergence of coded language affected social intelligence, the scaffold upon which it depends, and just as intelligence affected allopoietic artificialization, the scaffold upon which it depends, and so on. Ultimately, the long arc of bidirectional recursion may suggest that the emergence of increasingly complex artificialized intelligence will affect the direction of life itself, how it replicates and how it directs and is directed by evolutionary selection.

The most pressing question now is: “For what is AI a scaffold? What comes next?”

There is no reason to believe that this is the last scaffold, that history has ended, or that evolution has reached anything but another phase in an ongoing transition from which each of our lives takes momentary shape. Because this isn’t ending, AI is not the last thing, just as intelligence was a scaffold for symbolic language, which was a scaffold for AI. AI is a scaffold for something else, which is a scaffold for something else, which is a scaffold for something else, and so on, and so on. What we’re building is a scaffold for something unforeseeable. What we today call AI replicates both autopoietically and allopoietically; it is “life” if not also alive. It would be an enormous conceptual and practical mistake to believe that it is merely a “tool,” which would imply that it is separate from and subordinate to human tactical gestures, that it is an inert lump of equipment to be deployed in the service of clear compositional intention, and that it has no meaningful agency beyond that which it is asked to have on our provisional behalf.

It is, rather, like all of the various things that make humans human and make life life, a complex form that emerges from the scaffolds of planetary processes that precede it, and it is a scaffold for another something yet to come, and on and on.

None of this implies the disappearance or absorption of humanity into a diminished horizon any more than the evolution of language leads to the disappearance of autopoiesis. Scaffolds not only live on, but when successful they tend to be amplified and multiplied by what they subsequently enable.

Part of the ethics of philosophy is that it’s never, ever done, that the best thing you can hope to build is something that later on becomes part of something else. You build something that others can build with later on. Machine intelligence is evolving, and evolving through processes that are roughly like evolution. It will reveal something and then become something else.


Life / Intelligence / Technology

At the same moment that we discover that life, intelligence and technology have always been more deeply interconnected than we realized, we learn to artificially combine them in new ways. The technical reality drives the paradigmatic shift, which drives the technological shift. Hybrids of living and non-living materials produce chimeras and cyborgs, from the scale of individual cells to entire cities. Minerals are folded and etched with intricate care, pulsing with electrical current, performing feats of intelligence previously exclusive to primates.

Concurrent with the physical convergence of life, intelligence and technology is a paradigmatic convergence of their definitions. Each is being redefined in relation to the other, and the definitions look increasingly similar. Life is a technology. Technology evolves. Intelligence uses life to make technology and technology to make life which makes intelligence in turn. Life is a factory for making life and technology is a factory for making technology. As both amplify intelligence, they may actually be the same factory.

What philosophy going back to Aristotle has seen as fundamentally different categories may be different faces of the same wonder. If so, then a more general science may be being born, and we are its midwives. Cybernetics foreshadowed this broader systems science that integrated humans, animals and artificial species, and set precedents for further advances in foundational theories of learning and intelligence. What comes now will surely be yet more momentous. The capacity to realize its possibilities is orders of magnitude more powerful. The atom splits again.

Computation is making this possible. In the past half century computation has become not only the primary general-purpose technology at planetary scale but also the means by which life and intelligence are both studied and engineered. It is how we understand how brains work and how we build artificial brains, how we understand how life works and build artificial life, and how we understand how technology works. Because technology evolves, computation is how we make better computation.

Computational technologies necessitate a living philosophy of computation that allows for a science that studies what computational technologies reveal about how the world works and the implications of how they transform the world in their image. This is the planetary in planetary computation: a cosmology in every sense of the word.


Planetary Reason, Philosophy and Crisis

“The decisive paradox of planetary sapience is the dual recognition that, first, its existence is cosmically rare and extremely fragile, vulnerable to numerous threats of extinction in the near and long term, and second, that the ecological consequences of its own historical emergence have been a chief driver of the conditions that establish this very same precariousness.”

The global effects of complex intelligence have put the future of complex intelligence in peril. Planetary technologies, such as nuclear fusion, are always and forever a means of both creation and destruction. That which might enable the further growth and complexification of intelligence is simultaneously that which may drive its auto-extinction. The backdrop of this permanent dilemma is the universality of planetary time and cycles within which struggles against entropy are fought. For how long? What are the preconditions for a viable coupling of biosphere and technosphere? Is complex intelligence adaptive (and one such precondition, because it can remake the planet in its image) or is it actually maladaptive, precisely because, like anthropogenic climate change, it remakes the planet in its image? More importantly, what would make it adaptive? How might planetary intelligence steer itself toward its own survival?

All philosophy and all engineering are intrinsically planetary, not only because of their ambition but also because of their origins and consequences. Engineering must be guided by this perspective, just as philosophy must be renewed by a direct collaboration with the planetary technologies that not only extend the reach of intelligence, but which reveal and demystify intelligence as it looks in mirrors of its own making.

Author

Benjamin Bratton

Benjamin Bratton is Director of the Antikythera program at Berggruen Institute. He is Professor of Philosophy of Technology and Speculative Design at the University of California, San Diego. His research spans philosophy of technology, social and political theory, computational media & infrastructure, and speculative design. He is the author of several books including The Stack: On Software and Sovereignty, The Terraforming, and The Revenge of the Real.

Design

Channel Studio

Channel is a brand and user experience innovation company combining intensive technology with formal design expertise.

Will Denton · Principal Director
Luiza Dale · Designer
Gabriel Mester · Designer
Mianwei Wang · Designer
Lukas Eigler-Harding · Lead Developer
Yuri Bultheel · Developer