Artifices of Intelligence/Intelligence of Artifices

[Figure: example cellular automata]

The following is the text of a talk I gave on Derrida’s La Vie La Mort Seminar in February 2020 at a conference in Ann Arbor organized by Sergio Villalobos-Ruminott.

Thank you all for being here; I would especially like to thank Sergio for the invitation to speak. I have to admit that I am both honored and a little bit nervous about being the first speaker; perhaps it is not the case, but I feel a certain responsibility to set a tone for our conference. While I hope to do my best in this regard, I have to say I am even a little more nervous about my ability to do so given my present circumstances. When I began planning for this talk a few months ago, I set myself what appeared to me a rather ambitious research question, namely, to ask what we are to make of this question of life-death and the program in light of the advances and excitement (likely over-excitement) around artificial intelligence. But, and apologies for sharing such personal details, in the intervening weeks I have had to deal with a family medical situation in which our child has been in and out of the hospital dealing with frequent seizures. So, I hope you will accept my apologies that my talk will have to be more speculative, less well cited, and more fragmented, and that it will reframe some of the work present in my book on deconstruction and cyberwar rather than rigorously pursue new research. Yet, I will still attempt to at least chart for you all where I see this research going and some of the questions I believe it raises in the context of Derrida’s meditations on life-death.

Moreover, though I fear I can do little with these thoughts at the moment, on a personal level I have been thinking deeply about the questions raised by this seminar and the work many of you in this room have done around its questions. I have been searching personally for metaphors for these organic neurons, which may or may not bear similarities to the artificial or computational neurons that have become the workhorses of modern artificial intelligence, and for their networks, energetics, discharges, stimulations and ultimately storms, seizures and related malfunctions, that is, their ceasing to function or their functioning differently. And it has struck me that there are many questions to ask about this economy of energetics, and its différances, alongside Derrida’s writing on Freud’s attempts to describe these systems, and ultimately about the extent to which these energetic economies are best understood as a form of arche-writing or, perhaps more clearly, as that arche-violence with which Derrida also offers to name the same movement of différance.

I would like to start with four epigraphs of sorts, each related to attempts to produce or reproduce life or intelligence computationally, which at our present moment amounts to producing it in silicon.

A complete discussion of automata can be obtained only by taking a broader view of these things and considering automata which can have outputs something like themselves. Now, one has to be careful what one means by this. There is no question of producing matter out of nothing. Rather, one imagines automata which can modify objects similar to themselves, or effect syntheses by picking up parts and putting them together, or take synthesized entities apart. In order to discuss these things, one has to imagine a formal set-up like this. Draw up a list of unambiguously defined elementary parts. Imagine that there is a practically unlimited supply of these parts floating around in a large container. One can then imagine an automaton functioning in the following manner: It also is floating around in this medium; its essential activity is to pick up parts and put them together, or, if aggregates of parts are found, to take them apart.
This is an axiomatically shortened and simplified description of what an organism does. It’s true that this view has certain limitations but they are not fundamentally different from the inherent limitations of the axiomatic method. Any result one might reach in this manner will depend quite essentially on how one has chosen to define the elementary parts.
– John von Neumann, “Re-evaluation of the problems of complicated automata—problems of hierarchy and evolution”, 1949

We consider Conway’s latest brainchild, a fantastic solitaire pastime he calls ‘life’. Because of its analogies with the rise, fall and alternations of a society of living organisms, it belongs to a growing class of what are called ‘simulation games’–games that resemble real-life processes…The basic idea is to start with a simple configuration of counters (organisms), one to a cell, then observe how it changes as you apply Conway’s “genetic laws” for births, deaths, and survivals…You will now have the first generation in the life history of your initial pattern…Mistakes are very easy to make, particularly when first playing the game. After playing it for a while you will gradually make fewer mistakes, but even experienced players must exercise great care in checking every new generation.
– Martin Gardner, “Mathematical Games: The fantastic combinations of John Conway’s new solitaire game ‘life’”, 1970.

It seems to me that today artificial intelligence represents the fourth narcissistic blow to humanity. Recall Freud’s famous statement [recounting the historical blows dealt by the sciences to “the naive self-love of men”] First Copernicus, followed by Darwin, then psychoanalysis, and now the fourth blow: the capturing of intelligence by its own simulation, exceeding and transcending it.
– Catherine Malabou, Morphing Intelligence, 2018

If a technology tested in Europe proves to be too opaque or fails to comply with the rules in place, regulators may order the company to reboot the AI and make it learn from scratch based on different data.
– Valentina Pop, “Big Tech to Face More Requirements in Europe on Data Sharing, AI”, Wall Street Journal, February 19, 2020

[Figure: example cellular automata]
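
Since Gardner’s epigraph describes Conway’s “genetic laws” only in prose, here is a minimal sketch of those rules in code. The wrap-around 10x10 grid, the glider pattern, and the implementation details are my own illustrative choices, not anything taken from Gardner’s or Conway’s texts.

```python
# A minimal sketch of Conway's "genetic laws" as Gardner describes them:
# a dead cell with exactly three live neighbors is born, a live cell with
# two or three live neighbors survives, and every other cell dies.
import numpy as np

def step(grid):
    """Compute one generation of the Game of Life on a 0/1 array."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    births = (grid == 0) & (neighbors == 3)
    survivals = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    return (births | survivals).astype(int)

grid = np.zeros((10, 10), dtype=int)
glider = [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]  # a pattern that "walks" across the grid
for y, x in glider:
    grid[y, x] = 1

for generation in range(4):
    print(f"generation {generation}, live cells: {int(grid.sum())}")
    grid = step(grid)
```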

If the, or at least one, question posed in La Vie La Mort could be summarized as: how are we to confront this moment in the history of science when the concepts of writing, of trace, of différance become necessary to advance a scientific project that still clings to metaphysics, especially to a metaphysical conception of life, then what I hope to do today is to open a corollary question, or perhaps the inverse of this question: namely, what happens to metaphysics when life, and its products, such as reproduction and intelligence, can be expressed as a program? Alongside the question of understanding life as program, we confront the question of the program and the computer being able to “give birth” to life, to a form of life, of life-death, that, if we take it seriously, immediately calls into question the metaphysical notion of life, of intelligence, of rationality and of the human.

To start from the last of these epigraphs, on artificial intelligence, I will admit that up until the point I sat down to write this paper I fear I had not taken it seriously enough. I have tended in many ways to dismiss it variously as dimensionality reduction, as statistics without rigorous hypothesis testing, as unable to pose or even answer actually interesting questions, etc. etc. But I realize now that perhaps these have all been defenses against including the advances of artificial intelligence within the list of Freud’s humiliations of man, as Catherine Malabou does. Perhaps it is as such a humiliation that it could shake our very idea of intelligence, and especially its relationship to humanity and to a metaphysical understanding of life. My intention in this talk is to focus on this question of artificial intelligence, that is, an intelligence that is artificial in so much as it is ‘programmed’, and on its potential affront to metaphysical concepts of life, intelligence, etc., as well as its simultaneous threat to reify metaphysics to the nth degree, and then to return to this question of the program, especially as it is presented in Derrida’s later work.

To begin, it is important to point out, as Malabou does and many others along with her, the long racist and eugenic history and nature of many discourses around intelligence, associating intelligence with what Sylvia Wynter calls the “overrepresentation” of the human, that is, with its white, western, ableist and male conception. While this can be well traced through the history and discourses of IQ testing, it can also be seen at work in the infamous Turing test, in which true artificial intelligence is measured not by some deep understanding that always appears ineffable when one tries to determine its existence, but rather by the ability of a computer to fool a human judge into thinking that it too is a human.

Thus, we should ask which humans would be able to pass this test, which ultimately requires a certain acumen in the language of the examiner. Moreover, this use of language often tends to mean not so much the highest expression of rationality as the Enlightenment dreamt it, but rather a certain default of sense and of reason that marks the human use of language. It would not be difficult to show that this concept of intelligence, as represented in both the IQ test and the Turing test, is thoroughly metaphysical and anthropocentric, insisting that “man” is both the possessor and the measure of intelligence.

To take seriously the advancements of artificial intelligence, especially as a humiliation to man, must involve in a sense abandoning, or at least thinking beyond, such a conception. At the same time the discourses around such developments, especially those that adhere to certain ideas of progress, reveal the deepest dreams of metaphysics, specifically those of Hegel that Derrida cites in the seminar, where life, or here intelligence, should in progressing transcend the individual and with it death.

In this way, we touch on a more general concern that is at work in computation and programming in general, especially as they simulate and produce the products of life and the sciences writ large, namely their simultaneous allegiance to and foundation in a logocentric metaphysics and their threat to outpace that metaphysics. Derrida starts the Grammatology stating that logocentrism controls in one and the same order, amongst other things, “the concept of science or the scientificity of science—what has always been determined as logic—a concept that has always been a philosophical concept, even if the practice of science has constantly challenged its imperialism of the logos, by invoking, for example, from the beginning and ever increasingly, nonphonetic writing.” Likewise, Francesco Vitale says of Derrida’s work in the seminar that he “verifies the deconstructive effects of the recourse to the theory of information and in particular to the notions of “programme” and “writing” in the context of the life sciences. It is worth remarking, however, that, from the first pages, Derrida seems to be very suspicious about the alleged emancipation of biology from philosophy.” I think this is ultimately what is at stake both in biology and in the simulation and rearticulation of life and intelligence in computer science. In short, the turn to arche-writing in the biological sciences at stake in La Vie La Mort and the more recent turn to programming, statistics and networks in opaque machine learning algorithms threaten, in a sense, simultaneously to cement and to oust metaphysics.

In this regard, I have been struck for a number of years now by a single sentence about cybernetics, in the Grammatology; Derrida says in the form of a conditional:

“If the theory of cybernetics is by itself to oust all metaphysical concepts — including the concepts of soul, of life, of value, of choice, of memory — which until recently served to separate the machine from man, it must conserve the notion of writing, trace, written mark, gramme or grapheme, until its own historico-metaphysical character is also given up.”

I have chosen to translate this last word as “give up”, especially in the sense of turning oneself in to the authorities, rather than “expose” or “denounce” as Spivak has, for a number of reasons that I should like to make clear to you.

This giving itself up suggests that what is at stake is simultaneously admitting what has taken place, abandoning a former way of being, that is, saying to oneself “I can’t go on this way anymore; life on the lam is no longer possible,” and ultimately taking responsibility. But let us be clear: this taking responsibility does not necessarily mean responsibility in a personal or moral sense but rather responsibility in the face of a situation, and here an interpretation gets especially perilous, but perhaps what cybernetics gives itself up to is even a certain authority, which here may be metaphysics qua science itself; we imagine then, in a way, the police, who have always in a way been synonymous with metaphysics, turning themselves in to themselves. When one gives oneself up in this way, one admits at minimum only that one can no longer avoid the consequences of what has happened, or at least of what one has been accused of, regardless of whether or not one feels the law has actually been broken or that the law is just. We of course here risk a metaphysical accounting of this abandonment, but that is precisely what is at stake in this sentence: metaphysics qua cybernetics giving itself up, exposing and denouncing itself, abandoning its life on the lam.

Moreover, what strikes me about this sentence is that Derrida says that cybernetics would have to preserve writing, trace, etc. up until that moment it gives itself up. Remembering that this is all under a conditional “if”: were cybernetics capable of fulfilling this conditional in the affirmative, on the other side of giving itself up it could perhaps even go further than Derrida and do away with these quasi-metaphysical concepts that teeter on the edge between metaphysics and some elsewhere while remaining trapped in its enclosure. What it seems to me Derrida is suggesting here is the possibility that cybernetics could escape metaphysics, that it bears within it the possibility of a non-metaphysical science. We may even suggest the hypothesis that what ultimately led to the rapid abandonment and collapse of cybernetics at the forefront of science was that it drew, in its later iterations, too close to this precipice, but such hypotheses must be held in reserve for the time being.

To continue on this path of inquiring into whether the advancement of “the program” as a deconstructive force would allow cybernetics or the biological sciences to outpace metaphysics, recognizing full well that it makes no promises to do so, requires then an accounting of the function of such a program. I have taken this question of the program up at length in my book, so I will not belabor the point, but it should be noted in this regard that at points, especially I think in his later work, and inheriting in some ways Heidegger’s antipathy to cybernetics and the program, Derrida insists that the program in its calculability forecloses the event and the possibility of that which arrives. For him, the unconditionality of the openness to what arrives under the name of deconstruction marks “the end of the horizon, of teleology, the calculable program, foresight, and providence.” That is to say that the program is too machinic, too determined, too predictable and thus in a way captured within the enclosure of metaphysics.

The programme is dual for Derrida; in the seminar he states:

“The heterogeneity of causes and effects, the non-deliberate character of changes in programme, in a word, all that places subjects within the system in a situation of unconscious effects of causality; all that produces effects of contingency between the action coming from the outside and the internal transformations of the system, all of that characterizes the non-genetic programme as much as the genetic one.”

Of course, Derrida is aware of this machinic force that haunts both writing and deconstruction and at times writes directly of it, offering it a proper name, but a proper name that has already been named as the opposing force of a radical other who would preclude the programmatic. In relation to Joyce’s Ulysses, referring to two Elijahs that inhabit the work, he states:
No longer Elijah the grand operator of the central, Elijah the head of the megaprogramotelephonic network, but the other Elijah, Elijah the other. But this is a homonym, Elijah can always be either one at the same time, one cannot call on one without risking getting the other.

In this sense deconstruction both requires this programmatic other and surrounds itself with a protective discourse that aims to resist the seduction of the programotelephonic Elijah that would make of deconstruction a machine that repeats the machine of metaphysics. One always risks the mutual contamination of these two Elijahs and in response deconstruction at places attempts to refuse the machinic Elijah. While Elijah the grand operator is never able to exclude or refuse Elijah the other, any who would ally themselves with the other must carefully avoid this force of alterity itself being haunted by a megaprogramotelephonic network. Derrida says: “I hear this vibration as the very music of Ulysses. The computer today cannot enumerate these interlacings, despite all the many ways it is already able to help us. Only a computer which has not yet been invented could answer that music in Ulysses.” For Derrida at the point of this text, no extant programming language or computer could ever integrate the beautiful ambiguity and musical vibrations of writing; only a computer to come.

Retaining the distinction between writing and program, and with it a distinction between calculation and the incalculable, between literature and the literal writing of programming, has allowed a certain reading of deconstruction to conserve a relationship with authorial intent; as long as the program is neither deconstructible nor a force of deconstruction, one can maintain, even if in covert fashion, that deconstruction must proceed exclusively as an authored theoretical activity. Minimizing the nonhuman agency of the program, and with it arche-writing, produces a corollary effect, reifying the sanctity of the subject who writes in human languages. This distinction is thus central to a deconstruction that would attempt to secure an authorial importance while disavowing the threat the sciences, and maybe even life itself, may portend to metaphysics.

Derrida, in speaking of animals, hints that the contamination of writing by animals and machines has always been central to his work: “This animal-machine has a family resemblance with the virus that obsesses, not to say invades everything I write. Neither animal nor nonanimal, organic or inorganic, living or dead, this potential invader is like a computer virus. It is lodged in a processor of writing, reading and interpretation.” Perhaps then all of deconstruction has been a question of this contamination between ‘natural’ languages and programs, of the machine and the human (and the animal somewhere between them).

Ultimately the program, especially as advanced by cybernetics and then transmuted into the advances of artificial intelligence, threatens in a way both to mark the ultimate victory of metaphysics, the pure presence of everything as data for computation, simulation and prediction, and to mark its abandonment, the end of that adventure that associated technics with logos, as Derrida says, its move beyond fixity and presence into a constantly shifting economy of différance, that is to say arche-writing and arche-violence as the non-ground of a post-metaphysical science, one that must first take up the work of deconstruction. Of course it is too early to even begin to sketch what such a science would look like; I think if we are anywhere, we are only at a moment where it may be possible to imagine whether or not such a science could be imagined.

To do so the language of writing, trace, grapheme, etc. must be thought as right on the edge of this deconstruction, a deconstruction which perhaps, or perhaps not, cybernetics threatened to traverse. As Derrida says, “The movements of belonging or not belonging to the epoch are too subtle, the illusions in that regard are too easy, for us to make a definitive judgement.” That is to say that it cannot be a choice simply between a deconstructive program or computational science and a metaphysical one, but rather the arrival and threat of both at once. I am reminded in this way of how Derrida ends Structure, Sign and Play, contrasting a sad nostalgic Rousseauist interpretation with a joyful affirmative Nietzschean one but concluding that we are beyond the realm of being able to choose one or the other and rather are glimpsing the conception, formation and labor of a process of giving birth to the unnameable and the non-species, “in the formless, mute, infant, and terrifying form of monstrosity.” If metaphysics as science, especially the science of life, of the social, of the human and its others, is to give itself up, it can only be in the form of the formless of the monstrous and unnameable.

– Ann Arbor, February 20, 2020

The Computational Real

I just came across Sungyong Ahn’s new article, “Shooting a metastable object: targeting as trigger for the actor-network in the open-world videogames”, about the use and creation of objects in video games. It is an excellent exploration of metastability and individuation à la Simondon in relation to objects in video games, especially open-world and augmented-reality games. But for that you should read the article.

One of the reasons I came across this article is that it cites an essay I wrote on the invention of the object as a computational concept. Specifically, Ahn cites my use of the term ‘computational real’, described there as “bits or mere differences of voltage fluctuating on the system boards,” which higher-level languages then “translate these ephemeral signals into human-understandable data structures and their functional relations.”
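
To make the cited phrase a little more concrete, here is a small illustration of my own (not from the essay Ahn cites): the “same” bits only become human-understandable data structures once a higher-level language reads them through a particular format. The byte values below are arbitrary.

```python
# A small, invented illustration of the gap the term "computational real"
# points to: the same four bytes are only "data" in a human sense once a
# higher-level language imposes a structure on them.
import struct

raw = bytes([0x40, 0x49, 0x0F, 0xDB])  # at the level of the board, just voltage differences

as_float = struct.unpack(">f", raw)[0]   # read as a big-endian 32-bit float
as_uint = struct.unpack(">I", raw)[0]    # read as a big-endian unsigned integer
as_shorts = struct.unpack(">hh", raw)    # read as two 16-bit signed integers

print(as_float)   # ~3.1415927 -- "pi", if we decide these bytes are a float
print(as_uint)    # 1078530011 -- or a large integer, if we decide otherwise
print(as_shorts)  # (16457, 4059) -- or two smaller numbers
```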

In reading Ahn’s article, I realized this was a concept that I relied on heavily in my article but did not really explain much at all and I thought it might be worth quickly spelling out what I meant. While I did not use the term directly, I developed the idea behind it in much further depth in my recent book Deconstruction Machines about cyberwar, specifically in this footnote on the real in Lacan as it can be read through Kittler and Derrida:

One could define the Lacanian real in multiple ways. Zupančič argues (in an article building on and taking issue with Badiou’s interpretation of the real), “Here, representation as such is a wandering excess over itself; representation is the infinite tarrying with the excess that springs not simply from what is or is not represented (its ‘object’), but from this act of representation itself, from its own inherent ‘crack’ or inconsistency. The Real is not something outside or beyond representation, but is the very crack of representation,” thus explicitly denying that the real is some thing or material that escapes symbolization. Alenka Zupančič, “The Fifth Condition,” in Think Again: Alain Badiou and the Future of Philosophy, ed. Peter Hallward (London: Continuum, 2004), 199. Likewise, Žižek suggests that this real is a void: “the Real Thing is ultimately another name for the Void.” Slavoj Žižek, “Welcome to the Desert of the Real,” in The Universal Exception, ed. Rex Butler and Scott Stephens (London: Continuum, 2006), 267. All of these descriptions suggest important elements of this real that resist or crack the symbolic. Despite these interpretations, Kittler, in Gramophone, Film, Typewriter, 16, suggests in a more media-centric manner, “Machines take over functions of the central nervous system, and no longer, as in times past, merely those of muscles. And with this differentiation—and not with steam engines and railroads—a clear division occurs between matter and information, the real and the symbolic.” Despite the differences between Kittler’s material description of the real and Zupančič’s and Žižek’s definitions, there is a way of understanding the real that would suggest that the real as symbolic void and the real as material are not so far apart. First, it is important to note that the symbolic is always a trace, inscribed outside of the body, written into a material support. Thus, in a physical sense, the symbolic is always material. It is inscribed within our mediatic world. Second, what cracks (in a Derridean sense) the symbolic is perhaps its always already being inscribed in insecure media/materiality. Representation, in Zupančič’s sense, exceeds itself precisely because it exists outside of itself, in inscription. The real is not, then, material in the simple sense of being beyond representation; rather, the real is material insomuch as the real-materiality of the symbolic precludes the possibility of it totalizing itself and guarantees its vulnerability.

In many ways, I am still wrestling with these questions, but I think it is valuable for both media studies and Lacanian theory to stress the materiality of the symbolic and at the very least to explore the material reality of inscription as one possible site for the instability and cracking of the real.

Neo-structuralism and Event

[Illustration: visualization of a network]

This is the text of my talk at Derrida Today in Montreal from May 2018:

What I hope to interrogate with you all, and with Derrida, today is an event. An event whose specific history and historical development I will have to leave to the side in the interest of time, but one that I hope you will nonetheless recognize. This event is one that corresponds with the rise of machine learning, data analytics, the data deluge, “the fourth paradigm” of science as Microsoft has put it, “the end of theory” as Chris Anderson has described it.

In order to do so I will, like Lévi-Strauss’s bricoleurs, need to in due course invent a few engineers; so I offer, in advance, my apologies to those implicated. But they all seem now to have start-ups or to be Associate Deans for research. So, I think they will be fine, but still, my apologies.

Just as with structuralism, the question of the event will be both central to and de-centered in the thought of this event. Derrida begins Structure, Sign and Play by calling the rise of structuralism an event, but warning that the very concept of the event is one that structural thinking seeks to “reduce or suspect.” The event appears as a threat to structurality, especially if we think of the event in the quasi-messianicity that it takes on in Derrida’s later work, because the event, that which arrives unpredictably from elsewhere, is effective in so much as it disrupts, rearranges and reconfigures structure.

I would like to suggest that an especially productive term for this impulse or this event would be neo-structuralism. With this term I aim to argue both that these discourses take part in a certain historical amnesia regarding structuralism and cybernetics and that something has nonetheless clearly taken place in the history of thought, corresponding with the increasing reliance, for social and scientific knowledge production, on algorithms trained on relatively large datasets. Ultimately what I would like to do, in a possibly roundabout way, is to articulate what precisely constitutes this neo-structuralist event, how it relates to and differs from classic structuralism, and to argue for the applicability of the Derridean critique of structuralism to this new form, especially as it is articulated in Structure, Sign and Play in the Discourse of the Human Sciences.

Structure, Sign and Play is an incredibly dense text with a number of elements at stake. For the sake of orientation, and with apologies to the depth and nuance of the text, let me offer a very sparse and overly simple summary of what interests me most vis-à-vis these questions: Derrida argues that the history of structure has been the history of the replacement of transcendental centers: God, man, etc. Structuralism is unique in that it de-centers structure, insisting on its free play, but Derrida tells us that its attempt to erase the center always gets caught back in the game and ends up under the sway of a now absent center.

But to turn to our present moment, and offer an explanation of neo-structuralism, I believe there is no surer guide than the clearly hyperbolic and likely wholly and technically incorrect words of Chris Anderson. But, for all that he gets wrong, I think Anderson in his 2008 piece in Wired Magazine entitled “The End of Theory” (as Derrideans we have perhaps heard those words or that threat too many times to count) most fully encapsulates and enunciates the ultimate desire or even drive of machine learning and data analysis. He says:

Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves. The big target here isn’t advertising, though. It’s science…faced with massive data, this approach to science — hypothesize, model, test — is becoming obsolete…There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models.

In sum, Anderson’s dream is one in which we throw enough data into a computer running some machine learning algorithm and it offers us, instead of causal theories, predictions with associated probabilities for individual questions. It no longer matters how the economy actually functions if we can predict what it will do tomorrow. The same with medicine, physics, public policy, perhaps even literature, etc.

In this way, many of these systems go beyond structuralism in effacing any sort of a center or even a ground. We can take the example of Google’s PageRank algorithm, whose specifics we do not entirely know, but whose basic operation was published in an academic paper by Larry Page and his collaborators. One computes the probability that a random web ‘surfer’ clicking on random links and visiting random pages would end up at any given page. This probability then becomes a measure of the page’s importance. In essence, what we end up with is a definition of importance that is wholly recursive. What makes a page important is that many other important pages point to it, for that is precisely how our ‘surfer’ will end up on a given page. In lieu of some deep theory of structure, we get simulations that run over and over until they converge or stabilize on some passable solution to the problem at hand, always deferring what we mean to some other moment.
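
To make the recursion concrete, here is a toy sketch of this random-surfer logic, not Google’s production system (whose specifics, as noted, we do not entirely know). The four-page link graph is invented for illustration; the 0.85 damping value is the standard figure from the published description.

```python
# A toy sketch of the random-surfer recursion: a page's score is the
# probability that a surfer who mostly follows links, and occasionally
# jumps to a random page, ends up there. Hypothetical link graph below.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = sorted(links)
n = len(pages)
damping = 0.85                       # probability of following a link rather than jumping
rank = {p: 1.0 / n for p in pages}   # start from a uniform guess

for _ in range(100):                 # iterate until the scores stabilize
    new_rank = {p: (1 - damping) / n for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

print({p: round(score, 3) for p, score in rank.items()})
# "c" comes out most important because important pages point to it --
# a definition that never leaves its own circle.
```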

Instead of categories defining our relationship to the world, these approaches attempt to operationalize momentary correlations, what my colleague John Cheney-Lippold calls “measurable types” (e.g. ‘importance’ in the recursive way PageRank defines it). Geoffrey Bowker puts the condition quite poetically, stating: “Thus, in molecular biology, most scientists do not believe in the categories of ethnicity (Reardon, 2001)—and are content to assign genetic clusters to diseases without passing through ethnicity…recommender systems work through correlation of purchases without passing through the vapid categories of the marketers—you don’t need to know whether someone is male or female, queer or straight, you just need to know his or her patterns of purchases and find similar clusters.”
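
As a minimal illustration of the correlation-of-purchases logic Bowker describes, the sketch below computes similarity directly from invented purchase patterns; no demographic category ever enters the calculation. It is my own toy example, not any vendor’s actual recommender system.

```python
# Similarity is computed from what people buy, and no demographic
# category (gender, ethnicity, etc.) appears anywhere. Data is invented.
import math

purchases = {  # hypothetical user -> {item: quantity}
    "user_1": {"trail shoes": 3, "energy gels": 5, "headlamp": 1},
    "user_2": {"trail shoes": 2, "energy gels": 4},
    "user_3": {"diapers": 6, "wipes": 4, "onesies": 2},
}

def cosine(a, b):
    """Cosine similarity between two sparse purchase vectors."""
    dot = sum(a[item] * b[item] for item in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine(purchases["user_1"], purchases["user_2"]))  # high: same "measurable type"
print(cosine(purchases["user_1"], purchases["user_3"]))  # 0.0: no overlap in purchases
```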

But Bowker, dealing rather succinctly with both Anderson and Bruno Latour, argues that there are two main problems with this approach. The first is that categories still matter. These categories still define our identities. We can cite Judith Butler’s work on performance here to quickly note that we all still in some capacity perform some concept of gender that we identify with—and these algorithms, by showing us and guiding us relative to our previous patterns, push us even further into performing that which we are already performing. Moreover, we require the categories of race and sex in order to be able to point to the existence of racism and sexism. The occlusion of these categories offered up by the high priests of machine learning results in the situation outlined so clearly in Safiya Noble’s new book Algorithms of Oppression or Virginia Eubanks’ Automating Inequality. These algorithms can very easily reproduce the categories they claim to efface.

Second, and still with Bowker, the archive from which this data is drawn is always selective. He cites Archive Fever on this point to argue that the data of big data is never the world, but rather a very small selection of it. To compute the world in whole would require the entirety of the world; and hence every computable dataset is an archive, a finite selection of attributes and data points. Thus, this selection always requires a model or a theory—those who claim to not have one merely have bad and implicit theories. As Bowker and others have put it: “there is no raw data.” Herein lies the fundamental contradiction of neo-structuralism: while it claims to do away with models and theories, it always falls back into theories, perhaps at the moment it offers up a TED talk but also at the moment it commences gathering data.

In turning this work immediately into action–“who cares why these things correlate, but let us act as if they do”–neo-structuralism tries to go even one step further than structuralism in neutralizing the event that would disrupt it. Algorithms are often designed to privilege more recent data over older data—even, for the sake of the size of the archive, deleting old data—but what this ultimately means in terms of actual function is that an event can, at least theoretically, be absorbed, be managed. To offer a personal, if banal, example from the realm of online shopping: one could suddenly lose one’s interest in ultra marathons and develop a newfound appreciation for the nuances of diaper sizing, brands and types.
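
One simple, generic way such privileging of recent data is often implemented is an exponentially weighted average, sketched below with invented “interest” values. This is an illustration of the general technique, not any particular platform’s algorithm.

```python
# An exponentially weighted average: each new observation partly
# overwrites the accumulated past. The daily "interest" values are invented.
def update(estimate, observation, decay=0.4):
    """Blend a new observation into the running estimate.
    A higher decay forgets the past faster."""
    return (1 - decay) * estimate + decay * observation

# Strong interest in ultramarathon gear, then a sudden shift to nothing
# (the diapers now absorb the clicks elsewhere).
observations = [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]

estimate = 0.0
for day, obs in enumerate(observations):
    estimate = update(estimate, obs)
    print(f"day {day}: estimated interest = {estimate:.2f}")
# Within a few "days" the old taste is all but forgotten: the event is absorbed.
```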

Machine learning and algorithmicity are constantly being fine-tuned to strike an exceptional balance between memory and event; to be able to act “in real time” without giving in to the Humean concern that perhaps it is equally likely the sun will rise or not rise tomorrow. But, as these forms of knowledge production attempt to capture the event, two things happen. First, they are confronted with the possibility of an event that would be more than internal to their operation (such as a change of tastes or an outbreak of a new disease), an event that overflows the algorithm; an event perhaps of the kind that the financial crisis of 2008 represents, one that would break the frame of the algorithm; and even if that particular event was reabsorbed into global capitalism, another one may not be.

Second, and most importantly for our purposes, there is a tendency for something else to get caught in the capture of the event. Derrida says of structuralism:

This disruption was repetition in all of the senses of this word. From then on it became necessary to think the law which governed, as it were, the desire for the center…The absence of the transcendental signified extends the domain and the interplay of signification ad infinitum.

We find ourselves, just as under structuralism, in want of categories, in want of a center that would hold the freeplay of these algorithms together. Derrida says this of those structures that used to have a center, God, man, the empire, etc.: “coherence in contradiction expresses the force of a desire.” This desire is precisely what structuralism and neo-structuralism repeat. In doing away with this center it finds itself governed by its logic as an absence, all the more powerful for having been repudiated.

Nowhere is this more clear than in a recent project by Matt Jockers, wherein he used sentiment analysis to detect plot structures in tens of thousands of books, a project that was picked up with some interest by the national media. He concluded, and this is from an interview:

“I did some distance similarity metric calculations and machine clustering to see if I could identify archetypal plot shapes.” “The short answer is, yes I did, and there’s six or sometimes seven.” That little ambiguity, Jockers explained, is because the data collecting and sorting technique “involves picking at random from 50,000.” “There’s six about 90 percent of the time,” Jockers said. “Ten percent of the time, the computer says there’s a seventh [plot shape].”

Let me just note as an aside: Annie Swafford did some amazing investigation and experiments with Jockers’ code and demonstrated that most of what he was detecting was not even in the data he collected but rather what are known as ringing effects, which result from the low-pass filters he was using to attempt to smooth these structures. I’m happy to explain exactly what that means in the Q&A or after to anyone who is curious, but the important point is that the categories he thought he saw were largely artifacts, not of his archive, but of his tools. Moreover, we can see here precisely Bowker’s and Derrida’s point from above: this is an amazing reduction and occlusion of the archive—all of these novels, which are only English novels, are in essence reduced to whether their words are happy or sad words.
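
For the curious, here is a minimal sketch of the kind of artifact at issue: low-pass filtering a signal by keeping only a few Fourier components introduces oscillations (“ringing”) that are not in the underlying data. The step-shaped signal and the cutoff are my own invented example, not Jockers’ actual sentiment data or code.

```python
# A flat signal with a single jump contains no oscillation, but a Fourier
# low-pass filter (keeping only a few low-frequency components) makes it
# wave up and down: the waves are artifacts of the filter, not the data.
import numpy as np

def fourier_low_pass(signal, keep=6):
    """Zero out all but the lowest `keep` frequency components."""
    spectrum = np.fft.rfft(signal)
    spectrum[keep:] = 0
    return np.fft.irfft(spectrum, n=len(signal))

raw = np.concatenate([np.zeros(100), np.ones(100)])  # flat, one jump, flat again
smoothed = fourier_low_pass(raw, keep=6)

# The smoothed curve now rises and falls repeatedly; read as a "plot shape",
# those swings come from the tool rather than from the archive.
print(np.round(smoothed[::20], 2))
```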

But what is most telling is the force of this desire that would make of the ability to perform real-time analysis a set of structural-archetypal statements; namely, that there are in the end six (well, maybe seven) plot structures; and we should note that this probabilistic remainder of the seventh category is telling of the contradiction of these methodologies. In neutralizing the event in real time, neo-structuralism offers up the illusion that the event has been mastered beyond time, beyond history, and so, despite Anderson’s promise of an end of theory, we are tempted, just this once, since the numbers look so good, to offer a theory that would be beyond event. And so we are given the six (well, maybe seven) plot structures that explain all literature. While these structures may appear antithetical to the aims of neo-structuralism, its real-time mobile and measurable types, we see here coherence in contradiction expressing the force of a desire—for a center, for stability, for a very old mode of reading and categorizing literature. Just as old centered structures used to, it offers up, as Derrida says, the “certitude anxiety can be mastered.”

One more quick example of this event in a slightly different field: In 1988, Strack, Martin, and Stepper published the results from a study that found that when participants moved their mouths into the shape of a smile or frown (by holding a pen between their teeth or lips), it affected their emotional response. This ‘facial feedback hypothesis’ has since become a well-accepted idea in modern experimental psychology. But a large-scale replication study failed to find any significant effect.

While the replication study was being carried out at multiple independent labs, Strack co-authored an article with Stroebe, a social psychologist, arguing that replication studies do not necessarily call into question the theories they claim to debunk. They argue, “There seem to be no reasons to panic the field into another crisis. Crises in psychology are not caused by methodological flaws but by the way people talk about them (67).” They argue that these replication studies do not replicate the initial conditions—e.g. the moment, culture, place, etc. I do not think it was their intent, but their argument, if taken in its entirety, seems to suggest that ‘studies’ that explore effects on multiple subjects do not tell us anything about human psychology. Rather, what constitutes the human psyche is an infinite variation of individual semantic and historical linkages, in essence the psychoanalytic subject. They never admit as much, but the path they lay out away from any replication crisis leads straight back to psychoanalysis and the importance of the case over aggregate experiments.

They are caught in the same trap as Lévi-Strauss, as Derrida outlines: “The risk I am speaking of is always assumed by Lévi-Strauss and it is the very price of his endeavor. I have said that empiricism is the matrix of all the faults menacing a discourse which continues, as with Lévi-Strauss in particular, to elect to be scientific.” Their commitment to maintaining some generality to their theory—under conditions electing to be scientific—means that the empirical result of a replication test always threatens this work.

While the scene is wholly different and their work is not machine learning per se, they attempt to repeat a variation of Jockers’ move and, I would argue, operationalize the event of neo-structuralism for their defense. They use the infinite variability of the individual data point, the algorithm’s receptivity to an event, to neutralize the event; to say in the end: “well, there is really no event, the general theory holds because the algorithm can always update itself.” There is a double movement at play here: on the one hand Anderson’s dream is accepted as reality, but on the other hand only in order to reassert the strength of a theory (a theory which amounts to, in Derrida’s words, “philosophizing badly”) that claims to have done away with the event rather than to have integrated it. There are, again to quote Derrida, “many ways of being caught in this circle.”

Neo-structuralism then presents itself as a variation of structuralism, a repetition of this earlier repetition. We can hear it clearly in what Derrida says of Lévi-Strauss:

If Lévi-Strauss, better than any other, has brought to light the freeplay of repetition and the repetition of freeplay, one no less perceives in his work a sort of ethic of presence, an ethic of nostalgia for origins…As a turning toward the presence, lost or impossible, of the absent origin, this structuralist thematic of broken immediateness is thus the sad, negative, nostalgic, guilty, Rousseauist facet of the thinking of freeplay of which the Nietzschean affirmation—the joyous affirmation of the freeplay of the world and without truth, without origin, offered to an active interpretation—would be the other side…There are thus two interpretations of interpretation, of structure, of sign, of freeplay. The one seeks to decipher, dreams of deciphering, a truth or an origin which is free from freeplay and from the order of the sign, and lives like an exile the necessity of interpretation. The other, which is no longer turned toward the origin, affirms freeplay and tries to pass beyond man and humanism, the name man being the name of that being who, throughout the history of metaphysics or of onto-theology—in other words, through the history of all of his history—has dreamed of full presence, the reassuring foundation, the origin and the end of the game.

It is here, I think, that we can fully differentiate neo-structuralism from structuralism but also see its return into structuralism. What we witness is its desire for a future moment of coherence, of convergence over a certain time-bound dataset, all the while admitting the impossibility of origin. Everything is chaos and noise, but we will seek out some small correlation that offers some advantage to our action. Thus, if structuralism always chose the former path, and failed to do away with the origin, the arche, neo-structuralism repeats this, offering instead of an origin a telos, a future of managed free-play. We do not know what created the complexity of the world, but the neo-structuralist claim is that with enough data and the right algorithms we can manage and predict the unpredictable. But right at the beginning of Structure, Sign and Play we are told: “From the basis of what we therefore call the center and which, because it can be either inside or outside, is as readily called the origin as the end, as readily arché as telos.” While neo-structuralism may substitute a telos for an arche, in the end it will have amounted to nearly the same thing.

So, perhaps we should prefer a different course: rather than repeating structuralism, to instead choose the Nietzschean affirmation, to try to pass beyond man and humanism. But, as Derrida reminds us, we are caught in this game; these systems and structures compute and think us. It is far from clear that a choice is at our disposal. Derrida again:

For my part, although these two interpretations must acknowledge and accentuate their différence and define their irreducibility, I do not believe that today there is any question of choosing…Here there is a sort of question, call it historical, of which we are only glimpsing today the conception, the formation, the gestation, the labor. I employ these words, I admit, with a glance toward the business of childbearing—but also with a glance toward those who, in a company from which I do not exclude myself, turn their eyes away in the face of the as yet unnameable which is proclaiming itself and which can do so, as is necessary whenever a birth is in the offing, only under the species of the non-species, in the formless, mute, infant, and terrifying form of monstrosity.

If this is our task, we have only just begun, or perhaps not even begun. In a perhaps slightly different scene and with slightly different technologies, we repeat a dream that we have dreamt before. I’m not sure that our course will be different or that it will necessarily be the same. But my hope is at the very least to demonstrate the value of Derrida’s thought and writing to this moment, and beyond that to suggest that in seeing how we are confronted not with an event that is wholly new, but rather with an event that is a form of repetition, we may be better prepared to await the arrival of this future, to engage the labor of its thought.

General Intellect

I have been spending a bit of time with the autonomists and their reading of Marx’s fragment on machines. I was especially struck by this paragraph from Paolo Virno’s short essay on the General Intellect:

The cynic recognises the primary role of certain epistemic models in his specific context, as well as the absence of real equivalents; he repeals any aspiration to transparent and dialogical communication; from the outset, he relinquishes the search for an inter-subjective foundation to his praxis and withdraws from reclaiming a shared criterion of moral judgement. The cynic dispels any illusion of prospects of egalitarian ‘mutual recognition’. The demise of the principle of equivalence manifests itself in the cynic’s conduct as the restless abandonment of the demand for equality. The cynic entrusts his self-affirmation to the unbound multiplication of hierarchies and inequalities that the centrality of knowledge in production seems to entail.

There is something really important going on here with the rise of computation and these systems of calculating equality.