Day 1: Wednesday 13 Dec
This talk considers the unlikely symmetries between two major rationalist endeavours of the twentieth century: Louis Althusser's epistemological recasting of Marxism and Donald Davidson's project for a truth-conditional semantics. At the heart of both projects was an effort to reconcile a thoroughly naturalistic, or at any rate deterministic, metaphysics with a robust account of truth's integral role in judgment and interpretation. Spinoza's theory of true ideas, important to both thinkers, is the prism through which this common concern becomes clear. Given the disparate contexts and putative political commitments behind Althusser's and Davidson's efforts, the conceptual parity becomes all the more salient, and thereby conspicuous. If it weren't preposterous to suggest it, one might be tempted to see in this salience the workings of a more general 'Spinoza function' discernible in the historical field of modern philosophy.
Knox Peden is the Gerry Higgins Lecturer in the History of Philosophy at the University of Melbourne. He is the author of Spinoza Contra Phenomenology: French Rationalism from Cavaillès to Deleuze (Stanford, 2014) and the co-editor, with Peter Hallward, of a two-volume work devoted to the Cahiers pour l'Analyse (Verso, 2012).
In 1979, the philosopher and critical theorist Hans Jonas published Das Prinzip Verantwortung, in which he argued, against the background of a continuing Cold War, that modern technology has radically transformed human action to the extent that it threatens our future existence. According to Jonas, that meant that a new ethics was required that takes into account the future rights of nature and the survival of human beings. Though published in English as The Imperative of Responsibility, the title is more properly translated as The Principle of Responsibility, and in fact Jonas' book was explicitly conceived as a rebuttal of Ernst Bloch's Das Prinzip Hoffnung (The Principle of Hope), published in the West in 1959. Jonas takes aim at Bloch, at that time an East German citizen, whose Marxist nature philosophy he saw as both politically and ethically problematic. Jonas believed that the proper ethical attitude towards a potentially catastrophic future was one of responsibility and the measured, democratically regulated practical use of technology that followed from that. He criticized Bloch's attitude of hope, which emphasized a fundamental unity between human beings and their environment and pleaded for an alliance between technology and nature. This paper revisits the Bloch-Jonas debate, examining the resources critical theory can offer in order to address a current conjuncture described in a recent volume as beset by a 'widespread helplessness'. Drawing on Bloch and Jonas, I argue for an ethical response to today's twin crises—environmental destruction on the one hand, the technologization of the lifeworld on the other—that combines hope and responsibility. For while an absence of hope can lead to cynicism and inertia, an absence of responsibility risks replaying a reckless, hubristic faith in the power of the human.
Ultimately, however, whereas for Jonas responsibility to the future was primarily an ethical, rather than a social or political question, I argue that today, ethics is not enough. Whether it is a question of the disciplinary use of biotechnologies, or of entrenched financial interests in exploitative energy industries, the stakes of hope and responsibility are political as much as they are ethical.
Cat Moir is a Lecturer in Germanic Studies at the University of Sydney. Her research brings an area studies lens to questions across the fields of history of ideas; critical theory; environmental humanities; the long nineteenth century; colonial histories; history and philosophy of science; literature and philosophy; and visual and material cultures.
'Since instruments are already doing politics, one question to ask is how to reorient the brutality of instrumentality away from the senseless stirring of beliefs and desires, and towards a dynamic of reasoning that affords the re-articulation—rather than the elimination—of aims' (Parisi, "Reprogramming Decisionism"). As machinic intelligence becomes increasingly "unreasonable," appeals to that which exceeds reason and instrumentality as the key to "saving us" from technological determinism are becoming outmoded. Drawing upon the work of Luciana Parisi, Ray Brassier, and Peter Wolfendale, this paper makes a case for the re-instrumentalisation (as opposed to the relinquishment) of reason in the twenty-first century. Against the politics of amelioration and the politics of anticipation promoted by Fatalism and Messianism respectively, this paper calls for a politics of intervention or Prometheanism. Prometheanism is characterised by its rejection of the foundational premises of Fatalism and Messianism. Against Fatalism, it holds that there is no pre-determined limit to our capacities for action and self-transformation. Against Messianism, it holds that there is no pre-determined limit to our capacities for thought and self-understanding. This is not to assert that limits do not exist. Rather, it is to assert that the limits currently constraining our capacities for thought and action are always contingent and mutable, revisable and updatable. In rendering that which appears necessary contingent and that which appears impossible attainable, Prometheanism presents a serious challenge to the status quo, which is all too often perpetuated by Fatalism and Messianism.
Emma Black is an RHD candidate in Philosophy at the University of Queensland, Australia. She is co-founder of the Queensland School of Continental Philosophy, and one half of >ect—an online podcast series tackling recent trends in philosophy and techno-politics. She is the author of "#Accelerate's Feminist Prototypes," Platform Journal of Media and Communications 6.2 (2015): 33-45.
Karl Popper's falsificationist demarcation of science from non-science rests on a specific ontological commitment to what he calls his 'naïve realism'. The self-conscious 'naiveté' here lies in the undefined - and for Popper largely undefinable - nature of this reality. All that his falsificationist protocol requires of it is that it be objective - that is to say, that neither our thoughts, beliefs, nor justifications bear on it or its 'contents'. It is what it is, and nothing we have to say on the matter can alter it one jot. He doesn't, I think, seek to claim that this objective reality exhausts the possibilities of the real - he does not, for example, deny the existence of subjective states. But he does claim an independence of objective reality from those states (though the reverse does not hold true). However, I think it is possible, on the basis of Popper's own arguments, to say more about the character of this objective reality, and its independence, in ways that neither invalidate nor contradict Popper's arguments but do enrich them in useful ways. I will argue that the terms of what Popper calls his 'evolutionary epistemology' require us to understand the objectivity of this external reality not in actual or material terms, but rather in those of differential relations that are external to the terms they relate. In this light, Popper's external reality, the guarantor of his falsificationist demarcation, appears not as a realm of actual things, but of virtual problems. In short, what I aim to do is to read Popper in and on his own terms, but 'behind his back' (as Deleuze would say) as a philosopher of difference. Doing so significantly reframes the image of both science and knowledge that Popper's falsificationist demarcation gives rise to, in ways that both complicate and shed light on the very demarcation it seeks to impose.
Allan James Thomas teaches in the School of Media and Communication at RMIT. His book Deleuze, Cinema and the Thought of the World (Edinburgh University Press) is due out in April 2018.
In the world of Anglophone philosophy, the work of Bernard Stiegler is mainly known for its critical reading of Derrida and for the quasi-Heideggerian thesis that the history of philosophy or metaphysics is the history of the repression of the technical. Outside that world, Stiegler is perhaps best known as a kind of media or cultural 'theorist' who elaborates a 'pharmacological' critique of digital technology and consumer capitalism. These two 'Stieglers' in fact correspond, more or less well, to the first two of the three 'conversions of the gaze' that define the trajectory of Stiegler's work: the 'technological' conversion and the 'organological' conversion, related, respectively, to a personal epokhē and a collective epokhē. But Stiegler's current work comes in the wake of a third conversion – a 'macrocosmological' conversion that has led him to call for a 'neganthropology', through which alone it would be possible to enact a reckoning with what has come to be called the Anthropocene. It is argued here that such a periodization of Stiegler's work should be understood less as a succession of 'breaks' than as a différantial process of reinscription operating within the spiral of Stiegler's continuing path of care-ful thinking.
Daniel Ross received his PhD on "Heidegger and the Question of the Political" from Monash University in 2002, and is co-director of the film, The Ister (2004) and author of Violent Democracy (Cambridge University Press, 2004). He is the English translator of many texts by the French philosopher, Bernard Stiegler, including eight books, most recently, Automatic Society, Volume 1: The Future of Work (Polity, 2016). Two further translations are forthcoming: The Neganthropocene (Open Humanities Press, 2017) and In the Disruption: How Not to Go Mad? (Polity, 2018).
Influential sections of our society are obsessed with 'technology' as such, an obsession that encompasses both rampant technophilia and rampant technophobia, sometimes even combined together (witness the concern even within the transhumanist tech industry milieu about AI). While this obsession often appears to focus on highly specific questions – the role of social media in fomenting rebellion or propagating 'fake news', the effect of backlit screens on sleep, the potential for AI to enslave humanity – I suspect that these all emanate in part from the same basic attitude, and hence I take a deflationary approach to any such concern: rather, I will argue that technology is an anthropological constant, and that our concern with the potential effects of technological innovation should proceed via a careful analysis of the actual relations between power and specific technologies, rather than normatively condemning technologies on the basis of what are essentially fantasies about their threat to a mythic non-technological natural order, or indeed normatively lauding them on the basis of a fantasy of a techno-utopia.
Mark G. E. Kelly is Associate Professor in the School of Humanities and Communication Arts at Western Sydney University. He is the author of several books on the thought of Michel Foucault, as well of Biopolitical Imperialism (Zero, 2015), and the forthcoming For Foucault: Against Normative Political Theory (SUNY, 2018).
A discursion via the techné of oikonomia, the technical rule of castration and the endless proliferation of technological devices into Foucault's claim that "Psychoanalysis has never succeeded in making images speak". Visiting along the way Warburg, Binswanger, Twin Peaks and some church fathers, this delirious investigation of the image in the 'belle époque' of the speaking body will attempt to make some small, fragile steps towards an articulation of a collective logic predicated on the real of the social bond as the inexistence of the sexual rapport. The question hiding behind any discourse of power or authority is: "Who is the dreamer?"
Robyn Adler practices psychoanalysis, philosophy and art in equal measure but with the only measure being the movements of others.
In this paper I argue that Blanchot developed a fundamentally new articulation of the relationship between being, technology and knowledge—but that this relationship is itself not fully articulated in Blanchot's writings. It emerges, rather, from the synthesis of two apparently opposed positions. The first is well known: if literature is the highest form of work, it is because the act of writing, unlike other forms of work, has no plan, or no end. The writer goes to the limit of negation, to the moment at which whatever is determinate in negation loses its power of determination. Most of Blanchot's central positions concern this moment: the writer confronts a blank neutrality, where any relation to end is given over to a state of errancy, a "wayfaring without end," transforming thought into a throw of the dice, and so on. Work, from this point of view, becomes an "extraordinary, unforeseeable innovation," impossible to conceive in advance, an experiment whose effects one cannot grasp, "no matter how consciously they were produced." And yet, at the same time, Blanchot has a persistent fascination with Valéry's emphasis on technique—a technique that would be, precisely, a form of conscious production. In this paper, I play the two positions off one another in order to determine what happens to technique at the end of work.
What makes mathematics a paradigm of truth for Platonists? What does it show exemplarily as essential to truth? The classical answer is that mathematics introduces an immanent concept of truth through the notion of a proof. In this paper, I argue that the basis of this immanence is the process in mathematics of placing a problem. To place a problem means to construct a new intellectual space on the basis of the necessary conditions for its resolution, illuminating its fundamental aspects and enabling us to say why it is the case. This new space is exterior to the one where the problem emerged, but in relation to it, allowing us to reapproach the problem. The truth of a problem then consists in the construction of a theoretical space that is larger or higher than that of the problem itself, and mathematics demonstrates that such a construction is possible.
In Aristotle, the phrase 'as fire burns' occurs at least twice. It appears in the Nicomachean Ethics, where Aristotle comments that: 'Some people think that all rules of justice are merely conventional, because whereas a law of nature is immutable and has the same validity everywhere, as fire burns both here and in Persia, rules of justice are seen to vary.' The second place is in the Metaphysics, where Aristotle asserts: 'we think the manual workers are like certain lifeless things which act indeed, but act without knowing what they do, as fire burns [οἷον καίει τὸ πῦρ], — but while the lifeless things perform each of their functions by a natural tendency, the labourers perform them through habit.' If the problematic of 'fire' is near-ubiquitous in Aristotle, this Aristotelian phrase bears in each case upon the relation between 'nature' and 'culture,' between the immutable and the variable, inherent and acquired automatism, between action and knowledge. The figure of the slave is therefore from the first a paradigm of privatized, naturalised techno-economics, a living automaton which labours by habituation —what we could in today's jargon call 'programming' — without asking or knowing the causes of its labour. This paper takes up this foundational identification in order to draw connections with our own contemporary political moment in the discourses of AI.
Justin Clemens is an author and editor of many books, including What is Education? (Edinburgh UP 2017), edited with A.J. Bartlett, and The Afterlives of Georges Perec (Edinburgh UP 2017). He teaches at the University of Melbourne.
This paper examines the distinction between knowledge and truth in Benjamin's writing. It considers this distinction in relation to two sets of ideas that seem at cross-purposes: his conception of technology as 'construction' in his writing on the nineteenth century; and his view that genuine historical knowledge belongs to the sphere of experience.
Alison Ross teaches Philosophy at Monash University in Melbourne. She works mainly in the fields of aesthetics, the history of modern philosophy and critical theory. Her books include The Aesthetic Paths of Philosophy: Presentation in Kant, Heidegger, Lacoue-Labarthe and Nancy (2007), Walter Benjamin's Concept of the Image (2015), and History and Revolution in Walter Benjamin: A Conceptual Analysis (2018).
Day 2: Thursday 14 Dec
In his classic 1977 book The Passions and the Interests, Albert Hirschman identified a distinctive argument for the civilizing effects of the market. On Hirschman's telling, the argument that commerce was a source of "sweetness, softness, calm and gentleness" (douceur) appealed to seventeenth-century Europeans who longed to be free of warring passions. This celebration of the moral virtues of commerce became difficult to sustain in a context marked by the French Revolution, the Napoleonic Wars and the social dislocation of the industrial revolution. The subsequent period was dominated by anxieties that, far from promoting morality and "civilization", the market was undermining moral virtues, disrupting traditional forms of life and producing widespread anomie, atomization and class conflict. By the twentieth century, Hirschman concludes, no observer could contend that the hopeful vision of the market had been borne out by events. In this paper, I argue, in contrast, that in the most inauspicious circumstances of the early twentieth century, neoliberalism was founded on an attempt to revive the argument that the market was a technology of pacification. For Friedrich Hayek, Ludwig von Mises, Milton Friedman, and the other members of the neoliberal Mont Pelerin Society, the promise of the competitive market lay in what they depicted as its capacity to replace violence, force and coercive colonial rule with peaceful, mutually beneficial, voluntary relations. For the neoliberals, the competitive market was not simply an efficient mechanism for the distribution of goods and services; it was an alternative political model, or rather an alternative to politics, which was recast as coercive, conflictual and, ultimately, totalitarian.
Jessica Whyte is Senior Lecturer in Cultural and Social Analysis at the University of Western Sydney, Australia and an Australian Research Council DECRA Fellow. Her work has been published in a range of fora including Contemporary Political Theory; Humanity: An International Journal of Human Rights, Humanitarianism and Development; Law and Critique; Political Theory; and Theory and Event. Her first monograph, Catastrophe and Redemption: The Political Thought of Giorgio Agamben, was published by SUNY in 2013. Her forthcoming book, Human Rights and the Rise of Neoliberalism will be published by Verso in 2018. She is currently working on a three-year Australian Research Council-funded project, 'Inventing Collateral Damage: The Changing Moral Economy of War'.
In a 2002 lecture delivered at the European Graduate School, Alain Badiou distinguished truth from knowledge by evoking (Heidegger's) aletheia and technē. He then briefly cited the French Revolution of 1792 (also known as the second French Revolution) as an example of an event which begins a truth-process. My paper seeks to elaborate on this citation, in the light of Badiou's work since that lecture and recent French Revolution scholarship. I will distinguish between, on the one hand, the revolutionary-political truths of the event of 1792 and, on the other hand, the ideologico-technical objectification of the event. I will argue, via a Badiouian reading of a crucial passage from Rousseau's The Social Contract, that the Revolution (of 1792) produced a new loyal subject in the form of the people, despite the reactive and obscure beliefs in the technicity of the so-called Reign of Terror.
"[E]veryone, in fact, is equally able to obey on command. But not everyone is equally able to be wise" (TTP chap. 13, §16). As Rancière has pointed out, there is a tension between the two halves of this kind of statement. The equality of intelligence that obedience presupposes appears to imply that everybody is equally able to be wise. Spinoza's insistence on 'taking men as they are' is a recognition both that there are no natural hierarchies between people (his deeply problematic passages on women in his final chapter on Democracy in the Political Treatise notwithstanding) and that nevertheless the governance of the multitude, given the difficulty and rareness of wisdom and the ever-present danger of the bondage of the passions, is a problem. This de facto not de jure 'elitism' means that the self-governance of the multitude under democracy does not alleviate the problem of governance, it heightens it. Spinoza's attempt to resolve the difficulties of governance rests on an opposition between living according to reason and living under the 'guidance' [ductus: leadership] of reason, where the same rational and beneficial outcomes can have different causes: wise and free, or passionate and servile. This kind of opposition is notoriously unstable (both practically and conceptually). That, though, is not the problem I am immediately concerned with. In Spinoza and Politics, Balibar suggests that the guidance of reason is in fact a 'ruse' of reason that leads not just to material satisfaction but to genuine wisdom. I will argue that he is mistaken.
Spinoza's writings on the Hebrew State and its techniques of 'disciplined obedience' which result in a population where "no one desired what was denied, but only what was commanded" coupled with his ontological claim that, "if a number of individuals so concur in one action that together they are all the cause of one effect, I consider them all, to that extent, as one singular thing" can be productively read alongside Lewis Mumford's work on megatechnics and the megamachine. Mumford's work spans from the ancient Egyptians to the current 'invisible city' of our online communities. He describes new forms of the megamachine formed by information and communication technologies. His work provides a network that allows us to question the current conjuncture, through a Spinozist fibre-optic cable.
Jon Rubin taught the graduate program, 'The Philosophy and Ethics of Mental Health', for eight years in the Medical School of the University of Warwick, before moving to Australia. He now lectures for the MSCP, most recently on 'Spinoza and Politics'. His research is currently split between Spinoza and Deleuze.
This paper considers the implications of the rise of 'artificial intelligence' for knowledge and truth. The considerations arise from my art practice, involving (soft and hard) robots and the so-called 'deep learning' algorithms that are popular in the current context of mainstream artificial intelligence practice. Specifically, I discuss how the baldly representative nature of the so-called 'deep learning' model is of little use to art practice, and therefore to knowledge and learning generally, and show the theoretical and practical application of an alternative model called 'artificial desire'. This is an approach devised specifically for the robotic artworks I create, based on a psychoanalytic and philosophical reading of the concept of 'desire' in the context of learning and knowledge. This leads to a discussion of the differences and similarities between pattern recognition and knowledge. Drawing on the transductive philosophy of Gilbert Simondon, with support from Bernard Stiegler, Alain Badiou, digital physics, computer science and paraconsistent logic, I sketch some possible ways in which digital technologies, robotics and artificial intelligence might change or contribute to our understanding of knowledge, knowing and truth.
This talk begins with the notorious disappearance of a Malaysian airliner, MH370, bound for Beijing in March 2014. Signing off from Kuala Lumpur air traffic control, MH370 ceased communications some forty minutes into the flight and, then, vanished from both civilian and military radar. The fate of the plane quickly became the subject of global fascination and endless speculation, but perhaps more significant was the air of incomprehension it inspired. How could a 300,000-pound plane, outfitted with one of the most advanced navigation systems in the world, carrying two hundred and twenty-seven passengers and a crew of nine, simply vanish? The question, endlessly asked or implied in the days and weeks afterward, seems rational enough until one takes a longer and more symptomatic view. Twenty years ago, when the disappearance of a commercial aircraft was no less statistically aberrant, such an event was still altogether comprehensible ("given the scale of space, it's amazing that more planes don't vanish"). The technical and technological horizon of expectation has undergone a momentous transformation. It's only against the assumption of an all-encompassing tessellation of digital communications and surveillance networks that we could possibly look upon MH370 with bewilderment: the event flies in the face of a growing sense of a global "control-grid." With this in mind, I want to consider both the real (i.e., technical) and imaginary (i.e., phantasmatic) constituents of MH370 by situating the global grid in a longue durée. Returning to the conquest of the "longitude problem" in the eighteenth century, the paper traces this early technology of globalization to the development of GPS, the first global satellite navigation system. My aim, however, is to try to cash out the technology in an epistemological (or, better still, epistemic) transformation with which philosophy itself reckons today.
In Heidegger's words, "I was shocked when a short time ago I saw the pictures of the earth taken from the moon. We do not need atomic bombs at all—the uprooting of man is already here."
This paper will sketch the theoretical outline for a book project I am currently researching, which proposes a conceptual history of the end of history diagnosis. Further advancing Reinhart Koselleck's historicization of the metahistorical categories of experience and expectation, the French historian François Hartog has recently argued that we have entered a new "regime of historicity", understood as the prevailing order of time that governs the temporality of a given epoch. This paper will argue that the end of history is the conceptual figure that registers, in diverse fields and according to different orientations, the emergence of this new regime of historicity, in which, according to Hartog (and others), the ruinous past is memorialized in order to normatively pre-empt a future that is now conceived as threatening. In this sense, far from representing the culmination of the modern idea of progress, the end of history diagnosis in fact marks its eclipse.
Luciano Floridi produces an ethics of information that is based upon an ontological proposition that all that exists can be understood as information, just as it can also be analysed in terms of its chemistry, for example. Floridi extends the claim by bio-ethicists that we should respect or care for our environment by focusing upon our online environment and the blurring between what is online and offline. He argues that we should give some weight to the protection of informational entities from informational entropy; further, that human beings should also be respected – not as persons – but as informational entities. By viewing human beings in informational terms, he is also able to support his position that attacks on our privacy are akin to kidnap rather than theft; that "my information" in many instances is more akin to "my body" than "my car". In this paper, I will consider the extent to which Floridi's ontology maps onto that of Spinoza, whose Ethics also starts with a discussion of the whole of existence ("Nature") in which our place is not the grandiose "kingdom within a kingdom" but is simply that of any other part of Nature. In common with all modes of the one substance of which Nature is composed, we are defined by what we do to promote our own survival and to thrive (our conatus). The power of our conatus is increased by our ability to understand the world, which is aided by our ability to communicate adequate knowledge (rather than sad passions, such as the fear that prompts superstition). Posthumously published in 1677, Spinoza's Ethics can now be read in terms of a philosophy of information, given the central role that communication plays in the Ethics. Spinoza's ethics prompts the question: to what extent are "our" (whose?) powers of acting being affected by current changes in our informational environment?
Further, Floridi's claim that we should "respect" informational entities can be distinguished from Kant's conception of respect for persons, which is based on equal moral worth. Floridi's analysis certainly cannot be collapsed into a Hobbesian position that our value is simply our price. Floridi rightly attacks the claim that informational entities and the informational environment (the "infosphere") have value purely in market terms. I therefore interrogate the question of whether Floridi's conception of the ethical weight that we should give to informational entities can be informed by a Spinozist analysis.
Janice Richardson is an Associate Professor at Monash University. She is author of the following three books: Selves, Persons, Individuals: Philosophical Perspectives on Women and Legal Obligations (Aldershot: Ashgate Press, 2004), The Classic Social Contractarians: Critical Perspectives from Feminist Philosophy and Law (Aldershot: Ashgate, 2009) and Law and the Philosophy of Privacy (London: Routledge, 2016). She is co-editor of Routledge's Feminist Perspectives on Tort Law and Feminist Perspectives on Law and Theory and has published extensively in journals, including: Feminist Legal Studies, Law and Critique, Angelaki: Journal of the Theoretical Humanities, Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science.
We live in scandalous times. Every day some new controversy demands our attention, our emotional investment, and ultimately our judgment. The transgressive nature of these scandals leads many to understand them in revelatory terms: as offering a tantalising glimpse of what really goes on, cutting through the multiple layers of artifice to reveal an underlying 'truth'. Others, however, contend these routine transgressions simply present the strategic face of contemporary capitalism: calculated marketing exercises designed to stimulate consumer interest, such 'controversies' arguably do little more than attest to the contemporary truism that there is no such thing as 'bad publicity'. Yet there exists today another, far more insidious form of scandal. Less concerned with the direct accumulation of capital than with shoring up the base mechanisms of power, this fundamentally statist development involves the fabrication of controversy not simply in the place of, but moreover in the form of the real. Here, the scandal paradoxically replicates the disruptive effects of radical creation (without, for all that, actually presenting anything new) in order to produce its opposite: stasis. What we instead bear witness to is the production of the 'simulacrum of novelty', the sole purpose of which is to stand in for – and thereby neutralise – the very possibility of real creation and the threat this entails. This hijacking of the idea of 'creation' and annexing of the real by the state – aided in no small part by the rise of new media technologies – is finally the real scandal of our times, the 'scandal of scandals' by which the 'radically new' is replaced with the 'simulacrum of novelty', and the controversial 'creative act' is supplanted by the static act of creating controversy.
Alex Ling is Senior Research Lecturer in Communication and Media Studies at Western Sydney University. He is the author of Scandalous Times: Contemporary Creativity and the Rise of State-Sanctioned Controversy (Bloomsbury, forthcoming), Badiou Reframed (I.B. Tauris, 2016) and Badiou and Cinema (Edinburgh University Press, 2011), and co-editor and translator (with A.J. Bartlett) of Mathematics of the Transcendental (Bloomsbury, 2014).
This paper proposes to examine the status of memory technologies. It does so in order to situate the relationship between memory and technics more adequately than in the work of Bernard Stiegler, whose investment in the category of the 'originary' in particular undermines his analysis. It is also proposed in light of the dramatic proliferation of a particular monetary mnemotechnology, namely the blockchain. This paper will discuss the role of the blockchain protocol in the operation of cryptocurrencies as a particularly clear and important example of contemporary mnemotechnics.
Day 3: Friday 15 Dec
In Camera Lucida, Roland Barthes described the air of a photograph as the technologically hallucinogenic experience of the living presence of the dead. For Walter Benjamin, the aura (etymologically, air) of a photograph likewise combined elements of memory, unreproducible singularity, and technological mediation. My paper examines an episode from the prehistory of this photographic air. It tells the story of Tom Wedgwood, and of his 1802 publication, authored by Humphry Davy, "An account of a method of copying paintings upon glass and of making profiles by the agency of light upon nitrate of silver". In Wedgwood's prephotography, images could be technologically formed but not fixed. Light, the condition of seeing the image, also darkened it into uniform blackness before the gaze. I link this publication to the radical atmo-medical experiments of Davy and Thomas Beddoes at the Pneumatic Institution, and to the radical poetic experiments of S.T. Coleridge and William Wordsworth—all of which were funded by Wedgwood. In these three fields—prephotography, pneumatic medicine and poetry—the latest technological advances were configured as means through which to reverse the erosion of memory by modern technological advances. That dialectic is also described, in more abstract terms, in Wedgwood's speculative metaphysical work, the posthumously published An Enquiry Into the Origin of Our Notion of Distance. My aim in the talk is to identify a set of shifts in the intertwined histories of technology, memory and atmosphere. My hope is thereby to make plausible the Benjaminian claim that what we encounter when we are touched by the air of a technological image is its disappearance.
In Søren Kierkegaard's The Seducer's Diary, Johannes imagines a book he would like to write. Titled Contribution to the Theory of the Kiss, it would be a compendium of the experience of kissing, dedicated "to all tender lovers." The book would fill a "long-felt want" in philosophy, which has up to now either dismissed the phenomenon of labral contact as an unfit subject for philosophical attention, or simply failed to understand it. In his exploration of the phenomenon of the kiss in the novels of Henry James, J. Hillis Miller suggests that the kiss operates as a sort of speech act. The kiss would be a sort of wordless word, similar to the examples J. L. Austin gives, which Miller cites, of sentencing a person to death by donning a black hood or the thumbs-down gesture of the audience at the Roman Colosseum. But the peculiarity of the kiss, Miller points out, is that it specifically prevents speech even as it acts efficaciously. The kiss is a mute gesture, a performative that creates what it fails to name. Yet if the kiss precludes speech, this does not mean that it is silent. Proposing to classify kissing according to different principles, Kierkegaard's seducer begins with its sound. He laments that language is not sufficiently "elastic" to record all the smacking, hissing, crackling, explosive, booming, sonorous, hollow, squeaky possibilities of the kiss which he has "learned to know". Immediately following this, Johannes explains to Cordelia that "a good answer is like a sweet kiss, says Solomon." How are we to understand that the word uttered by the King of wisdom himself becomes unintelligible, that the one who should authorize and authenticate the philosophical contract responds with a cacophony of hissings, cracklings and squeaks? In this reading, the kiss, as first technology of both truth and knowledge, implicates our ability to think.
Sigi Jöttkandt is the author of First Love: A Phenomenology of the One (re.press, 2010) and Acting Beautifully: Henry James and the Ethical Aesthetic (SUNY Press, 2005). She teaches English at UNSW, Australia and is currently completing a book on Vladimir Nabokov and cinema.
Perhaps the most striking feature of current discussions about the potential impact of technology on work is their overwhelming stupidity. In some ways the irony is simple: these discussions about the relation between technology, automation, artificial intelligence, roboticisation and work are typified by a remarkable intellectual laziness and produce the nagging sense that they are being conducted by eighteenth-century mechanical automatons. With few exceptions, public and policy discussions of technology and work again and again rest on the idea that technological change – somehow unbound from contingent but fully concrete social and economic relations – might by itself do something to the nature of work. On one side of the political spectrum this leads to speculative hopes for a 'world without work', and on the other to a brutalising demand for adjustment to a perilous future that is always just around the corner. While critique of this insipid nonsense might entertain us for a few minutes, critique alone will not satisfy us philosophically or politically. Rather, my purpose will be to turn to some of the technologies that have historically served, and today could serve, the purpose of actualising radically different concepts and practices of work. Any technology that could serve this purpose would be far from automatic and would be the very manifestation of thought. Such technologies go by various names, and at a certain point it will become unclear whether they can still rightly be called technologies. Among other things, those technologies for creating new relations to work can also be called language and political organisation.
Published in 1960 and read almost as infrequently, or inattentively, then as it is today, Sartre's Critique of Dialectical Reason nevertheless offers a coherent account of technology, which Sartre groups under the broader ontological category of the practico-inert. For Sartre, the practico-inert is any human creation that, once constituted, emits exigencies to its future users – inert instructions that seem to make the user the inessential means by which the practico-inert achieves its ends. Sartre's readers have most often focused on this relation of passive activity between the human agent and its technology. Yet for Sartre what was most essential was the way technology's exigencies were actualised within the social form of a series, and, correlatively, the way a series' inertia – and the consequent inertia of technology's exigencies – could be negated by groups. In other words, his account of the practico-inert ultimately leads to an account of the social relations that cluster around it, at once sustaining it and softening its coercive force. In this presentation, I will show how Sartre's Critique offers a differential physics of social power, one that clarifies why and to what degree technology can determine the field of political possibilities.
Freud's young lesbian patient recounts a dream of a happy family with husband and children – and Freud tells her that the dream is insincere and that she dreamed it with the intention to deceive him. Questions abound: Is she lying? If so, to whom? To Freud? To the Other? More generally: what is the relationship of lying to truth? I will explore this relationship through a series of examples.
It is fascinating to consider the ramifications of an apocalyptic proposition: what if the material technology we rely upon to enable us to speak were taken away from us? What if the mobile phone, the film projector, the MRI, even the light bulb, suddenly ceased to exist? How, in struggling to (re)find these, would we cope with having lost the language they afforded us? Today we are compelled to be subjects of a specific material technology in which naïve liberalism merges with science, and in which the functioning of technology unquestioningly structures our day-to-day lives. In offering a contingency of language (or a language to come), the materiality of technology obfuscates the possibility of not-knowing, and also of a discourse which fails to capture the subject, or more precisely, lalangue. Although our technological age promises unending technological improvement, it is all the while diminishing social space. This malaise is complicated both by its being an obvious insufficiency imposed on the speaking body and by being a malaise from which jouissance is readily taken up. Technology, together with its insufficiency, takes up residence in the subject, perhaps even producing 'new' symptoms and excesses with which the subject must grapple. As material technology accelerates, it seems that we have forgotten that there is no metalanguage, yet we need to remember this 'forgetting' in order to handle the contingency of language. It appears that when forgetting is deliberate, incisive and contradictory commands emerge. This paper grapples with a singular question: what makes language possible (and viable) in the current age of technology?