All Stories
Abduction
Mode of logical inference named by Charles Sanders Peirce; a creative leap used to generate hypotheses from limited data
Abduction, as theorised by the mathematician and philosopher Charles Sanders Peirce, is reasoning that infers the best explanation from incomplete data.
Linked throughout Peirce’s writing to ‘hypothetical thinking, imagination, intuition and guessing’ (Fortes, 2022: 1), abduction consolidates for him in the 1890s as a kind of inference involving the ‘generation and evaluation of explanatory hypotheses’ amid uncertainty (Thagard, 2007: 226).
While twentieth century AI built logic-based systems capable of deductive and inductive reasoning, twenty-first century deep learning is said to specialise in abductive reasoning, going beyond what has been taught to discover otherwise unknowable patterns and correlations.
Unlike deductive reasoning (starting with a general rule and applying it to concrete cases) or inductive reasoning (inferring a general rule from facts about individual cases), abductive reasoning in machine learning (asking what would need to be the case for a theory to be true) operates as a speculative experiment in which ‘what one will ask of the data is a product of patterns and clusters derived from the data’ (Amoore, 2020: 47).
When referring to deep learning operations as “abductive”, however, it is important to recognise that, for Peirce, abduction is a multi-modal mode of discovery in which what is most important is not necessarily the data itself but rather subjects’ affective reactions: abduction is initiated by ‘the feeling of puzzlement and ends at the satisfaction of knowing’ (Fortes, 2022: 5).
Topics: Ethics
Affective computing
Emotion-oriented AI designed to detect, interpret, and respond to human feelings and affect
Affective computing enables computational systems and devices to recognise, process, and simulate human emotions. Pioneered by the computer scientist Rosalind Picard in the 1990s, it is an interdisciplinary field which brings together cognitive science, psychology, and computer science.
As Picard wrote in 1997, ‘the essential role of emotion in both human cognition and perception, as demonstrated by recent neurological studies, indicates that affective computers should not only provide better performance in assisting humans, but also might enhance computers’ abilities to make decisions’ (Picard, 1997: 1).
Questions of affect, feeling, and the sensory, however, were more pertinent to digital computing in the 1940s and 1950s than is commonly recognised, amid the interactions among neurophysiology, mathematics, computer science, information theory, and psychology that cybernetics entailed.
As the cultural theorist Elizabeth A. Wilson notes, ‘the relationship between artificiality and affectivity’ permeated ‘the early, heady years of computation – when the first calculating machines were imagined and built and when the conceptual parameters of electronic calculation were first being fashioned’ (Wilson, 2010: xviii).
The role of affect in human cognition took centre stage, for instance, in a 1952 BBC broadcast entitled ‘Can Automatic Calculating Machines Be Said To Think?’, featuring the mathematicians Alan Turing and Max Newman, the philosopher R.B. Braithwaite, and the neurosurgeon Geoffrey Jefferson.
For Jefferson, any meaningful account of ‘thinking’ must address the significance of ‘external stimuli’; extraneous socio-affective factors ‘like worries of having to make a living, or paying ones taxes, or get the food one likes’. A machine, he emphasises, ‘has not an environment’, and yet ‘man is in constant relation to his environment, which, as it were, punches him while he punches back’ (BBC, 1952).
This affective emphasis in debates about machine intelligence shifts, however, from the mid-1960s as cybernetics and early neural network research stalls following trenchant critiques. With the consolidation of logic-based AI, visceral sensory-affective and neuro-physiological relations are reformulated as cognitive “emotional” processes understood to be discrete, categorizable, and amenable to logical algorithmic modelling.
Different strands of AI research have since explored how affect might better figure in synthetic learning and sense-making. At MIT’s Artificial Intelligence Laboratory from the mid-1990s, for instance, the roboticist Rodney Brooks’ ‘nouvelle AI’ sought to build artificial agents which demonstrated a responsive distributed intelligence with ‘the capacity for growth’ (Wilson, 2010: 4).
This new AI animated an intuitive intelligence cultivated directly via ongoing environmental interactions populated by sensory, perceptual, and behavioural data.
Today, proliferating sensor technologies underpinned by machine learning algorithms are transforming sensing practices through ‘ensembles of multiple humans and more-than-humans, environments and technologies, politics and practices’ (Gabrys, 2019: 273).
When it comes to how sensory, affective, and physiological data is measured, interpreted, and mobilised, however, much mainstream AI research remains bound to the cognitivist paradigm that took shape in the second half of the twentieth century.
Common computational techniques such as sentiment analysis, for instance, involve ‘the tabulation and classification of common words for emotional expression based on their frequency’ (Stark, 2018: 214). In providing ‘statistical proxies for affective intensities’ (Andrejevic, 2013: 54), such approaches lack robust means of addressing affective experience that exceeds language.
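The kind of frequency-based tabulation Stark describes can be gestured at with a minimal sketch; the word list and scores below are invented for illustration and stand in for the much larger lexicons such systems typically rely on.

```python
from collections import Counter

# Illustrative (invented) sentiment lexicon: word -> valence score
LEXICON = {"happy": 1.0, "love": 1.0, "calm": 0.5,
           "sad": -1.0, "angry": -1.0, "worried": -0.5}

def sentiment_score(text: str) -> float:
    """Tabulate lexicon words by frequency and return their mean valence."""
    words = [w.strip(".,!?;:") for w in text.lower().split()]
    counts = Counter(w for w in words if w in LEXICON)
    total = sum(counts.values())
    if total == 0:
        return 0.0  # nothing in the lexicon registers; affect outside the word list is invisible
    return sum(LEXICON[w] * n for w, n in counts.items()) / total

print(sentiment_score("I am happy, so happy, though a little worried"))  # 0.5
```

Whatever falls outside the word list simply does not register, which is precisely the limitation the surrounding critique identifies.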
As Geoffrey Jefferson underscored in 1949, ‘of the vast stream of sense data that enters into our nervous systems we are aware of few and we name still fewer … Only by necessity do we put a vocabulary to what we touch, see, taste and smell, and to such sounds as we hear that are not themselves words’ (Jefferson, 1949: 1109).
Topics: Culture, Innovations, Systems
Algorithmic life
Concept describing how life is managed and governed by algorithms and data
Algorithmic life describes how algorithms, from Bayesian networks to LLMs, govern the everyday.
Our current historical moment is one in which software, AI, and algorithms are increasingly shaping the conditions of social existence – pushing forward Alan Turing’s speculative account of a future in which ‘machines would exceed the rules-based decision procedures and extend to the affective pull of intuitions for data’ (Amoore, 2020: 57).
With the post-millennial rise of Web 2.0 and a range of social media, entertainment, retail, governmental, and logistical platforms powered by machine learning, expressions such as “algorithmic life”, “algorithmic logic”, “algorithmic imagination”, “algorithmic governance”, and “algorithmic bias” punctuate public discourse.
Critical commentators debate, in turn, the nature and implications of “algorithmic thought” in our unfolding ‘common space of decision, classification, prediction and anticipation’ (McKenzie, 2017: 10).
The media theorist Taina Bucher defines the “algorithmic imaginary” as involving not only the ‘mental models’ that people develop in relation to algorithms but also ‘the productive affective power that these imaginings have’ (2017: 41) – and, crucially, how, in ‘ranking, classifying, sorting, predicting, and processing data’ algorithms ‘make the world appear in certain ways rather than others’ (2018: 3).
What has been termed “artificial intuition” consolidates in such socio-technical conditions.
Topics: Culture, Ethics, Governance, Innovations
Algorithmic thought
How computational technologies are transforming the nature of thought
What has been diagnosed as ‘the algorithmic condition’ (Coleman et al, 2018) addresses not only how people think, talk, or become aware of algorithms in daily life, but also how the growing ubiquity of machine learning technologies may be transforming the nature of thought itself.
The philosopher Michel Serres (2015) suggests, for example, that our increasing delegation of mental synthesising and processing to smart devices has produced a generation of digital humans programmed in an ‘algorithmic mode of thought’ which is procedural, technical, calculative, and data-oriented.
Or, as the digital media scholar Wendy Chun puts it, ‘through habits users become their machines: they stream, update, capture, upload, grind, link, save, trash, and troll’ (2016: 1).
Algorithmic thought, in this context, signals a ‘practice-based shift in knowledge production and acquisition’ (Coleman et al., 2018: 9) that is changing the dynamics of cognition (Pedwell, 2019).
Through our growing entanglement with computational media, we are, from this perspective, becoming different kinds of humans – ones who think, feel, remember, respond and move differently.
If younger generations do not rely on the same cognitive habits and capacities as their parents or grandparents, this, Serres contends, is because they do not need them: ‘With their cell phone, they have access to all people; with GPS, to all places; with the internet, to all knowledge’ (2015: 6).
What is particularly significant about Serres’ imagined digital subject, however, is how she combines algorithmic thinking with ‘an innovative and enduring intuition’. Precisely because she no longer has to dedicate so much neural capacity to gathering, storing, and organising information, this “new human” cultivates a more intuitive mode of engagement attuned to the visceral experience and flow of everyday life (Serres, 2015: 19).
Yet if some media scholars explore how networked technologies are remediating “the human”, others argue that what is most distinctive about twenty-first century media is their disregard for anthropocentric categories, processes, and experiences.
As the digital media scholar Luciana Parisi argues, ‘soft(ware) thought’ is not what ‘affords the mind new capacities in order to order and calculate’ or what ‘gives the body new abilities to navigate space’. Instead, it involves the ‘automated prehension’ of infinite data that cannot be fully compressed, comprehended or sensed by totalities such as ‘the mind’, ‘the machine’ or ‘the body’ (Parisi, 2013: xviii, ix).
Similarly, for the media theorist Patricia Clough and colleagues, media analytics enable ‘a new prehensive mode of thought that collapses emergence into the changing parameters of computational arrangements’ (2015: 108).
Topics: Culture, Innovations, Systems
Alien agencies
Capacities of AI systems that are alien to human thought.
Alien agencies are capacities exhibited by computational technologies that are alien to human thought, temporality, and spatiality.
As the digital media scholar Beatrice Fazi suggests, after the Turing Test much AI discourse has reflected a ‘simulative paradigm’ in which the “success” of machine intelligence is assessed by measuring how well computing systems are able to mimic human cognitive, sensorial, and perceptual capacities. Yet what may be most significant about current algorithmic architectures is their disregard for anthropocentric processes and experiences (Fazi, 2020: 2).
Discussions of “algorithmic culture”, for instance, address how statistical approaches such as singular value decomposition (a factoring technique within linear algebra) have enabled entertainment platforms like Netflix to “intuit” subtle human behaviours and latent correlations which operate ‘beyond human perception, language, and sense-making’ in order to optimise their recommendation algorithms (Hallinan and Striphas, 2016: 125).
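The factoring at stake can be illustrated with a minimal sketch (invented numbers, not Netflix’s actual pipeline): singular value decomposition compresses a small viewer-by-title ratings matrix into a handful of latent dimensions along which unnamed affinities in taste surface.

```python
import numpy as np

# Invented viewer-by-title ratings matrix (rows: viewers, columns: titles)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Singular value decomposition factors the matrix into orthogonal "taste" dimensions
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)

# Keep only the two strongest latent dimensions and reconstruct the ratings
k = 2
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.round(approx, 1))  # missing preferences are "intuited" from latent structure
```

The latent dimensions are never named; they are simply the directions along which viewing behaviour happens to covary, which is one way of reading the claim that such correlations operate beyond human language and sense-making.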
From this perspective, machine learning innovations which endow AI with greater intuitive flexibility and generalisability do not seek to simulate human sensory, cognitive, or perceptual functions; instead, they hone computational capacities that may be incommensurable with them.
As the digital media theorist Luciana Parisi argues, ‘soft(ware) thought’ is not what ‘affords the mind new capacities to order and calculate’ or what ‘gives the body new abilities to navigate space’; rather, it involves the automated prehension of infinite data that cannot be fully compressed, comprehended, or sensed by totalities such as “the mind”, “the machine”, or “the body” (2013: xviii).
“Artificial intuition” accordingly elaborates ‘visual information that humans cannot even receive or perceive’ and constructs ‘representations that are more relevant than those that any human computer could have identified’ (Fazi, 2020: 12) – while operating within durations outside of human time, space, or sense perception.
Antiaircraft fire
WWII targeting system that used feedback loops, inspiring cybernetics and automated prediction
Antiaircraft fire prediction, developed during World War II, aimed to improve targeting of moving aircraft. Informed by the mathematician Norbert Wiener’s work on feedback systems and statistical prediction, it laid the groundwork for cybernetics and early artificial intelligence.
Amid Germany’s catastrophic aerial attack on Britain, Wiener’s objective, as the historian Peter Galison notes, was to design an “intuitive” algorithmic calculating device which could model ‘an enemy pilot’s zigzagging flight, anticipate his future position, and launch an antiaircraft shell to down his plane’ (Galison, 1994: 229).
Wiener saw that, when it came to a ‘pilot, flying amidst the explosion of flak, the turbulence of air, and the sweep of searchlights, trying to guide an airplane to a target’ (Galison, 1994: 236), the statistical design of feedback mechanisms needed to address the interplay of machinic and human neurophysiological processes.
Although the “antiaircraft predictor” did not fully materialise during the war years, Wiener’s collaborative work with the computer engineer Julian Bigelow and the neurophysiologist Arturo Rosenblueth produced an understanding of human-machine relations in which ‘soldier, calculator, and fire-power [operated as] a single integrated system’ (Galison, 1994: 235).
This model of human-machine assemblage, alongside the ideas of feedback systems and black boxes, animated Wiener’s wider cybernetic vision. Its legacy also persists, however, in modern warfare technologies and algorithmic surveillance systems.
Topics: Governance, Innovations, Systems
Artificial intuition
AI concept proposing how machines might simulate intuition for rapid, creative, or uncertain decisions
In tech journalism, “artificial intuition” is an industry buzzword describing the ability of ‘AI systems to make intuitive choices and respond intuitively to problems’ through ‘subconscious pattern recognition’ (Johannsen and Wang, 2021: 175-6).
Emergent work in computer science defines it as an automatic process ‘which does not search for rational alternatives, jumping to useful responses in a short period of time’ (Johnny et al., 2020: 464).
Within AI research, artificial intuition is understood as arational, abductive, and generative. Unlike ‘deductive reasoning by hypothesis testing’, deep learning programmes ‘deploy abductive reasoning so that what one will ask of the data is a product of patterns and clusters derived from the data’ (Amoore, 2020: 47).
Artificial intuition is associated with deep neural nets operating in conditions of ‘radical uncertainty’ (Prokpchuk et al, 2020) to gain ‘understanding of reality beyond what is specified in a data set’ (LeCun, 2021). Often working with raw and unlabelled data streams, programmes employ unsupervised or self-supervised learning to map the structures and patterns of their input data and identify hidden correlations.
For the computer scientist and literary theorist N. Katherine Hayles, generative AI – including large language models such as ChatGPT – acquires ‘a kind of intuitive knowledge’ derived from ‘the intricate and extensive connections that it builds up from the references it makes from its training dataset’ (Hayles, 2022: 648-9).
Further questions emerge concerning what artificial intuition might entail within quantum AI. In using quantum bits or qubits, ‘which can exist in multiple states simultaneously thanks to a phenomenon called superposition’ (Marr, 2024), quantum computers can perform certain calculations exponentially faster than classical computers.
Artificial intuition, however, is best understood as more-than-human. It animates the relational and contingent nature of “autonomous” AI and points to how ‘all modes of autonomy are acquired affectively and relationally’ (Wilson, 2010: 85).
The political geographer Louise Amoore’s discussion of surgical robotics systems such as Intuitive Surgical’s da Vinci robot is exemplary:
Working experimentally with mass quantities of data at an inhuman scale and speed, the algorithms extract the features of movement from surgical gestures to hone ‘the spatial trajectory of the act of suturing flesh’. In turn, the embodied sensing, navigation, decision-making, and action engaged in by the surgeons is actively shaped by ‘algorithmic judgements, assumptions, thresholds and probabilities’ (Amoore, 2020: 59, 64).
In this case and other digitally-mediated realms, intuition is distributed, collaborative, and always unfolding – “intuitive” systems sense their way towards choices and outcomes through overlaying and intermeshing human, superhuman, and inhuman trajectories and durations.
Topics: Culture, Ethics, Innovations, Systems
Automated home assistants
Consumer AI devices that use voice to interact and manage everyday tasks
Automated home assistants use machine learning to respond to voice commands. Emerging from the 1960s chatbot ELIZA’s legacy, they enhance human-AI interaction but raise privacy concerns.
Across public digital cultures, AI innovations are presented as making intelligent devices more flexible and intuitive—with automated assistants such as Alexa and Siri offering prominent examples.
Amazon’s Alexa, for instance, can now whisper if she picks up that you are trying to be quiet, recommend a recipe for chicken soup if she senses you are ‘coming down with something’ (Fussell, 2018), or ask about ‘a light you left on if she has a hunch that you did it unintentionally’ (Biggs, 2019).
Employing an algorithmic system called ‘Hunches’, the Amazon Echo correlates information from a user’s Alexa-enabled devices with ‘publicly available information such as timetables, clocks and weather patterns to develop an understanding of human habits’ and ‘intuit a user’s needs’ (Atkinson and Barker, 2021: 58).
The more that Alexa can passively acquire intimate, somatic, and behavioural data, the more pre-emptive she can be, anticipating requests before they are made and nudging emergent feelings, thoughts, and behaviour into being (Pedwell, 2023).
Topics: Culture, Initiatives, Innovations, Systems
Bayesian networks
Probabilistic model used in AI to reason under uncertainty by mapping variable dependencies
Bayesian networks model probabilistic relationships for reasoning under uncertainty.
From the 1980s, high level collaboration between mathematics, economics, and neuroscience led to the integration of probability and decision theory into digital computing and AI – including the development of Bayesian networks (Pearl, 1985).
Developing insights from the eighteenth century mathematician Thomas Bayes, who offered ‘a novel way to reason about the probability of events’, Bayesian networks proved a powerful tool in machine learning technologies – often combining with neural network algorithms to allow ‘AI to learn adequately despite imperfect data’ (Fan, 2019: 46).
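Bayes’ way of reasoning about the probability of events can be shown in a few lines; the numbers below are illustrative assumptions rather than figures from any cited study, and a full Bayesian network simply chains many such conditional dependencies across a graph of variables.

```python
# Bayes' rule: P(condition | signal) = P(signal | condition) * P(condition) / P(signal)
p_condition = 0.01          # prior: how common the condition is (assumed)
p_signal_given_cond = 0.90  # likelihood: how often the signal appears when the condition holds (assumed)
p_signal_given_not = 0.05   # false-positive rate (assumed)

p_signal = (p_signal_given_cond * p_condition
            + p_signal_given_not * (1 - p_condition))

posterior = p_signal_given_cond * p_condition / p_signal
print(round(posterior, 3))  # ~0.154: belief revised by imperfect, uncertain evidence
```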
For the political geographer Louise Amoore, the re-making of eighteenth century rules of chance via Bayesian inference models, alongside the development, from the 1990s, of advanced data mining techniques, signalled the infiltration of ‘the intuitive and the speculative within the calculation of probability’ (Amoore, 2013: 44).
This partial shift from strict probability to speculative possibility is, in conjunction with the design of advanced evolutionary algorithms, crucial to the post-millennial rise of “artificial intuition” (Pedwell, 2023).
Topics: Innovations, Milestones, Systems
Bergsonian intuition
Philosophical approach emphasising intuition over logic, conceptualised by Henri Bergson.
As theorised by the philosopher Henri Bergson, intuition is a way of knowing that combines cognitive and sensory data to connect us viscerally with change as it unfolds.
Unlike analysis, which reduces objects to ‘elements already known’, Bergsonian intuition is a form of immersive engagement with the world which connects us with ‘what is unique’ and ‘consequently inexpressible’ in an object (Bergson, [1903]1912: 7).
Given that both we and the objects we encounter are never static but rather always moving and becoming, intuition is, for Bergson, primarily about the experience of duration, process, and transformation. It allows us to inhabit, if only fleetingly, the ‘continuous flux’ beneath the ‘sharply cut crystals’ of analytical thought (Bergson, [1903]1912: 3).
While Bergson aligns intuition with the capacity for sensing, he departs from the philosopher Immanuel Kant, who he contends pours ‘the whole of possible experience into pre-existing moulds’ ([1903]1912: 85).
In both Bergson’s writing and Gilles Deleuze’s later account of ‘Bergsonism’, intuition brings together ‘experience and experiment’ (Seigworth, 2006: 118) to produce speculative knowledge about new and specific problems as they unfold in time.
Although Bergsonian intuition may seem to require embodied experience which eschews the distancing effect of technological mediation, Bergson’s method of intuition focuses on the specificity of experience ‘for the explicit purpose of going beyond the “turn of experience” to explore that which conditions it’ (Lundy, 2018: 40).
Topics: Culture, Ethics, Initiatives
Can We Survive Technology?
John von Neumann essay addressing the ambivalence of twentieth century technological progress
‘Can We Survive Technology?’ is a 1955 essay by the mathematician, computer pioneer, and Manhattan Project member John von Neumann, published in Fortune magazine.
In this influential piece, von Neumann reflects on the links among three major mid-twentieth century scientific advancements in which he had a direct hand: nuclear power, digital computing, and numerical weather prediction. What links these technologies, he claims, is that ‘all are intrinsically useful’ but nonetheless ‘lend themselves to destruction’ (von Neumann, 1955: 11-12).
Of particular concern is how atmospheric modelling enables not only more accurate daily weather forecasting but also growing capacities for weather modification and climate control.
With high-speed digital computing, it is possible, von Neumann acknowledges, to ‘carry out analyses needed to predict results and intervene at any desired scale’. Beyond the existing technologies of “rain making” (seeding clouds with chemical compounds such as silver iodide), ‘a new ice age or tropical era’ could be created (1955: 11).
Anticipating the long-term consequences of a general cooling or heating of the atmosphere, however, is a complex and indeterminate undertaking; it is not immediately obvious which interventions would be beneficial or harmful and to ‘which regions of the earth’ (1955: 10).
Once global climate control is actualised, von Neumann speculates, ‘[p]resent awful possibilities of nuclear warfare may give way to others even more awful’ (1955: 16) – in ways that will make existing geopolitical involvements seem straightforward.
To address this coming crisis, von Neumann suggests that ‘we must not only look at relevant facts, but also engage in some speculation’ (1955: 1). We must intuit the implications of potential technological developments to come, with attention to the exploitation of nuclear energy, increasing automation, and growing capacities for climate warfare.
In addition to framing post-war technological advancement as ‘an ambivalent achievement’ (1955: 12), von Neumann’s essay raises broader questions about the meaning and utility of speculation, intuition, forecasting, prediction, and control in a nuclear age.
Given that the transformations associated with technological progress inevitably ‘create instability’, their wider consequences ‘are not a priori predictable’ and experience shows, he argues, that ‘most contemporary “first guesses” concerning them are wrong’ (1955: 16).
The fragile global situation therefore calls for different modalities of scientific and political governance. More speculative, pragmatic, and responsive forms of navigating possible technological and geopolitical developments and their uncertain implications are required – ones that, in mobilising novel human-machine synergies to sense and respond to feedback, resonate with cybernetic imaginaries.
Topics: Ethics, Governance, Innovations
Chaos
Scientific theory about situations that obey particular laws but appear to have little or no order
In 1969 the MIT meteorologist Ed Lorenz introduced his influential theory of ‘the butterfly effect’: the core idea of chaos theory that small changes to a complex system’s initial conditions can produce dramatically different outcomes.
In his earlier paper ‘Deterministic Nonperiodic Flow’ in the Journal of Atmospheric Sciences (1963), Lorenz drew on work conducted with the MIT computer scientist Ellen Fetter to launch chaos theory as a branch of mathematics addressing the behaviour of dynamical systems that are highly sensitive to initial conditions.
In meteorology and beyond, the concept of chaos offered ‘a way to introduce unpredictability into a system without descending into randomness’ (Dry, 2019: 278).
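That sensitivity can be sketched with Lorenz’s own three equations (a crude Euler-integration toy, not a reconstruction of his 1963 runs): two trajectories that begin almost identically soon cease to resemble one another.

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations (illustrative, not numerically careful)."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)        # an initial condition
b = (1.0, 1.0, 1.000001)   # the same condition, perturbed in the sixth decimal place

for _ in range(3000):      # roughly thirty time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)

print(a)
print(b)   # lawful and deterministic, yet the two states no longer resemble one another
```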
Chaos theory took shape in the 1960s and 1970s alongside the rise of complexity science and its theorisation of open systems and far-from-equilibrium conditions.
As the philosophers of science Ilya Prigogine and Isabelle Stengers argue in Order out of Chaos (first published in France in 1978), if traditional science in the Age of the Machine emphasised ‘stability, order, uniformity, and equilibrium’ and ‘concerned itself mostly with closed systems’, a new paradigm is consolidating that privileges ‘disorder, instability, diversity, disequilibrium, [and] nonlinear relationships’ (Toffler, 2018: xiv-xv).
Relatedly, second order cybernetics from the 1960s – informed by innovations in computer science, early systems research, and evolutionary theory – addressed the dynamics of learning in complex systems, attending to ‘forms of self-control and autopoiesis’ (Lemke, 2021: 177).
Around the year 2000, ‘neocybernetics’ emerges with the rise of new computational and sensorial technologies and practices that seek to capture, measure, and modulate atmospheric forces and intensities amid chaos and unpredictability (Lemke, 2021).
Today, with the folding of chaos into programming culture (Parisi, 2013), what has been termed “artificial intuition” involves advanced machine learning systems which incorporate noise, doubt, and uncertainty into their algorithmic operations (Amoore, 2020) to generate previously unknown associations.
Generative AI now engages in a prehensive prompting of futurity which aims to steer the flow of actions and events in a world of growing atmospheric, ecological, and geopolitical instability and chaos.
Computational common sense
Aims to equip AI with everyday human ways of knowing, as pursued by projects like Cyc and Open Mind Common Sense.
The “common sense problem” within AI is longstanding; notable early failures include computing systems ‘suggesting boiling a kidney to cure an infection, and attributing reddish spots on a Chevrolet to a case of the measles’ (Cantwell Smith, 2019: 37).
Advanced AI systems are still often seen to lack common sense, understood as ‘the ability to reason intuitively about everyday situations and events, which requires background knowledge about how the physical and social world works’ (Choi, 2022: 139).
Logic-based AI’s quest to “codify common sense” through translating tacit human knowledge into machine-readable knowledge is perhaps best encapsulated by the Cyc project, launched by computer scientist Doug Lenat and colleagues in 1984.
Cyc was designed to navigate new situations by drawing analogical links to what it already “knows” – which required that it be programmed with a substantial base of knowledge and an organised body of reasoning methods (Lenat et al, 1985). Through mobilising higher-order logic, Cyc began to abstract, generalise, and learn from its experience.
Over time, however, systems like Cyc failed to achieve robust flexibility and intuitive sense-making – deficits linked not only to difficulties in scaling up from isolated ‘micro-worlds’ (Davis, 1990) but also to logic-based AI’s account of ‘intelligence as a passive receiving of context-free facts into a structure of already stored data’ (Dreyfus, 1992: 34).
Later AI initiatives, such as MIT’s Open Mind Common Sense project (1999-2016), drew on machine learning and internet-sourced data to simulate human common sense knowledge.
Key actors now recommend a hybrid “solution” to AI’s persistent common sense problem that integrates the speculative pattern recognition of generative AI and language-based (as opposed to logic- or rules-based) formalisms (Choi, 2022).
Yet critics argue that such “neuro-symbolic” approaches continue to neglect how human common sense emerges out of situated socio-affective contexts and relations (Suchman, 2024). Also at stake is the capacity of mainstream AI research to confront common sense’s pervasive political, ideological and ethical elements (Pedwell, 2024).
Topics: Culture, Ethics, Initiatives, Innovations
Cyc Project
Knowledge system aimed at encoding common sense into AI through formal logic and inference rules
Cyc is a logic-based AI common sense reasoning system launched by computer scientist Doug Lenat and colleagues at the Texas-based Microelectronics and Computer Technology Corporation in 1984.
The Cyc project aimed to build on the work of the mathematician John McCarthy (an advisor on the project), the computer scientist Marvin Minsky, and others which had pinpointed early AI systems’ limited capacity for sound everyday judgement as what made them “brittle” and non-intuitive; unable to expand beyond the intentions of their designers to respond flexibly to uncertainty and change.
Cyc’s initial methodology involved encoding in machine-readable terms 99 per cent of a one-volume American desk encyclopaedia, followed by all of the common sense ‘facts’ (e.g. that an object can’t be in two places at once) that the encyclopaedia’s creators presumed the reader already knew (Lenat et al, 1985). This design, the team hoped, would enable the system to infer further rules directly from ordinary language.
Through mobilising higher-order logic, Cyc began to abstract, generalise, and learn from its experience. It could, for example, ‘infer “Garcia is wet” from the statement “Garcia is finishing a marathon run”, by employing its algorithmic rules that running a marathon entails high exertion, that people sweat at high levels of exertion, and that when something sweats it is wet’ (Copeland, 2016, online).
Framed as a forerunner to IBM’s Watson supercomputer, Cyc influenced the emergence of other AI common sense reasoning projects including MIT Media Lab’s Open Mind Common Sense initiative (1999-2016).
Topics: Culture, Initiatives, Innovations
Electric brain
Metaphor for early computers, shaping public imagination of thinking machines
The term “electric brain” emerged in the 1940s and 1950s to describe early computers, reflecting hopes they could mimic human cognition. Linked to cybernetics and neural network research, it captured public imagination about digital computing’s potential.
Envisioning his electric calculating machines as “giant brains”, the mathematician Alan Turing, in a 1947 lecture for the London Mathematical Society, animated a future computer that ‘can learn from experience’, and which ‘must be allowed to have contact with human beings in order that it may adapt itself to their standards’.
This talk would inform Turing’s most famous article, ‘Computing Machinery and Intelligence’, published in 1950, which posed the provocative question, ‘Can Machines Think?’.
The cultural theorist Elizabeth A. Wilson (2010) situates Turing’s work within what the literary scholars Eve Sedgwick and Adam Frank call ‘the cybernetic fold’: a period ranging from the 1940s to the 1960s involving the interaction between ‘postmodern and modern ways of hypothesizing about the brain and mind’ (1995: 509). In a historical moment in which new digital computers were on the horizon but ‘the actual computational muscle’ required to animate such technologies was not yet available (1995: 508), conditions were ripe for unbounded anticipations of their future possibilities.
The metaphor of electric brains, however, was also subject to considerable criticism. For example, in his Lister Oration at the British Royal College of Surgeons on 9th June 1949, entitled ‘The Mind of Mechanical Man’, Turing’s senior colleague, the pioneering Manchester University neurosurgeon Geoffrey Jefferson, urged caution amid the fever pitch surrounding “electric brains”.
‘[N]either animals nor men’, Jefferson argued, ‘can be explained by studying nervous mechanisms in isolation, so complicated they are by endocrines, so coloured is thought by emotion’ (1949: 1107).
As public interest in the potential capacities of ‘electric brains’ heightened, Turing and Jefferson debated matters face-to-face in a historic 1952 BBC broadcast. Entitled ‘Can Automatic Calculating Machines Be Said To Think?’, the programme also featured the Cambridge philosopher R. B. Braithwaite and the Manchester mathematician Max Newman, who had, alongside Turing, been a WWII code-breaker at Bletchley Park.
Although Turing’s fellow panellists had much to contest in his speculative vision of artificial intelligence, the significance of affect and the sensory to human behaviour assumed centre stage. Human interests, Braithwaite argues, are shaped by ‘appetites, desires, drives, instincts’, yet these machines ‘have rather restricted appetites and they can’t blush when they are embarrassed’.
Jefferson, in turn, stressed the difficulty (if not impossibility) of engineering human intuitive, tacit, and affective knowledge into digital computers, chiding Turing: ‘There would be too much logic in your huge machine’!
Topics: Culture, Initiatives, Systems
Environmentality
A new mode of governance conceptualised by Michel Foucault
In his Birth of Biopolitics lectures at the Collège de France in 1978-79, the philosopher Michel Foucault forecast a new governmental logic of ‘environmentality’. Concurrent with the rise of neoliberalism, environmental governance is concerned with the speculative management of ‘fluctuating processes’ within an ecological system understood to be unpredictable and turbulent (Foucault, 2008).
No longer working through mechanisms of standardisation and regulation managed by the state, environmentality seeks to ‘steer and manage performances and circulations by acting on and controlling heterogeneities and differences that make up a milieu’ (Lemke, 2021: 169). Operationally ‘open to unknowns and transversal phenomena’, environmental governance modulates conditions to adapt to a future where crisis is ubiquitous (Foucault, 2008: 261).
Environmentality took shape in Britain and North America in conjunction with the development of complexity science, second order cybernetics, and ecological thinking. Together, these epistemological currents inform the rise of governing practices that ‘brea[k] with the idea of knowledge-based planning’ to stress ‘opportunism and preparedness instead of prevention and prediction’ (Lemke, 2021: 173).
Around the new millennium, ‘neocybernetics’ emerges with the rise of new computational and sensorial technologies and practices that seek to capture, measure, and modulate atmospheric forces and intensities (Lemke, 2021). At stake across these socio-technical developments are digitally-informed modes of environmental control that aim not only to “intuitively” connect with change as it unfolds but also to actively shape emergence amid uncertainty (Seigworth, 2025).
These conditions are associated with the consolidation of what the philosopher Brian Massumi (2015) calls ‘ontopower’, an environmental mode of power led by pre-emption. Marshalling twenty-first century machine learning architectures, contemporary ontopower’s operational imperative is not to accurately predict and prevent threats on the basis of credible information but rather to “intuitively” leverage partial data and indeterminacy to orchestrate how contingent atmospheres and environments can unfold (Massumi, 2025).
Topics: Culture, Governance
First order cybernetics
An interdisciplinary science of communication, computation, and automatic control.
The 1940s witnessed the consolidation of first order cybernetics as an ‘interdisciplinary science of communication, computation and automatic control’ (Conway and Siegelman, 2005: xi) that would lay vital groundwork for the nascent fields of cognitive science and artificial intelligence.
In the United States, the Macy conferences in cybernetics, inaugurated at the Beekman Hotel in New York City in 1942, brought together leading mathematicians including Norbert Wiener and John von Neumann with the pioneering neuroscientists Warren McCulloch and Walter Pitts, the information theorist Claude Shannon, and the anthropologists Margaret Mead and Gregory Bateson, among others.
In Britain, the Ratio Club, which met in London between 1949 and 1958, enabled wide-ranging cybernetic conversations among physiologists, mathematicians, and engineers, including the mathematician Alan Turing, as well as the neurophysiologist W. Grey Walter and the psychiatrist W. Ross Ashby.
The 1948 publication of Wiener’s best-selling book, Cybernetics: Or Control and Communication in the Animal and Machine, pioneered a statistical probabilities-based approach to communications engineering and introduced the concept of “feedback” as what links humans and certain kinds of machines.
We might note the partial resonances between cybernetic accounts of feedback and philosophies of intuition. For Wiener, neither human beings nor intelligent machines are isolated systems: both possess ‘sense organs’ – ‘a special apparatus for collecting information from the outer world’ (Wiener, 1950: 26) which is fed back into the system to inform future operations.
The cultural theorist Lauren Berlant, in turn, describes intuition as involving ‘sensual data gathering’ through which we navigate the present and feel-forward into the future. Intuition involves bodies ‘continually busy judging their environments and responding to the atmospheres in which they find themselves’ (Berlant, 2011: 15).
On both sides of the Atlantic, cybernetics was more than a narrow computational engineering enterprise; it was, as the sociologist Andrew Pickering contends, a new way of thinking about the world premised on a ‘non-modern ontology’, in which reality is always ‘in the making’ and ‘people and things are not so different after all’ (Pickering, 2010: 18).
While cybernetics is usually framed as emerging from Second World War military technologies and post-war digital computing, the media historian Bernard Dionysius Geoghegan (2023) suggests that liberal technocrats’ inter-war visions of social welfare delivered via “neutral” communication techniques shaped the informatic interventions of both WWII and the Cold War.
Stretching cybernetics’ timeline to encompass the 1920s and 1930s also foregrounds the importance of colonial relations, eugenics, and indigenous cultures to the rise of cybernetic methods and epistemologies – while shifting the focus of analysis from engineering, computing, and statistics to anthropology, psychology, linguistics, and critical theory (Geoghegan, 2023).
Topics: Culture, Initiatives, Innovations, Systems
Generative AI
AI systems that generate texts, images, and other media in response to a user's prompt or request
Generative AI technologies are framed as mobilising ‘gut feeling’ in ways that ‘mimic human intuition’: Like the ‘seasoned detective who can enter a crime scene and know right away something doesn’t seem right’, advanced machine learning algorithms identify ‘correlations and anomalies between data points’ to discover ‘unknown unknowns’ (TNW, 2020).
If human intuition is associated with situated embodied and sensory knowledge, generative AI, including Large Language Models (LLMs), acquires ‘a kind of intuitive knowledge’ derived from ‘the intricate and extensive connections that it builds up from the inferences it makes from its training dataset’ (Hayles, 2022: 648-9).
“Artificial intuition”, in this context, is linked to what the mathematician and philosopher Charles Sanders Peirce called ‘abductive reasoning’: a kind of inference involving the ‘generation and evaluation of explanatory hypotheses’ amid uncertainty (Thagard, 2007: 226).
As the political geographer Louise Amoore suggests, within the latent space of machine learning, endless possibilities for abductive experimentation emerge from the algorithm’s ‘exposure to an archive of cloud data, condensed via the infinitely malleable value system of weights, probabilities, thresholds and bias’ (Amoore, 2020: 78).
The contemporary promise of “artificial intuition” in generative AI is not only that it undertakes abductive reasoning akin to a seasoned detective, but also that it can attune to emergent aspects of reality that elude human perception, cognition, and sense-making.
For Big Tech, such innovations herald a future in which AI’s ability to ‘find patterns that would otherwise be impossible to detect’ will ‘help doctors reduce medical mistakes, farmers improve yields, teachers customize instruction and researchers unlock solutions to protect our planet’ (Smith and Shum, 2018: 6).
Digital media scholars emphasise, however, that generative AI technologies are embedded with error, bias, and prejudice at the levels of logic, procedure, and data (Chun, 2021; Klein et al, 2025).
Moreover, for the critical data studies researchers Emily Bender et al., what is troubling about LLMs like ChatGPT is not only that they ‘encode bias’ via their training procedures, but also how, despite being able to output seemingly sophisticated and coherent textual responses, they are in fact devoid of meaning.
The LLM is, as Bender et al put it, ‘a stochastic parrot’: a system ‘for haphazardly stitching together sequences of linguistic forms it has observed in its training data, according to probabilistic information about how they combine, but without any reference to meaning’ (2021: 617).
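A drastically simplified illustration of stitching sequences together on probabilistic information alone is a bigram chain (a toy sketch for intuition, in no sense an LLM): it records which words have followed which in a tiny corpus and samples accordingly, with no reference to meaning.

```python
import random
from collections import defaultdict

corpus = ("the model predicts the next word and "
          "the next word follows the model").split()

# Record, for every word, the words that have followed it (repetition encodes frequency)
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows.get(word, corpus))  # sample in proportion to observed frequency
    output.append(word)

print(" ".join(output))   # a fluent-looking sequence with nothing underneath it
```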
Nonetheless, attuning to the algorithmically mediated forms of intuition central to generative AI may also be ‘key to grasping the circulation of the present as a historical and affective sense’ (Berlant, 2011: 20).
The fact that LLM outputs depend on an external prompt raises interesting questions concerning the affective and socio-technical relations, agencies, and infrastructures such systems both depend on and generate. While the prompt constitutes a provocation or affective relation to the machine, the LLM ‘exercises considerable creativity in fashioning responses that can be remarkably complex in style and conceptual structure’ (Hayles, 2022: 659).
If, for the philosopher Henri Bergson, intuition is an immersive engagement with the world which connects us with ‘what is unique’ and ‘consequently inexpressible’ in an object ([1903]1912: 7), the recursive logics of generative AI invite us to contemplate what it means to intuitively coincide with ‘the [trained] thing’ itself as a unique object (Pedwell, 2023).
Topics: Culture, Ethics, Governance, Innovations, Systems
Haunting
The return of what resists incorporation into computational systems
Haunting refers to the possible return of that which resists translation into computational form in AI systems.
One of the main functions of algorithmic architectures is to ‘render calculable some things that hitherto appeared intractable to calculation’ (McKenzie, 2017: 8) – dynamics which encapsulate what the digital media scholar Ed Finn (2015) calls the ‘computational imperative’: a wager, with roots in Alan Turing’s Universal Machine, that all complex systems could be modelled quantitatively.
While management psychology in the 1970s and 1980s focused on how intuition, as a human capacity, could be measured and indexed (Lussier, 2016; Pedwell, 2023), artificial intuition deals only with what is legible in computational terms and discards everything else.
In the course of the algorithmic operations that constitute “artificial intuition”, then, something is inevitably elided, lost, or repressed – there is always a remainder which resists translation into computational form and therefore “haunts” the system.
We might then consider the afterlife of that which is elided or repressed within “intuitive” computational systems. If that which machine learning algorithms ‘leave behind reside[s] uneasily in limbo, known and unknown, understood and forgotten at the same time’ (Finn, 2015: 51), under what conditions might such elements return and with what critical implications?
Such questions resonate with the political geographer Louise Amoore’s call for an experimental and processual approach to AI ethics which ‘involves reopening the multiplicity of the algorithm’ to reinstate ‘the partial, contingent and incomplete character of all algorithmic forms of calculation’ (2020: 162, 21).
They also align with the media psychologist Lisa Blackman’s exploration of how the ‘queer aggregations’ of haunted data can be ‘mined, poached, and put to work in newly emergent contexts and settings’ (2019: xiii).
Topics: Culture, Ethics, Systems
IAS Meteorology Project
Weather computing project using ENIAC and the IAS computer to simulate atmospheric models
The IAS Meteorology Project (1946–1957), led by the mathematician John von Neumann and the meteorologist Jule Charney at the Institute for Advanced Study in Princeton, pioneered numerical weather prediction using early computers, including the ENIAC machine and the IAS computer.
Building on the mathematician Norbert Wiener’s cybernetics, it developed algorithms for weather forecasting and created the world’s first climate modelling software, with lasting impacts on climate science and predictive AI technologies.
A familiar narrative of twentieth century meteorology is that, with the advent of digital computing in the late 1940s, forecasting was transformed from an intuitive art into a computational science (Edwards, 2010). Yet an affective genealogy of weather prediction instead illuminates how these different modes of knowing and anticipating the world become newly interrelated within nascent algorithmic forecasting technologies.
As the IAS mathematician Herman Goldstine suggested in a 1947 letter to Pentagon officials, with its unprecedented computational power, the IAS computer would allow researchers to test their intuitive hunches about fluid dynamics, including the vexing problem of turbulence, through statistical means (IAS Electronic Computer Project archive).
Cybernetics, intuition, “electric brains”, and weather prediction become entangled in this post-war period, as digital computing enables new modes of perception and prediction in an atomic world riven with uncertainty.
Topics: Initiatives, Innovations, Milestones, Systems
Intuitive expertise
Cognitive model explaining expert performance through rapid, unconscious recognition of patterns
Intuitive expertise, as conceptualised by the philosopher of science Hubert Dreyfus, refers to tacit knowledge gained through experience, which he argued AI cannot replicate.
From the 1930s to the 1980s, intuitive expertise consolidated across Britain and the United States as a honed capacity for pattern recognition enabling leaders to make effective decisions (Pedwell, 2023).
Scholarship on expertise across psychology, philosophy, cognitive science, and management studies has examined how ‘human experts, after years of experience, are able to respond intuitively to situations in a way that defies logic’ (Dreyfus and Dreyfus, 1988: xiv).
The growing adoption of personal computers in homes, schools, and workplaces from the early 1980s set the stage for enhanced theories of intuitive expertise amid growing public interest and anxiety concerning people’s changing relationships with “new” technologies.
Extending earlier cybernetic thinking, the psychologist and computer pioneer Herbert Simon held that the digital computer highlighted how, for human experts and AI systems alike, rapid and intuitive decision making depends on the honed recognition of ‘chunks or patterns stored in long term memory’ (1987: 61). The central task ahead, Simon contends, is to catalogue ‘the knowledge and cues used by experts in different kinds of managerial tasks’ so that this information can be automated by computers (1987: 39).
Others, however, were more wary of artificial ‘decision aids’ and ‘expert consultants’ (Simon, 1987: 61), as well as wider claims concerning the possibility of automated intuition.
In his 1985 book, Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer, for instance, Dreyfus famously argues that computers ‘can apply rules and make logical inferences at great speed and with unerring accuracy’, but lack ‘intuitive intelligence that enables us to understand, to speak, to cope skilfully with our everyday environment’ (Dreyfus and Dreyfus, [1985]1988: xx).
Dreyfus therefore questions the very possibility of automated intuition if intuition is, by definition, an embodied, visceral, and situated capacity – laying conceptual groundwork for future philosophers, sociologists, STS scholars, and digital media researchers to address the (im)possibilities of designing AI with real contextual awareness.
Topics: Culture, Ethics, Systems
Large Language Models
Language model that generates text by learning statistical patterns in massive corpora
Large Language Models (LLMs), like ChatGPT, leverage neural networks and machine learning to generate human-like text.
Evolving from the 1960s chatbot ELIZA, LLMs excel in reasoning and prediction, powering automated home assistants and computer vision.
Following the advent of transformer models in 2017, which employ an attention mechanism that ‘consists of several attention layers running in parallel’ (Vaswani et al. 2017: 4), generative AI – including LLMs – can now produce text, images, and other media in response to a prompt.
Focussing on ‘a word in the context of a sequence’, LLMs generate ‘probability for the importance of a word relative to other words in the phrase or sentence’ (Hayles, 2022: 639), essentially seeking to compute ‘human context, meanings, patterns of behaviour and possible futures’ (Amoore, 2020: 89).
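The attention mechanism referenced above reduces to a short calculation; the sketch below implements scaled dot-product attention as described in Vaswani et al. (2017), with a tiny random matrix standing in for learned projections of a token sequence.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # relevance of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))     # four token vectors of dimension eight (invented)
output, weights = attention(tokens, tokens, tokens)   # self-attention over the sequence
print(np.round(weights, 2))          # each row: how much one token "attends" to the others
```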
For the computer scientist and literary theorist N. Katherine Hayles, generative AI acquires ‘a kind of intuitive knowledge’ derived from ‘the intricate and extensive connections that it builds up from the references it makes from its training dataset’ (Hayles, 2022: 648–649).
Although the nature of the external prompt will significantly shape an LLM’s output, the system’s algorithmic “hunch” about how to respond will manifest in a novel or ‘unrepeatable’ form, which is dependent on ‘how the neurons are weighted’ among other factors (Hayles, 2022: 645).
While the prompt constitutes a provocation or affective relation to the machine, the LLM ‘exercises considerable creativity in fashioning responses that can be remarkably complex in style and conceptual structure’. GPT-3 has, for instance, been observed to “flip the script” when it ‘senses a note of antagonism in the prompt’ (2022: 659, 658).
Topics: Culture, Innovations, Milestones, Systems
Mathematical intuition
The long-debated role of intuition in mathematical logic, reasoning, and discovery
Mathematicians have long contemplated the role of intuition in computational logic and reasoning.
Such debates galvanised around a series of logic problems laid out by the mathematician David Hilbert in 1900 which, alongside the publication of Principia Mathematica (1910–1913) by the mathematician-philosophers Alfred North Whitehead and Bertrand Russell, explored the possibility of formalising all mathematical logic to eliminate theoretical uncertainties.
In response, the founder of the philosophy of “intuitionism”, mathematician L.E.J. Brouwer ([1927]1975), defended intuition as a cognitive activity vital to mathematical knowledge-building which runs counter to the automated theorem proving entailed by formalism.
In his 1939 paper ‘Systems of Logic Based on Ordinals’, the mathematician Alan Turing argued that mathematical reasoning depends on an iterative relationship between intuition and ingenuity. While ingenuity consists of ‘suitable arrangements of propositions’, intuition involves ‘making spontaneous judgments which are not the result of conscious trains of reasoning’ and is vital to mathematical discovery (Turing, 1939: 214-15).
From the 1960s, mathematicians began to approach intuition as fundamentally linked to context and experience and, in that sense, trained. Writing in Science, for instance, the mathematician R. L. Wilder describes mathematical intuition as ‘an accumulation of attitudes derived from one’s mathematical experience’ which is formed ‘by the cultural environment’ and is ‘of immediate importance to creative work’ (Wilder, 1967: 605-606).
On one hand, these accounts of mathematical intuition seem to preserve it as an immanent human propensity resistant to formalisation, mechanisation, or codification. On the other hand, the very notion of intuition as trainable bolstered interest within computer science concerning how intuitive knowledge and decision-making could be engineered in machines (Pedwell, 2023).
The political geographer Louise Amoore argues that Turing’s account of intuition, in particular, can be interpreted as signalling how, even in the mid-twentieth century, ‘the human and machinic elements of mathematical learning … are not so readily disaggregated’ (Amoore, 2020: 57).
In this way, histories of mathematics reflect intuition’s more-than-human qualities in ways that anticipate how the ‘extended intuition of machine learning’ entangles human and algorithmic capacities to ‘feel its way towards solutions and actions’ (Amoore, 2020: 67).
Topics: Initiatives, Innovations, Systems
MIT Digital Intuition Project
Intuition-focused AI project exploring how machines can mimic human insight and gut feeling
The MIT Digital Intuition Project explored AI systems that mimic human intuition, building on techniques in natural language processing.
The project recognised that ‘people use their intuitive knowledge of the world and the experiences they’ve had in the past to react intelligently to the world around them’ and investigated how computing systems might be designed with such capabilities (MIT Media Lab).
Led by the computer scientist Catherine Havasi at MIT Media Lab from 2011-2013, this endeavour was an offshoot of MIT’s Open Mind Common Sense (OMCS) project, which was set up by Marvin Minsky, Push Singh, and Havasi in 1999 and active until 2016.
To generate intuitive common sense knowledge, the OMCS system combined a semantic network, ConceptNet, ‘built from a corpus of knowledge collected and rated by volunteers on the Internet’, with a reasoning engine, AnalogySpace, which ‘used factor analysis, to build a space representing the large-scale patterns in common sense knowledge’ (Havasi et al, 2014: 4-5).
This design, its creators claimed, offered ‘a distributed approach to the problem of common sense knowledge acquisition’ which enabled OMCS’s applications to ‘achieve “digital intuition” about their own data’ – to, that is, make inferences over multiple sources of data simultaneously, taking advantage of the overlaps between them (Havasi et al, 2014: 25).
Topics: Initiatives, Innovations, Systems
Neural network
AI architecture inspired by brains, enabling learning from data through layered connections
Neural networks, inspired by biological neurons, mimic cognition through interconnected nodes, learning patterns via algorithms.
The cyberneticists were the first to bring computational and neurophysiological processes formally together – most influentially in Warren McCulloch and Walter Pitts’ (1943) early work on neural networks. Their research not only introduced the computer as a generative (if partial and problematic) model for the brain but also ‘consolidated the notion that computers ought to be built as digital machines’ (Wilson, 2010: 11, 13).
In 1958, the psychologist and engineer Frank Rosenblatt introduced the perceptron, an algorithm for the supervised learning of binary classifiers, pioneering the idea of using neuroscience to guide learning machines.
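The perceptron’s learning rule is simple enough to sketch in a few lines. The following illustrative Python fragment, with an invented toy dataset rather than Rosenblatt’s hardware implementation, nudges a weight vector whenever a training example is misclassified:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Train a binary classifier with the perceptron update rule.

    X: array of shape (n_samples, n_features); y: labels in {-1, +1}.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update the weights only when the current prediction is wrong.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy example: a linearly separable (AND-like) pattern.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # expected: [-1, -1, -1, 1]
```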
Amid influential critiques of perceptron models (Minsky and Papert, 1969) and Rosenblatt’s untimely death in 1971, however, work on artificial neural networks stalled until the 1980s, when it was revived by “connectionist” researchers exploring how these models could learn and handle information in a more flexible and intuitive way than symbolic processing AI.
In 1986, the psychologist David Rumelhart, with the computer scientists Geoffrey Hinton and Ronald Williams, popularised a method for training multi-layer neural networks called ‘backpropagation’, which passes the network’s output error backwards through its layers, using the chain rule to apportion responsibility for that error to each connection and adjust its weight accordingly. This enabled the development of machine learning algorithms for natural language processing, visual image classification and analysis, and machine translation.
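A minimal sketch of the technique, assuming a small two-layer network and an invented toy task rather than Rumelhart, Hinton, and Williams’ original experiments, shows how the output error is passed backwards to update each layer’s weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, which a single-layer perceptron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, one sigmoid output unit.
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error through the chain rule.
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error pushed back to the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # with most initialisations, approaches [0, 1, 1, 0]
```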
The post-millennial rise of “artificial intuition” is linked primarily to the capacity of advanced neural networks to learn, experiment, classify, and predict by continually ‘extracting features from [their] data environments’ (Amoore, 2020: 65). It entails the ability of machine learning algorithms to form complex correlations across large sets of data in real-time, enabling anticipation of associated behaviours to be tracked, amplified, and/or optimised.
Topics: Ethics, Initiatives
Numerical weather prediction
Computational approach to weather forecasting, realised with post-war digital computing
Early in the twentieth century, the British mathematician, physicist, and meteorologist Lewis Fry Richardson dreamed of the possibility of predicting the future of the atmosphere by applying physical theories and advancing ‘computations faster than the weather advances’ (Richardson, 1922: 1).
Richardson’s methodology was to render the physical principles underlying the behaviour of the atmosphere as a system of mathematical equations and apply finite difference methods to produce quantitative approximations of atmospheric states on which predictions could be based.
This approach, he hoped, would improve on intuitive weather forecasting by rendering relevant meteorological dynamics subject to iterative mathematical modelling.
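The finite-difference principle can be illustrated with a deliberately simplified example: stepping a one-dimensional “pressure bump” forward in time on a discrete grid, with invented parameters, rather than Richardson’s full system of atmospheric equations:

```python
import numpy as np

# Illustrative only: advect a disturbance along a 1-D periodic domain using
# an upwind finite-difference scheme (wind speed u > 0).
nx, dx, dt, u = 100, 1.0, 0.5, 1.0        # grid points, spacing, time step, wind speed
x = np.arange(nx) * dx
q = np.exp(-0.01 * (x - 30.0) ** 2)       # initial field (e.g. a pressure anomaly)

for _ in range(80):                        # march the state forward in time
    # dq/dt = -u * dq/dx, with dq/dx approximated by a backward difference
    q = q - u * dt / dx * (q - np.roll(q, 1))

# The bump should have travelled roughly u * dt * 80 = 40 grid units.
print(f"peak is now near x = {x[np.argmax(q)]:.0f}")
```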
Richardson’s initial attempt to apply numerical weather prediction methods to actual atmospheric events in Europe, however, was disastrous: the ‘phenomenal volume of numerical computation’ required exceeded existing human and machinic capacities (Lynch, 2008: 3433). It was not until the advent of digital computing in the late 1940s that numerical weather prediction would be realised.
Sponsored by the U.S. Office of Naval Research, the Institute for Advanced Study (IAS) Meteorology Project ran from 1948 to 1956 under the direction of the mathematician John von Neumann and the meteorologist Jule Charney, with a focus on ‘preparing mathematical models of atmospheric systems so that rapid forecasts could be made’ (Wing, 1979).
After a violent storm hit Princeton in November of 1950, Charney’s team developed a three-level model of that particular weather system (Wing, 1979). The results of four 24-hour forecasts, run on the ENIAC computer in Aberdeen, Maryland, confirmed that ‘the large-scale features of mid-tropospheric flow could be forecast barotropically with a reasonable resemblance to reality’ (Lynch, 2008: 3456).
As each 24-hour weather forecast required about 24 hours of computation, the team was ‘just able to keep pace with the weather’ (Lynch, 2008: 3456). By 1953, the IAS computer could produce 24-hour forecasts in six minutes (Wing, 1979). Richardson’s vision of numerical weather prediction had finally materialised.
The post-war relationships among mathematics, digital computing, and atmospheric prediction, however, were not straightforward or uncomplicated. Although the advent of numerical weather prediction is often narrated as transforming forecasting from ‘an intuitive art into a computational science’ (Edwards, 2010), hunches, heuristics, and guesswork continued to play a significant role in computational meteorology through the twentieth century.
Today, a particular kind of intuition informs weather and climate modelling via the use of proxies, which ‘stand-in for something that cannot be modeled directly but can still be estimated, or at least guessed’ (Edwards, 2010: 338). This allows atmospheric forecasting to expand ‘the estimable beyond the calculable’ (Hoyng, 2025: 9).
Topics: Governance, Innovations, Milestones, Systems
Ontopower
A mode of power led by pre-emption; supported by technologically enhanced perception
Coined by the philosopher Brian Massumi, ‘ontopower’ is a newly consolidated mode of power led by pre-emption.
Supported by technologically enhanced means of perception ‘that detect the slightest signs of enemy action’ (Massumi, 2015: 11-12), ontopolitical governance inhabits shifting affective and geo-political atmospheres to apprehend threats before they emerge.
Concurrent with the transatlantic rise of neoliberalism, ontopower is operationalised at scale, in Massumi’s framing, with the George W. Bush administration’s “War on Terror” following the events of September 11th 2001. While the logic of pre-emption continues to operate implicitly in US military practice, it has also now moved into the domestic realm (Massumi, 2025).
As curated by both states and capital, ontopower entails what the philosopher Gilles Deleuze (1992) termed ‘control’: a form of power characterised by environmental control that is more processually intense and far-reaching than sovereign power, disciplinary power, and biopower.
It is an intuitive power to incite and orient emergence that ‘insinuates itself into the pores of the world where life is just stirring, on the verge of being what it will become and yet barely there’ (Massumi, 2015: xviii).
Topics: Culture, Ethics, Governance, Innovations
Open Mind Common Sense
Crowdsourced AI project by MIT collecting common sense knowledge for use in intelligent systems
The Open Mind Common Sense (OMCS) project was launched by computer scientists Marvin Minsky, Push Singh, and Catherine Havasi at MIT Media Lab in 1999 and was active until 2016.
It built on previous AI common sense initiatives such as the mathematician John McCarthy’s envisioned ‘advice-taker’ (1959) and the Cyc project, inaugurated by the computer scientist Doug Lenat and colleagues in 1984.
Unlike these previous projects, OMCS drew on machine learning techniques in Natural Language Processing to build an “intuitive” language-based AI system which used external data to ‘infer additional pieces of common sense knowledge’ not already part of its database (Havasi et al, 2014: 25).
The OMCS system combined a semantic network, ConceptNet, ‘built from a corpus of knowledge collected and rated by volunteers on the Internet’, with a reasoning engine, AnalogySpace, which ‘used factor analysis, to build a space representing the large-scale patterns in common sense knowledge’ (Havasi et al, 2014: 4-5).
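The published descriptions suggest a matrix-factorisation logic that can be sketched, in highly simplified form, as a truncated singular value decomposition over a tiny hypothetical concept-feature matrix; this illustrates the principle rather than the project’s actual code or data:

```python
import numpy as np

# Hypothetical concept x feature matrix built from volunteer assertions
# (1 = asserted). "cat is alive" and "bicycle can be ridden" were never entered.
concepts = ["dog", "cat", "car", "bicycle"]
features = ["is a pet", "has wheels", "can be ridden", "is alive"]
A = np.array([
    [1, 0, 0, 1],   # dog
    [1, 0, 0, 0],   # cat
    [0, 1, 1, 0],   # car
    [0, 1, 0, 0],   # bicycle
], dtype=float)

# Factor analysis is approximated here by a rank-2 truncated SVD: the leading
# factors capture the broad "pet/alive" versus "vehicle/rideable" pattern.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_hat = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]

# The reconstruction assigns non-zero scores to plausible but unstated
# assertions, e.g. that a cat is alive: a toy version of "digital intuition".
print(dict(zip(features, A_hat[1].round(2))))   # reconstructed row for "cat"
```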
From 2007, internet volunteers rated common sense statements from OMCS as “generally true”, “sometimes true”, “not true”, “doesn’t make sense”, or “not true but amusing” (Havasi et al, 2014: 32) – data which was fed back into the system to refine and expand its knowledge base.
OMCS led to other intuition-based AI initiatives, such as MIT Media Lab’s Digital Intuition Project, which ran between 2011 and 2013.
Topics: Culture, Ethics, Initiatives, Systems
Post-cybernetic logic
Leveraging computational uncertainty to generate value
The mathematician Norbert Wiener’s cybernetics proposed a capacity for recursive feedback as what links humans and machines with ‘sensory organs’ (Wiener, 1948).
Alan Turing’s ‘Imitation Game’ (Turing, 1950), in turn, inaugurated a ‘simulative paradigm’ (Fazi, 2020) for artificial intelligence in which biological and mechanistic processes of cognition came to be figured comparatively or analogically.
Today, a new techno-social paradigm is consolidating in which what constitutes thinking, sensing, or intuiting in machine learning is not expressible in human terms, and algorithmic systems have become too immense, complex, and unwieldy to control via feedback in the way first-order cybernetics imagined.
While twenty-first century media collect ongoing data about human sensibility which is used to produce new (or amplify existing) affects, behaviours, and atmospheres, these processes do not coincide with human time, space, or sense perception (Hansen, 2015); rather, they involve ‘inexperiencable experience’ (Chun, 2016: 55).
What is vital to post-cybernetic logic is, as the media scholar Patricia Clough suggests, not ‘the reliable relationship between input and output’ (2018: 104), but rather the speculative capacity to generate value through leveraging computational uncertainty itself.
Contemporary algorithmic dynamics operate in a post-probabilistic mode animated by randomness and incomputable data. The focus is less on tracking past patterns that can be projected into the future and more on identifying emergence, potentiality, and ‘the merely possible’ – an imperative enabled by data mining practices that employ ‘association rules’ between transactions across databases (Amoore, 2013: 43).
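A minimal sketch of association rule mining, using an invented set of transactions, illustrates the basic mechanics of scoring “if A then B” rules by their support and confidence:

```python
from itertools import combinations

# Hypothetical transactions (e.g. items purchased together).
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "nappies"},
    {"bread", "butter", "nappies"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the set."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Score simple one-to-one rules "if A then B" by support and confidence.
items = sorted(set().union(*transactions))
for a, b in combinations(items, 2):
    s = support({a, b})
    if s == 0:
        continue
    confidence = s / support({a})
    if confidence >= 0.75:
        print(f"{a} -> {b}: support={s:.2f}, confidence={confidence:.2f}")
```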
In such conditions, machine learning-enabled “artificial intuition” incorporates noise, doubt, and uncertainty into its algorithmic operations to generate previously unknown associations (Amoore, 2020). Generative AI undertakes a prehensive prompting of futurity which is not adequately captured by anthropocentric reference points.
The fact that the statistical operations involved in “artificial intuition” operate without a stable distinction between inside and outside ‘the system’ (Clough et al., 2015) raises the question of what constitutes “the environment” in social worlds increasingly ordered by computational technologies (Pedwell, 2021b).
Topics: Culture, Governance, Innovations, Systems
Proxies
An entity that stands in for another entity; common in modelling and machine learning
Utilised in different kinds of modelling, a proxy is an entity that takes the place of another entity, or is understood to have the same meaning or purpose as it.
When something cannot be modelled or calculated directly, a proxy may be used to approximate or “intuit” its dynamics amid uncertainty. As the media scholar Dylan Mulvin suggests, proxies bear ‘the promise of creating controllable renditions of an unpredictable and unknowable’ (2021: 4).
Weather and climate models often use proxies. Paleoclimatologists, for instance, mobilise proxies for temperature ‘such as tree ring measurements, ice cores, ice melts, and local weather records’ (Chun, 2021: 127); a necessary practice in the face of incomplete or uncertain data which has informed the development of Earth Systems Models (Lynch, 2008; Edwards, 2010).
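The logic of such a proxy can be sketched with invented numbers: a measurable stand-in (here, synthetic “tree-ring widths”) is calibrated against instrumentally recorded temperatures and then used to estimate temperatures for periods with no instrumental record:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration period: years with both instrumental temperatures and
# tree-ring widths (the proxy). All numbers are invented for illustration.
temps = rng.normal(14.0, 0.6, 50)                      # recorded temperatures (°C)
rings = 0.8 + 0.15 * temps + rng.normal(0, 0.05, 50)   # proxy responds to temperature

# Fit the proxy-temperature relation over the calibration period ...
slope, intercept = np.polyfit(rings, temps, 1)

# ... then estimate temperatures for pre-instrumental years from rings alone.
old_rings = np.array([2.75, 2.90, 3.05])
estimates = slope * old_rings + intercept
print(estimates.round(2))   # estimated, not measured: the proxy carries its own uncertainty
```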
Within machine learning-enabled “artificial intuition”, proxies expand ‘the estimable beyond the calculable’ (Hoyng, 2025: 9) for the purpose of discovery. In this context, and more generally, proxies both ‘reduce and introduce uncertainty’: they enable researchers to expand what is knowable yet, in doing so, introduce new forms of indeterminacy (Chun, 2021: 125).
As a ‘culturally conditioned practice of consistently using some things to stand in for the world’ (Mulvin, 2021: 5), “proxification”, however, can also reinforce stereotypes and inequalities.
As the digital media scholar Wendy Chun suggests, statistical techniques employed within machine learning systems to “intuit” previously unknown associations, such as K-means testing, draw on proxies ‘to compensate for ignorance or lack of evidence’ (2021: 57). Proxies used in social data analytics for marketing, policing, insurance provision, and electoral engineering (e.g. the use of postal code or ‘age of first arrest’ to infer race, or brand preferences to infer ‘low IQ’) are often discriminatory and entangled with eugenic histories.
The fact that proxies are vital to understanding urgent phenomena such as climate change but also underlie discriminatory analytics underscores their fundamental ambivalence – and that of correlative modes of intuiting, knowing, and predicting more widely.
Topics: Culture, Ethics, Innovations, Systems
Sensory apparatus
Perception system, often technical, for sensing and interpreting physical or digital environments
Sensory apparatus refers to how humans and machines perceive environments, relevant to computer vision, neural networks, and “artificial intuition”.
While cyberneticians wanted to engineer what the mathematician Alan Turing called ‘thinking machines’, they also sought to transform how humans sensed and perceived the world through their interactions with machines with ‘sensory organs’ (Wiener, 1948).
As the media historian Orit Halpern (2014) discusses, cybernetic thinkers like Norbert Wiener were concerned with refiguring perception, intuition, and the sensory apparatus within a post-war structure of feeling animated by new sensing technologies such as radar and digital computing.
In his best-selling 1948 book Cybernetics, Wiener writes about the need to cultivate a more-than-human mode of perception fit for the new digital age, which moves away from an Enlightenment preoccupation with taxonomy, precision, and rational analysis to produce a more intuitive ‘flow of ideas’ capable of deducing patterns.
Wiener’s new mode of perception is resonant with the philosopher Henri Bergson’s (1903) earlier account of intuition as a process of connecting with change as it unfolds. In contrast to Bergson, however, Wiener is interested in how intuition can be engineered into computers through breaking down its operations into discrete elements.
Today, proliferating sensor technologies underpinned by machine learning algorithms are transforming sensing practices through ‘ensembles of multiple humans and more-than-humans, environments and technologies, politics and practices’ (Gabrys, 2019: 273).
As the digital sociologist Jennifer Gabrys suggests, these practices (re)mediate sensing as ‘an ongoing and collaborative undertaking’ in ways that refigure ‘the traditional categories of expert, citizen and participation’ (2019: 276, 275) – and inform contemporary practices of “artificial intuition”.
Topics: Culture, Ethics, Innovations, Systems
Stored-program digital computer
Von Neumann model in which memory stores both data and code, enabling flexible computation
The first stored-program digital computers, engineered in the 1940s, held programme instructions in the same electronic memory as the data they operated on, enabling flexible computation. Realised in machines like the Manchester Baby and the Manchester Mark 1, built at Manchester University in 1948 and 1949, this architecture underpins modern computing.
During the Second World War, Britain and the United States collaborated to develop digital technologies for urgent military purposes, from decrypting the German Enigma cipher to calculating ballistic trajectories to running the high-speed calculations that informed the atomic bomb.
The mathematician Alan Turing had, of course, already sketched the theoretical foundations of digital computing in 1936 with his Universal Machine.
Credit for conceptualising the first stored programme digital computer, however, is usually assigned to the mathematician, physicist and Manhattan Project modeller John von Neumann of the Institute for Advanced Study (IAS) in Princeton, who was aware of Turing’s work and met with him during Turing’s doctoral study at Princeton in the late 1930s.
Along with the IAS mathematicians Herman Goldstine and Arthur Burks, von Neumann published ‘Preliminary Discussion of the Logical Design of an Electronic Computing Instrument’ in 1946, a conceptual blueprint for a digital computer consisting of four key ‘organs’: a processing unit, a control unit, memory and external mass storage, and input and output mechanisms (Burks, Goldstine, and von Neumann, 1946).
Known as ‘the von Neumann architecture’, this design informed stored programme digital computers on both sides of the Atlantic in the late 1940s and early 1950s, including the IAS machine built by the engineer Julian Bigelow and team in Princeton between 1946 and 1952.
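The stored-program principle itself can be illustrated with a toy fetch-decode-execute loop in which instructions and data occupy the same memory; the instruction set below is invented for illustration and does not correspond to the IAS machine or the Manchester computers:

```python
# Toy stored-program machine: instructions and data share one memory.
memory = [
    ("LOAD", 6),    # 0: load memory[6] into the accumulator
    ("ADD", 7),     # 1: add memory[7] to the accumulator
    ("STORE", 8),   # 2: write the accumulator back to memory[8]
    ("HALT", 0),    # 3: stop
    None, None,
    20,             # 6: data
    22,             # 7: data
    0,              # 8: result goes here
]

acc, pc = 0, 0
while True:
    op, arg = memory[pc]          # fetch and decode the next instruction
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc         # programs can also overwrite instructions
    elif op == "HALT":
        break

print(memory[8])   # 42
```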
While earlier digital computers, such as the ENIAC, built at the University of Pennsylvania’s Moore School of Electrical Engineering, were designed to calculate artillery firing tables, the IAS computer was inspired by the workings of the human nervous system and wider cybernetic principles. It was to be a ‘thinking machine’ (Turing, 1950) that would extend perception, intuition, and knowledge beyond existing human capacities.
As Goldstine suggested in a 1947 letter to Pentagon officials, with its unprecedented computational power, the IAS computer would allow researchers to test their intuitive ‘hunches’ through statistical means. It would, that is, enable scientists and mathematicians to speculate, experiment, and ‘develop a feeling for the data’ (Turkle, 1995: 64) to furnish scientific breakthroughs in a new nuclear age.
Topics: Initiatives, Innovations, Systems
Structures of feeling
Cultural concept from Raymond Williams describing shared, affective experiences within a historical moment
Structures of feeling, coined by the literary theorist Raymond Williams, describe shared affective experiences shaping cultural patterns and forms.
Although reflecting a particular atmosphere of the age, structures of feeling are difficult to articulate in ordinary language as they hover ‘at the very edge of semantic availability’ (Williams, 1977: 134).
Williams’ structures of feeling aligns with the philosopher Henri Bergson’s earlier account of intuition. As the cultural theorist Gregory J. Seigworth (2006) suggests, both thinkers are interested in how we encounter ‘pre-emergent’ social and material forces and relations; in how we might sense change as it is actually happening.
Today, we might say that “the algorithmic” animates an emergent structure of feeling for our times. Resistant to comprehensive elucidation, it is more likely to be fleetingly glimpsed or intuitively sensed; a flickering awareness of one’s feelings, choices, and actions being tracked, anticipated, and shaped by pervasive machine learning architectures (Pedwell, 2022).
Topics: Culture, Milestones
The halting problem
Computability problem proving that no algorithm can decide if a program halts on all inputs
In his pivotal 1936 paper ‘On Computable Numbers’, which introduced the Universal Machine, the mathematician Alan Turing pointed to “the incomputable” in mathematics, showing inherent limits to computation.
In doing so, he articulated ‘the halting problem’: no general algorithm can exist that determines, for an arbitrary computer programme and input, whether the programme will eventually finish running or continue forever. The halting problem is foundational to computer science, influencing AI models and algorithms.
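The core of Turing’s argument can be sketched in code: if a general halting decider existed, one could construct a program that contradicts it, so no such decider can exist. The halts function below is hypothetical by design:

```python
# Sketch of the classical contradiction. `halts` is a pretend oracle: the
# argument shows that no such total, always-correct function can be written.

def halts(program, argument) -> bool:
    """Pretend oracle: returns True iff program(argument) eventually halts."""
    raise NotImplementedError("Turing's result: this cannot exist in general")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:       # loop forever
            pass
    return                # halt immediately

# Feeding `paradox` to itself yields a contradiction either way the oracle
# answers, so a general halting decider is impossible.
```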
Digital media scholars, in turn, explore the speculative promise of computational undecidability. The computer scientist and literary scholar N. Katherine Hayles argues that Turing’s vital legacy has been to illustrate the generative limits of pre-emptive control in algorithmic systems: ‘the more control is codified and extended through computational media, the more apparent it becomes that control can never be complete’ (Hayles, 2017: 203).
Recognising the extent to which unknowability and contingency characterise computational architectures is important for understanding how these systems could operate otherwise, and for how recursivity could be opened ‘up to many different and thus transversal epistemologies’ (Parisi and Dixon-Román, 2020: np).
Also important to foreground – as cybernetics, STS, actor-network-theories, affect studies, and ecological media scholarship have long illustrated – is that beneath any technology presented as “artificial”, “automatic”, or “autonomous” are dense networks of affective and socio-technical relations (Wilson, 2010; Amoore, 2020).
Topics: Innovations, Milestones, Systems
The Imitation Game
AI test proposed by Alan Turing to assess machine intelligence via human-like conversation
The Imitation Game is a thought experiment conceptualised by the mathematician Alan Turing to imagine the possibility of machine intelligence.
Opening with the provocative question ‘Can machines think?’, Turing’s most famous paper, ‘Computing Machinery and Intelligence’, was published in 1950 in the philosophy journal Mind.
Quickly deeming this original question too imprecise to operationalise, Turing suggests an imaginative thought experiment instead: The Imitation Game would ‘be played by three people, a man (A), a woman (B), and an interrogator (C)’, who would ‘stay in a room apart from the other two’ and whose objective would be, through strategic questioning, to determine which ‘is the man and which is the woman’ (Turing, 1950: 433).
Having outlined the ground rules, Turing then asks, ‘What will happen when a machine takes the part of A in this game?’ (1950: 434). He predicts that, within about fifty years, digital computers will play the game so well that an average interrogator will have no more than a 70 per cent chance of making the right identification after five minutes of questioning – at which point, he suggests, we will have to accept the existence of machine intelligence.
Turing’s longstanding interest in “thinking machines” was linked to the emergence of the first general purpose digital computers at the end of the Second World War – including the Small-Scale Experimental Machine at the University of Manchester, to which he contributed directly (Hodges, 1983).
Although these early computers lacked the memory capacity, electronic speed, and programming sophistication to be serious contenders in the Imitation Game, Turing’s interest in 1950 is primarily speculative rather than empirical: ‘we are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well’ (Turing, 1950: 436).
Turing’s imperative is not to operationalise the Imitation Game – later known as the “Turing Test” – to prove the existence of machine intelligence, or to provide criteria by which we might definitively either equate or distinguish “human” from “machine” (Pedwell, 2022).
Rather, as the cultural theorist Elizabeth A. Wilson (2010) argues, Turing voices a plea for imaginative expansiveness concerning the possible futures of machine intelligence – futures in which the boundaries between organic and inorganic, biology and technology, and human and machine are complex, emergent, and undetermined.
Topics: Culture, Initiatives, Innovations, Milestones
The incomputable
Computation limit referring to problems no algorithm can solve.
In his 1936 paper, ‘On Computable Numbers’, the mathematician Alan Turing demonstrated that ‘computable numbers’ indicated the existence of ‘the incomputable’: problems that no algorithm can solve.
In his PhD thesis, ‘Systems of Logic Based on Ordinals’, completed under the supervision of the mathematician Alonzo Church in 1938, Turing uses ‘computable function’ to mean ‘a function calculable by a machine’. He suggests that human mathematical intuition achieves something that machines cannot reproduce.
In contemporary media theory, ‘the incomputable’ is broadened to refer to that which exceeds algorithmic parameters or is lost when dynamic relations are rendered computational.
Advocates of “artificial intuition” claim that, in being able ‘to “tell” what is latent in a scene’ (Amoore, 2020: 106), advanced machine learning architectures can now “compute the incomputable” – rendering affective, bio-physiological, socio-political, and ecological life in algorithmic terms.
In algorithmic modelling, proxies are frequently used to ‘stand-in for something that cannot be modeled directly but can still be estimated, or at least guessed’ (Edwards, 2010: 338). This allows artificial intuition to expand ‘the estimable beyond the calculable’ (Hoyng, 2025: 9).
Topics: Culture, Innovations, Milestones
The personal computer
Tech milestone that brought computing to individuals, transforming work, leisure, and identity
The personal computer, popularised by machines like the IBM PC and the Apple II, partially democratised computing in Britain, North America, and beyond.
If, in the 1940s and 1950s, digital computers were seen as ‘too fragile, valuable and complicated for nonspecialists’ (Rheingold, 1985: 14), the 1970s and 1980s mark the moment when these new “thinking machines” start to permeate public consciousness as they enter homes, schools, and workplaces (Turkle, 1984, 1995).
This period is animated by optimism about the role personal computers might play in extending human potential, connectivity, and political engagement, alongside fear concerning the prospect that they would rapidly usurp human qualities, labour, and expertise (Pedwell, 2022).
Scientists and engineers imagined how, beyond its role as a calculating machine, the digital computer might enhance our intuitive capacities ‘to speculate, build and study models, choose between alternatives, and search for meaningful patterns in collections of information’ (Rheingold, 1985: 15).
For others, however, intuition consolidates in this context as a lived, embodied, and sensory capacity which distinguishes humans from machines.
In interviews with the sociologist Sherry Turkle in the 1970s and 1980s, for example, adults and children similarly define what it means to be human on the basis of what computers can’t do, which centres on ‘intuition, embodiment, emotions’ and the possibility of ‘spontaneity’ (Turkle, 1995: 83).
Topics: Culture, Milestones, Systems
The Universal Machine
Turing model of a programmable device, foundation of modern general-purpose computers
The Universal Machine, proposed by the mathematician Alan Turing, is a theoretical device underpinning the idea of a general-purpose digital computer.
It emerged from Turing’s 1936 response to Hilbert’s then unsolved Entscheidungsproblem, or decision problem, which asked whether mathematics was “decidable” – in this context, referring to ‘the quality of being fixed in advance, in such a way that nothing new could arise’ (Hodges, 1983: 123).
To approach the problem of decidability, Turing abstracted the quality of being determined and applied it to the manipulation of symbols. Could we, he asked, imagine an automatic machine which would employ a mechanical process involving symbols to read a ‘mathematical assertion presented to it, and eventually writ[e] a verdict as to whether it was provable or not’?
By devising a novel formulation of the old concept of “algorithm”, Turing’s computational thought experiment showed that ‘there was no “miraculous machine” that could solve all mathematical problems’, and therefore the answer to Hilbert’s question was ‘no’ (Hodges, 1983: 124). Yet the dramatic potential remained for a Universal Machine that, through its algorithmic programming, was capable of simulating the actions of any other machine.
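The idea of a single machine that reproduces the behaviour of any other machine from a description supplied as data can be sketched with a minimal simulator; the rule table below (a unary incrementer) is an invented example, not Turing’s own construction:

```python
def run_turing_machine(rules, tape, state="start", blank="_", steps=1000):
    """Simulate any machine given its rule table:
    (state, symbol) -> (new state, symbol to write, head move)."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A hypothetical machine description, supplied purely as data: append one '1'
# to a unary string. The same simulator would run any other rule table.
increment = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run_turing_machine(increment, "111"))   # '1111'
```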
Today the logics of the Turing machine have been actualised in computing technology, the internet, and a wide range of computational architectures. At the time, however, the universal machine was not a tangible electronic device, but rather a speculative machine; a mathematical description of a possible future automated technology (Pedwell, 2022).
Topics: Innovations, Milestones, Systems
What Computers Can’t Do
Hubert Dreyfus's critique suggesting why symbolic AI can't capture the intuitive and embodied nature of human thought
The philosopher of science Hubert Dreyfus’ influential 1972 book, What Computers Can’t Do: A Critique of Artificial Intelligence, argues that logic-based AI cannot replicate human intuition and contextual understanding.
Dreyfus first articulated this position in his combative 1965 review of Allen Newell and Herbert Simon’s AI research for the RAND Corporation, the national research think tank offering analysis to the US military.
Drawing on phenomenology, Dreyfus suggests that human intelligence is fundamentally different from computer intelligence and that, without embodied knowledge, computers will be incapable of intellectual tasks that require intuition and experience.
Expanding this argument in What Computers Can’t Do, and its sequel What Computers Still Can’t Do, Dreyfus predicts that logic-based AI will fail to achieve robust flexibility and intuitive sense-making because of its faulty account of ‘intelligence as a passive receiving of context-free facts into a structure of already stored data’ (Dreyfus, 1992: 34).
Human learning, from his perspective, entails broad embodied “know-how” shaped by contextual particularities, unfolding moods, and sensory-motor skills. A key problem with symbolic processing AI’s model of learning is that ‘one cannot substitute an extractable web of beliefs for the whole cloth of our concrete everyday experiences’ (Dreyfus, 1992: 54).
Published in 1985, Dreyfus’s follow-up book, Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer, co-authored with his brother Stuart Dreyfus, argues that AI expert systems require anticipation of all possible contingencies in order to code ‘rules for response’ and, as such, the litheness and immanent insight of expert human intuition is ‘forfeited’ (Dreyfus and Dreyfus, 1985: 31).
Topics: Culture, Governance, Innovations, Milestones
Wiener's Cybernetics
Bestselling 1948 book by Norbert Wiener that popularised cybernetics
The mathematician Norbert Wiener’s bestselling 1948 book Cybernetics: Or Control and Communication in the Animal and Machine inaugurated the field of cybernetics for the American and international publics.
Synthesising resources from mathematics, engineering, computer science, information theory, neuroscience, neurophysiology, psychology, and decision theory, Wiener pioneered a statistical probabilities-based approach to communications engineering and introduced the concept of feedback as what fundamentally links humans and certain kinds of machines.
“Feedback” is a process in which the outcomes of past actions are taken as inputs for future action (Wiener, 1948, 1950), and it is this recursive cycle that constitutes the “learning” of contemporary machine learning algorithms.
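A minimal sketch of such a feedback loop, with invented numbers, is a thermostat-style controller in which each action is calculated from the measured outcome of the previous one:

```python
# Negative feedback: the outcome of each action (the measured temperature)
# becomes the input that shapes the next action (how much to heat).
target = 21.0          # desired room temperature (°C)
temperature = 15.0     # current temperature
gain = 0.3             # how strongly the controller reacts to the error

for step in range(20):
    error = target - temperature      # compare the outcome with the goal
    temperature += gain * error       # the next action is a function of the past outcome

print(f"after 20 steps: {temperature:.2f} °C (approaching the 21.0 °C target)")
```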
Early cyberneticists built physical machines that could respond “intuitively” to their unfolding physical and algorithmic environments – whether the feedback mechanisms involved were ‘as simple as photoelectric cells which change electronically when a light falls on them’ or as complicated as those found within ‘high-speed electrical computing machines’ (Wiener, 1950: 22, 24).
Alongside the contributions of mathematicians Kurt Gödel, Alonzo Church, and Alan Turing, Wiener’s cybernetic thinking was central to the twentieth century’s ‘transition from certainty to probability’ – as well as the emphasis on indeterminacy and the insight that ‘observation always affected the system being observed’ (Finn, 2015: 27) which second order cybernetics of the 1960s and 1970s powerfully highlighted.
Cybernetics played a foundational role in actualising the immanent, more-than-human, and technological aspects of “artificial intuition”; albeit in ways that could lapse into too-easy human-machine equivalencies and gloss over vital embodied and situated particularities (Pedwell, 2022).
Topics: Culture, Innovations, Milestones