Resources
Read about the rise of “artificial intuition” and browse archival and scholarly sources

Digital histories and futures of intuition
- Intuition, speculation, and affect
- Training, discovery, and agency
- The rise of artificial intuition
- Abductive reasoning and machine learning
- More-than-human intuitive agencies
- The computational politics of intuition
- Prediction, chaos, and the counterfactual
Project resources

Intuition, speculation, and affect
Colloquially, we might associate intuition with direct sensing or fast thinking that bypasses rational deliberation. We might also describe intuition as an affective premonition, an embodied hunch, or a gut feeling based on experience.
Yet intuition has a long intellectual history. Within Western philosophical traditions, it dates back at least as far as Plato, who understood intuition as intellectual perception, distinct from sensory perception, which corresponds to ‘the eternal’.1
The publication of the French philosopher Henri Bergson’s An Introduction to Metaphysics in 1903, however, offered something quite different. For Bergson, intuition is a form of immersive engagement with the world premised on the experience of duration, process, and change.
Bergsonian intuition brings together ‘experience and experiment’ to produce speculative knowledge about new and specific problems as they unfold in time.2
Bergson’s interest in movement, temporality, and speculation resonates with contemporary affect theories. As the cultural studies scholar Gregory J. Seigworth has argued, the Welsh cultural theorist Raymond Williams’s influential account of ‘structures of feeling’3 aligns with Bergsonian intuition.
Bergson and Williams are each interested in how we encounter “pre-emergent” social and material forces – in how we become affectively attuned to that which hovers ‘at the very edge of semantic availability’.4
Both thinkers, then, explore how we might sense change as it is happening5 – an imperative brought to life by the more recent affect scholarship of Kathleen Stewart, Erin Manning, and Lauren Berlant, in their varying modes of intuitively inhabiting the unfolding sensations of everyday life.6

Training, discovery, and agency
In Cruel Optimism, the cultural theorist Lauren Berlant describes intuition as a ‘process of dynamic sensual data gathering’ through which ‘we make reliable sense of life’. From this perspective, intuition is shaped recursively through lived experience and ‘visceral response is a trained thing’.7
In this particular way, Berlant’s vision intersects with cognitive psychologies and philosophies which understand intuition as a trained mode of action-perception.8
Think, for instance, of how, as the psychologist David G. Meyers puts it, ‘thanks to a repository of experience, a tennis player automatically – and intelligently – knows just where to run to intercept the ball, with just the right racquet angle … a near-perfect intuitive physics’.9
Or consider how, as a classic 1973 study by the cognitive scientists Herbert Simon and William Chase demonstrated, expert chess players can reproduce a chess-board layout after a mere five-second glance.10
Yet, if mainstream cognitive psychologists, philosophers, and behavioural economists assume a bounded individual and pay scant attention to the politics of intuition, Berlant is more interested in collective practices of anticipation in which ‘affect meets history, in all its chaos, normative ideology, and embodied practices of discipline and invention’.7
Across these theories and philosophies, intuition involves the ongoing interplay of conscious and non-conscious modes of thought. Throughout his writing, Henri Bergson suggests that it is, in part, the less-than-conscious aspects of behaviour that enable ingenuity, creativity, and discovery.11 Bergson’s framework overlaps, in this respect, with accounts of mathematical intuition.
For the British mathematician Alan Turing, intuition is a mathematical faculty that ‘consists of making spontaneous judgements that are not the result of conscious trains of reasoning’.12 Or, as the nineteenth-century French mathematician Henri Poincaré puts it: ‘It is by logic that we prove. It is by intuition that we discover’.13
One key implication of these accounts is an imperative to relinquish our persistent attachment to human-centric notions of will, agency, and intentionality – to move away more decisively, as the philosopher Erin Manning puts it, from ‘the notion that it is the human agent, the intentional, volitional subject who determines what comes to be’.14

The rise of artificial intuition
In theorising intuition beyond “the human”, these perspectives resonate with digital media scholarship which observes the range of computational processes and systems that now entangle human and non-human modes of sensing, perception, and cognition.
The literary scholar N. Katherine Hayles, for instance, conceptualises ‘a planetary cognitive ecology’ in which cognition is engaged in by ‘technical systems as well as biological life-forms’ and agency is therefore more-than-human, distributed, and ‘punctuated’.15
Within computer science, research on ‘artificial intuition’ in decision-making enabled by deep learning defines it as an automatic process ‘which does not search for rational alternatives, jumping to useful responses in a short period of time’.16
Drawing on network representations of past knowledge and experience, artificial intuition ‘combine[s] logic and randomness’ to assess problem contexts characterised by ‘partial information’ and select effective courses of action.17
This work has been enabled by the consolidation of machine learning technologies over the past four decades. From the 1980s, collaboration across mathematics, economics, and neuroscience led to the integration of probability and decision theory into AI, including the development of Bayesian networks.18
Developing insights from the eighteenth-century mathematician Thomas Bayes, who offered ‘a novel way to reason about the probability of events’, Bayesian networks proved a powerful tool in machine learning technologies, often combined with neural network algorithms to allow ‘AI to learn adequately despite imperfect data’.19
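The inference at the core of these networks can be sketched in a few lines. What follows is a minimal, generic illustration of Bayes’s rule (our own sketch, not code from any system cited here): a prior belief is revised into a posterior as evidence arrives, even when that evidence is partial or noisy.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes's rule."""
    # total probability of observing the evidence at all
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# a weak prior hunch (10%) meets evidence that is far likelier if the
# hypothesis holds (90%) than if it does not (20%)
posterior = bayes_update(0.1, 0.9, 0.2)

# a second noisy observation pushes the belief further still
posterior_after_two = bayes_update(posterior, 0.9, 0.2)
```

Chained updates of this kind are what allow such systems to ‘learn adequately despite imperfect data’: each revision starts from the belief the previous evidence produced.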
For the political geographer Louise Amoore, the re-making of eighteenth-century rules of chance via Bayesian inference models, alongside the development, from the 1990s, of advanced data mining techniques, signalled the infiltration of ‘the intuitive and the speculative within the calculation of probability’.20
In conjunction with the design of advanced evolutionary algorithms, such developments have been crucial to the post-millennial rise of artificial intuition.

Abductive reasoning and machine learning
Across AI, computer science, and the technology press, artificial intuition is understood to be abductive rather than deductive. Unlike ‘deductive reasoning by hypothesis testing’, advanced machine learning systems ‘deploy abductive reasoning so that what one will ask of the data is a product of patterns and clusters derived from the data’.21
Artificial intuition may thus be less relatable metaphorically to Bergsonian intuition than it is to the abductive reasoning associated with the American mathematician and pragmatist philosopher Charles Sanders Peirce.
Variously linked throughout Peirce’s oeuvre to ‘hypothetical thinking, imagination, intuition and guessing’,22 abduction consolidates for him in the 1890s as a kind of inference involving the ‘generation and evaluation of explanatory hypotheses’.23
In this context, artificial intuition is understood to be fundamentally experimental and generative: using advanced forms of pattern recognition, it discovers previously unknown associations. Often working with raw and unlabelled data streams, deep neural networks map the structures and patterns of their input data and rapidly identify hidden correlations.24
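This kind of unsupervised pattern discovery can be illustrated with a deliberately tiny sketch (our own generic example, not the deep networks described above): a minimal k-means clusterer that groups unlabelled points without being told what, or where, the groups are.

```python
import math

def kmeans(points, k, iters=20):
    """Group unlabelled 2-D points into k clusters (Lloyd's algorithm).
    Deterministic initialisation; assumes k >= 2."""
    # spread the initial centres across the input list
    centres = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # assign each point to its nearest current centre
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centres[c]))
            groups[nearest].append(p)
        # move each centre to the mean of the points assigned to it
        centres = [
            tuple(sum(coord) / len(g) for coord in zip(*g)) if g else centres[i]
            for i, g in enumerate(groups)
        ]
    return centres

# two unlabelled "blobs"; the algorithm locates their centres unaided
data = [(0.1, 0.0), (0.0, 0.2), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centres = sorted(kmeans(data, k=2))
```

No labels, targets, or hypotheses are supplied in advance: the structure is derived from the data alone, which is the narrow sense in which such pattern-finding is called ‘abductive’.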
Hayles suggests, in this vein, that generative AI acquires ‘a kind of intuitive knowledge’ derived from ‘the intricate and extensive connections that it builds up from the references it makes from its training dataset’.25 It produces a kind of ‘tacit knowledge’ developed from ‘countless indexical correlations, embodied in indirect and direct ways’.26

More-than-human intuitive agencies
The digital computing pioneers Allen Newell and Herbert Simon announced in 1958 that ‘intuition, insight, and learning are no longer the exclusive possessions of human beings and any large high-speed computer can be programmed to exhibit them also.’27
Yet various intuitive human gestures and capacities have proven stubbornly difficult to replicate with machine intelligence, given the contextual embodied awareness they seem to require.28
Some media scholars suggest that current algorithmic modes of “thinking”, “sensing” and “speculating” offer less an augmentation of “the human” than they do radically different modes of operation which are not subject to comparison to, or comprehension by, anthropocentric processes and capacities.
As the digital media theorist Luciana Parisi puts it, ‘soft(ware) thought’ involves the automated prehension of infinite data that cannot be fully compressed, comprehended, or sensed by totalities such as “the mind”, “the machine” or “the body”.29
If Alan Turing’s imitation game inaugurated a ‘simulative paradigm’30 for AI, in which biological and mechanistic processes of cognition came to be figured comparatively or analogically, and Norbert Wiener’s cybernetics proposed a capacity for recursive feedback as what links humans and machines with ‘sense organs’, a new techno-social paradigm is now consolidating.
In this emergent landscape, what constitutes intuition or speculation in machine learning is not expressible in human terms, and algorithmic systems have become too immense, complex, and unwieldy to control via feedback in the way first-order cybernetics imagined.
What is vital to post-cybernetic logic is, as the media theorist and sociologist Patricia Clough articulates, not ‘the reliable relationship between input and output’,31 but rather the speculative capacity to generate value by leveraging computational uncertainty itself.

The computational politics of intuition
Critical computational literatures examine the workings (and risks) of abductive reasoning within algorithmic governance, decision-making, and capitalisation – in which machine learning increasingly acts ‘to control the flow of actions and future events’,32 and human experience is claimed ‘as free raw material for hidden commercial practices of extraction, prediction, and sales’.33
Significantly, these everyday computational systems work unevenly, (re)producing hierarchical modes of (non)humanity through their biopolitical and geopolitical logics. As such, they compel attention to the regulation, exclusion, and violence which algorithmically-mediated intuition may entail.34
Data-driven “hunches” frequently reproduce a recursive loop of dominant cultural associations35 or, as the digital media scholar Wendy Hui Kyong Chun has shown, make probabilistic speculations on the basis of iterative biases and prejudices projected into the future.36
What is at stake in the consolidation of artificial intuition, then, is not only the ability of corporations and governing bodies to nudge, shape, and control the future, but also their capacity to recursively constitute ‘the very conditions of the intelligible and the sensible’.37
From this angle, we might argue that digitally mediated intuition today seeks to create all-encompassing computational ecologies which train more-than-human modes of thinking and feeling that serve dominant political, economic, and ideological interests.
And yet, an analysis of artificial intuition that engages contextual nuance and the messiness of lived experience must recognise, in the spirit of Berlant’s account, that visceral response is immanently trained in multiple ways with diverse, and often contradictory, affective, material, and socio-political effects.
A vital ongoing challenge is how to approach complex political and ethical quandaries wherein the primary unit of investigation is neither the ‘ir(responsible) human’ nor the ‘errant machine’ but instead the emergent ‘human-algorithm composite’ – within affective atmospheres in which intuition ‘never meaningfully belonged to a unified “I” who thinks’.38

Prediction, chaos, and the counterfactual
Salient questions emerge concerning what kind of uncertainty, complexity, or chaos artificial intuition is equipped to address – and what happens when unruly or unknowable dynamics are flattened by the statistical tools of machine learning.
Statistical climate models, for example, encounter difficulties in predicting ecological futures because they rely on past data and many atmospheric changes currently afoot are unprecedented and chaotic.
Although machine learning systems are said to specialise in abductive discovery, what is being claimed by their advocates is arguably a mode of abduction that is both highly self-referential and ex post facto.
As Louise Amoore suggests, deep learning operates via a retroactive logic ‘of beginning with an end target and abductively working back to adjust the parameters of the model in order to converge on the target’. The potential for different visions of the future to emerge from such models is therefore ‘radically foreclosed’.39
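The retroactive logic Amoore describes, beginning from a target and adjusting parameters until the model converges on it, is recognisable in miniature in ordinary gradient-descent fitting. The sketch below is our own generic illustration, not Amoore’s example or any particular deep learning system:

```python
def fit_to_target(target, w=0.0, lr=0.1, steps=200):
    """Adjust a single parameter w until the model's output (here simply
    w itself) converges on a pre-given target, by gradient descent on
    the squared error (w - target)**2."""
    for _ in range(steps):
        error = w - target
        w -= lr * 2 * error  # step against the gradient of the error
    return w

w = fit_to_target(3.0)
# the fitted parameter can only ever converge on the target it was handed
```

In this toy form the ‘radical foreclosure’ is visible directly: whatever the starting point, the procedure works backwards from the end target, and no other future can emerge from it.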
Amid far-right disinformation and growing allegiances between political authoritarianism and Big Tech, an everyday reality in which truth ‘is by nature retroactive’40 is amplified as conditional AI models come to inform a wider socio-political sensibility and environmental mode of operation.
To work effectively with ‘inconsistency, subjectivity, and generally noisy data’, counterfactual reasoning within machine learning systems may draw ‘rough conclusions’ based on analogies, similarities, and tendencies rather than on an ‘assumption of absolute truth’.41
What have been called AI “hallucinations”, in turn, refer to how the predictions of generative AI systems such as large language models produce outputs that are ‘factually wrong yet linguistically fluent and seemingly coherent’.42
Further issues arise from the use of synthetic data, and the production of new outputs immanently fed back into generative AI systems, which can amplify bias, error, and misinformation at scale as such systems become increasingly self-referential.42
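The self-referential feedback dynamic described above can be made concrete with a toy simulation (entirely our own construction, not a model of any specific system): a generator is repeatedly retrained on its own synthetic outputs, which slightly over-represent the majority pattern, and a mild initial skew is amplified until the minority all but disappears.

```python
def retrain_on_synthetic(p_majority, sharpening=1.5):
    """One generation: synthetic outputs over-represent the majority
    pattern (sharpening > 1), and the model is refit on them."""
    p, q = p_majority, 1.0 - p_majority
    return p**sharpening / (p**sharpening + q**sharpening)

p = 0.55  # the original data is only mildly skewed
for generation in range(12):
    p = retrain_on_synthetic(p)
# after a dozen self-referential generations the skew is near-total
```

The sharpening exponent stands in for any tendency of a generative model to favour its most probable outputs; under that assumption, each generation’s bias becomes the next generation’s training data.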
A significant risk, from this perspective, is that emergent forms of artificial intuition mobilise noise, error, and the counterfactual towards immanent optimisation – but in ways that restrict the worldly problematisations and possibilities that can materialise.
In the balance at present, then, is how narrow, self-referential, retroactive, and conditional forms of prediction come to stand in for all prediction, which, in turn, may colonise and corrupt what it can mean to intuit, anticipate, speculate, and navigate atmospheric and socio-political dynamics.

Archival research
In addition to library-based scholarly texts, this project draws on a range of archival resources. So far, the following collections have been consulted:
- The Alan Turing Papers, Archive Centre, King’s College, Cambridge University
- The Turing Digital Archive
- The Electronic Computer Project Papers, Shelby White and Leon Levy Archives Center, Institute for Advanced Study, Princeton
- The Julian Bigelow Papers, Shelby White and Leon Levy Archives Center, Institute for Advanced Study, Princeton
- The Norbert Wiener Papers, MIT Libraries Distinctive Collections
- The Wellcome Collection, London


Project bibliography
Amaro, R. 2022. The Black Technical Object: On Machine Learning and the Aspiration of Black Being. Sternberg Press.
Amoore, L. 2013. The Politics of Possibility: Risk and Security Beyond Probability. Duke University Press.
Amoore, L. 2020. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Duke University Press.
Amoore, L. 2023. ‘Machine learning political orders,’ Review of International Studies, 49(1): 20–36.
Andrejevic, M. 2013. Infoglut: How Too Much Information is Changing the Way We Think and Know. Routledge.
Atkinson, P. and Barker, R. 2021. ‘“Hey Alexa, what did I forget?”: Networked Devices, Internet Search and the Delegation of Human memory,’ Convergence 27(1): 52–65.
Bender, E., Gebru, T., McMillan-Major, A. and Shmitchell, S. 2021. ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,’ FAccT’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency: 610–623.
Benjamin, R. 2019. Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life. Duke University Press.
Bergson, H. [1889]2015. Time and Free Will: An Essay on the Immediate Data of Consciousness. Martino Fine Books.
Bergson, H. [1896]1991. Matter and Memory. MIT Press.
Bergson, H. [1903]1912. An Introduction to Metaphysics. Trans. T.E. Hulme. New York: The Knickerbocker Press.
Bergson, H. [1934]2019. The Creative Mind. Trans. M. L. Andison. Read Books Ltd.
Berlant, L. 2011. Cruel Optimism. Duke University Press.
Berlant, L. 2022. On the Inconvenience of Other People. Duke University Press.
Biddle, S. 2018. ‘Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers, Says Confidential Document’, The Intercept, 13 June.
Biggs, T. 2019. ‘Amazon’s Alexa Wants to Ease Your “Cognitive Overload”,’ Sydney Morning Herald, 14 February, https://www.smh.com.au/technology/amazon-s-alexa-wants-to-ease-your-cognitive-overload-20190213-p50xib.html
Blackman, L. 2019. Haunted Data: Affect, Transmedia, Weird Science. Bloomsbury Academic.
Brouwer, L.E.J. 1975. Collected Works 1. Philosophy and Foundations of Mathematics. A. Heyting (ed.). North-Holland.
Bucher, T. 2017. ‘The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms,’ Information, Communication and Society 20(1): 30–44.
Bucher, T. 2018. If … Then: Algorithmic Power and Politics. Oxford University Press.
Burks, A., Goldstine, H., von Neumann, J. 1946. ‘Preliminary discussion of the logical design of an electronic computing instrument’, original copy (Electronic Computer Project archive, Institute for Advanced Study, Princeton).
Cantwell Smith, B. 2019. The Promise of Artificial Intelligence: Reckoning and Judgement. MIT Press.
Choi, Y. 2022. ‘The Curious Case of Commonsense Intelligence,’ Dædalus, 151(2) Spring: 139–155.
Chudnoff, E. 2013. Intuition. Oxford University Press.
Chun, W. H. 2016. Updating to Remain the Same: Habitual New Media. MIT press.
Chun, W. H. 2021. Discriminating Data: Correlation, Neighbourhoods, and the New Politics of Recognition. MIT press.
Clough, P.T. 2018. The User Unconscious: On Affect, Media and Measure. University of Minnesota Press.
Clough, P. T., K. Gregory, B. Haber, and R. J. Scannell. 2015. ‘The Datalogical Turn’. In P. Vannini (ed.) Non-Representational Methodologies: Re-envisioning Research. Routledge, pp. 146–164.
Coleman, F., Bühlmann V., O’Donnell, A., and van der Tuin, I. 2018. The Ethics of Coding: A Report on the Algorithmic Condition. European Commission.
Coleman, R. 2008. ‘A Method of Intuition: Becoming, Relationality, Ethics,’ History of the Human Sciences 21(4): 104–123.
Coleman, R. 2017. ‘Theorizing the Present: Digital Media, Pre-emergence and Infra-structures of Feeling,’ Cultural Studies 32 (4): 600–622.
Collins, H. 2010. Tacit and Explicit Knowledge. University of Chicago Press.
Conway, F. and Siegelman, J. 2005. Dark Hero of the Information Age: In Search of Norbert Wiener, the Father of Cybernetics. Basic Books.
Cooper, B. 2012. ‘Incomputability After Turing,’ Notices of the AMS 59(6): 776–784.
Copeland, B.J. 2016. ‘Cyc,’ Britannica, https://www.britannica.com/topic/CYC
Crawford, K. 2021. Atlas of AI. Yale University Press.
Crehan, K. 2011. ‘Gramsci’s Concept of Common Sense: A Useful Concept for Anthropologists?,’ Journal of Modern Italian Studies 16(2): 273–287.
CyCorp. (n.d.). ‘Cyc Technology Overview’, https://Cyc.com/wp-content/uploads/2021/04/Cyc-Technology-Overview.pdf
Davis, E. 1990. Representations of Commonsense Knowledge. Morgan Kaufmann.
de Freitas, E. 2022. ‘The Role of Abduction in Mathematics: Creativity, Contingency, and Constraint’. In L. Magani (Ed.), Handbook of Abductive Cognition. Springer Nature, pp. 1–24.
Deleuze, G. 1992. ‘Postscript on Societies of Control’, October, Winter (1992): 3–7.
Dreyfus, H. [1972]1978. What Computers Can’t Do: A Critique of Artificial Reason. HarperCollins Publishers.
Dreyfus, H. 1992. What Computers Still Can’t Do: A Critique of Artificial Reason. MIT Press.
Dreyfus, H. and Dreyfus, S. [1985]1988. Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. The Free Press.
Dry, S. 2019. Waters of the World. The Story of the Scientists Who Unraveled the Mysteries of Our Oceans, Atmosphere, and Ice Sheets and Made the Planet Whole. Chicago University Press.
Edwards, P. N. 2010. A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming. MIT Press.
Engelbart, D. 1963. ‘A Conceptual Framework for the Augmentation of Man’s Intellect’. In P. Howerton and D. Weeks (eds.), Vistas in Information Handling Volume 1: The Augmentation of Man’s Intellect by Machine. Spartan Books.
Erickson, P., J. Klein, L. Daston, R. Lemov, T. Sturm, and M. Gordin. 2013. How Reason Almost Lost its Mind: The Strange Case of Cold War Rationality. University of Chicago Press.
Eubanks, V. 2017. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martins Press.
Fan, S. 2019. Will AI Replace Us? A Primer for the 21st Century. Thames and Hudson.
Fazi, B. 2020. ‘Beyond Human: Deep Learning, Explainability and Representation,’ Theory, Culture and Society, advance proof: 1–23.
Finn, E. 2015. What Algorithms Want: Imagination in the Age of Computing. Cambridge: MIT Press.
Fortes, G. 2022. ‘Abduction’. In V.P. Glăveanu (ed.), The Palgrave Encyclopaedia of the Possible. Palgrave Macmillan.
Foucault, M. 2008. The Birth of Biopolitics: Lectures at the Collège de France, 1978-1979. Springer.
Fussell, S. 2018. ‘Alexa Wants to Know How You’re Feeling Today’, The Atlantic, 12 October, https://www.theatlantic.com/technology/archive/2018/10/alexa-emotion-detection-ai-surveillance/572884/
Gabrys, J. 2019. ‘Sensors and Sensing Practices: Reworking Experience across Entities, Environments, and Technologies,’ Science, Technology, & Human Values 44 (5): 723–736.
Galison, P. 1994. ‘The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision,’ Critical Inquiry 21(1): 228–266.
Galloway, A. 2006. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press.
Garg, M. 2024. Spiritual Artificial Intelligence (SAI): Towards a New Horizon. Springer Nature.
Gazit, M. 2020. ‘The Fourth Generation of AI is Here and It’s Called “Artificial Intuition”,’ TNW, 3 September, https://thenextweb.com/news/the-fourth-generation-of-ai-is-here-and-its-called-artificial-intuition
Geoghegan, B. D. 2023. Code: From Information Theory to French Theory. Duke University Press.
Gramsci, A. 1971. Selections From the Prison Notebooks, Q. Hoare and G. Nowell Smith, (eds.). Lawrence and Wishart.
Hall, S. and O’Shea, A. 2015. ‘Common-sense Neoliberalism’. In S. Hall, D. Massey, and M. Rustin (eds.), After Neoliberalism? The Kilburn Manifesto. Lawrence and Wishart, pp. 52–68.
Hallinan, B, and Striphas, T. 2016. ‘Recommended for you: The Netflix Prize and the production of algorithmic culture’, New Media & Society 18(1): 117–137.
Halpern, O. 2014. Beautiful Data: A History of Vision and Reason since 1945. Duke University Press.
Hansen, M. 2015. Feed-Forward: On the Future of Twenty-First Century Media. Chicago: Chicago UP.
Havasi, C. 2014. ‘Who’s Doing Common-Sense Reasoning and Why it Matters,’ TechCrunch, 9 August, https://techcrunch.com/2014/08/09/guide-to-common-sense-reasoning-whos-doing-it-and-why-it-matters/?guccounter=1
Havasi, C., Pustejovsky, J., Speer, R. and Lieberman, H. 2014. ‘Digital Intuition: Applying Common Sense Using Dimensionality Reduction,’ IEEE Intelligent Systems, July/August, 24–35.
Hayles, N. K. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics. Chicago University Press.
Hayles, N. K. 2017. Unthought: The Power of the Cognitive Unconscious. Chicago University Press.
Hayles, N. K. 2022. ‘Inside the Mind of an AI: Materiality and the Crisis of Representation,’ New Literary History 53(4): 635–666.
Hayles, N. K. and Sampson, T. D. 2018. ‘Unthought Meets the Assemblage Brain: A Dialogue Between N. Katherine Hayles and Tony D. Sampson,’ Capacious 1(2): 60–84.
Hodges, A. [1983]2014. Alan Turing: The Enigma. Vintage.
Hoyng. R. 2025. ‘Computing the Cosmos and Us: Uncertain Models of Ecological Crisis.’ In P. Banerjee, D. Chakrabarty, S. Seth, and L. Wedeen (eds). Oxford Handbook of Cosmopolitanism. Oxford University Press.
Hui, Y. 2021. Recursivity and Contingency. Rowman and Littlefield.
Jefferson, G. 1949. ‘The Mind of Mechanical Man,’ British Medical Journal 1(4616): 1105–1110.
Johanssen, J. and Wang, X. 2021. ‘Artificial Intuition in Tech Journalism on AI: Imagining the Human Subject,’ Human-Machine Communication 2: 173–190.
Johnny, O., Trovati, M., and Ray, R. 2020. ‘Towards a Computational Model of Artificial Intuition and Decision Making.’ In: Barolli L., Nishino H., and Miwa H. (eds.) Advances in Intelligent Networking and Collaborative Systems. INCoS 2019. Advances in Intelligent Systems and Computing, vol 1035. Springer.
Keeling, K. 2007. The Witch’s Flight. The Cinematic, the Black Femme, and the Image of Common Sense. Duke University Press.
Klein, L. et al. 2025. ‘Provocations from the Humanities for Generative AI Research,’ arXiv:2502.19190.
Latour, B. 2005. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford UP.
LeCun, Y. 2022. ‘A Path Towards Autonomous Machine Intelligence,’ Open Review Archive Direct Upload, 27 June, https://openreview.net/forum?id=BZ5a1r-kVsf
Lemke, T. 2021. The Government of Things: Foucault and the New Materialisms. NYU Press.
Lenat, D., Prakash, M., and Shepard, M. 1985. ‘Cyc: Using Common Sense Knowledge to Overcome Brittleness and Knowledge Acquisition Bottlenecks,’ The AI Magazine, 6(4), 65–85.
Lorenz, E. 1963. ‘Deterministic Nonperiodic Flow,’ Journal of Atmospheric Sciences 20: 130–141.
Lundy, C. 2018. Deleuze’s Bergsonianism. Edinburgh University Press.
Lussier, K. 2016. ‘Managing Intuition,’ Business History Review 90(4): 708–718.
Lynch, P. 2008. ‘The Origins of Computer Weather Prediction and Climate Modeling,’ Journal of Computational Physics 227: 3431–3444.
Manning, E. 2016. The Minor Gesture. Duke University Press.
Marcus, G. 2018. ‘Deep Learning: A Critical Appraisal,’ arXiv preprint: 1801.00631.
Marr, B. 2024. ‘The Next Breakthrough In Artificial Intelligence: How Quantum AI Will Reshape Our World,’ Forbes, 8 October: https://www.forbes.com/sites/bernardmarr/2024/10/08/the-next-breakthrough-in-artificial-intelligence-how-quantum-ai-will-reshape-our-world/
Massumi, B. 2015. OntoPower: War, Powers and the State of Perception. Duke University Press.
Massumi, B. 2025. ‘Preemption Today,’ Theory & Event 28(2): 160–174.
McCarthy, J. 1959. ‘Programs with Common Sense,’ Proceedings of the Teddington Conference on the Mechanization of Thought Processes, Her Majesty’s Stationery Office, 75–91.
McCarthy, J. 1983. ‘Some Expert Systems Need Common Sense,’ Annals of the New York Academy of Sciences 426: 129–137.
McCulloch, W. and Pitts, W. 1943. ‘A Logical Calculus of the Ideas Immanent in Nervous Activity,’ Bulletin of Mathematical Biophysics 5:115–33.
Mackenzie, A. 2017. Machine Learners: Archaeology of a Data Practice. MIT Press.
McLuhan, M. [1964]1994. Understanding Media: The Extensions of Man. MIT Press.
McPherson, T. 2012. ‘U.S. Operating System at Mid-century: The Intertwining of Race and UNIX’. In L. Nakamura and P. Chow-White (eds.). Race After the Internet. Routledge, pp. 21–37.
Meyers, D. G. 2002. Intuition: Its Powers and Perils. Yale University Press.
Minsky, M. 1984. Afterword. In Vinge, V. True Names. Bluejay Books.
Minsky, M. and Papert, S. [1969]2017. Perceptrons: An Introduction to Computational Geometry. MIT Press.
Mulvin, D. 2021. Proxies: The Cultural Work of Standing In. MIT Press.
Munn, L. 2018. ‘Alexa at the Intersectional Interface,’ Angles: New Perspectives on the Anglophone World 7, http://journals.openedition.org/angles/861
Noble, S. U. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
Paasonen, S. 2021. Dependent, Distracted, and Bored: Affective Formations in Networked Media. MIT Press.
Parisi, L. 2013. Contagious Architecture: Computation, Aesthetics, and Space. MIT Press.
Parisi, L. 2019. ‘Critical Computation: Digital Automata and General Artificial Thinking,’ Theory, Culture & Society 36 (2): 89–121.
Parisi, L. and Dixon-Román, E. 2020. ‘Recursive Colonialism and Cosmo-Computation,’ Social Text, Periscope. https://socialtextjournal.org/periscope_article/recursive-colonialism-and-cosmo-computation/
Pearl, J. 1985. ‘Bayesian networks: A model of self-activated memory for evidential reasoning,’ Proceedings of the 7th conference of the Cognitive Science Society: 15–17.
Pedwell, C. 2019. ‘Digital Tendencies: Intuition, Algorithmic Thought and New Social Movements,’ Culture, Theory and Critique 60(2): 123–138.
Pedwell, C. 2021a. Revolutionary Routines: The Habits of Social Transformation. Montreal: McGill-Queen’s University Press.
Pedwell, C. 2021b. ‘Re-mediating the Human: Habits in the Age of Computational Media’. In T. Bennett, B. Dibley, G. Hawkins and G. Noble (eds.) Assembling and Governing Habits. Routledge.
Pedwell, C. 2022. ‘Speculative Machines and Us: More-than-Human Intuition and the Algorithmic Condition,’ Cultural Studies. 38(2): 188–218.
Pedwell, C. 2023. ‘Intuition as a “trained thing”: Sensing, Thinking and Speculating in Computational Cultures,’ Subjectivity 30, 348–372.
Pedwell, C. 2024. ‘The Intuitive and the Counter-intuitive: AI and the Affective Ideologies of Common Sense,’ New Formations. 112: 70–93.
Peirce, C.S. 1958. Collected Papers. Harvard University Press.
Petryna, A. 2022. Horizon Work: At the Edges of Knowledge in an Age of Runaway Climate. Princeton University Press.
Picard, R. 1997. Affective Computing. MIT Press.
Pickering, A. 2010. The Cybernetic Brain: Sketches of Another Future. Chicago University Press.
Piloto, L. S., Weinstein, A., Battaglia, P., and Botvinick, M. 2022. ‘Intuitive Physics Learning in a Deep-Learning Model Inspired by Developmental Psychology,’ Nature Human Behaviour 6: 1257–1267.
Prigogine, I. and Stengers, I. [1979]2018. Order Out of Chaos: Man’s New Dialogue with Nature. Verso Books.
Prokopchuk, Y., Nosov, P., Zinchenko, S., and Popovych, I. 2021. ‘New Approach to Modeling Deep Intuition,’ Materials of the 13th Scientific and Practical Conference ‘Modern Information and Innovative Technologies in Transport (MINTT-2021)’. Kherson, Ukraine: XSMA. 37–40.
Rheingold, H. 1985. Tools for Thought: The History and Future of a Mind-Expanding Technology. MIT Press.
Richardson, L.F. [1922]1963. Weather Prediction by Numerical Process. Dover Publications Inc.
Sampson, T. D. 2020. A Sleepwalker’s Guide to Social Media. Polity Press.
Sampson, T. D. 2023. ‘Nonconscious Affect: Cognitive, Embodied or Nonbifurcated Experience’. In G. Seigworth and C. Pedwell (eds.), The Affect Theory Reader 2: Worldings, Tensions, Futures. Duke University Press.
Sedgwick, E. and Frank, A. 1995. ‘Shame in the Cybernetic Fold: Reading Silvan Tomkins,’ Critical Inquiry, 21(2): 496–522.
Seigworth, G. J. 2006. ‘Cultural Studies and Gilles Deleuze’. In G. Hall and C. Birchall (eds), New Cultural Studies: Adventures in Theory. Edinburgh University Press, 107–126.
Seigworth, G. J. 2025. ‘ALL THAT IS SOLID MELTS INTO ARIEL KARATE: Environmentality, strange intimacy, and the banal unconscious,’ Angelaki 30(3): 106–124.
Serres, M. 2015. Thumbelina: The Culture and Technology of Millennials. Trans. D. W. Smith. Rowman and Littlefield.
Shotwell, A. 2011. Knowing Otherwise: Race, Gender, and Implicit Understanding. Pennsylvania State University Press.
Simon, H. 1945. Administrative Behaviour: A Study of Decision-Making Processes in Administrative Organization. John Wiley & Sons.
Simon, H. 1987. ‘Making Management Decisions: The Role of Intuition and Emotion,’ The Academy of Management Executive 1 (1): 57–64.
Simon, H. and Chase, W. 1973. ‘Skill in Chess: Experiments with chess-playing tasks and computer simulation of skilled performance throw light on some human perceptual and memory processes,’ American Scientist 61(4): 394–403.
Smith, B. and Shum, H. 2018. The Future Computed. New York: Microsoft.
Stark, L. 2018. ‘Algorithmic Psychometrics and the Scalable Subject,’ Social Studies of Science 48(2): 204–231.
Striphas, T. 2015. ‘Algorithmic Culture,’ European Journal of Cultural Studies 18(4–5): 395–412.
Suchman, L. 2007. Human-Machine Reconfigurations: Plans and Situated Actions. 2nd ed. Cambridge University Press.
Suchman, L. 2011. ‘Subject Objects,’ Feminist Theory, 12(2): 119–145.
Suchman, L. 2019. ‘Demystifying the Intelligent Machine.’ In Teresa Heffernan (ed.), Cyborg Futures: Cross-disciplinary Perspectives on Artificial Intelligence and Robotics. Palgrave Macmillan.
Suchman, L. 2024. ‘The Neural Network at its Limits’. In R. Dhaliwal, T. Lepage-Richer and L. Suchman (eds.), Neural Networks. University of Minnesota Press, pp. 87–112.
Toffler, A. 2018. ‘Forward: Science and Change’. In Prigogine, I. and Stengers, I. Order Out of Chaos: Man’s New Dialogue with Nature. Verso Books.
Troy, D. 2023. ‘The Wide Angle: Understanding TESCREAL – the Weird Ideologies Behind Silicon Valley’s Rightward Turn’, Washington Spectator, 1 May, https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/
Turing, A. [1936]1937. ‘On Computable Numbers, with an Application to the Entscheidungsproblem,’ Proceedings of the London Mathematical Society 42: 230–65.
Turing, A. 1939. ‘Systems of Logic Based on Ordinals,’ Proceedings of the London Mathematical Society 2(45): 161–228.
Turing, A. 1950. ‘Computing Machinery and Intelligence’, Mind, New Series. 59(236): 433–460.
Turing, A. 1951. ‘Can digital computers think?’ Lecture delivered on BBC Radio Third Programme, 15 May (Turing Digital Archive, AMT B/5).
Turing, A. 1952. ‘Can automatic calculating machines be said to think?’, Annotated script of BBC Radio discussion between Alan Turing, Max Newman, Geoffrey Jefferson, and R. B. Braithwaite, 14 January 1952 (Turing Digital Archive, AMT B/6).
Turing, A. 1953. ‘Digital Computers Applied to Games.’ In B.V. Bowden (ed.), Faster than Thought. Pitman Publishing.
Turkle, S. [1984]2004. The Second Self: Computers and the Human Spirit. Twentieth Anniversary Edition. MIT Press.
Turkle, S. 1995. Life on the Screen: Identity in the Age of the Internet. Simon & Schuster.
Vaswani, A. et al. 2017. ‘Attention Is All You Need,’ Proceedings of the 31st International Conference on Neural Information Processing Systems, 1–15.
Von Neumann, J. 1955. ‘Can We Survive Technology?’, Fortune, 1 June.
Lubiano, W. 1997. ‘Black Nationalism and Black Common Sense: Policing Ourselves and Others’. In W. Lubiano (ed.), The House that Race Built: Black Americans, U.S. Terrain. Pantheon Books.
Whitehead, A.N. and Russell, B. 1910–1913. Principia Mathematica. Cambridge University Press.
Wiener, N. [1948]2013. Cybernetics or, Control and Communication in the Animal and Machine. 2nd ed. Martino Publishing.
Wiener, N. [1950]1954. The Human Use of Human Beings: Cybernetics and Society. Da Capo Press.
Wiener, N. 1960. ‘Some Moral and Technical Consequences of Automation,’ Science 131: 1355–1358.
Wilder, R.L. 1967. ‘The Role of Intuition,’ Science, 156(3775): 605–610.
Williams, R. 1977. Marxism and Literature. Oxford University Press.
Wilson, E. A. 2010. Affect & Artificial Intelligence. University of Washington Press.
Wing, W. 1979. Letter from William G. Wing to Julien Bigelow, 13 November, with enclosed chapter draft for Bigelow’s comment (Julien Bigelow papers, Electronic Computer Project archive, Institute for Advanced Study, Princeton).
Zuboff, S. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books.
Footnotes
- Chudnoff, 2013: 2
- Seigworth, 2006: 118; see also Lundy, 2018
- Williams, 1977
- Williams, 1977: 134; see Seigworth, 2006
- Seigworth, 2006
- See, for example, Stewart, 2007; Berlant, 2011, 2023; Manning, 2016
- Berlant, 2011: 52
- See Dreyfus and Dreyfus, 1985
- Meyers, 2002: 29
- Simon and Chase, 1973
- Bergson, 1889, 1896, 1903
- Turing, 1939
- See Meyers, 2022: 63
- Manning, 2016: 3
- Hayles, 2017: 3. See also Hansen, 2015; Hayles, 2022, 2023; Pedwell, 2022, 2023
- Johnny et al. 2020: 464
- Johnny et al. 2020: 466-7, 470
- Pearl, 1985
- Fan, 2019: 46
- Amoore, 2013: 144
- Amoore, 2020: 47
- Fortes, 2022: 1
- Thagard, 2007: 226
- Amoore, 2020
- Hayles, 2022: 648-649
- Hayles, 2022: 649
- See Dreyfus and Dreyfus, 1985: 3
- Suchman, 2011, 2019
- Parisi, 2013: xviii; see also Parisi, 2019; Fazi, 2020
- Fazi, 2020
- Clough, 2018: 104
- Bucher, 2018: 28
- Zuboff, 2019
- See, for example, Benjamin, 2019; Chun, 2021; Pedwell, 2021a, 2022
- Hallinan and Striphas, 2016; see also Pedwell, 2024
- Chun, 2021; see also Pedwell, 2023
- Bucher, 2018: 3
- Amoore, 2020: 67
- Amoore, 2023: 2809
- Massumi, 2025
- Havasi, 2014: 27
- Klein et al., 2025: 6