about the project - Speculative Machines and Us

What happens when intuition becomes algorithmic?

Speculative Machines and Us: Intuition, AI, and the Making of Computational Cultures is an ongoing research project led by Carolyn Pedwell, Professor in Digital Media in the School of Arts at Lancaster University. It was funded by a British Academy Mid-Career Fellowship in 2024-25.

Carolyn Pedwell speaking at the ‘Speculative Machines and Us’ symposium at Imagination Lab, Lancaster University on 17th July 2025.

The emergence of artificial intuition

Whether understood as an embodied hunch, direct sensing, or fast-thinking without rational deliberation, intuition is vital to how we anticipate, know, and navigate our world. But what happens when intuition becomes algorithmic?

Algorithmically mediated intuition is now central to everyday life, as self-taught software and context-aware sensors embedded in pervasive computational devices and infrastructures attune to our unfolding moods, habits, and desires.

As this project investigates, the emergence of “artificial intuition” illuminates how sensing, thinking, and speculating now involve deep entanglements of humans and digital technologies. Bringing social theories, affect studies, and speculative philosophies to bear on computational cultures, the project draws on archival research to develop an affective genealogy of twentieth century human-machine relations with critical insights for contemporary AI.

Throughout public and consumer culture, machine learning innovations are presented as making AI technologies more intuitive and speculative. In tech journalism, artificial intuition is a buzzword referring to how AI systems ‘make intuitive choices and respond intuitively to problems’ through subconscious pattern recognition. In computer science, artificial intuition is understood to be fundamentally experimental and generative: using advanced forms of pattern recognition, it discovers ‘associations and relations otherwise unknowable’.

If human intuition is associated with situated embodied knowledge, and machine intuition with how artificial neural networks learn, classify, and predict by extracting features from data environments, this research explores how computational cultures increasingly produce more-than-human forms of intuition – such that human sensory and behavioural data shape immanent machine learning decisions, and human feelings, actions, and insights are infiltrated by algorithmic parameters and probabilities.

The project situates artificial intuition within histories and atmospheres of techno-social encounter in Britain and North America, spanning the first digital computers, the advent of personal computing, and the consolidation of advanced algorithmic architectures.

It traces the continuing sensorial, socio-political, and ethical implications of efforts across mathematics, meteorology, management, psychology, cybernetics, computer science, and AI research between the 1920s and early 2000s to make intuition a quantifiable form of anticipatory knowledge – amid colonial legacies, World War II, the Cold War, nuclear politics, neoliberalism, and shifting social inequalities.

This research explores how such socio-digital dynamics shape contemporary forms of algorithmic life. It also pursues the possibilities of ‘counterintuitive AI’, which examines the sensory-social potential of that which exceeds intelligibility within normative and profit-oriented computational infrastructures.

Image is based on a photograph taken by Carolyn Pedwell at the Royal Ontario Museum’s Chihuly exhibit, which ran from Jun 25, 2016 to Jan 8, 2017. Available to be reused under a CC BY-NC-SA 4.0 license.

Questions and cases

The project explores the following research questions:

  1. How have debates about the nature of intuition mediated unsettled and changing relations between humans and digital technologies in Britain and North America?
  2. What are the continuing affective, political, and ethical implications of twentieth century efforts to quantify and/or codify intuition?
  3. How does artificial intuition work in contemporary computational cultures and with what potential effects?
  4. What alternative possibilities for human-machine relations might emerge through counterintuitive AI?

It is oriented around three key cases:

1) Training more-than-human intuition: This case maps how, across mathematics, management, psychology, computer science, and neuroscience from the 1930s to the 1980s, intuition consolidates as a honed capacity for pattern recognition enabling expert decisions. It examines how, within public and scholarly debates, hopes and fears concerning intuition’s amenability to automation mediate changing human-machine relations amid the rise of personal computing, expert systems, and neoliberalism. The case addresses what is distinctive and troubling about the speculative training of human-algorithmic capacities via machine learning – while exploring how a historically informed understanding of intuition as distributed, collaborative, and recursive attunes us to deep socio-affective and bio-machinic entanglements in computational cultures.

2) AI and the quest for common sense: This case examines initiatives to make AI ‘more intuitive’ by endowing machines with common sense from the 1950s to the early 2000s. Situating John McCarthy’s ‘advice-taker’ and the Cyc Project within post-war imperialist-capitalist relations, Cold War anxieties, and superpower antagonisms, it examines how ‘codifying common sense’ reframed embodied and sensorial processes as schematic and cognitive – and embedded ideology and prejudice in AI systems. Through adjudicating intelligibility and sensibility as matters of algorithmic fit, MIT’s Open Mind Common Sense Project also aligned common sense with social normativity while narrowing possibilities for public contestation. How, this case asks, could algorithmically mediated common sense become a ‘site of political struggle’?

3) Predicting the unpredictable: This case explores how the digitalisation of intuitive sensibilities is bound up with histories of meteorology, beginning with Lewis Fry Richardson’s 1920s vision of a human ‘forecast-factory’. A familiar narrative of twentieth century meteorology is that, with the invention of digital computing in the late 1940s, “objective” numerical weather prediction replaced “subjective” intuitive forecasting with increasingly accurate results. Yet an affective genealogy of atmospheric prediction instead encounters ‘the infiltration of the intuitive and the speculative within the calculation of probability’. Attending to the affective, sociotechnical, and geopolitical dynamics of post-war meteorology, and their links to both the colonial origins of statistical forecasting and Cold War debates around climate control, this case examines the shifting status of intuition and (un)predictability in the making of transatlantic computational cultures.

Image is based on a photograph taken by Carolyn Pedwell at the Science Museum’s ‘Making of the Modern World’ gallery in 2022. Available to be reused under a CC BY-NC-SA 4.0 license.

A ‘space of potential’

The rise of artificial intuition elicits pressing concerns about the logics and reach of computational prediction and pre-emption given how ‘algorithms now reconstruct and efface legal, ethical, and perceived reality’ according to assumptions shielded from public view – and how frequently machine learning classifications (re)produce social hierarchies.

What is at stake now, this project suggests, is not only the ability of states and corporations to control the flow of future actions and events, but also the capacity of pervasive algorithmic architectures to reconstitute what is intelligible and sensible in the world.

And yet, as this research explores, AI futures are neither singular nor pre-determined. Automated home assistants, for example, may ‘automate us’ as we are trained to intuitively think and speak in the language of capital. Our entanglement with computational devices may also, however, enable ‘an innovative and enduring intuition’ which disrupts settled accounts of the world to connect us with moving events as they unfold – illuminating how intuition is trained within machine learning ecologies with diverse, and often contradictory, effects.

This project wagers that addressing these emergent complexities, risks, and potentialities requires attuning to the transatlantic legacies of twentieth century cybernetics, digital computing, and AI – and their current global implications. If digital futures are not fixed, neither are digital pasts; rather, through its distinctive affective genealogical approach, this research treats socio-digital histories as ‘a space of potential’.

Inhabiting the ambivalent affective atmospheres surrounding twentieth century digital innovations, the project tracks how intuition has mediated changing relations between humans and “new” technologies – itself being (re)made to suit shifting socio-technical and politico-economic interests. Exploring how intuition is recursively trained within intelligent systems spanning first- and second-wave AI illuminates how the affective, ideological, and technological have become intertwined – pertinent dynamics for understanding current political economies of generative AI and “the digital”.

The project probes what “truths”, feeling states, and modes of common sense are generated by intuitive learning architectures – while illuminating affective potentialities for transformation within these unfinished transatlantic and global histories.

Image is based on a photograph taken by Carolyn Pedwell at the Science Museum’s ‘Making of the Modern World’ gallery in 2022. Available to be reused under a CC BY-NC-SA 4.0 license.

Future directions

Carolyn Pedwell is currently conducting further archival research on intuition and histories of digital computing and artificial intelligence in the UK and North America. She is completing a research monograph, Speculative Machines and Us: Intuition, AI, and the Making of Computational Cultures, and various shorter-form writing projects.

For information on recent and upcoming talks and events associated with the project, please see our News & Events page.

If you would like to know more about this ongoing research project, or discuss possibilities for collaboration, please contact Carolyn via email: c.e.pedwell@lancaster.ac.uk.