Cover image includes artwork by Matthew Plummer-Fernandez.

PhD Defence Jesse Benjamin | Machine Horizons: Post-Phenomenological AI Studies


The PhD Defence of Jesse Benjamin will take place in the Waaier building of the University of Twente and can be followed via a live stream.

Jesse Benjamin is a PhD student in the Department of Philosophy. His (co)supervisors are prof.dr. C. Aydin and dr. M.H. Nagenborg from the Faculty of Behavioural, Management and Social Sciences, and prof.dr.ir. P.P.C.C. Verbeek from the University of Amsterdam.

In this dissertation, I combine philosophical analysis, technical readings, and design research to propose post-phenomenological AI studies as a program for investigating how contemporary artificial intelligence (AI) technologies shape the relations between human beings and their worlds. My research starts from the post-phenomenological framework initially theorized by Ihde and substantially advanced by Verbeek, which studies the technological mediation of human-world relations. Being empirically oriented and equipped with a pragmatic interpretive toolkit, post-phenomenology has come to be applied in design, engineering, and ethics to study actual technologies. In this regard, post-phenomenology seems poised to play a key role in the design and assessment of AI technologies. After all, both ethical and practical challenges abound: the automation of work, the opacity of decision-making, and the data-driven perpetuation of historical injustices, to name just a few. However, AI technologies also challenge the post-phenomenological framework: with techniques such as neural networks, probabilistically inferred models and the actions that follow from them are irreducible to the apparent artefacts of our experience, yet shape that experience nonetheless.

Therefore, my proposal for post-phenomenological AI studies builds, first, on expanding post-phenomenology’s interpretive scope. Through a review of the status quo of post-phenomenological investigations of information and AI technologies, as well as through design research as philosophy-in-practice, I find that the phenomenological concept of horizonality, i.e. the dependence of any given experience on particular embedding and limiting structures, is promising for an adequate interpretive post-phenomenological approach to AI technologies. In reviewing extant phenomenological approaches to horizonality, I find that Ihde’s concept of horizonality is a fundamentally spatial framing, whereas Husserl and especially Heidegger pursue broader, temporal notions of horizonality. How the latter could be pursued, however, remains an open question. I therefore next review contemporary attempts at expanding the interpretive scope of post-phenomenology, where I find promising yet incomplete pointers. Reflecting on Aydin’s recent work, I find that philosophical anthropology can scaffold the further development of a genuinely post-phenomenological concept of horizonality.

Subsequently, based on a synthetic reading of Heideggerian philosophy of technology and Löffler’s philosophical anthropology, I propose a ‘horizonal’ interpretation of technological mediation. In this view, technological mediation is not exhaustively accounted for by any one particular artefact, but is rather seen as a commensuration of the structuring dimensions of human-world relations that ‘stretches’ across particular artefacts as well as absent technologies. To guide actual analyses, I furthermore draw on Seibt’s processual approach to philosophical modelling to develop a first heuristic model for carrying out such interpretations. With a theoretical foundation and a heuristic model for a horizonal interpretation of technological mediation in place, I begin the development of post-phenomenological AI studies.

First, I argue that, from a pragmatic standpoint, contemporary AI technologies are particular types of information technologies. Tracing the latter’s historical emergence through studies by Hacking and Hayles, I argue that, by way of successive abstractions from prior technologies, information technologies require differentiating between multiple and always-already withdrawn technological intentionalities (i.e., how artefacts are ‘directed’ at the world), which I term poly-intentionality. To interpret the latter, I then propose a fundamental hermeneutic tool in the form of apparatic relations, drawing on Flusser, which qualifies the components of information technologies (e.g., programs, outputs, architecture) for analysis in terms of technological mediation. Through technical readings, I then sketch further interpretive tools in the form of poly-intentional structures, which reflect the phenomenological import of AI technologies’ technical specificities, such as model approximation and predicted functions. Exploring such structures by reviewing actual artefacts generated through design research, I furthermore derive a corresponding ‘artefactual’ attribute in the form of thingly uncertainty, which conceptualizes how the poly-intentionality of AI technologies actually manifests.

With interpretive tools for the technological intentionality of AI technologies laid out, I then investigate what human-AI relations actually are. To this end, I first employ design research to discern epistemological and existential research trajectories. Following these trajectories, I conduct two case studies in comparison with canonical post-phenomenological analyses, through which I propose two human-AI relations that are ready-made for further inquiry and that center on how AI technologies shape reasoning practices and practices of self-formation, respectively. At the same time, given the qualitative distinctions discerned in this comparison, I also argue that a more comprehensive concept is needed to definitively contextualize the role of AI technologies in contemporary human-world relations, for which I propose the titular machine horizons. The latter conceptualize how, in a world increasingly textured by networked information technologies, AI technologies’ particular role lies in converging the potentiality of the latter with the actuality of human-world relations, leading to a temporal ‘synching’ with the probabilistic models of AI technologies. In closing, I reflect on the horizonal interpretation of technological mediation as a contribution to post-phenomenology and the philosophy of technology in general, review how the derived interpretive tools can be applied to AI technologies going forward, and reflect on the implications for philosophy-in-practice that can be drawn from my interdisciplinary methodology.