
Ethics and Epistemology of AI Initiative

Our aim

The Ethics and Epistemology of Artificial Intelligence (AI) initiative—or “EthEpi of AI” for short—aims to advance fundamental AI research, taking advantage of the best that an interdisciplinary research community has to offer. Our strategy is to address a methodological lacuna in the field of AI and to contribute to the resolution of pressing issues connected to the concept of “intelligence”. A significant part of this is helping the ethics of AI out of its deadlock by addressing underlying problems with AI as an information-processing technology. To this end, we want to unite researchers from philosophy, computer science, and the social sciences around our ethics-and-epistemology methodological approach.

Our motivation

Methodologically, ethics has reached its limits in tackling the increasing number of moral problems that AI raises. It is becoming increasingly apparent that the key to the ethical problems posed by AI as an information-processing technology lies in epistemology—the branch of philosophy studying the general principles of knowledge and knowledge production. To tackle AI’s epistemology, we need strong collaboration among researchers with unique expertise in the relevant domains: philosophers (including philosophers of science, with their methods for interpreting this practical knowledge in wider social, human, and scientific contexts, and ethicists, with their understanding of moral normativity), computer scientists (with their practical knowledge of the engineering side), and social scientists (with their knowledge of societal processes and their regulation). We need a productive dialogue, based on a common vocabulary and sound arguments and supported by meta-theoretical scaffolding from the philosophy of science—something that is catastrophically insufficient in the diverse and multi-disciplinary field of AI.

The coordination team

The initiative developed around a core group of UT researchers from different sub-fields of philosophy (normative ethics, moral psychology, philosophy of science, philosophical anthropology, philosophy of mind, social philosophy) and targeted cooperation with computer scientists in different sub-domains of AI. The EthEpi of AI is led by an enthusiastic group of three philosophers:

Prof. Dr. Ir. Mieke Boon


I think we need to 'bring the human back' into the picture.

I am the leader of the Science in Practice research line at the Philosophy Section and a co-founder of the EthEpi of AI initiative. With my dual background in scientific research and in the philosophy of science, I have spent many years working on a theory of scientific knowledge (i.e., epistemology). This includes understanding its nature, methods of production and justification, and the epistemic uses of knowledge in problem-solving. My two guiding questions in developing this epistemology are: “How is it possible that humans produce (relevant, reliable, and useful) knowledge about aspects of the world that cannot be observed?” and “What does science have to offer and what are its limitations when it comes to providing (certain) knowledge?”  

The philosophical insights gained thereby are directly relevant for developing an epistemology of AI and for questions such as “What is the nature of AI-produced knowledge?” and “What are the consequences of its methods of production differing from those of human-made knowledge, in terms of relevance, reliability, and usefulness?” An important question, for example, is that of the epistemological relationship between data and knowledge. And what are the limitations of data-driven approaches to our understanding of the world? How can such knowledge be epistemically meaningful to us, as humans who want to understand and explain things, and who want to use that knowledge for further reasoning about the problem at hand? AI pushes us to think that knowledge is somehow produced through structuring data—but is an AI-generated structure really knowledge? AI bypasses the use of concepts and categories when structuring and mapping data. But are these human-made concepts really redundant—a by-product of the limited human mind that can’t handle complex data-sets? Or are there relevant epistemological, pragmatic, and ethical reasons why the human mind works with concepts? There is thus a concern that humans lose epistemic control when they hand over cognitive tasks to machines that prevent humans from acquiring (scientific) insight into a problem. In addition, it is essential to understand why simply throwing data into a machine and relying on AI to find something of value is not a good approach. Instead, it is necessary to understand what can be inferred from data and under what conditions.

From this point of view, I think we need to “bring the human back” into the picture: humans who are able to build a meaningful cognitive relationship with the world, with AI assisting as a powerful tool that nevertheless needs to be made suitable for that task. Basically, there is a somewhat romantic expectation that AI systems are better thinkers than we are—that they can become more intelligent than us. But the essential question here is: Can you compare the two and make them compete? Is it really the same intelligence? Or should we rather think of AI—though beautiful and powerful—as a tool similar to the many others that aid human cognition, such as pen and paper, books, a calculator, a library, or search engines on the Internet? And as with any tool, we need to understand its potential and limitations. This helps to assess what you can and cannot use it for. And to gain this insight, we need, among other things, a detailed epistemology of AI.


More about my approach:  

van Baalen, S., Boon, M., & Verhoef, P. (2021). From clinical decision support to clinical reasoning support systems. Journal of evaluation in clinical practice, 27(3), 520-528. https://doi.org/10.1111/jep.13541 

Boon, M. (2020). How Scientists Are Brought Back into Science—The Error of Empiricism. In M. Bertolaso, & F. Sterpetti (Eds.), A Critical Reflection on Automated Science: Will Science Remain Human? (Vol. 1, pp. 43-65). (Human Perspectives in Health Sciences and Technology; Vol. 1). Springer. https://doi.org/10.1007/978-3-030-25001-0_4 

Dr. Dina Babushkina


In order to understand how we are to normatively guide AI, we need to determine what its limitations are as a cognitive technology.

I am a co-founder and the coordinator of the EthEpi of AI Initiative. I work to shape its mission and research line and to set up the core group, collaborations, and activities. I also serve as a liaison between the group and the Digital Society Institute (DSI), and I coordinate the EthEpi of AI reading group.

I am fascinated by artificial intelligence research and its enormous potential, but I am also concerned about the negative ways it may affect humans and societies. This is why I think it is important to conceptualise ethical principles and norms about how it is and is not permissible to use AI. But in order to understand how we are to normatively guide AI, we need to determine its limitations as a cognitive technology, i.e. as a technology that claims to imitate elements of human intellect and even aims to substitute for them in various contexts. In other words, we need to find out what the relationship is between what AI systems do (and their output) and knowledge (Is AI’s output knowledge? If not, then what is it?). This helps us understand the limitations of AI and realistically evaluate what AI systems can and cannot do when it comes to the generation of knowledge. This insight should then inform our ethical reflection on AI.

My motivation for the ethics-through-epistemology approach to AI comes, among other things, from a unique angle on the ethics of AI. While most mainstream ethics of AI looks into the challenges for society as a whole, I want to create visibility for the ways AI impacts individuals and make the discussion of how it transforms their lives and lived experiences more prominent. I want to ask: What is at stake for us as humans in the age of AI? As a result, I aim to enrich the ethics of AI with the consideration of what matters for the individual.


More about my approach:  

Babushkina, D. (Accepted/In press). AI, decisions, and reasons to believe. Philosophy of AI, 1(1).

Babushkina, D., & de Boer, B. (2024). Disrupted self, therapy, and the limits of conversational AI. Philosophical Psychology, 1–27. DOI: 10.1080/09515089.2024.2397004

Babushkina, D., & Votsis, A. (2022). Epistemo-ethical constraints on AI-human decision making for diagnostic purposes. Ethics and Information Technology, special issue on The Ethics and Epistemology of Explanatory AI in Medicine and Healthcare. DOI: 10.1007/s10676-022-0962

Babushkina, D. (2022). Are we justified to attribute a mistake in diagnosis to an AI diagnostic system? AI and Ethics. DOI: 10.1007/s43681-022-00189-x

Dr. Koray Karaca


Ethical and epistemological challenges appear to be intertwined, and I believe that adequately addressing both requires adopting approaches integrating the concepts and frameworks of ethics and philosophy of science.

I am a co-founder of the EthEpi of AI initiative. As a philosopher of science, I am mainly interested in the impact of AI on the way scientific knowledge is produced and justified. AI algorithms are increasingly used for various purposes due to their unprecedented ability to extract information from what are often called big data sets. These algorithms process and analyse data sets in ways that are fundamentally different from those of the traditional methods used for data analysis. From the perspective of scientific methodology, this situation presents new epistemological challenges to be dealt with to characterize to what extent and under what conditions the kind of information produced by AI algorithms counts as scientific knowledge. While these challenges have the potential to bring new opportunities for the philosophical studies of scientific methodologies and associated practices, dealing with them requires new ways of understanding and conceptualizing how and what we learn from data.  

In addition to epistemological challenges, the use of AI algorithms also gives rise to ethical challenges, especially in application contexts where these algorithms are deployed to make high-stakes decisions concerning the lives of individuals. In such contexts, ethical and epistemological challenges appear to be intertwined, and I believe that adequately addressing both requires adopting approaches integrating the concepts and frameworks of ethics and philosophy of science. Such approaches would also prove useful in anticipating and preparing for societal problems arising from the use of AI algorithms.


More about my approach:  

Karaca, K. (2021) Values and inductive risk in machine learning modelling: the case of binary classification models. Euro Jnl Phil Sci 11, 102. https://doi.org/10.1007/s13194-021-00405-1 

Events

EthEpi of AI reading group. This is a research support group (Philosophy Section) that unites those interested in the intersection of the ethics and epistemology of AI and the way the interplay of these two fields reveals itself in different domains of application of AI and related technologies (e.g., the medical sphere and care, policing, smart cities, social engineering). The goal is to foster research and collaboration through peer support, reading and discussion of relevant literature, and creative co-writing. The group meets roughly once a month. Would you like to join? Please contact Dina.

EthEpi of AI workshops. TBA. If you have any questions or suggestions, feel free to get in touch.

Philosophy of Science in Practice Colloquium. This is a regular event within the Science in Practice research line (Philosophy Section) that takes place once a month (usually on Thursdays).



We are still updating our EthEpi of AI webpage; more information will follow.