1 – 2 June 2010
University of Twente, Enschede, the Netherlands
This workshop focuses especially on enhancing ‘reflection’ and broadening ‘deliberation’ about how technological innovations affect human wellbeing. This is often understood to be the primary aim of an ethicist’s work in the scientific research context. We have distinguished four fields of interest that will be specifically addressed during the workshop.
1. Deliberation
In ethics, deliberation is usually understood as a capacity of an individual agent: an individual is held responsible for his or her actions by virtue of being able to deliberate and choose freely. In the context of scientific engineering research, however, deliberation is a social process. Individual researchers work in a team consisting of one or several PhD students, senior researchers, technicians and other parties (such as future users and producers, but also representatives of the funding institution) who monitor or take part in research activities during several stages of the research. Decisions about which research goals to pursue, which materials to use, how to set up experiments, or how to design tests on patients are mutually discussed and taken collectively. This social deliberation process raises several questions for the development of a participatory ethics of engineering, such as: What is deliberation? (What useful theories about deliberation exist in philosophy and psychology?) What is responsibility and how is it distributed? How can reflexivity be enhanced and deliberation broadened? What is the merit of such enhancement and broadening, and what are its limitations? Who should participate in a social deliberation process: only ethicists and scientists, or also other parties, such as producers, users, or the public at large? Is the question of who should participate itself an ethical question, and if so, in what respect? How should a joint deliberation be structured, depending on the types of participants that are included or excluded?
2. Future scenarios
At the University of Twente we have an interest in the development of future scenarios as a way to enhance the reflection of scientific engineers. During this workshop we would like to pay attention to this initiative and address questions such as: How should such future scenarios be structured? What kind of information should they provide? What are the merits and limits of using future scenarios to enhance reflection in the scientific research context? And under what conditions can such scenarios change deliberation about research choices?
3. Ethical language and communication
Engineering ethicists usually articulate their views in the form of judgments, or verdicts. Such judgments are mostly offered once a technology has already been created and is (ready to be) put on the market. A judgment or verdict of this kind has a negative heuristic (it tends to show why technologies, or certain features of them, are ethically unacceptable, unjustified or morally wrong) and serves to warn against the use of certain technologies, or to have their use forbidden. In the scientific research context, however, ethical expressions are meant to enhance reflection about the impacts a technology can have on human wellbeing; they aim to broaden the scientific deliberation about research decisions. This raises questions such as: What type of ethical language should be adopted in the research context? What kinds of heuristics of ethical expressions can enable or frustrate processes of joint deliberation? What kind of normativity do such ethical expressions have? What is their authority in relation to other types of norms, such as efficiency, economy, usability or scientific method? And should ethicists in the research context say anything substantive at all about norms and values, or should their role be limited to facilitating debates?
4. The institutional context
Whether processes of joint reflection and deliberation succeed in influencing research choices and in co-shaping technology depends largely on institutional constraints. When the relevant funding agency does not support scientists in taking the possible impacts of a technology on human wellbeing into account (for example, by providing money for extra or different patient tests, or by allowing scientists an extension of their research time), it is difficult for scientists to do so. This raises questions such as: How do current institutional contexts enable or limit endeavours to deliberate together? How should a scientific research practice be institutionally structured in order to stimulate joint ethical deliberation? Are public and private funding institutions responsible for the impacts that the technologies whose research they fund have on human lives? What are the limits of their responsibilities?