WTMC workshop Technology Ethics

29/30-01-2004, Twente University

Tsjalling Swierstra

First, let me start by welcoming you all to this workshop – also on behalf of Philip Brey (who came up with the original idea for this workshop) and Charlene Versluys (who put in a lot of work for us). We are very excited that so many of you were willing and able to come. There are a lot of different theoretical backgrounds gathered around the table today, so we can look forward to an exciting exchange of ideas. I want to thank the Research Institute for Science, Technology and Modern Culture for sponsoring this workshop. And I want to give a special welcome to our foreign guests, who were kind enough to travel to this technological artefact that we call The Netherlands. You know what they say: God created the world, but the Dutch created the Netherlands. Well – let us be precise – only one half of it. And at this moment some dykes are leaking, so we are very lucky to be here in Enschede, on the half that was created by God. However, a leaking dyke does serve to remind us that technology is not to be blindly trusted. Which brings us to the theme of this workshop: technology ethics.

The guiding questions for this workshop were all mailed to you beforehand, so there is no need for me to repeat them here. But I would like to stress the core question, because I hope we will give it ample attention during the next two days. And that question is:


Is a general technology ethics desirable? What could be its surplus value compared to established types of applied ethics that touch on technology more or less directly: for example environmental ethics, bio-medical ethics, business ethics, professional ethics, computer ethics?

Let me try to develop this question a bit further by offering you two brief reflections on 1) technology, and 2) ethics.

a) technology

We live in a technological culture, and almost everything we do is touched by technology. Therefore: if we are unwilling to deliver ourselves into the hands of fate – this time in the guise of a quasi-autonomous technological progress – we should create room for critical evaluations and assessments of its pervasive influence. So we need technology ethics.

Or do we? One could object that it is exactly this omnipresence of technology that makes a separate discipline called technology ethics superfluous and impracticable. In that case we might as well rename all ethics 'technology ethics'. Not only would this amount to nothing more than a name change, but the real question is a different one. Does it really make sense to say that one and the same type of ethics should deal with technological phenomena as wildly divergent as the contraceptive pill, nuclear plants and computers? Indeed, is there really anything that links them together except for their common denominator: being technological? Why not leave these different forms of technology in the capable hands of the existing schools and forms of applied ethics? They deal with specific technologies, and ethics is of little practical use if it does not deal with particularities.

Let us be honest:


One does not really need technology ethics to assess the justice of the distribution of costs and benefits in the case of specific technological innovations.


One does not really need technology ethics if one wants to tell engineers that they should care for the safety of their products, and that sometimes it is better not to do what your boss says, especially if he is not an engineer but a manager and therefore not to be trusted anyway.


And if I hit you on the head, one does not need technology ethics to tell me that this is wrong, even if I did so with a hammer instead of my hand.


And if I wonder where a kidney should go, to the adult or to the child, there is certainly technology involved but that does not make it a question for technology ethics.


Or if I wonder what the speed limits should be on our highways, I need facts and values, but do I need technology ethics?


And so forth…. You can add your own examples.

My point is that you do not need technology ethics to reflect ethically on specific technologies and their consequences. Put another way: the simple fact that technology is omnipresent does not warrant the conclusion that we are in need of a separate discipline called 'technology ethics'. I think that for this conclusion to be drawn, a crucial condition has to be fulfilled, namely that specific technologies share certain characteristics, which together somehow constitute a phenomenon that is coherent and interesting enough to call for ethical reflection. Probably a shared essence is too much to ask for, but I would want at least some prominent family resemblances.

Only on that condition can we step up to our colleagues and say: yes, you are dealing with issues in which technology plays an important role, but you somehow overlook technology itself. And only then can we begin to bring together the different theories, approaches, methods and insights that have up till now mainly been generated in separate ethical disciplines, because then we can say that they somehow deal with the same object.

So, can we identify some of these family resemblances? Hopefully the discussions in the next two days will bring up some other candidates, but allow me to discuss one important candidate. And that candidate is 'agency': technology has to do (at the least) with material things that act. Technical things operate, they influence, they enable, they seduce, they force, they open some doors, they close others, they destroy, they create, they shift, and so on and so on.

It is striking how little is said in most types of ethics about this agency of technical things. Little is said in the instrumentalist approach, little in the continental substantive approach, and less than you would expect in science and technology studies. In all three approaches we witness a similar disappearing trick when it comes to the agency of things.

In instrumentalist conceptions of technology, things, artifacts, are treated as passive and therefore neutral, their agency completely reducible to the wills of the humans who wield these technical instruments.

We owe it to the classical philosophers of technology – Heidegger, Ellul, and so on – that it is now accepted by most that technology is neither neutral nor passive. In their substantive approach, technology is portrayed as a super-human force, relentlessly striving after efficiency and efficacy of control, thereby threatening to alienate us poor humans from our deepest spiritual roots. Instead of serving us, technology is scheming to overthrow the natural order of humans and things by turning us into its slaves. The object becomes subject, and vice versa. Technology's real character is shown in such excesses as Taylorism, Auschwitz, Hiroshima, rockets to the moon, and the destruction of the natural environment.

So, are we to consider classical philosophy of technology a step towards recognizing the agency of things? Hardly. Technology is treated here as a manifestation of, and is thus reduced to, a hidden, transcendental Will for more efficiency and efficacy of control. Agency is projected, as it were, into an area behind the material technical things.

In recent years, this classical philosophy of technology has come under attack by science and technology studies for its essentialism, for its aprioristic diagnosis and denunciation of Technology with a capital T. To me, the anti-essentialism and constructivism of STS are perfectly suited for describing a phenomenon like technology, which is constantly reconstructing and changing reality as we know it. Constructivism is the perfect creed for a modernist technological culture that is marked by constant upheaval and dynamism.

So, do science and technology studies succeed in giving the materiality and agency of technology their theoretical due? Sometimes, but certainly not always. In practice STS can come dangerously close to the previous two approaches by reducing the materiality and agency of technology to non-technological interests. This is especially the case when STS tries to be normative in a way that closely resembles traditional critique of ideology: you might think this technology is objective, but in fact it is not, it is subjective. It is not denied that technology is biased and carries a morality, but it is not conceded that the origin of this bias and morality lies in technology itself. Instead, bias and morality are reduced to social and political forces hiding behind the material façade of technology.

The lesson to be drawn from this – all too short – exposé is, and I paraphrase Nietzsche here: in the philosophy of technology, one should be wary of getting in too deep; one should have the courage to stay on the surface of things. Only there can we take their materiality and agency into account.

Now, how should we proceed to get a grip on this agency of things, which is so evidently elusive? A first, pragmatic, step would be to look out for all those instances where technology does not conform to our will, but thwarts our desires and belies our expectations. In one word: where it shows robustness against our attempts to bend it to our desires and designs. Let me give you some examples of the phenomena I am thinking of:


The ethical interest in technology was originally in large part sparked by its unintended consequences – especially with regard to the environment. It is in these unintended consequences that the agency of technology itself can be detected. Ours is the task of learning to imagine, as far as possible, its quirks and tantrums and schemes. Are there any patterns detectable here?


One type of unintended consequence is of special interest to technology ethics, and that is the way new technologies uproot, call into question and help to install social – including moral – routines. One quick example: thou shalt not kill. Okay, but does this moral routine still hold in the case of patients in an irreversible coma? Or another: thou shalt be chaste. But does this moral routine still make sense in the age of the condom and the contraceptive pill? Technology not only influences what is possible and what is not, but also what is morally obligatory and what no longer is, who is responsible for what and who no longer is, who we are as individuals and as a collective, and what we hope to be in the future. In short: technology co-shapes morality and ethics.


The same point, put differently: technology is not only dynamic itself, it is also a dynamizing factor in our culture, society, economy and what have you. This dynamizing effect is one of the surest signs of technology's agency.


Artifacts may contain so-called scripts. That means: they prescribe – more or less forcefully – how we should act. By articulating such scripts, we uncover the agency of those things.


We try to use technology to realize pre-given goals, but in practice we always find that technological means have a tendency to co-define and change the goals.


Then there is the phenomenon of delegation. We can order things to perform our tasks, but in practice they usually do this with their own twist.


Technology development also has a particular robustness of its own, which makes it hard, and I think impossible, to control. This robustness is to a large extent due to so-called path-dependencies and to the fact that technology development is a multi-actor process, but also to the fact that technologies can operate on time-scales and space-scales so vast that they cannot be overseen or controlled from a point in the here and now. This is manifest in many public discussions on new technologies: the benefits are usually here and now, whereas the costs are typically far away in time and space. So you can imagine who usually wins these debates.

This list is in no way exhaustive. I only gave these examples as a hint of the phenomena we should be looking out for if we want to catch technology in the act – so to speak. Now please beware: attributing agency to technological things is not the same as attributing omnipotence to them. The results of technical acts are co-determined by a lot of other agents, including human ones. Whether something works depends on the specific context. What may be intelligent technology in the city may be utterly stupid in the middle of the jungle.

It is exactly at this point that technology ethics comes to the fore: how to influence – as best we can, for we are not omnipotent either – the outcomes of technology's acting. But we can only play this game if we start by acknowledging the agency of our partner, technology.

And it is phenomena like the ones I have listed here that form instances of the agency shared by the contraceptive pill, nuclear plants and computers. So they should be at the heart of an ethics of technology.

b) ethics

Now, let me quickly say something about the second element of technology ethics, to wit: ethics.

I think we have to ask ourselves the fundamental question: how well equipped is ethics – understood as an established discipline with its own tools, methods, questions and principles – to deal with modern technology?

I understand ethics to be a practical discipline, constantly changing to deal with emergent practical problems. I am convinced that – just as STS developed constructivism to describe the dynamic changes of a technological culture – ethics has to develop an equally anti-essentialist strategy. To the extent that practical problems are non-universal, that is to say, situated in time and space, ethical tools and so forth are themselves under constant suspicion of being in need of an update in the light of newly emerging questions.

At the very least, we should be wary of remodeling technology to fit our transmitted moral theories and preoccupations. This remodeling is a real problem. Let me offer you some quick examples of how ethics has remodeled technology to fit its pre-given schemata:


In engineering ethics there has been a disproportionate amount of attention for whistle-blowing. Is this really because whistle-blowing is a daily phenomenon in engineering, or could it be that this focus is the result of an ethical theory that knows very well how to deal with individual responsibility, but is somewhat at a loss when confronted with the complex multi-actor processes that characterize modern technology development?


Alienation is still a key concept – although usually under a different name – in ethical concerns about technology. But to what extent is this fear the result of the conceptual structure of most ethics, in which autonomous human consciousness is the touchstone of all evaluation? To what extent does this touchstone block the assessment of the new and complex relationships between the social and the technical – as exemplified, for instance, in the cyborg?


A lot of attention is given to the risks and dangers of new technologies. No question: technology assessment is of the utmost importance. But are risks and dangers really all we should care about? We should not believe that ethics itself is immune to technology. Do we not underestimate the extent to which technology has wholly different consequences, for instance the change in our morals, in our norms and values, in what we believe we are, what we have to do, and what we can hope for? All of Immanuel Kant's key questions are touched upon by technology: what we can know, how we should act, and what may be hoped for – all these questions are more and more answered for us by technology. When will we be ready to come to terms with that?


Autonomy is a leading principle in medical ethics, and most of the time rightly so. However, is this really the best theoretical tool we can come up with if we want to deal with medical technologies like cosmetic surgery or enhancement medicine in general? Is it really fruitful to translate the most complex moral issues into the language of individual choice, however fond we moral philosophers are of that particular language?

These are just a few examples of how ethics tailors technology to fit its pre-established conceptual schemata. A more fruitful strategy would be to ask ourselves what ideas, concepts, tools and methods can deal adequately with the agency of material things. Some interesting candidates were raised in the papers, so I am sure we will come to that.

To summarize: first, technology ethics only has a right to exist – alongside established forms of applied ethics – if it manages to bring to light at least the agency that characterizes technology but is all too often overlooked. And second, we should be willing to constantly check our transmitted ethical repertoires so as to do justice to technology, and more specifically: to the agency of material things.

Allow me to finish this introduction with a short remark on what the possible interest of such a technology ethics could be. At home I have an edited volume of essays on the philosophy of technology that bears the ambiguous title Controlling Technology. I like the collection, I hate the title. I do not believe the question is whether technology controls us or we control technology. That problem-definition is based too much on the essentialist asymmetry of the human versus the non-human. I think technology ethics should take its point of departure from a symmetric relationship: ethical norms and values help to shape the course and character of technology, just as technology helps to shape our ethics. The fates of humans and technology have become intertwined, and there is nothing intrinsically wrong with that. It simply means that our goal should not be to control technology, but to maintain a satisfying relationship with it. Technology ethics is important because it can help us do just that.

Thank you