Design for Green - Ethics and Politics for Behavior-Steering Technology
Ching Hung is a PhD student in the Department of Philosophy. His supervisor is prof.dr.ir. P.P.C.C. Verbeek from the Faculty of Behavioural, Management and Social Sciences (BMS).
While technology can be designed to improve people’s behavior, it always raises the worry that technological intervention threatens human freedom and autonomy. This dissertation, taking the environmental crisis as an entry point, argues for behavior-steering technology as an ethically acceptable and politically feasible approach to improving people’s behavior. The approach is developed through four research questions: 1) Why is behavior-steering technology necessary for mitigating environmental problems? 2) What kind of behavior-steering technology is most helpful, and why? 3) How can we respond to ethical concerns about the design and implementation of behavior-steering technology? 4) What would a political framework that can accommodate the practice of behavior-steering technology look like? The following eight chapters are devoted to answering these questions.
After Chapter 1, which serves as the introduction, I argue in Chapter 2 that traditional strategies for inducing behavioral change, exemplified by environmental education, tend to fail because they rely on the assumption that human knowing is the initiator of human doing, and neglect the fact that human behavior has to be supported by, or can be produced by, the built environment. This inattention to the knowing-doing gap both reflects and results from an asymmetric treatment of anthropocentrism: while knowing-anthropocentrism has been overcome by environmental ethics, doing-anthropocentrism remains nearly untouched. Given that in the Anthropocene humans unavoidably occupy the position of master, non-doing-anthropocentrism cannot be reached simply by leaving nature out there, free from any interference. A more realistic strategy for distancing ourselves from doing-anthropocentrism is to employ technology to improve nature’s, as well as humans’, environmental behavior. Therefore, to tackle the environmental crisis, a focal shift from human knowing to human doing is necessary, and the knowing-doing gap has to be bridged or bypassed by behavior-steering technologies.
To take advantage of behavior-steering technologies, Chapter 3 introduces three existing classifications of human-technology relations and suggests a simpler but useful classification, which distinguishes two types of behavior-steering technology by their mechanisms. While IBSTs (informational behavior-steering technologies) take intrasomatic routes, mobilizing users’ consciousness or unconsciousness to generate behavior, MBSTs (material behavior-steering technologies) take extrasomatic routes, targeting users’ bodies as a locus for applying material conditions or constraints. The pros and cons of each type are clear: qualitatively and quantitatively, IBSTs are less forceful and therefore regarded as soft and acceptable, whereas MBSTs are very forceful and tend to be regarded not just as effective but also as hard and ethically problematic. While their high effectiveness gives MBSTs the potential to improve people’s environmental behavior in a collective way, it has also been the main reason for setting limits on their design, implementation, and development.
At the beginning of Chapter 4, I point out that the main worry concerning behavior-steering technologies is the loss of human freedom. To address this concern, ethical guidelines and defensive arguments have been developed for IBSTs. For both persuasive technology and nudging, the central principle is to avoid intervening in users’ ends, either by targeting their means only or by minimizing the cost of opting out. Unfortunately, this principle cannot be applied to the case of the environmental crisis. Because the crisis requires collective changes in behavior, it is difficult to align users’ ends with collective goals: due to temporal and spatial dispersion, the causal relationship between one’s behavior and its environmental consequences is hard to identify. As a result, to significantly improve people’s environmental behavior, intervening in users’ ends is unavoidable. Such ends-intervention is exactly where MBSTs are competent. To be sure, ends-interventions can also be carried out with personalized IBSTs, but such IBSTs are likely to make things worse due to privacy and transparency issues. The concreteness and tangibility of MBSTs make them the best option; however, these same features increase the cost of opting out and therefore provoke worry. In other words, while green MBSTs are indispensable for tackling the environmental crisis, ethical concerns about users’ freedom tend to arise against their design and implementation. This dilemma urges us to further understand the nature of human behavior.
The work of the psychologist B. F. Skinner helps us take the next step. Chapter 5 first introduces Skinner’s theory of behavior and his idea of behavioral engineering by referring to his novel Walden Two. As human behavior is shaped through contingencies of reinforcement, it is the product of its environment. This implies the possibility of modifying a person’s behavior by changing the technologies s/he interacts with. The technique Skinner recommends is positive reinforcement, not negative reinforcement, nor punishment. Skinner’s Beyond Freedom and Dignity then offers various arguments against the objections to behavioral modification. It shows us that the concepts of freedom and dignity are misguided not only because they are too ambiguous in a scientific sense to be reliable terms for understanding human behavior, but also because they seriously constrain the use and development of the technology of behavior and therefore leave humans to traditional punitive control.
With the help of Skinner, I argue, on the one hand, that behavior-steering technologies are a must for modifying people’s environmental behavior, because the consequences of environmental problems are too remote to have shaping force, and, on the other hand, that the worry concerning human freedom can be largely reduced by employing the technique of positive reinforcement. However, Skinner leaves two questions for us to solve. First, given that sustainability cannot be realized by creating pro-environmental behavior alone, which means that measures of aversive control are unavoidable, what can we do about the issue of human freedom? Second, by taking an evolutionary perspective on values, Skinner does little to justify the claim that being pro-environmental is good and therefore should be the direction in which people are to be steered. That is to say, what goal should we design for if being pro-environmental cannot be well defended?
The answers lie implicitly in Skinner’s experimental approach to behavioral engineering. By looking into two real-world Skinnerian communities—Twin Oaks and Los Horcones—in Chapter 6, I first strengthen the argument that behavioral modification, rather than other strategies, is the key to social change, and that adopting a scientific view of human behavior is crucial to making such change come true. Moreover, these two communities, together with Walden Two, reveal the most important dimension of Skinner’s behavioral engineering: small-scale experimentation. Scaling down to a manageable size not only makes the effects of behavior-steering technologies easier to transfer from human-technology relations to human-human ones, but also reduces the difficulty of testing, adjusting, or revoking the technology-in-design, by which unexpected side effects can be minimized. At this point, I argue that Skinner’s behavioral engineering resonates with Karl Popper’s idea of piecemeal engineering. By allying Skinner with Popper, the two questions left open in the previous chapter can be answered. Since knowing what is good or right is epistemologically impossible, there is no need to justify being pro-environmental and design for it; rather, our task is to identify anti-environmental behavior and design to remove or reduce it. Moreover, without forcing everyone in one direction, we can make all other options enjoyable, pleasant, or satisfying. In this way, the net consequences of the steered behavior remain positive, and the concern about human freedom can still be largely reduced, even though positive reinforcement is not the only technique employed. In short, piecemeal-behavioral engineering, a combination of Skinner and Popper, can underpin the approach of behavior-steering technology and make it not only adoptable but also acceptable for tackling the environmental crisis.
As a piece of problem-oriented research, Chapter 7 turns the concepts, perspectives, and arguments developed in the previous chapters into a list of seven design recommendations, which can be condensed into one sentence: design and experiment with green MBSTs at the level of the community to reduce anti-environmental behavior. With the example of Village Homes, I illustrate the usefulness, practicality, and soundness of the approach I have been constructing and defending for green behavior-steering technologies. The design of narrow, curvy cul-de-sacs beautifully demonstrates the advantages and strengths of MBSTs: it not only makes driving less preferable for residents as well as visitors, but also creates a beneficial condition for the remaining nature-friendly designs to take effect. Without such an MBST, the community’s achievements would never have been reached, and the ethos of being green could never have been realized. MBSTs, again, prove to be a must in the advocacy of sustainability.
Readers who would like to take action immediately and apply the approach straightforwardly need not engage with the political implications of the design and implementation of green behavior-steering technologies: the approach developed in Chapters 2 to 7 is compatible with our liberal democracy. However, it might be worth taking on the political challenge posed by the environmental crisis. In Chapter 8, the discussion moves to the level of politics, arguing for the need for an alternative approach to democracy. By prioritizing the individual over the collective, liberal democracy appears hopeless at solving global-scale environmental problems. Turning to communitarian democracy does not help, however, because it is likewise based on a Kantian, rationalistic image of the human being. From a Skinnerian perspective, this image is problematic, and hence it cannot serve as a basis for democracy. I then argue that a potential candidate for replacing the current models of democracy is the agonistic democracy of the political theorist Chantal Mouffe. Grounded in “the political”, agonistic democracy is not about consensus but favors confrontation as what makes democracy flourish and stay vital. Moreover, it does not require humans to be rational, and therefore allows for external influences. In an agonistic democracy, both the design and the implementation of behavior-steering technologies are counter-hegemonic practices, by which they justify themselves. Moral questions such as whether being green is good or bad are suspended, and answering them is not a precondition for technological interventions. In this way, agonistic democracy is probably the most appropriate political framework for accommodating the inevitable interweaving of humans and technologies.
To conclude, Chapter 9 puts this research into a broader context, discussing its implications for three closely related fields and practices. For sustainability advocacy, the concept of the “educational environment” is proposed as a complement to environmental education, by which people can be “taught” by their surroundings and practice environment-related skills. For design practice in general, the strategy of “design to discourage” is suggested to emphasize the importance of removing problematic behavior rather than creating beneficial behavior; discouragement is more fundamental than encouragement in shaping people’s behavior to address societal problems. For STS, I make a proposal called “politicalizing technology”, as opposed to the idea of democratizing technology: since technology’s politics is inevitable, the question of how to make good use of such politics, rather than how to de-politicalize technology, is more relevant and beneficial to democracy.