I found myself in a peculiar situation the other day. I had been asked to lead a brief discussion on "The Social and Ethical Implications of Nanotechnology" at an event dubbed "Nanotech Day" - a meeting between researchers in a nanotech center at Georgia Tech and researchers from the CDC. The meeting turned out to be something like a four-hour interactive infomercial.
This particular research group at Tech has a state-of-the-art clean room for nanoscale research and engineering, and, as a node of the National Nanotechnology Infrastructure Network (NNIN), it has a responsibility to support others' research into nanotechnology. As a consequence, the group is always on the hunt for "users" (their term) of the clean room, and on this particular day they were wooing the CDC with the marvelous potential of nanotech to serve the goals of public health.
I found out that the discussion I was to lead was one of several break-out meetings in which participants were to brainstorm ways in which the clean room could be of use to the CDC. It was difficult - to put it mildly - to see how a genuine and frank discussion of ethics could fit into such a framework.
I probably violated the unstated social contract of the meeting, and I probably stepped on the toes of some influential people, but I forged ahead. In the three minutes I had to present the results of the group discussion - a discussion notably lacking in CDC personnel - I talked about the meaning of nanotechnology, its promise and its perils, the meaning of progress, and the necessity of making responsible choices - including, in many instances, the choice to forgo nanotechnology.
I noted that, even though we can do really cool things with nanotechnology, we might be able to alleviate more human suffering by low-tech means. For example, if it came to a choice between a high-tech nanoengineered system for aerosolizing vaccines and the distribution of mosquito netting throughout sub-Saharan Africa, we might be better off doing the latter.
(I also made something of a sales pitch of my own, talking about the various resources at Tech for grappling with the social and ethical aspects of technology.)
In preparing for the meeting, I spent some time reading and thinking about nanotechnology, and about technology more generally. It struck me again - and I don't pretend this is my own original insight - that people tend to hold two contradictory views of technology at the same time, both of which tend to make them complacent in their response to innovation. This strange alliance of assumptions may be especially prevalent among scientists and engineers who are directly involved in technological innovation - even though they ought to know better.
The first is the instrumental view of technology, according to which technology is a neutral and transparent medium through which human will can act in the world. This implies that technology itself has no ethical content: it may be used for good or ill, but it cannot itself be good or bad, and it cannot shape or reshape human will, human desires, human institutions. The instrumental view also implies that more powerful and sophisticated technology is desirable because it empowers the human will to act more precisely and more effectively in the world.
The second is the view of technology as autonomous. Technology develops as it will, each innovation following inevitably from the innovation before. This implies that technological progress is linear and inevitable, and woe to any knuckle-dragging Luddites who try to stand in its way. To paraphrase something an engineer at Tech once told me: sure, people will suffer and die as a consequence of technological innovation, but that's the price of progress; progress happens, and there's nothing you can do about it.
It's a real trick to hold both of these views simultaneously - how can technology be both autonomous and instrumental? - but it seems to me that many people do it all the time. Perhaps the first view is a way of coping with the consequences of the second: we are becoming more powerful, whether we want to be or not, but at least we can choose how to use that power. To hold the latter view without the former is just too depressing - on which point, see the work of Jacques Ellul and other technological determinists.
In my discussion and presentation the other day, I deliberately reframed the issue: instead of the social and ethical implications of nanotechnology, I insisted on discussing the social and ethical aspects of nanotechnology. Again, this is not an original insight on my part, but talk of the implications of nanotechnology assumes that nanotechnology is already there, its meaning already fixed. The decisions we make today, I told them, will help to shape what nanotechnology becomes - which will in turn influence how nanotechnology shapes what we become.
I may as well have been speaking Martian.