
Ethics
Science

I'm a scientist. The results of my research may generate technologies that could potentially be used in both offensive and defensive military applications. These same technologies could potentially help people as well. Here are two examples: (1) My work could potentially create odor-sensing devices to target "enemies" and blow them up, but the same work could aid land-mine detection and removal. (2) My work could help build warrior robots, but it could also help build better prosthetics for amputees. For any given project, I have to decide which agency(ies) my lab will take money from. I do not want to decide based on the name of the agency alone: DARPA has funded projects that helped amputees and killed no one, while I would bet (but do not know for sure) that some work sponsored by the NSF has ultimately been used in military operations. So I'd like to base my decision on something more than the agency acronym. How can I start to get my head around this? What sorts of questions should I be asking myself and others to get a better handle on the ethical issues involved? What should I be reading? What kinds of *concrete* steps can I take to ensure that my research does more good than harm, regardless of where my funds come from? Open, peer-reviewed publication (instead of secret reports) seems like a good start, but I'd like more ideas. A slightly more abstract question: If my funds come from an agency that [I feel] does significant evil, is my work -- even if used for more good than evil -- officially tainted? Which philosophers have something to say about this question in a useful, practical way?
Accepted:
May 27, 2009

Comments

Miriam Solomon
May 28, 2009

These are terrific questions, and I hope someone else on the panel can also respond to them. The philosophy of science literature, and even the literature on values in science (Hugh Lacey, Helen Longino, Lynn Hankinson Nelson, and others), is rather general and not sufficiently applied to give quick answers. I think you are going to have to do a good deal of the thinking yourself. But here are some questions and considerations.

The agency acronym is, indeed, not an infallible guide to the nature of the research. However, it is a rough guide and, perhaps more important, it is *perceived* as affecting the content of the research done. The funding agency will influence who chooses to work with you (science is, after all, not an individual enterprise) and how people evaluate your research. On the other hand, DARPA money is easier to come by than NSF money (or so I hear), and you might prefer to do research with DARPA money than not do it at all (that is a question to ask yourself). The issue here is whether or not the ends justify the means, and there is a large literature on this in philosophy (the classical place to look is the debate between Mill and Kant; the contemporary place is Rawls's rejection of utilitarianism).

I don't think anyone can completely control the reception and application of their scientific (or other creative) work. It is an empirical matter whether or not a new technology can be developed for ill (or good). Perhaps the best you can do is try to make the initial applications good ones, i.e., try to set the technology going on an ethically positive note. You could also join progressive scientific societies (e.g., the Union of Concerned Scientists) to network and transmit your ideas.

Open, peer-reviewed publication is good for science (good for knowledge), but I don't know whether it has anything to do with developing technologies that benefit rather than harm people.

A final thought--the new Society for the Philosophy of Science in Practice (SPSP) is encouraging the kind of applied work that would be needed to explore your questions. You might like to find their website, go to their conferences, and connect with their members. The next meeting is in Minneapolis, 18-20 June 2009. The program is already posted, and you could peruse the contents and contact speakers who seem to address your concerns.


Thomas Pogge
June 6, 2009

Adding to Professor Solomon's good points: One question that you seem not to be raising, but should, is whether research is all right when it does more good than harm. This cannot be universally correct. Think of the Tuskegee experiments. Or think of the horrific experiments German and Japanese doctors conducted on prisoners. The latter experiments apparently yielded very useful results -- so useful that the US offered immunity to doctors willing to share their knowledge and know-how. Still, participation in such experiments is generally wrong even if, in the long run, the benefits outweigh the harms. Philosophers have discussed these issues -- often in the context of criticizing or defending utilitarianism (or, more broadly, consequentialism) -- under two headings (which will enable you to retrieve relevant literature). They have debated whether negative duties (not to harm) have greater weight than positive duties (to help or benefit). And they have debated whether harms that are intended (as a means or as an end) have greater moral significance than harms that are foreseen but not intended by the agent.

To understand these debates correctly, one must hold fixed what is at stake for all parties: the positive duty to rescue a drowning child obviously has greater weight than the negative duty not to steal a pencil. So a suitable example of negative/positive would be: killing a child for the sake of avoiding a two-month jail sentence versus failing to save a child's life for the sake of avoiding a two-month jail sentence. And a suitable example of intend/foresee: ruining your friend's competitor in order to help your friend versus helping your friend while you know that doing so will ruin your friend's competitor.

One way of specifying and defending the moral significance of these two distinctions is the doctrine of double effect (see, e.g., http://plato.stanford.edu/entries/double-effect/), which comes as close as philosophers get to a criterion that can support precise answers. Like almost everything in philosophy, the DDE is much disputed.

On your final question of tainting, you might look at Bernard Williams's essay in Utilitarianism: For and Against (the story of George, the chemist), which also interestingly illuminates the negative/positive duty distinction (the story of Jim). You might also look at some of the literature on (moral) integrity and on collective responsibility.


William Rapaport
June 7, 2009

I am happy to read Miriam's and Thomas's replies to this question, because it is one that I somewhat unexpectedly faced when I switched from being a professional philosopher to being a professional computer scientist (albeit one with a highly philosophical bent!).

The first time the issue came to light was when I gave a talk to computer and cognitive scientists at the University of Texas at Austin about 20 years ago. One of my hosts was Benjamin Kuipers, a leading researcher in artificial intelligence, who had done groundbreaking work, as a grad student supported by military funding, on "wayfinding": how to program computers to give and to follow geographic directions. He told me that after he got his Ph.D., he realized that, as a practicing Quaker, he could not in good conscience continue to take military funding, especially since, if the military asked him to do something against his beliefs and he refused, they could withdraw his funding and he would then have to fire the grad students and postdocs working under his direction. So he changed the entire line of his research to medical applications of AI, which were funded by such organizations as NIH. His full story and arguments in favor of not taking military funding can be found on his website in an article titled "Why Don't I Take Military Funding?"

The second time the issue came to my attention was when a visitor to our computer science department at the University at Buffalo lectured about autonomous vehicles--automobiles, equipped with AI-programmed computers, that can drive themselves--a project funded by DARPA. During the question session after the presentation, one of my colleagues, Tony Ralston, asked: "How can you justify this research when obviously its main purpose is to develop autonomous military vehicles for warfare?" Of great interest to me was that, during the reception afterwards, the conversation split into two groups: students and faculty asking the visitor technical questions, and students and faculty asking Tony Ralston questions about the ethics of militarily funded research. Another colleague, also a practicing Quaker, suggested that it was OK to take such funding on the grounds that the work she would do was not aimed at killing people--better that the military give their money to her than to someone with other ideas.

My personal decision has been to refuse military support. There have been some negative consequences (lack of funding, etc.), but I feel comfortable with my decision.

But read Kuipers's arguments--they're quite interesting.

Source URL: https://askphilosophers.org/question/2720