“. . . it is precisely the ‘objectivity,’ the moral neutrality in which the sciences rejoice and attain their brilliant community of effort, that bar them from final relevance.”
— George Steiner
Reading Steiner is a slow business. Several times on any given page I come upon a sentence so arresting that I can’t go on but have to pause, take it in, and reflect, usually with surprise and gratitude, on what it has disclosed. This is one of them—not even the whole sentence, but a thought about science that might seem heretical to some of the scientists I know—or a harmless notion that only scientific outsiders would indulge in. Steiner is a philologist, so, though he is one of the finest minds of his generation, what does he know about science?
But I believe most of us know, and have known for a long time, that science is never morally neutral—that the ways information is gathered and analyzed and assessed and reported and put into practice are morally significant at every step. Who pays for research, how the experiment is designed, what is excluded, when results are deemed sufficient, and why some results never get reported all matter in more ways, I think, than we are trained to imagine.
We need that training—at least enough of it to ask the questions. We can do this ourselves. It doesn’t take expert instruction to teach ourselves to ask the fundamental questions that always deserve to be raised about human endeavor: What is the stated purpose? Might there be other purposes? Who is involved and what are their vested interests? What tradeoffs might be involved in pursuing this objective or outcome?
At the beginning of his fascinating book The Poetics of Space, Gaston Bachelard makes the deceptively simple claim, “There is no such thing as neutral space.” Every space humans inhabit is charged with intention and effect and meaning, or, if you’re inclined to think in these terms, with vibrations and energies, currents and flow and feeling. There is no such thing as a neutral lab. But I remember a more or less friendly argument with a scientist I knew who insisted with some vehemence that it was not his job to speculate about the moral implications or possible misuses of the information he managed to uncover. Einstein would have disagreed with him; his famous “Woe is me” after hearing of the bombing of Hiroshima was one of the more sobering moments in the moral history of science.
The friend who refused moral responsibility worked for a university, as many do. The results of his work belonged to the university. More and more, information is owned and protected, commodified and sequestered for the use of those who can profit from it. As soon as it becomes property, it ceases to be “objective,” if it ever was.
We in the West have invested a lot of faith in objectivity, but over the past century the myth of objectivity has given way to different understandings of how we come to “know” what we think we know, how qualified our knowing is, and how, at the edge of our knowing, we always face “the great mysterious.” Scientists who acknowledge that, who are capable of allowing for ambiguities and uncertainty, who are able humbly to qualify their certainties, who resist reductionism, are the ones I trust.
They’re out there. They pay attention to their intuitive moments. They know they’re subjects with subjective feelings and points of view. They know their curiosities emerge in the soil of personal history. When they have a choice they choose their research paths out of an attraction we might call love. They work meticulously with the tools they have, hoping not to do harm, knowing they could, treading lightly on what—even though the university or the industry owns it—is, after all, holy ground.