Wouldn’t it be great if complex social problems could be solved by technology? Alvin Weinberg suggested in 1967 that technical engineering could work better than social engineering; the argument advocated quick fixes to the most urgent problems of humanity, at least to alleviate pain while more complete solutions were worked out. However controversial this idea was, our reliance on technology has only increased since then. Still, over the same period, we have also come to better appreciate the unanticipated consequences of technological advancement. In light of our experience leaping forward, as well as our tripping and tumbling along the way, two considerations should guide the design of any technological fix.
Consideration 1: Serious attention to unwanted consequences
A first-order consideration is the study of the unwanted effects and tradeoffs introduced by the technology. Take for instance nanoparticles—particles in the range of one to a hundred nanometers—that enable new properties in the materials in which they are mixed; for instance, maintaining permeability in fine-particle filtration to make inexpensive water purification devices available to vulnerable populations. Once these nano-enabled filters reach the end of their usable life and are discarded, those minuscule particles could be released into the environment and dramatically increase the toxicity of the resulting waste.
No less important than health and environmental effects are social, economic, and cultural consequences. Natural and social sciences are thus partners in the design of this kind of technological solution, and transdisciplinary research is needed to improve our understanding of the various dimensions relevant to these projects. What is more, the incremental choices that set a particular technology along a developmental pathway demand a different kind of knowledge, because those choices are not merely technical; they involve values and preferences.
Consideration 2: Stakeholder engagement
But whose values and preferences matter? Surely everyone with a stake in the problem the fix is trying to solve will want to answer that question. If tech fixes are meant to address a specific social problem, those who will live with the consequences must have a say in the development of the solution. This prescription does not imply doing away entirely with the current division of labor in technological development. Scientists and engineers need a degree of autonomy to work productively. Yet input from and participation by stakeholders must occur well before the development process is complete, because along the way a host of questions arise as to what trade-offs are acceptable. Non-experts are perfectly capable of answering questions about their values and preferences.
The market system provides this kind of check, to some extent, for technologies that advance incrementally. In an ideal market scenario, one of high competition, the stakeholders on the demand side vote with their wallets, and companies refine their products to gain market share. But the development of a technological fix is neither incremental nor distributed in that manner. It is generally concentrated in a few hands and it is, by design, disruptive and revolutionary. That is why stakeholders must have a say in key developmental decisions: to carefully calibrate those technologies to the values and preferences of the very people they are intended to help.
Translating these considerations into policy
In 1989, the federal government first funded a program for the analysis of the Ethical, Legal, and Social Implications (ELSI) of science within the Human Genome Project. The influence this program had on the direction and key decisions of the HGP was modest at best; in practice, it institutionalized a separation between the hard science and the understanding of the human and social dimensions of that science.[i] By the time the National Nanotechnology Initiative was launched in 2000, some ELSI-type programs sought to bridge that separation. With grants from the National Science Foundation, two centers for the study of nanotechnology in society were established, at the University of California Santa Barbara and at Arizona State University. CNS-UCSB and CNS-ASU have become hubs for research on the governance of technological development that integrates the technical, social, and human dimensions. One such effort is a pilot program of real-time technology assessment (RTTA) that achieved a more robust engagement with the various stakeholders of emerging nanotechnologies (see citizens tech-forum) and tested interventions at several points in the research and development of nanotechnologies to integrate concerns from the social sciences and humanities (see socio-technical integration). Building upon those experiences, future federal funding of technological fixes must include ELSI analyses more like the RTTA program: analyses that, rather than being an addendum to technical programs, are fully integrated into the decision structure of research and development efforts.
Whenever emerging technologies such as additive manufacturing, synthetic biology, big data, or climate engineering are considered the kernel of a technological fix, developers must understand that engineering the artifact itself does not suffice. An effective solution also requires careful analysis of unwanted effects and a serious effort at stakeholder engagement, lest the solution be worse than the problem.
[i] See the ELSI Research Planning and Evaluation Group (ERPEG) final report, published in 2000. ERPEG was created in 1997 by the NIH’s National Advisory Council for Human Genome Research (NACHGR) and the DOE’s Biological and Environmental Research Advisory Committee (BERAC) to evaluate ELSI within the HGP and propose new directions for the 1998 five-year plan. After the final report, NIH and DOE ran their ELSI programs separately, although with the ostensible intention of coordinating efforts. The separation between the technical and the social/human dimensions of scientific advancement institutionalized by the HGP ELSI program, and the radical alternative to it proposed by RTTA within the NNI, are elegantly described in Brice Laurent’s The Constitutional Effect of the Ethics of Emerging Technologies (2013, Ethics and Politics XV(1), 251-271).