Cyberscholarship; or, "A Rose Is a Rose Is a . . ."

E-Content

©2009 Geoffrey C. Bowker and Susan Leigh Star. The text of this article is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License (http://creativecommons.org/licenses/by-nc-nd/3.0/).

EDUCAUSE Review, vol. 44, no. 3 (May/June 2009): 6–7

  • Geoffrey C. Bowker ([email protected]) is Regis and Dianne McKenna Professor and Executive Director, Center for Science, Technology, and Society, Santa Clara University. Starting September 1, 2009, he will be Mellon Professor of Cyberscholarship, iSchool, University of Pittsburgh.
  • Susan Leigh Star ([email protected]) is a Research Professor at the Center for Science, Technology, and Society, Santa Clara University. Starting September 1, 2009, she will be Doreen Boyce Professor in Library and Information Science, iSchool, University of Pittsburgh.

In the 1970s, a groundbreaking project at the University of California at Irvine began putting the complete canon of Greek literature into machine-readable form (later distributed on CD-ROM): the Thesaurus Linguae Graecae (TLG). This changed the nature of scholarly practice in the community of Classical Greek scholarship. For example, a scholar could now search for all of the uses of the word agape across the range of literature in seconds rather than (as had been traditional) spending ten years visiting obscure libraries in beautiful locations. Given the collective concern today about carbon footprints, perhaps we should feel happy about this form of automation. However, there were some serious problems with this early electronic effort. For example, the TLG fixed in advance a series of decisions about the interpretation of several contestable terms, meaning that (much as with the written Odyssey, which froze a living oral tradition into a single text) we were left with a definitive snapshot of a moving tradition.1 More significantly, a Classics scholar pointed out some years later that his colleagues were not asking new questions: they were asking the same questions as always but just getting quicker responses. Even though the project was "computerized," was it really doing new things? A rose is a rose is a . . .
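
To make the contrast concrete, here is a minimal sketch, in Python, of the kind of whole-corpus concordance query that the TLG reduced from a decade of library visits to a few seconds. The corpus and its texts are invented stand-ins for illustration; this is not TLG data or its software.

    import re

    # Hypothetical stand-in corpus: the real TLG held the full canon of
    # Greek literature in machine-readable form.
    corpus = {
        "Work A (invented)": "love as agape, and agape again, in this invented line",
        "Work B (invented)": "an invented line with one use of agape",
        "Work C (invented)": "an invented line that never uses the word",
    }

    def concordance(texts, word):
        """Return {work: occurrence count} for every work containing `word`."""
        pattern = re.compile(rf"\b{re.escape(word)}\b", re.IGNORECASE)
        return {work: len(pattern.findall(text))
                for work, text in texts.items()
                if pattern.search(text)}

    print(concordance(corpus, "agape"))
    # -> {'Work A (invented)': 2, 'Work B (invented)': 1}

The instant answer is the point, and also the problem: the query is faster, but it is still the same old question.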

This question has deep historical and organizational roots. JoAnne Yates and others have argued, with reason, that modern computing (from punch-card technology onward) arose with the insurance industry.2 These firms were trying to cut their marginal costs so as to reach lower-middle-class and working-class clients; this expansion would in turn allow them to become some of the largest companies in the world by the early twentieth century. To do so, they needed to handle huge amounts of statistical data from which actuarial tables could be produced, permitting them (again, with minimal overhead) to quote a life insurance rate for any client. They made organizational changes, forming hierarchies of clerks and new divisions of labor that delegated routine mathematical tasks to the lower ranks, thus allowing the managerial and research branches to concentrate on decision-making. In turn, these organizational changes formed a strong market for new information technologies, including standardized ways of dealing with "sub-prime" clients and unusual risk profiles.3 (Likewise, many believe that Simula, one of the first object-oriented programming languages, was intended to reflect how actual information flows in organizations were changing.)
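
The underlying calculation was simple in form, even if the volume of cases was not. Here is a minimal sketch, with invented mortality rates rather than a real actuarial table: a one-year term premium is roughly the discounted expected payout plus an expense loading, precisely the kind of routine arithmetic that was delegated down the clerical hierarchy and, later, to punch-card machinery.

    # Illustrative only: invented mortality rates, not a real actuarial table.
    MORTALITY_Q = {30: 0.002, 40: 0.004, 50: 0.009, 60: 0.020}  # P(death within a year) at age x

    def quote_one_year_term(age, benefit, interest=0.03, loading=0.15):
        """Gross premium for a hypothetical one-year term policy."""
        q = MORTALITY_Q[age]
        net = benefit * q / (1 + interest)  # discounted expected claim
        return net * (1 + loading)          # expense and profit loading

    print(f"Age 40, $10,000 benefit: ${quote_one_year_term(40, 10_000):.2f}")
    # -> Age 40, $10,000 benefit: $44.66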

This is a persuasive story, but in many ways it is a kind of just-so story. Consider a counternarrative. Paul David wrote a thought-provoking article about the "productivity paradox" in computing.4 The paradox was that as computers were widely introduced from the 1970s on, measured productivity growth actually declined over a twenty-year period. Economists were troubled by this, to say the least. David compared this situation to the shift in factories, beginning in the 1880s and 1890s, from centrally driven steam power to more flexible, locally adaptable electric motors. In that instance, a similar productivity paradox played out over decades. David suggests that decades were required to "think" the new technologies: electric motors were at first used as very bad steam engines, just as computers are in fact very expensive and buggy typewriters. (In the 1970s, Bowker was part of a company that decided to deinstall its computers and go back to IBM Selectric typewriters, on just these grounds.)

So can we align these two stories, one about organizational change driving technology development and one about technology development driving organizational change? The response often given is that there is no single line of causation: practices and the division of labor will not in and of themselves revolutionize computing, nor vice versa. With respect to cyberscholarship, the issue of causality is especially complex. A very wide range of organizational developments in industry, commerce, and higher education has occasioned interrelated changes in computing. In addition, the development of new forms of infrastructure for computing, information, and knowledge management has pushed and pulled many forms of these interrelated changes. For instance, the adoption of Google as an ordinary part of most scholars' and technology developers' reference libraries has meant an often unconscious buy-in to the structures of Google, including its hierarchy of links, sponsored links, and opaque rankings of importance. At the same time, users can refine and reconfigure searches, ignore sponsored links, or treat Google as a kind of problematic encyclopedia: a good place to begin but never to end. The new opportunities thus created are far wider than the translation of any one organization into technology. Google has changed how many of us work. Simply put, we can, for pragmatic reasons, set aside those changes in the academic or research loci that have led to the new computing technology (even though this is an extremely interesting topic, particularly with respect to interdisciplinarity). We are then logically free to concentrate on the affordances that the new technology offers cyberscholarship as a means of transforming the nature of scholarship.

The two of us are involved, as information scientists, in methodological and ontological research that may allow us to understand the interplay of these various logics in a given event. Let us start with a composite story drawn from many cyberinfrastructure projects. Take the case of a domain scientist—say, one of the world's leading authorities on treefalls. The scientist knows what he really needs: a reliable relational database so that he can do all that cross-correlation he's been meaning to do, as well as a solid group connection via e-mail and telephone to other treefall experts. For group projects, such as measuring global treefall over the past fifty years, our scientist needs a shared nomenclature, an application for data sharing, and a set of standards so that the information can be compiled coherently. On the other hand, take the case of a computer scientist in a supercomputing center. She knows what she needs: to offer cutting-edge computer science technology to domain scientists, which in turn will revolutionize a particular field and spawn tools that other fields may also use in the future. So she gives the domain scientist billion-pixel scientific visualizations and "always-aware" videoconferencing that permits permanent links between distributed facilities. As often as not, productivity grinds to a halt. This is a form of productivity paradox indeed, encompassing not just new forms of articulation work (delicately solving real-time problems so that the longer arc of work may continue), changed routines, and new technologies but also a semantic problem that seems to recurse, reappearing at each new level, as soon as one gets close to it.
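
To give a flavor of the domain scientist's side of the bargain, here is a minimal sketch of the relational setup he is asking for. The schema, site names, and figures are all hypothetical; the "cross-correlation he's been meaning to do" is rendered here as an ordinary Pearson correlation between rainfall and treefall counts.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE treefalls (
        site TEXT, year INTEGER, falls INTEGER, rainfall_mm REAL)""")
    conn.executemany(
        "INSERT INTO treefalls VALUES (?, ?, ?, ?)",
        [("Site A", 1970, 12, 2600.0), ("Site A", 1971, 19, 3100.0),
         ("Site B", 1970, 7, 3500.0), ("Site B", 1971, 11, 4200.0)],
    )

    # Do wetter site-years see more treefalls?
    rows = conn.execute("SELECT rainfall_mm, falls FROM treefalls").fetchall()

    n = len(rows)
    mean_r = sum(r for r, _ in rows) / n
    mean_f = sum(f for _, f in rows) / n
    cov = sum((r - mean_r) * (f - mean_f) for r, f in rows)
    # Unnormalized spreads: the 1/n factors cancel in the ratio below.
    spread_r = sum((r - mean_r) ** 2 for r, _ in rows) ** 0.5
    spread_f = sum((f - mean_f) ** 2 for _, f in rows) ** 0.5
    print(f"Pearson r = {cov / (spread_r * spread_f):.2f}")  # -> Pearson r = -0.33

The arithmetic is the easy part. Fifty years of global data can land in such a table only if the community has first agreed on names, units, and formats: the shared nomenclature and standards noted above.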

While both the domain scientist and the computer scientist want to push forward and elaborate their own fields, neither really understands what it means to transform a field through cyberscholarship. The domain scientist is caught up in doing the same things more quickly—à la the TLG—and with an eye toward community cohesion. The computer scientist is interested in a seemingly more abstract problem: how to make tools that a broad swath of disciplines can use, pushing well beyond single-user or even single-community solutions toward sweeping infrastructural tools that benefit as many as possible.5 Who will translate between these two scientists?

Here is a role for intermediation, of just the sort that library and information scientists, and ethnographers, have always played. We need to train information scientists to act as honest brokers between these two (or more) scientific communities. We must understand enough of the domain science to be able to think changes in it both short and long: to take the point of view of emergency problem-solving as well as that of long-term tool development. Equally, we must understand enough of the more abstract computer science to imagine how those changes might operate. Finally, we must help to develop good pidgin languages that will enable genuine collaboration between the two scientific communities. Fortunately, one historical precedent confirms the possibility of this "re-intermediation" work. The participatory design community in Scandinavia grew out of the need to reconcile union and management desiderata in the implementation of new information technologies; this has led to the growth of a new academic community trained in just these negotiation skills.6

Crucially, this intermediation work is not just "about" negotiation. It is genuinely intellectually creative work that can envision the future, a generative meshing of a series of now-singular trajectories in the present. This is both a service role and a theoretical role (ontology dialogue between fields, for example) that is traditional to Library and Information Science scholarship and will be central to the new iSchool (information school) scholarship. Refusing both universal and parochial solutions, we have the chance to develop new methods and new vocabularies.

Notes
  1. Karen Ruhleder, "Rich and Lean Representations of Information for Knowledge Work: The Role of Computing Packages in the Work of Classical Scholars," ACM Transactions on Information Systems, vol. 12, no. 2 (1994), pp. 208–30.
  2. JoAnne Yates, Control through Communication: The Rise of System in American Management (Baltimore: Johns Hopkins University Press, 1989). See also William Aspray, ed., Computing before Computers (Ames: Iowa State University Press, 1990).
  3. Martin Lengwiler, "Double Standards: The History of Standardizing Humans in Modern Life Insurance," in Martha Lampland and Susan Leigh Star, eds., Standards and Their Stories: How Quantifying, Classifying, and Formalizing Practices Shape Everyday Life (Ithaca, N.Y.: Cornell University Press, 2009), pp. 95–121.
  4. Paul A. David, "The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox," American Economic Review, vol. 80, no. 2 (May 1990), pp. 355–61.
  5. Judith Weedman, "Informal and Formal Channels in Boundary-Spanning Communication," Journal of the American Society for Information Science, vol. 43, no. 3 (1992), pp. 257–67.
  6. See Scandinavian Journal of Information Systems, <http://www.e-sjis.org/journal/volumes/volume01/volume01.htm>.