Advancing Scholarship and Intellectual Productivity: An Interview with Clifford A. Lynch, Part 2


© 2006 Clifford A. Lynch and Brian L. Hawkins

EDUCAUSE Review, vol. 41, no. 3 (May/June 2006): 44–56.

Clifford A. Lynch, Executive Director of the Coalition for Networked Information (CNI), is the recipient of the 2005 EDUCAUSE Award for Leadership in Public Policy and Practice, sponsored by SunGard SCT, an EDUCAUSE Platinum Partner. Part 1 of this interview, conducted by EDUCAUSE President Brian L. Hawkins in December 2005, was published in the March/April 2006 issue of EDUCAUSE Review (http://www.educause.edu/er/erm06/erm0622.asp). Comments on this article can be sent to the authors at [email protected] and [email protected].

Hawkins: Clifford, you travel a great deal, and you see many cutting-edge projects both here and abroad. What key initiatives have you seen recently that you find provocative and exciting?

Lynch: I do travel a great deal in the United States, and I get to Canada reasonably often, and I see some things that are going on in Europe. CNI has benefited from a long-standing, close, and enormously fruitful collaboration with the U.K. Joint Information Systems Committee (JISC), including our biannual U.S.-U.K. joint conferences; as a consequence of that, I feel fairly well informed about U.K. developments. The JISC has an extensive record of leadership in networked information. More recently, we’ve been collaborating with the SURF Foundation in the Netherlands as well. But I have to admit that I see very little of what is happening in large parts of the world, including South America and much of Asia. So I would not pretend to have a well-informed global view. CNI is a somewhat international organization but far from a global one. Still, having said all that, I should add that one of the really wonderful things about my role at CNI is the window it gives me into what’s happening at many leading educational institutions in the United States and Canada. Our members have been enormously generous and gracious in allowing me to observe and in some cases even to directly contribute to the shaping of these developments.

Efforts that give us insight into how networked information will change our society over time are particularly fascinating to me. For example, tremendously exciting large-scale digitization projects are opening up cultural memory as embodied in our library special collections, our museums and archives, and the public-domain segment of our published literature to unprecedented levels of access and use. I firmly believe that this is going to change the way we think about our cultural record in deep and sometimes subtle ways. We need to understand this, to go beyond just making digitized materials available, to the construction of the right kind of systems that let society interact with these materials. So I’m always on the lookout for projects that help us to understand this.

Let me give you just one example. Not too long ago, I was fortunate to be able to hear Ian Wilson, the leader of the now-merged National Library of Canada and National Archives of Canada, give an eloquent and thought-provoking description of what happened when they digitized a lot of the materials—photos mainly, I think—documenting the Canadian government’s historic exploration of parts of the Canadian Arctic and of its encounters with some of the indigenous people—spoken of today as the “first peoples,” I believe—living there. The National Library and Archives put this material on the Internet. The events depicted are still on the margin of living memory, so people across Canada (and beyond) can look at this material on the Internet and start a conversation that begins to bridge a cultural gulf: “That unidentified village leader was my grandfather. Let me tell you about my grandfather.” I’ve heard similar anecdotes from other major libraries, such as the Library of Congress and the New York Public Library, that have made large historical image collections publicly available. That merchant, or that Confederate Army officer, anonymously portrayed in an old photograph becomes a point of departure for a new kind of history and cultural memory.

The whole question of how people interact with these cultural heritage documents and build memory, conversations, and reassessments around them is something that I find absolutely fascinating; we need to learn how to build systems to capture and facilitate and honor this kind of interaction on a very large scale.

Hawkins: In 2005, CNI spent a lot of time exploring institutional repositories. How are institutional repositories currently being deployed in academia? What are the key national policies and strategies that are shaping this deployment?

Lynch: This is an interesting and rapidly changing area. Institutional repositories, at least in the way that I think of them, are services deployed and supported at an institutional level to offer dissemination, management, stewardship, and, where appropriate, long-term preservation of both the intellectual work created by an institutional community and the records of the intellectual and cultural life of the institutional community. Now, other people hold narrower views of institutional repositories, which they see primarily as places to store and from which to disseminate the traditional published output of institutions: copies or preprints or postprints of material such as journal articles or books or other manuscripts produced by the faculty of an institution. Again, my personal view of institutional repositories is much broader and is driven in part by the implications of e-science and e-scholarship and by our growing ability to capture in digital form various aspects of campus life, ranging from performances and symposia to activities that have historically gone on in the classroom and are now at least partially represented and recorded in learning management systems.

In 2005, CNI conducted a major study with the JISC in the United Kingdom and the SURF Foundation in the Netherlands to try to understand the pattern of institutional repository deployment both nationally and internationally. We looked at thirteen countries in North America, Europe, and Australia. The results of that study are described in detail in two articles in the September 2005 issue of D-Lib Magazine. In some nations, we’re seeing a deliberate and systematic effort to ubiquitously deploy institutional repositories at higher education institutions. These efforts are at various levels of maturity. For example, the Netherlands and Germany have very high levels of deployment. The French seem to be pursuing a centralized national repository strategy, which is extremely interesting but quite different from the institutional efforts. In the United Kingdom, JISC and other higher education leaders clearly have an objective to set up a ubiquitous institutional infrastructure, but it still has a considerable way to go in deployment. Of course, in the United States, infrastructure in higher education typically emerges bottom-up, through the sum of many local choices and investments by colleges and universities that then seek to collaborate—often working through organizations like CNI, or EDUCAUSE, or the Association of Research Libraries, or Internet2—to leverage and coordinate those investments.

One of the problems we ran into with this study was the issue of who should answer questions about the state of deployment of institutional repositories in a given nation. This was not much of a problem in countries like the United Kingdom or the Netherlands, which have very centralized policy-making and funding. But in the United States, with its sprawling and diverse higher ed landscape, talking about a ubiquitous infrastructure of institutional repositories just doesn’t make a lot of sense. Only a limited sector within the U.S. higher ed landscape is going to care about institutional repositories, and we don’t yet fully understand where the boundaries of that sector lie. The best we could do in the United States was to survey the academic members of CNI. I’m confident that this group includes the vast majority of the early implementers of institutional repository strategies—and indeed includes most of the institutions that would thoughtfully evaluate the pros and cons of investing in such a strategy. Our findings showed that among the research universities that are CNI members, upwards of a third of them had institutional repository services deployed—perhaps only on a pilot basis, but something was deployed. Of the remainder of the CNI research universities, 80 percent or so had an active campus planning effort looking at the possible deployment of an institutional repository. There was very little implementation among liberal arts colleges as of early 2005. I suspect that most of these will get institutional repository services through various kinds of consortial or commercial approaches. Note that when I say “deployed,” I mean that the service is available at a given institution; that’s just the beginning. There’s huge variation in the extent and nature of the content populating these repositories, in the way they are being used from one institution to another, and we looked at that as well.

We’re seeing some evidence that the use of institutional repositories in the United States will be a bit different from the use in other countries, again tying back to the assignment of responsibility for managing data and for storing data sets that are produced as the result of scholarly activity. There are not a lot of data sets today in U.K. institutional repositories because the United Kingdom has national-level disciplinary repositories that were set up in the mid-1990s and that are now well established. In the United States, by contrast, considerably more data sets are parked in institutional repositories because the scholars have no place else to put them. Of course, the future trajectory of developments in the United States is going to be shaped by the National Science Foundation’s cyberinfrastructure program, the response to the National Science Board’s report on long-lived scientific data collections, and related developments. But at least today, most of the responsibility falls to higher education institutions.

When we talk about national policy with respect to institutional repositories, it’s important to recognize the depth and extent of the national policy debates that are taking place about open access to the scholarly literature and, closely related but really distinct, about whether public research funding should come with an obligation to make reports of the results and underlying data freely available to the public. These debates are happening throughout Europe, the United States, Canada, and many other nations. In the United States, there is a fairly soft mandate that “requests” that publications resulting from research funded by the National Institutes of Health be deposited into the PubMed Central database at the National Library of Medicine within a year of publication. There is also discussion of stronger mandates. In the United Kingdom, there is serious discussion of very strong deposit mandates, and some major private foundations (the Wellcome Trust, for example) are establishing open-access requirements as part of their grants. Things are still fluid in this area, and I don’t want to get into the subject in depth here, since it’s likely the specifics will change by the time this interview is published. I mention it simply because institutional repositories are a natural infrastructure (but not the only one—national-level disciplinary repositories, like PubMed Central, are clearly another alternative) into which such deposits of publications might be mandated, so they can be viewed as enablers for much broader initiatives.

I’m also hearing, particularly from the leaders of the big state universities in the United States, much interest in institutional repositories as part of a public engagement strategy, making visible the range of economic, intellectual, and cultural contributions that the institution makes to the state that helps to underwrite it. And of course, many institutions are interested in the role of institutional repositories as showcases for prospective students, funders, alumni, accreditors, and others.

Hawkins: What are your thoughts about digital rights management software and the role that higher education should play in using such software?

Lynch: Digital rights management—one of the great misnomers of our day. Really, it’s more than a misnomer: it’s a cynical hijacking of language that would do George Orwell proud. I think it’s erroneous to refer to technology that restricts the use of content in various settings, especially consumer settings, as digital rights management. It has nothing to do with managing rights; it has to do with enforcing restrictions, sometimes in complete violation of the rights that people have under law—or if not legal rights, then certainly socially recognized and accepted behaviors. Sometimes people have recast the acronym as digital restrictions management, which is at least more accurate. Definitions here are problematic and contentious. The classic entertainment-industry definition would emphasize downstream monitoring and controls on uses that are inextricably bound to a digital object, whereas I’d argue that fundamental access management technologies and technologies for rights documentation are important parts of the DRM technologies portfolio.

When you look at content-industry perpetual-control-style DRM, the first issue involved here is whether these technologies even work. Certainly they don’t have a good history of working very well for the purposes for which they’re intended, especially in the consumer sphere. They always seem to have holes. Further, to the extent that it works, DRM technology in a general-purpose computing setting essentially takes over control of your machine and hands it to someone else. I’m not sure that’s a desirable goal.

I would hate to see colleges and universities invest a lot of work in this area. I must admit that as a computer scientist, when I look at some of the attempts to build this kind of DRM technology, I’m reminded of the accounts I’ve read of the fascination that the design of the hydrogen bomb (they called it the “super” bomb) exercised over some physicists in the 1940s and 1950s. The “super” was so technically challenging, the solutions so technically elegant, that they felt that they had to try to build it, even though many of them harbored grave doubts about the policy and ethical aspects. I sense some of that same conflict among some computer scientists today as they look at the enormous challenges of DRM technology.

Having said all this, recognizing that higher education clearly needs to be investing in sophisticated access management, identity management, and security, and having noted that I don’t think institutions need to or should do a lot of investing in content-industry-style DRM technology, I would add that there is an allied set of technologies that I think are vital for institutions to be investing in. These are the technologies necessary to support what could fairly be described as a true digital rights management agenda (not the Orwellian digital rights management that is really about restricting what can be done with digital content). We need to be able to document and track the rights associated with digital objects, at large scale and across long periods of time. Who owns a document? What permissions do we have for it? Where do you go if you need to clear other permissions? Right now, we are struggling with what are called orphan works, which are in part the collateral damage of the Sonny Bono Copyright Term Extension Act. Literally hundreds of millions of photographs and books and poems and other items are technically under copyright; therefore, nobody is willing to make them digital or reuse them in creative or scholarly ways without getting permissions from the rights holders. Yet we have no idea who owns the copyright or how to find out. The items have no commercial value. Being able to tag a digital object with who created it and what permissions that person is giving the world—for example, whether the object may be freely reused for noncommercial or for research and educational purposes—would make a huge difference. The work that the Creative Commons is doing here is a wonderful example of innovation in this area. These rights documentation technologies can’t solve the retrospective lack of documentation, but they’ll be badly needed going forward, both for newer content and for representing the findings of costly research about orphan works.
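To make that idea a bit more concrete, here is a minimal, hypothetical sketch of what machine-actionable rights documentation might look like for a single digital object: a record that names the creator and rights holder, points to a Creative Commons license as the statement of permissions, and says where to go to clear anything further. The record structure, identifiers, and contact address are invented for illustration and are not drawn from any particular repository or metadata standard; only the Creative Commons license URL is a real license.

```python
# Hypothetical sketch of rights documentation attached to a digital object.
# The record layout, identifiers, and contact address are invented for
# illustration; the Creative Commons license URL is a real license.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RightsRecord:
    creator: str                                # who created the work
    rights_holder: str                          # who currently holds the copyright
    license_url: Optional[str] = None           # machine-readable permissions (e.g., a CC license)
    permissions_contact: Optional[str] = None   # where to go to clear further permissions

@dataclass
class DigitalObject:
    identifier: str
    title: str
    rights: RightsRecord

photo = DigitalObject(
    identifier="urn:example:photo:0042",
    title="Unidentified merchant, undated photograph",
    rights=RightsRecord(
        creator="Unknown photographer",
        rights_holder="Example University Library",
        # Attribution-NonCommercial license: noncommercial reuse is permitted
        # without further clearance.
        license_url="http://creativecommons.org/licenses/by-nc/2.5/",
        permissions_contact="rights@library.example.edu",
    ),
)

def cleared_for_noncommercial_reuse(obj: DigitalObject) -> bool:
    """Crude check: treat any Creative Commons license URL as granting at
    least noncommercial redistribution of the work."""
    url = obj.rights.license_url or ""
    return "creativecommons.org/licenses/" in url

print(cleared_for_noncommercial_reuse(photo))  # True
```

Even a sketch this small shows the point: once permissions travel with the object in machine-readable form, questions like "may this be reused for educational purposes?" can be answered at scale rather than one costly rights investigation at a time.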

Hawkins: Since Google’s announcement of the Google Library Project in December 2004, there has been a great deal of talk about mass digitization projects. What do you think about these projects and about the role of academia—as well as commercial enterprises—in pursuing such efforts?

Lynch: First, it’s very important not to equate massive digitization with the Google project. There are many efforts, with many sources of funding, conducting large-scale digitization. Google’s is and will be, assuming it goes to completion, one of the largest. Unquestionably, it is the one that has received the most publicity; it is also unusual in that it deals aggressively with copyrighted material, whereas most other projects work with out-of-copyright material. But in trying to understand the implications of all this, we need to recognize first that the Google Library Project is only one of a series of efforts. The Open Content Alliance and the Million Book Project are two others.

Such projects aren’t new; they’re simply happening now on a larger scale. I believe it was about 1970 when Michael Hart (a visionary in anyone’s book) founded Project Gutenberg to make out-of-copyright works publicly available; it’s hard to recognize today how astoundingly technically challenging this project was when it was launched. More than 17,000 books are currently offered in the Project Gutenberg catalog. Today, a number of nations have government programs under way. In the United States, IMLS (Institute of Museum and Library Services) funds substantial amounts of digitization of mostly out-of-copyright material. The Library of Congress has done some sizable digitization projects, and there are numerous other one-off projects (many underwritten by grant funding) at various research libraries. Note that massive digitization isn’t limited to print: there are projects in the area of video recordings, sound, and images, for example. Thus Google is just one big, admittedly very prominent example of this broader development—a development of extreme importance to higher education. The ability to comprehensively search and access these collections is going to change many, many aspects of scholarship and also the way in which the public interacts with this base of information.

A couple of other things should be noted about the Google Library Project. Google has entered into arrangements with five institutions: Oxford University, New York Public Library, the University of Michigan, Stanford University, and Harvard University. As I understand it, only two of these—Michigan and Stanford—are presently doing comprehensive digitization that encompasses in-copyright as well as out-of-copyright works. Only in the case of Michigan have the terms of the agreement with Google been made public on the Internet. I rather suspect that Michigan, a public institution, may have done so recognizing that it likely would have had to do so anyway under a Freedom of Information Act inquiry. But the leaders at Michigan—people like Paul Courant, Jim Hilton, and John Wilkin—have also been wonderfully articulate and open about the institutional thinking that went into their deal with Google and about what they hope will be the implications for their institution and for higher education more broadly.

There are many unanswered questions about what Google is doing with this project. Google’s stated goal is to index this corpus of material—not to digitize it and make it publicly available. The digitization is a byproduct of what is necessary to perform the indexing. In the defenses that Google has mounted in the copyright lawsuits filed by the Association of American Publishers, the Authors Guild, and others, Google has specifically stated that it is not going to make in-copyright works publicly available without the permission of the copyright holders. It is going to make available very small snippets of these works, along with pointers that will assist the searcher in purchasing a copy or in getting a copy from a library. The point is, Google is not talking about public access to the in-copyright works. I believe Google has said that it will provide some form of access to the out-of-copyright works online, but it’s not at all clear if you’ll be able to easily download entire works, or large numbers of such works, from Google. It’s interesting to note that the member organizations of the Open Content Alliance are having a very intense discussion about what they mean when they speak about content being open: is it open for massive downloading, for rehosting, or just for reading and small-scale, work-by-work downloading?

We have not yet absorbed the implications of the transition of entire literatures (as opposed to individual works) from print to digital form; we are still obsessed with re-creating the behaviors of the print era, the interactions between people and digital texts. Yes, the improvements in access and in the ability to locate specific texts or passages of interest are so massive that they are qualitative, not merely quantitative. But ultimately this is the same old thing: people reading text. What we’re just starting to realize is that the important and truly new development is not about people reading text but, rather, about computers processing text: doing various kinds of analyses, cross-referencing, drawing linkages, and—what Google has historically done publicly—indexing. We now have about fifty years of investment in text analysis and text mining. The intelligence community is still spending heavily on these technologies, and industry is getting very interested for lots of reasons. For example, I’m told that the pharmaceutical industry is very interested in computational mining of the biomedical literature base. This is an important part of what is at stake in these massive digitization programs. Are we going to be able simply to read the digitized works, or are we going to be able to compute on them at scale as well? (Presumably, Google will be able to compute on everything it digitizes, even the in-copyright works. Almost nobody seems to have figured this out yet! What an amazing and unique resource. It’s not clear what the academy broadly will be able to compute on.) The answer will make a big difference for the future of scholarship. This move to computation on text corpora is going to have vast implications that we haven’t even thought about yet—implications for copyright, implications for publishers, implications for research groups. In fact, it may represent the point of ultimate meltdown for copyright as we know it today.
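To illustrate the distinction between reading text and computing on text, here is a toy sketch: it builds an inverted index, the core data structure behind indexing of the kind Google performs at incomparably larger scale, over a tiny invented corpus, and then asks a simple co-occurrence question that no human reader would answer by paging through millions of volumes. The corpus, the document names, and the analysis are purely illustrative.

```python
# Toy illustration of computing on text rather than reading it: build an
# inverted index over a tiny invented corpus, then run a trivial analysis.
# Real indexing and text-mining systems work at vastly larger scale.

from collections import defaultdict
import re

corpus = {
    "doc1": "The merchant sailed north to trade furs with the village.",
    "doc2": "The village leader met the photographer near the northern coast.",
    "doc3": "Trade records mention both the merchant and the village leader.",
}

def tokenize(text):
    """Lowercase the text and split it into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

# Inverted index: term -> set of document identifiers containing that term.
index = defaultdict(set)
for doc_id, text in corpus.items():
    for term in tokenize(text):
        index[term].add(doc_id)

# A question answered by computation, not reading: which other terms appear
# in at least one document that also mentions "merchant"?
merchant_docs = index["merchant"]
co_occurring = sorted(
    term for term, docs in index.items()
    if term != "merchant" and docs & merchant_docs
)

print(sorted(index["village"]))  # ['doc1', 'doc2', 'doc3']
print(co_occurring[:8])          # a few terms that co-occur with 'merchant'
```

The interesting policy question the interview raises is precisely whether scholars will be allowed to run this kind of computation, and far richer analyses, across entire digitized literatures, or only to read the works one at a time.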

Hawkins: What do you see as the biggest challenges ahead in terms of the dream of networked information becoming a reality?

Lynch: I think the biggest and perhaps most intractable challenges involve copyright and intellectual property issues. But these issues are also the symptoms of something even deeper. We face a profound contradiction. We have used the tools of capitalism and commerce, the genius and power of marketplaces, and economic incentives fairly successfully over the last few centuries to encourage the creation of knowledge and art: these are part and parcel of the system of copyright we have built. Yet today, it is possible to make these materials so much more accessible, survivable, and usable—to honor our ideas about the power of education, the obligations and responsibilities of stewardship, about the value of universal access to all human knowledge. Indeed, if we can achieve these long-standing dreams, aren’t we obligated to do so, to the benefit of humanity? The main barriers aren’t technical or financial; the main problems involve coming to terms with the political, legal, and financial power of the rights holders. I think that many people today feel aspects of these conflicts and disconnects and are struggling to find the appropriate compromises and resolutions. And it’s hard to find these resolutions—or, indeed, even to have constructive discussions to explore possible ones—in an atmosphere that has been poisoned by litigious, greedy, and sometimes deceitful intellectual property extremists and the political and legal agendas that they have pursued in trying to retain and expand their economic and gatekeeping roles. We have made some tragically bad public policy decisions, which have made the way forward much harder. At the same time, some of the people focused on the opportunity and the vision are remarkably insensitive to the fact that they are talking about destroying the livelihoods of other people—not just large, faceless corporations trying to hold on to their historical monopolies and maximize revenue but a huge number of hardworking, often poorly compensated individuals who create wonderful intellectual and artistic treasures and many other people who are part of the overall economic matrix within which these creators function.

The history of the public’s engagement with the Internet since the mid-1990s can be usefully read as a history of collision and disconnect between the rapidly emerging technological ability to fulfill long-held dreams on the one hand and the entrenched and vested interests attempting to make sure that these dreams are not fulfilled—or, if you prefer, trying to protect their assets and their livelihoods and, indeed, leverage these to ever-greater profit—on the other. In the early twentieth century, people like H. G. Wells, with his World Brain, and documentalists such as Paul Otlet articulated a vision of a world in which technology would give everyone access to a comprehensive and universal library of knowledge. The excitement and sense of possibilities in the early days of mass access to the Internet—combined with boom economic times, a series of IT breakthroughs, and a certain amount of fin de siècle optimism—rekindled these dreams.

Imagine if we could take all of the music ever recorded, including all of the rare and impossible-to-find stuff, and put it up in a giant database that was freely available to anyone with a network connection. If you believe in the importance of music in human culture and human life, this is a tremendous vision. Actually, there was a pretty good attempt at doing just that. It was called Napster. This was clearly quite feasible on a technological level; it was not even very expensive or hard to build. But then all the established industries and the lawyers came along and said: “No, no, no. We can’t have this because it violates copyright, and copyright is carefully engineered to be an engine of creativity, an enabler and promoter of artistic and scientific and other creativity. We’d be insane to tear down a system like this that has served us so well.” And of course they have some reasonable and legitimate points; they shut down Napster, and many other systems, on fairly straightforward legal grounds. As a society, however, we failed to have an important public policy discussion about what goals a system like Napster would have advanced and what downsides it would have created. We failed to talk about whether existing law, in the transforming shadow of technology, continues to advance our goals as a society. I don’t want to get into that debate here, other than to highlight the disconnect between the dream and the reality and again to observe that as a society, we haven’t been very creative in thinking about how best to achieve the dreams or even about whether our dreams make sense. And finally, let’s get out of the realm of what some would characterize—trivialize?—as “mere” entertainment; let’s envision instead the potential construction of a massive, universally accessible canonical knowledge resource, a comprehensive corpus of scientific or medical texts and supporting images, data, and other materials, a knowledge resource with an absolutely compelling and unarguable justification. As a case in point, I think it would be a wonderful achievement to find a way to make the contents of our great research libraries available worldwide. I think this is worth doing. How does setting a goal like this change the public policy debate?

It’s intriguing to speculate about what might happen if such a huge public-knowledge resource were available, beyond the obvious impacts of having it accessible. Would the nature of authoring, of contributing new knowledge to the commons, change? Would there be less authoring in individual chunks that stand by themselves and more in contributions to massive encyclopedic representations of knowledge in an area? This picture gets enormously complex, and we certainly can’t explore all of the issues here. It’s also interesting to note that there have been many attempts to explore this, typically using only new materials (as opposed to most of the historic record of scholarship, which is so encumbered with copyright issues), at various different scales, ranging from open-access textbooks to Wikipedia. Creative cultural works would tend, I think, to stand alone and to be integral voices. But I also think that we would see a lot more interesting reuse, synthesis, and combining of material, particularly as we move beyond art and entertainment into areas like scientific scholarship. I want to be clear here that I’m thinking of something more coherent than simply digitizing all the books and journals in all of our libraries, for example—though this is also an important activity and an underpinning for synthetic resources.

Still, it seems fairly clear that under the current legal structures, we’re not going to build such a resource. I think there are some good arguments that the fundamental, underlying principles of copyright have been very successful for incenting the production of creative and scholarly works and the sharing of these works with the public. I would be cautious about throwing out these principles in pursuit of a vision of universal access to knowledge, though I think we need to have a careful public policy discussion about the alternatives and compromises here.

But in terms of the fundamental principles and goals of copyright, I think we’ve lost our way. We’ve forgotten why we set up the whole system of copyright, the underlying bargain—between creators and the public—that is expressed in the U.S. Constitution. We are at the point where the direct financial beneficiaries of this system are the ones who are redefining the purpose of the system. You can see this most clearly in copyright extension legislation. The current term limits for copyright are astounding. Today, in the United States, the copyright term is the life of the author plus seventy years. Not very long ago, the copyright term was a decade or two and might be renewable for another decade or two. This is a huge difference.

Copyright extension redefines the line of demarcation between where the term of copyright ends and where the public domain begins. This line used to move every year: each year, another year’s worth of material would come out of copyright and become part of the collective public domain, where it could be used and repurposed freely. But in 1998, the U.S. Congress passed the Sonny Bono Copyright Term Extension Act, which basically created a twenty-year moratorium on the entry of new material into the public domain as well as prospectively extending copyright terms for new works. It has been suggested that this action was the result of lobbying by a few corporations and individuals who had very profitable properties, like the early Mickey Mouse cartoons, about to fall into the public domain. What Congress did is profoundly inconsistent with the basic principles of copyright as a way of promoting progress in the sciences and useful arts. Rather than simply extending copyright terms for new works, which at least in theory might have incented the creation of more works by making these new works more valuable, Congress extended the terms of everything currently under copyright by twenty years. Put another way, except for a few works by government employees not subject to copyright, nothing will fall into the public domain between 1998 and 2019. The copyright line of demarcation remains frozen in 1923.

There’s a saying in Washington: “No matter how cynical you are, it’s impossible to keep up.” Nobody I have talked to seriously believes that Congress won’t extend the copyright term for another twenty or thirty years when this issue comes up again as we approach 2019. Nobody believes that Congress is going to permit the public domain to resume growth through systematic copyright expiration. The best hope is that we may see some legislative relief to help with the orphan works problem, which gets ever more severe as the term of copyright stretches toward infinity. Note that there are far less socially damaging ways that we could deal with the political need to award endless copyrights. For example, we could set up a ten-year term of copyright, renewable up to twenty times but with a requirement that a renewal has to be filed.

I don’t want to get too deep into the legal issues here; instead, let’s emphasize the implications for society and for higher education. As I look at what’s going on with Google, with the Open Content Alliance, with the Million Book Project, and with the work that the Library of Congress and the New York Public Library and other great libraries around the world are doing to digitize huge sets of images from their collections, it seems clear to me that very significant amounts of public-domain works—not only books but also magazines, newspapers, photos, and eventually sound recordings—will be made accessible. I believe that over the next decade or two, we are going to see steady progress toward making enormous amounts of our public-domain cultural heritage available worldwide for exploration and reuse. (We’ll also see public availability of some major in-copyright resources in cases where these copyrights have been given to cultural memory organizations.)

If we look at a future ten or fifteen years out, against the backdrop of these copyright developments, we can see that the mass digitization programs will continue to build up a staggering wealth of digital materials that are out of copyright and that students, scholars, and citizens of all kinds will be able to exhaustively understand and research and explore our intellectual and cultural record—up until about 1923. After that, the material is mostly locked up under copyright. This is going to be very disconcerting. We’ll have two cultural spheres, divided by this copyright line of demarcation. It’s interesting to me that the pre-1923 record is predominantly textual and that the more recent record is ever more carried in images, sound recordings, and video and film. One of our deepest challenges—as scholars, as educators, as a society—will be to figure out how to operate across these two parts of our collective cultural and intellectual memory, with their profoundly different characteristics and capabilities of access and use. We will have to explain to children why we seem to have two different pasts, arbitrarily divided in 1923.

Hawkins: Speaking of challenges . . . at this point in your career, what are your professional goals and interests, your current projects and passions?

Lynch: CNI takes up the vast, vast majority of my time and energy and is truly a compelling central theme and focus. But there are a few other things that are also important to me, such as my role as an adjunct faculty member at UC Berkeley’s School of Information; this has formed a very valuable complement to my work at CNI. I would like to be doing more writing than I am, particularly more writing that explores long-term, complex issues at length. I’ve been struggling to find time and energy to devote to a book that tries to build some bridges between what technology is doing in scholarship and higher education and the broader changes in society and public policy. I don’t have any interesting and unexpected or improbable hobbies to tell you about other than voracious reading. Perhaps I have been fortunate enough to find the perfect role for me, at least at this point in my life. It’s a huge opportunity, responsibility, and privilege to lead CNI. I believe our agenda is enormously important for higher education, for the future of scholarship, and for society as a whole.