As researchers in the early 21st century, we have become accustomed to exploiting technologies that enhance the scientific process.
We use them as the tools of the trade, as carpenters use hammers or chefs use electric mixers, to collect and analyse data, to organise our thoughts, to collaborate with colleagues and to disseminate our findings.
Technology liberates our cognitive activity and enhances our "humanness" by allowing us to delegate low-level tasks to computers, according to Clay Shirky, adjunct professor at New York University's interactive telecommunications programme and author of the recent book Cognitive Surplus: Creativity and Generosity in a Connected Age. The result for research is that we are now able to ask more complex questions, develop new theories and expand knowledge.
Indeed, web technology does help us to perform activities that were previously difficult, such as collaborating internationally and across disciplines, and it puts the world's knowledge at our fingertips. Yet, as we continue to integrate computers into the scientific process, there is a danger that we are placing too much faith in the machine. Technologies come with implicit design assumptions, evolved over time from the needs-based scaffolding put in place by an innovation's inventor.
Now, the technologies we apply - from statistical software to data scrapers to online social networks - shape our research practices and outcomes, and few of us are aware of what's under the bonnet.
For Kevin Kelly, futurist, author and founding editor of Wired magazine, the implications of this are revolutionary. He believes that the interests of Man and the machine are diverging, and that technology is the "Seventh Kingdom" of life on Earth, joining single-celled protists, archaea, bacteria, fungi, plants and animals as a "species" on the evolutionary genealogical tree.
There are lower forms of technology that humans use to augment the bodies that our genes build, such as clothing, telescopes and wheels; but it is the higher forms, such as computers, networked webs of communication and ultimately artificial intelligence, that form "a tangled economy of ideas and devices that support each other" and make the human mind dispensable. "There's a real sense in which the ideas underlying the phenotype of outward technology are no longer just the thoughts of humans," he proposed in a blog post in 2006.
Kelly argues that more advanced and intelligent technologies have auto-generated processes that are no longer in our control. They are creating a self-perpetuating evolution that builds technology upon itself and gives birth to a flourishing network of new platforms that human beings could not have imagined.
The internet that Shirky celebrates for its potential to liberate the human mind is itself a progressive entity: it replicates and extends itself without human intervention, and it troubleshoots and problem-solves behind the web interface, using computer-to-computer communication over the same pathways that we use to send messages to one another on Facebook. It does this at a rate that far exceeds our own technologically augmented interaction, generating solutions and, ultimately, removing us from the loop. Kelly's theory proposes that Man's interests have been made irrelevant to the machine; we are no longer in control.
Despite the science fiction overtones, this proposed technological kingdom raises real and concrete questions for researchers. As it gestates and metamorphoses, the logic imprinted in our tools becomes increasingly obscured by the heuristics that have evolved within the machine. These may no longer be relevant to our current questions, or may even be damaging to our needs. More importantly, without understanding the logic that drives a technology, we are no longer able to specify and extract accurately the variables that we are measuring, or to understand the results that emerge.
Take a common basic research practice: a search on Google. The technology produces results based on the algorithm developed by Sergey Brin and Larry Page, which ranks websites for "relevance" according to the number and importance of the links pointing to them. However, a thriving search-engine optimisation industry has challenged the independence of that algorithm, which must continually adapt and evolve to maintain its standards. The friction between Man and machine may reduce its suitability for purpose.
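To make that imprinted logic concrete, consider a minimal sketch, in Python, of the link-based ranking idea. This is a toy illustration of the principle behind PageRank, not Google's production algorithm; the example graph, the damping factor and the iteration count are illustrative assumptions.

    def pagerank(links, damping=0.85, iterations=50):
        # links maps each page to the list of pages it links out to
        pages = list(links)
        n = len(pages)
        rank = {page: 1.0 / n for page in pages}  # start every page with an equal score
        for _ in range(iterations):
            new_rank = {page: (1.0 - damping) / n for page in pages}
            for page, outlinks in links.items():
                targets = outlinks or pages  # a page with no links shares its rank with everyone
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share  # each link passes on a share of the linker's own rank
            rank = new_rank
        return rank

    # Pages A and B both link to C, so C ends up with the highest score.
    web = {"A": ["C"], "B": ["C"], "C": ["A"]}
    print(pagerank(web))

The point the sketch makes is that a page's score depends not only on how many links it receives but on the rank of the pages doing the linking: precisely the kind of embedded assumption that a researcher relying on the results may never see.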
It appears that researchers are aware of the potential shortcomings of technology for their work. A recent Research Information Network survey found that British scholars are not using Web 2.0 technologies as much as has been reported, citing concerns that the technology is "a waste of time", even "dangerous". Yet most of those surveyed did engage with at least one such technology.
Researchers' complicated relationship with computerised tools is at the heart of the British Library's Growing Knowledge: The Evolution of Research exhibition, which opens on 12 October. In addition to debates about relevance, ethics and open-science dissemination, visitors will have the opportunity to test technologies that are currently available and to preview those still in development. Crucially, their participation will form part of an evaluation exercise conducted by University College London's CIBER research group and the Joint Information Systems Committee, providing the library with the depth of information required to ascertain the usefulness of these technologies.
New technologies raise more questions for researchers than answers, and we must recognise that we do not fully understand the implications of what we have created. Kelly proposes that these technologies will establish an evolutionary branch of innovation, diversity and efficiency. We will have to wait, and test, and see.