When the World Wide Web was created in 1989, its primary function was to share scientific knowledge; yet in the three decades since, researchers and scientists have continued to struggle to source open and reliable information for their work.
More importantly, in this age of emerging technologies, there is still no common solution for accessing peer-verified, cross-disciplinary research. It is time we used new technology to shake up the archaic processes that have kept scientific research in the dark ages. Academia is being buried beneath the sheer weight of research that exists, and the current process for reviewing literature is far from adequate.
What would this look like? And where would we start?
We would need to harness the current academic community – consisting of people contributing content, including researchers, reviewers, artificial intelligence (AI) trainers, coders and anyone interested in the development of scientific research.
A community that operates beyond the great big walls of publishing houses and closed academic groups would be free from incentive misalignment and a review system that is in hock to fashionable ideas and privileged academics.
This would include software developers building tools on top of the community software, R&D departments, research institutes and individual researchers – basically, anyone who can benefit from and contribute to the ecosystem.
The role of AI and machine learning
In 2016, over 2.2 million science and engineering articles were published, 46 per cent more than a decade earlier. However, the cumulative mass of information means that most of it will never be read or put to use. There is simply too much to be covered – and researchers are stuck reading papers that may not be relevant. With AI, there is no limit to the volume of knowledge that can be organised.
A neural network algorithm can be used to understand context and document similarity, semi-automating the search for relevant scientific literature – a painstaking manual process that is prone to error.
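The core idea can be sketched in a few lines. This is a minimal illustration only – it uses simple bag-of-words vectors and cosine similarity rather than a trained neural network, and the corpus, query and function names are invented for the example:

```python
import math
from collections import Counter

def vectorise(text):
    """Build a bag-of-words term-frequency vector from raw text."""
    words = [w.strip(".,;:!?()").lower() for w in text.split()]
    return Counter(w for w in words if w)

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def rank_papers(query_abstract, corpus):
    """Rank a corpus of (title, abstract) pairs by similarity to a query."""
    q = vectorise(query_abstract)
    scored = [(cosine_similarity(q, vectorise(abstract)), title)
              for title, abstract in corpus]
    return sorted(scored, reverse=True)

# Hypothetical three-paper corpus for illustration.
corpus = [
    ("Deep learning for protein folding",
     "We apply neural networks to predict protein structure from sequence."),
    ("A survey of battery chemistry",
     "Lithium ion cells and alternative chemistries for energy storage."),
    ("Transformers in drug discovery",
     "Neural network models rank candidate molecules for binding affinity."),
]

query = "Using neural networks to predict molecular structure"
for score, title in rank_papers(query, corpus):
    print(f"{score:.2f}  {title}")
```

A production system would replace the word-count vectors with learned embeddings, so that papers using different vocabulary for the same concept still score as similar – that is precisely the "context" a neural network adds.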
In short, the whole process will be streamlined. It may take several months to manually complete the task of finding, analysing and reporting on relevant research. Using AI, the very same process may take two days.
But why blockchain?
While the deployment of AI is a powerful antidote to the problems of information overload, it does not address the root cause of this state of affairs. That cause stems largely from the fact that researchers have their funding tied to the quantity of research they produce, rather than the quality of their output. 'Publish or perish' has become an accepted practice, but it comes at the cost of quality research.
While AI in academia may be able to help us cope with the vast and growing amount of scientific literature, it cannot solve the lack of good quality, reproducible research. To tackle the latter, we need to turn to another kind of technology: blockchain.
Blockchain has allowed the tokenisation of various markets and, in doing so, has reversed the centralisation of power in large corporate entities – and the academic publishing market could be no different. The same tokenisation model can be used to reward researchers both for producing quality research and for verifying it. In doing so, we could create a validated repository of scientific papers.
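To make the reward model concrete, here is a toy sketch of such a ledger. Everything in it is an assumption for illustration – the token amounts, the verification threshold, and the class and method names are invented, and a real system would add digital signatures, distributed consensus and fraud detection:

```python
import hashlib
import json

class ResearchLedger:
    """Toy hash-chained ledger: reviewers verify papers, and tokens are
    awarded for submitting, verifying, and having work verified.
    Illustrative only; all reward values are assumptions."""

    SUBMIT_REWARD = 1    # tokens for submitting a paper
    VERIFY_REWARD = 2    # tokens per accepted verification
    AUTHOR_REWARD = 10   # tokens to the author once the paper is verified
    THRESHOLD = 3        # independent verifications needed

    def __init__(self):
        self.chain = []          # hash-linked records, like blocks
        self.balances = {}       # researcher -> token balance
        self.verifications = {}  # paper_id -> set of verifier names
        self.authors = {}        # paper_id -> author

    def _append(self, record):
        # Link each record to the previous one by its hash.
        record["prev_hash"] = self.chain[-1]["hash"] if self.chain else "0" * 64
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.chain.append(record)

    def submit(self, author, paper_id):
        self.authors[paper_id] = author
        self.verifications[paper_id] = set()
        self.balances[author] = self.balances.get(author, 0) + self.SUBMIT_REWARD
        self._append({"type": "submit", "paper": paper_id, "author": author})

    def verify(self, reviewer, paper_id):
        """Record an independent verification; reward the reviewer, and
        the author once the threshold is reached."""
        seen = self.verifications[paper_id]
        if reviewer in seen or reviewer == self.authors[paper_id]:
            return  # no self-review, no double counting
        seen.add(reviewer)
        self.balances[reviewer] = self.balances.get(reviewer, 0) + self.VERIFY_REWARD
        self._append({"type": "verify", "paper": paper_id, "reviewer": reviewer})
        if len(seen) == self.THRESHOLD:
            self.balances[self.authors[paper_id]] += self.AUTHOR_REWARD

ledger = ResearchLedger()
ledger.submit("alice", "paper-001")
for reviewer in ["bob", "carol", "dave"]:
    ledger.verify(reviewer, "paper-001")
print(ledger.balances)
```

The key design point is that the hash chain makes the review history tamper-evident: altering any past submission or verification changes its hash and breaks every link after it, which is the same property a blockchain provides at scale.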
Giving academia back to the people
Having a wealth of scientific literature at your fingertips is only useful if researchers can separate the wheat from the chaff – if they can find relevant and, more importantly, credible papers. A brave new community for researchers would be a positive force for humanity: more research would mean better distribution of knowledge and a greater likelihood of solving humanity's problems, such as finding alternative power sources or tackling antibiotic resistance.