Not all science is good science. I realised this during my final year of university. I was performing a systematic review: a method of assessing all the scientific studies ever published related to a specific research question. The aim of my project was simple: find all the relevant studies, extract data from their methods and results, and analyse their findings to answer my research question. I’m now aware that was a gross over-simplification.
In about four months I had read the summaries of over 4,000 published studies to determine which were relevant, learned how to perform text mining so I could narrow down my dataset further, and reread the full texts of the studies that remained to check they were really what I was looking for. It was exactly as laborious as it sounds.
Eventually, when I was sure I had the studies I needed, I began to extract data from them. Where did the cells come from? What drug was tested and when was it given? How many times was the experiment repeated? My list of questions went on.
I remember looking up from the computer where I was inputting all this data and asking my supervisor, ‘What do I do when they don’t report half of this stuff?’
At the end of the project I drew a table of all the questions I had asked, and the answer from each study. Where a study didn’t report an answer to my question, I added a big red ‘X’. My table was full of big red ‘X’s. Presenting the results of my review, I felt like I had failed. But I hadn’t – the studies had.
After graduating I was keen to continue investigating the research, already drawn in by a desire to learn and to challenge what I was seeing. I had spent four years at university being taught good experimental design, data analysis and reporting. I knew good research was happening, but I realised that in the real world it was diluted by the mountains of terrible science published every day.
The problem lies with scientific reporting. The pressure to publish quickly, combined with poor experimental design, leads to poor-quality science. The problems I’d found were only the tip of the iceberg. I had been taught how to do all this at university, and even I wasn’t an expert. How were professors still getting it wrong?
Through my peers, I learned about the work that had been done in developing reporting and quality checklists for research on animals. Meanwhile, I learned that in vitro research – despite being prized by scientists and the public alike as the future alternative to animal research – had no standard reporting criteria.
Currently, my research is driven by finding new ways to assess the quality of in vitro research using systematic review. Many of the systematic review methods used to assess clinical trials or animal studies do not apply to in vitro research, so I’m having to assess not only the quality of the research but also my own methods.
For a while, I worried I was alone – a one-woman army standing against decades of scientific convention. I remember presenting my research at a scientific improvement conference in Berlin. Those big red ‘X’s still haunted me, until I realised I wasn’t alone. Scientists from all over the world were saying the same thing: research is in dire need of improvement.
Yet, in that group, I was still only one of a few scientists trying to improve in vitro research. People began asking me what I did, and what I thought. The experience was surreal: in under two years, I’d gone from knowing nothing about systematic review to sharing my opinion with other scientists and being listened to. It was – and still is – both exciting and terrifying. I’m still not an expert. I feel like there are so many people who know more than I do.
Nonetheless, I have hope. Whether or not I really am an expert or good at what I do, I know that I have a passion for good quality science. My ideal future would be one where there is a positive change in how in vitro research is reported. It may not be my findings or my voice that brings this problem to an end, but I hope that through continuing to identify and question bad research I can pave the way forward.