Re-engineering pharmaceutical research, a scientist’s opinion
Interview with Aroon Hingorani, Professor of Genetic Epidemiology at University College London.
Could you speculate on what you think are the biggest causes of drug failure?
It is well recognised that the number one cause of drug failure is a lack of efficacy in the intended indication, accounting for about two thirds of late-stage clinical failures. This is more common than failure due to toxicity or unfavourable pharmacokinetic profiles. This lack of efficacy exposes the major difficulty in drug development, which is matching the right drug to the right target. The way drug development works is that a hypothesis is developed about a target and a disease, which is then tested pre-clinically in cells, tissues or animals. Drug development programmes fail because these models can be poor predictors of success: cells and tissues are isolated systems, not whole organisms, and animal models are not always representative. A second reason for failure is that there are many potential targets in the body, and these initial experiments are typically low throughput, with only a handful of targets examined at a time, so decisions have to be made about which potential targets to prioritise and carry forward. Thirdly, it is well known that the biological sciences have a high rate of false discovery, leading to the so-called reproducibility crisis. This arises, for example, from inferences being inappropriately drawn from experiments, reflecting a poor appreciation of statistical testing.
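The false-discovery problem can be made concrete with a back-of-the-envelope calculation. The numbers below are purely illustrative assumptions (not figures from the interview): a screen of 1,000 candidate targets of which 100 have a real effect, tested with an assay of 50% power at a 5% false-positive rate.

```python
# Hypothetical numbers, for illustration only.
n_targets = 1000
n_real = 100
power = 0.5    # chance the assay detects a real effect
alpha = 0.05   # chance a null target passes the test anyway

true_positives = n_real * power                  # 50 real hits
false_positives = (n_targets - n_real) * alpha   # 45 spurious hits
false_discovery_rate = false_positives / (true_positives + false_positives)
print(f"Expected share of 'discoveries' that are false: {false_discovery_rate:.0%}")
```

Even with a perfectly calibrated 5% test, nearly half of the apparent "hits" are false, because true effects are rare among the candidates screened; this is the arithmetic behind the reproducibility crisis mentioned above.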
What do you mean by ‘druggable genome’?
So to start with, the vast majority of medicines work on proteins in the body. A drug has to find the protein it has been designed to target, through which it then ameliorates the disease. These proteins are encoded in the genome by genes. We think the human genome has about 20,000 protein-coding genes, but not all of the proteins they make can be easily accessed or targeted by drugs. Traditional drugs are normally classed as either small-molecule drugs or larger biological drugs such as monoclonal antibodies. These classes can access a limited number of proteins, maybe about 4,500. Essentially, the 'druggable genome' refers to this subset of genes encoding proteins that drugs can target. Of course, the proteins produced by a targeted gene need to be useful (i.e. targeting them will ameliorate disease), and the other half of the problem is that we need to know which proteins these are. Within a population there is natural variation in people's DNA, some of which may have no functional impact, but some of which may alter protein function and drug action. This is different from personalised medicine as classically considered, in which an individual's DNA is assessed to see whether they will respond better to a certain therapy or are more likely to have side effects. The druggable genome approach, on the other hand, uses natural DNA variation in hundreds of thousands of people to understand which variants influence the expression or function of different proteins, and therefore which proteins are important in which diseases, given that proteins are the molecular targets of most medicines. Although the formal idea has been around since perhaps 2005, this approach has seen increasing use and interest in the last five years. Drug companies are starting to collaborate with academia and to use large population datasets to gain valuable insight.
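The logic of using natural DNA variation to implicate a protein in disease can be sketched with a toy simulation. Everything below is an invented illustration (the variant, protein, effect sizes and disease rates are all assumptions): a common variant lowers the level of a hypothetical protein, and higher levels of that protein raise disease risk. If variant carriers turn out to get less disease, the protein is implicated as a causal target worth drugging.

```python
import random
random.seed(0)

# Toy simulation, not any specific study: 30% of people carry a
# variant that lowers a hypothetical protein's level; disease risk
# rises with protein level.
n = 100_000
carriers = carriers_disease = noncarriers_disease = 0
for _ in range(n):
    has_variant = random.random() < 0.3
    protein = 10 - (2 if has_variant else 0) + random.gauss(0, 1)
    disease = random.random() < 0.01 * protein  # higher protein -> more disease
    if has_variant:
        carriers += 1
        carriers_disease += disease
    else:
        noncarriers_disease += disease

rate_carriers = carriers_disease / carriers
rate_noncarriers = noncarriers_disease / (n - carriers)
# Carriers, whose genetics lower the protein for life, show less
# disease -- evidence the protein is a worthwhile drug target.
print(rate_carriers < rate_noncarriers)
```

Because the variant is assigned at conception, this comparison is much less confounded by lifestyle and environment than an ordinary observational correlation between protein levels and disease.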
How would this approach weigh up compared to current drug development in terms of costs?
Currently about nine in ten drug programmes fail, and this failure has a knock-on consequence for drug pricing. A high proportion of the overall cost is what is called 'sunk cost', i.e. money that cannot be recovered from failed programmes. One reason drugs are priced as they are is the need to recoup these sunk costs. If we can improve success rates by more reliably identifying the correct therapeutic targets, this should reduce cost pressures in drug development.
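The pricing arithmetic here is simple to sketch. The cost figure below is an arbitrary illustrative unit, not a real estimate; only the roughly one-in-ten success rate comes from the interview.

```python
# Hypothetical figures for illustration: each development programme
# costs 200 (arbitrary units) and only 1 in 10 succeeds.
cost_per_programme = 200
success_rate = 0.10

# The one success must recoup its own cost plus the nine sunk costs.
cost_per_approved_drug = cost_per_programme / success_rate
print(cost_per_approved_drug)   # a 10x multiplier on programme cost

# Doubling the success rate halves the burden each approved drug carries.
print(cost_per_programme / (2 * success_rate))
```

This is why even a modest improvement in target selection compounds into a large reduction in the cost each successful drug has to recover.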
How do you think the EU institutions may be able to encourage better drug development processes?
A lot of the value of this paradigm of the druggable genome depends on having very large population and patient datasets with genome analyses and disease endpoints. There is a great opportunity in the EU for large national biobanks to come together with important datasets for this type of work, for example the UK Biobank and the Estonian Biobank. These are general-population cohorts followed up over time for the occurrence of disease, with additional data collected along the way. Such datasets can be particularly useful for conditions that are not well studied. There is quite a lot of potential for cross-EU working in this way, but we must be aware of how these data are used. As citizens donate blood samples and give consent for their data to be used, it is important that they then see some benefit from this contribution, for example in price reductions, widened access to new treatments, or possibly even some financial return to healthcare systems. We may need to think about a new model of drug development, with a more equal partnership between industry, academia and the healthcare systems serving patients and populations.