- How would you say clinical trials have evolved in the last 10 years ?
The critical need for all trial sponsors today, as it has always been, is access to information in real time. Should we proceed? Should we terminate? Could this subject be at risk from the combination of the IP and a concomitant medication? Could the trial be at risk from fraud at a particular site or from a randomization error? All too often, the answers come too late, when there is nothing left to do but abort; course correction is no longer an option. The site is closed, the subject is deceased, or the trial has run its course and billions of dollars have been wasted in an investment with no returns.
The traditional method of trial monitoring has relied more heavily on the human element than on the technical. In the last few years, however, we have made considerable progress as an industry in making clinical trials safer and more effective.
The push by industry and regulators to encourage Risk-Based Monitoring, the move towards adaptive trial designs, and the support from AI and machine learning in monitoring known risks and alerting us to unknown ones are all attempts to safeguard subjects while increasing the effectiveness of trials. We are better equipped than ever before to hold trial stakeholders accountable. It is far easier for a machine to identify fraud than for a human being monitoring over 10,000 CRFs in the span of a week. It is simpler for an algorithm to identify discrepancies in biomarker values across visits than for a monitor looking through individual visits one isolated datapoint at a time.
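The kind of cross-visit check described above can be sketched in a few lines. This is a minimal illustration only: the record fields, the relative-change threshold, and the function name are all hypothetical, not drawn from any particular EDC system or published RBM algorithm.

```python
# Illustrative sketch: flag implausible jumps in a biomarker between
# consecutive visits for each subject - the kind of cross-visit pattern
# that is tedious for a monitor reviewing one visit at a time.
# Field names and the 50% threshold are assumptions for the example.

def flag_biomarker_jumps(records, max_relative_change=0.5):
    """records: list of dicts with 'subject', 'visit' and 'value' keys.
    Returns (subject, visit) pairs where the value changed by more than
    max_relative_change relative to the subject's previous visit."""
    flagged = []
    last_value = {}  # most recent value seen per subject
    for r in sorted(records, key=lambda r: (r["subject"], r["visit"])):
        prev = last_value.get(r["subject"])
        if prev is not None and prev != 0:
            change = abs(r["value"] - prev) / abs(prev)
            if change > max_relative_change:
                flagged.append((r["subject"], r["visit"]))
        last_value[r["subject"]] = r["value"]
    return flagged

records = [
    {"subject": "S01", "visit": 1, "value": 100.0},
    {"subject": "S01", "visit": 2, "value": 104.0},  # plausible drift
    {"subject": "S01", "visit": 3, "value": 210.0},  # suspicious jump
    {"subject": "S02", "visit": 1, "value": 95.0},
    {"subject": "S02", "visit": 2, "value": 97.0},
]
print(flag_biomarker_jumps(records))  # [('S01', 3)]
```

A real system would of course use clinically meaningful, analyte-specific limits rather than a single relative threshold, but the point stands: a machine evaluates every subject's full visit history at once.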
A human’s USP is intelligence, while a machine’s is predictability. We are finally in an era of technology where we can use the two in synthesis to ensure that complex and lengthy data are accurate, subjects are safer, and scientific claims are more reliably confirmed or refuted. Given the level of automation we can now harness, it should no longer be acceptable for a drug to be denied a place in the market because its clinical trial was ineffectively conducted. If a drug is worthy of being made available to society, we no longer have any excuse as an industry to deprive the world of it. Everyone loses. Patients. Their families. Research. Innovation. Science. Society. We cannot afford more losses. We have everything we need now to start winning more.
- We certainly appear to have the tools on hand that we need for more successful trial outcomes. Would you say the industry has used RBM and adaptive trials to their fullest potential?
We are only just at the threshold. The oft-cited statistic of one success per ten trials is all too familiar, and far too many trials continue to fail. A large proportion of these failures can be attributed directly to clinical trial design, amendments to that design, or clinical trial conduct. There is a long way to go to incorporate these tools into our everyday processes and implement RBM effectively.
Still, things are looking up. The USFDA approved 46 novel drugs in 2017, the most in more than a decade. So we must be getting it right, one step at a time. However, we must also take steps as an industry to remove the ambiguity that enshrouds many of the newer concepts and keeps us from adopting them for the betterment of clinical trials.
Take Risk-Based Monitoring (RBM) for one. The scope of RBM is often misunderstood by sponsors and CROs to apply simply to site monitoring, rather than to the identification, assessment and monitoring of risks across the entire trial. Let me relate an incident from an actual sponsor’s experience. Their conclusion, after attempting ad hoc measures to reduce Source Data Verification (SDV) and to introduce remote site monitoring on a vaccine trial, was that Risk-Based Monitoring is better in theory than in practice and has no place in the real world. The risk identification and assessment exercise, which should always be the starting point, never even came into the picture! How could this attempt at RBM yield the expected results?
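To make the "starting point" concrete, a risk assessment exercise can be as simple as scoring each identified risk by likelihood and impact and ranking the results. The register entries and the 1-5 scales below are hypothetical examples, not a prescribed methodology.

```python
# Illustrative sketch of a trial-level risk assessment: score each
# identified risk as likelihood x impact (both on a 1-5 scale), then
# rank them so monitoring effort follows the highest scores.
# The register contents and scales are assumptions for the example.

def assess_risks(register):
    """register: list of (risk_name, likelihood 1-5, impact 1-5).
    Returns (risk_name, score) pairs sorted highest-score first."""
    scored = [(name, likelihood * impact) for name, likelihood, impact in register]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

register = [
    ("Randomization error at site", 2, 5),
    ("Missed safety lab review", 3, 4),
    ("Late data entry by site", 4, 2),
]
for name, score in assess_risks(register):
    print(f"{score:>2}  {name}")
```

Only once such a ranking exists does it make sense to decide where reduced SDV or remote monitoring is safe, which is precisely the step the sponsor in the anecdote skipped.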
Now let’s look at adaptive trials. More expertise and planning need to go into designing a protocol so that it allows adaptations within a trial without the need for protocol or operational amendments. Take, for example, a sponsor that required a dose change on an oncology trial. Because the study design did not allow for this from the beginning, a protocol amendment, IRB and other approvals, and database design amendments were all required before the dosage could be lowered for exceptionally weak patients. Dosing for these subjects had to be stopped until the new protocol was in place, which is far from desirable considering the stage and condition of these patients. Designing the protocol and the database to allow different doses to be captured would have let the trial continue smoothly, without undue delay or threats to patient safety. Designing adaptive trials takes careful thought and the ability to look ahead.
- What part do you see Data Management playing in this changing tapestry?
There is no doubt that the Data Management function has a unique vantage point in terms of layout and timing of data. Sites enter one visit at a time. Data Management can see all visits entered into EDC in one comprehensive listing.
Biostatistics sees all the data at once too, but by then the database is probably locked, the subject is gone and the site is closed. Most data can no longer be queried, clarified or amended at that point.
If you require identification, communication, clarification and correction of data issues in real time, your money would have to be on Data Management. That puts huge pressure on Data Management to get it right. They can’t be caught napping. In addition, risk assessment, identification and mitigation now become critical responsibilities of the function.
In nature, animals that live in packs or tribes often have one member on ‘guard duty’ to keep a lookout for risks to the pack. This member will usually climb higher than the other animals to keep an eye out in all directions. North, South, East, West, up and down. Why does the monkey need to climb higher than the other members of its tribe? For a vantage point. A holistic view. The bigger picture. The ability to alert the others in time for them to protect themselves from oncoming threats.
Data Management is very much that guardian monkey, responsible for the survival and success of its tribe. The tribe would never select an inexperienced monkey for the position. If the guardian monkey dozes or misses an oncoming threat, the tribe will suffer. Suffering in our industry may take the form of delays of up to years, data too inaccurate or illogical for analysis, trial costs way beyond allocated budgets, or rejection by a regulatory body, resulting in the unavailability of a new drug for patients who could have had a better quality of life.
RBM is a vast and interesting topic, and we could probably spend a couple of hours talking about it, but what RBM is NOT is just targeted SDV and remote site monitoring. RBM refers to the ongoing identification and monitoring of risks to the ENTIRE trial. And the function best poised to do this will always be the one with the best view from the top of the mountain.
- What are the implications of the EU’s General Data Protection Regulation (GDPR) on the clinical trial landscape and how do CROs such as yourself figure in this equation?
This is a pertinent question, given that we are days away from the deadline for implementation of GDPR. To put GDPR in perspective, the regulation effectively looks to strengthen the Data Protection Directive (DPD), which has been in existence since the ‘90s. It looks to enforce, more stringently, broad principles that have long existed and should in essence already form the fundamentals of the processes for handling personal data at any organization in the clinical trial industry.
However, there is a need to augment these processes, through clear definition of responsibilities for both controllers and processors of data, and clearly worded contractual arrangements for compliance as well as requirements for timely communication and reporting around breaches.
Although CROs such as ours are typically processors, the onus is on us to demonstrate the same accountability as a controller; any breach on our part would make us liable to be penalized as a controller, not merely a processor.
This is why we at PPCE have been working over the last year, internally as well as with our European clients and partners, to put the right structures in place: legally reviewed contracts, formalized communication channels and timelines for unintentional disclosure, and aligned Standard Operating Procedures for data privacy. These will give confidence in our privacy-by-design approach to the processing of clinical trial data when the regulation becomes enforceable on 25th May, 2018.
- Why did PPCE decide to be present at the 2nd GCT?
PPCE has a big role to play in the efficient conduct of trials and Europe is an exciting market. Its standards and international practices, on many counts, serve as benchmarks for the rest of the world.
Clinical trials are hugely expensive, more so in Europe than many other parts of the world.
At PPCE, we believe it is our joint responsibility as an industry to lower trial costs, and with them the cost of the drugs that reach the market. This is what gives us our edge: a truly risk-based approach to trial management, increased levels of efficiency, a common language, and advantageous conversion rates can together mean savings of hundreds of thousands of dollars per trial.
The conference is being held in Barcelona this year, and that is why we’re here: to meet Biotech, Pharma and Medical Device companies that share a common goal of optimizing the way data is analysed and clinical trials are conducted.