
Rho Knows Clinical Research Services

Big Data: The New Bacon

Posted by Brook White on Wed, Nov 16, 2016 @ 04:10 PM

David Hall, Ph.D., Senior Research Scientist, is a bioinformatician with expertise in the development of algorithms, software tools, and data systems for the management and analysis of large biological data sets for biotechnology and biomedical research applications. He joined Rho in June 2014 and currently oversees capabilities development in the areas of bioinformatics and big biomedical data. He holds a B.S. in Computer Science from Wake Forest University and a Ph.D. in Genetics with an emphasis in Computational Biology from the University of Georgia.

Data is the new bacon, as the saying goes. And Big Data is all the rage as people in the business world realize that you can make a lot of money by finding patterns in data that allow you to target marketing to the most likely buyers. Big Data and a type of artificial intelligence called machine learning are closely connected. Machine learning involves teaching a computer to make predictions by training it to find and exploit patterns in Big Data. Whenever you see a computer make predictions—from predicting how much a home is worth to predicting the best time to buy an airline ticket to predicting which movies you will like—Big Data and machine learning are probably behind it.
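To make the pattern-finding idea concrete, here is a toy sketch in Python using scikit-learn. The "training data" of past home sales and the model choice are entirely made up for illustration; real home-price predictors use far richer data and more sophisticated models.

```python
# A toy illustration (not any specific product's model) of the idea above:
# "train" a model on past home sales, then predict the price of a new home.
# All numbers are made up.
from sklearn.linear_model import LinearRegression

# Each row is a past sale: [square_feet, bedrooms]; prices in dollars
X_train = [[1400, 3], [2000, 4], [900, 2], [1700, 3], [2400, 4]]
y_train = [230_000, 340_000, 150_000, 280_000, 410_000]

model = LinearRegression()
model.fit(X_train, y_train)      # learn the pattern from the data

new_home = [[1600, 3]]
print(model.predict(new_home))   # predicted price for an unseen home
```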

However, Big Data and machine learning are nothing new to people in the sciences. We have been collecting big datasets and looking for patterns for decades. Most people in the biomedical sciences consider the Big Data era to have started in the early to mid-1990s as various genome sequencing projects ramped up. The Human Genome Project wrapped up in 2003, took more than 10 years, and cost somewhere north of $500 million. And that was to sequence just one genome. A few years later, the 1000 Genomes Project started, with the goal of characterizing genetic differences across 1,000 diverse individuals so that we could predict, among other things, who is susceptible to various diseases. This effort was partially successful, but we learned that 1,000 genomes is not enough.

The cost to sequence a human genome has fallen to around $1,000, and the ambition and scale of big biomedical data have increased proportionately. Researchers in the UK are undertaking a project to sequence the genomes of 100,000 individuals. In the US, the Precision Medicine Initiative will sequence 1 million individuals. Combining this data with detailed clinical and health data will allow machine learning and other techniques to more accurately predict a wider range of disease susceptibilities and responses to treatments. Private companies are undertaking their own big genomic projects and are even sequencing the "microbiomes" of research participants to see what role good and bad microbes play in health.

Much as Moore's law predicted the vast increase in computing power, the amount of biomedical data we can collect is on a similar trajectory. Genomics data combined with electronic medical records, data from wearables and mobile apps, and environmental data will one day shroud each individual in a data cloud. In the not too distant future, medicine may involve feeding a patient's data cloud to an artificial intelligence that has learned to make diagnoses and recommendations by looking through millions of other personal data clouds. It seems hard to conceive, but this is the trajectory of precision medicine. Technology has a way of sneaking up on us, and the pace of change keeps getting faster. Managing and analyzing all of this data will be very hard; I'll cover that in a future post.

View "Visualizing Multivariate Data" Video

Scientists Search for Answers as Antibiotics become Obsolete

Posted by Brook White on Thu, Aug 18, 2016 @ 01:41 PM

Carrie Furr, PhD, RAC, is a Senior Director of Operations at Rho. Day-to-day, Carrie supports pharmaceutical sponsors of clinical trials, leading an integrated product development program consisting of clinical, preclinical, chemistry, manufacturing and controls, and regulatory components. Before diving into the clinical trials industry, Carrie spent 7 years earning her PhD in biochemistry, bacteriology, and bacteriophage biology at Texas A&M University. A biologist by training, Carrie focused her dissertation and doctoral research on how a bacteriophage (phage) protein causes bacteria to die. In theory, phage proteins can be used in phage therapy to combat any bacteria-based disease.

As anyone who has ever had a bacterial infection or seen an end-of-the-world survival movie knows, antibiotics are an important tool in a doctor's arsenal. Since the discovery of penicillin in 1928, antibiotics have been extremely effective at treating and preventing a variety of infections. But imagine life without them: doctors would be unable to prevent infections after surgery, your child's minor cut could morph into a major infection, and there wouldn't even be a treatment for pink eye.

With the increasingly widespread use of antibiotics, however, microbes have mutated and learned to resist the drugs. Penicillin was once extremely effective against most strains of bacteria, but now it is used far less frequently because many strains have built up resistance. In many cases, antibiotics are becoming obsolete.

What can be done? Without antibiotics, a huge range of diseases, from pneumonia to strep throat to syphilis, would become much more difficult, if not impossible, to treat. While there are a few things you can do to help keep antibiotic resistance from increasing further, such as taking all prescribed antibiotics even if you feel better and only taking antibiotics when you truly have an infection, it isn't enough. Scientists need to work on other solutions.

Bacteriophages, or "phages" for short, may be able to help. Certain phage proteins cause bacteria to die. Researchers are working to determine whether these proteins or whole phages can be safely converted into therapies to combat bacteria-based diseases. Developing an alternative to antibiotics could have huge implications for the treatment of bacterial infections around the world. Additionally, phage therapy holds the promise of providing a dynamic solution to the dynamic problem of antibiotic resistance.

While academics and pharmaceutical companies work on the research, people like me are working on smoothing the road to U.S. Food and Drug Administration (FDA) approval for these novel therapies. As a postdoctoral researcher, I focused on how bacteriophages can kill certain bacteria. As a senior regulatory scientist, I work on the practical steps required to bring pharmaceutical products, including phage therapy, into the market to treat patients. Currently, the path to develop phage therapy through to regulatory approval is unclear.

In the face of antibiotic resistance in the U.S. and around the world, it is important to understand our alternatives and what must be done to advance alternative treatments.

Webinar: Tips for a Smooth NDA Submission

Top Trends in Drug Development from This Year’s DIA Annual Meeting

Posted by Brook White on Tue, Jul 12, 2016 @ 02:41 PM

During the last week of June, the Drug Information Association held its 52nd Annual Meeting in Philadelphia.  As one of the largest conferences in our industry, DIA covers a wide range of topics over the entire spectrum of drug development, and it would be nearly impossible to provide a comprehensive accounting of the meeting.  However, I will try to share the most notable trends and themes from the meeting.

Big Data

Big data was possibly the hottest topic this year.  Not only did FDA Commissioner Dr. Robert Califf participate in a panel session on the topic, but greater use of existing data in EMR/EHR systems was also one of the top four priorities he listed.  At DIA, people were talking about big data from a handful of sources: electronic medical records, data from wearables, data from social and digital media, and genomics (and other -omics) data.  FDA is taking a lead role in the use of big data and real world evidence through initiatives like Sentinel, which enhances the FDA's ability to proactively monitor the safety of medical products on the market, and precisionFDA, a community platform for next generation sequencing (NGS) assay evaluation and regulatory science exploration.  Big Data is an idea that has been talked about for some time, but based on this year's meeting it is clear we've moved beyond idea to reality.  For anyone wondering how soon we might see full genomic sequencing of all patients in a clinical trial, consider that the cost is now on par with a chest x-ray, Genentech has sequenced 30,000 genomes to date, and AstraZeneca recently entered into a partnership with Human Longevity to sequence 500,000 genomes over the next 10 years.

Patient Centricity is Still Big

Patient centricity was the theme of last year's meeting and continued to play a central role in this year's meeting.  But while last year was big on ideas and optimism, this year saw early adopters sharing lessons learned from programs already up and running.  Patients and patient advocacy groups made up a noticeable group of attendees and were outspoken during sessions.  Several companies, including Bristol Myers Squibb (BMS) and GlaxoSmithKline (GSK), shared specific programs and tactics they've been using to move to a more patient-focused research model.  Examples include creating frameworks that allow a greater number of employees to engage with patients and the public about the work they are doing and developing minimum standards for patient engagement that reflect geographic and cultural differences.  From a regulatory perspective, patient centricity made Dr. Califf's list of his top four priorities.

7 Tips to Use Social and Digital Media to Recruit and Engage with Clinical  Trial Patients

The Swinging Pendulum on Outsourcing

For many years now, it seemed the trend was toward more and more outsourcing, with innovator companies keeping fewer and fewer activities in house.  Several of this year's outsourcing sessions hinted that the pendulum may be starting to swing back.  From internal frustrations with outsourcing groups, to dissatisfaction with vendors in terms of both quality and performance, to the failure of preferred provider relationships to deliver on expected savings and improvements, the talk from a number of pharmaceutical and biotech companies is that they are keeping more work in-house.  That said, there certainly is not agreement among sponsors or vendors/suppliers on this issue.  Many pointed to issues at sponsor companies, such as refusal to hear feedback from CROs on the feasibility of their budgets, timelines, or study designs, as well as disagreement between outsourcing personnel and study team personnel about the providers being selected.

Drug Development as a Calling

DIA opened with keynote speaker Dr. Larry Brilliant, a physician and epidemiologist who participated in the World Health Organization's (WHO) successful smallpox eradication program.  Dr. Brilliant talked through a number of health research and outreach efforts that have dramatically changed the world for the better, including the smallpox and polio eradication programs, the development of electrolyte solutions to treat cholera and diarrhea, and more recently the efforts of the Carter Center to eradicate guinea worm.  He brought into sharp focus the idea that what each of us in the pharmaceutical industry does has the potential to change the world for the better.  The idea of drug development as a calling was furthered by Dr. Califf's call for all of us to donate the information in our electronic health records for the betterment of research and medicine—a reminder that we should be willing to open ourselves up in the same way that we ask patients and research participants to do.  Finally, several of the patient-centricity speakers focused on the value of identifying employees who, in addition to being part of the research and development process, are themselves patients or caregivers in their private lives.  These people are uniquely qualified to help us better understand patients' needs and experiences.

Greater Engagement by FDA

Finally, it was interesting to me to see the level of participation by the FDA in this year’s meeting. While they always send some presenters and a larger number come just to attend, this year did seem different. Dr. Califf presented in multiple sessions and was open and engaging during Q&A sessions. Additionally, numerous sessions included speakers and panelists from the FDA providing valuable insight into their point of view.

Did you attend DIA this year? If so, let me know what you thought.

5 Tips for Conducting Feasibility for a New Clinical Trial

Posted by Brook White on Wed, Apr 20, 2016 @ 02:12 PM

Meagan Vaughn, Ph.D., Research Scientist, designs and implements clinical trial feasibility assessments.  She has over 10 years of experience in scientific writing and editing, has authored and contributed to numerous peer-reviewed publications, and serves as a reviewer for several medical and public health journals.

What does the word “feasibility” mean to you? It may seem like a simple question, but I have found that “feasibility” has many interpretations within the clinical research industry. When we work with a sponsor to conduct feasibility for clinical trial planning, our first task is to figure out what their definition of feasibility is, and more specifically, what questions they are trying to answer.

Most often, the question is "How many sites will we need to meet our enrollment target and timelines for this study?" Of course, this is an important question, but asking it first can put the cart before the horse. The foundation of a successful study is a protocol that is both scientifically sound and viable from an operational perspective. Assuming the former has been sufficiently vetted, the first goal of conducting feasibility should be to test the assumptions of the latter. This is the time to think through the logistics for the site and the subject, and to consider the protocol requirements that might affect factors like enrollment, retention, and data quality. Use this exercise to formulate questions that will stimulate a dialogue around these issues with potential investigators. For this type of early stage feasibility, you also need to think about the right tool to gather the information needed, and a web-based survey probably isn't going to cut it if you are looking for thoughtful feedback. This is the time to leverage relationships with investigators and coordinators to have some focused conversations, using your questions as a guide for the discussion. More often than not, they will be able to quickly identify potential showstoppers in your inclusion/exclusion criteria, as well as assessments or design elements likely to result in frequent protocol deviations.

Once the feasibility of the protocol has been thoroughly evaluated, the next step is to examine the feasibility of the trial given the constraints of timelines and resources. To this end, a web-based survey can be a quick way to gather data to inform enrollment projections and come up with a list of candidate sites. Below are a few points to consider when crafting a feasibility questionnaire:

  • Asking the right questions is just as important as not asking unnecessary questions.   Stay focused on the key pieces of information needed.  If you aren’t going to analyze it, don’t ask the question.
  • A poorly written question will result in unreliable data.   Consider your audience and have several people review and test the survey before deploying.   For example, consider the question “How long does study startup typically take at your site?”  Without defining the starting point (receipt of the protocol, site selected, or receipt of the regulatory packet), the answers may vary widely.
  • Judicious use of skip logic and display logic in an electronic questionnaire can reduce the burden on respondents and provide cleaner data to the person on the receiving end.  For example, you can use skip or display logic to drill down into specific topics that may only be relevant for some sites, such as regulatory history for sites that have had an inspection; a minimal sketch of this kind of logic follows this list.
  • Engage sites in the feasibility process by asking questions requiring their input (e.g., any question that starts with “In your experience…”).
  • Use the right tool to collect information.  At Rho, we use Qualtrics as a survey platform. This platform provides many advantages for conducting feasibility, including:
    • Responsive surveys (skip logic, display logic, survey branching),
    • Piped text (automatically fills in certain fields for sites that have responded to previous surveys), and
    • Real-time reports that can be published to the web for sponsor review
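To illustrate the skip and display logic mentioned above, here is a generic sketch in Python. It is not Qualtrics' actual configuration format; the question IDs and display rules are hypothetical, and the point is simply that respondents only see the follow-up questions that apply to them.

```python
# A generic sketch (not Qualtrics' actual configuration format) of how
# display logic keeps follow-up questions hidden unless they apply.
# Question IDs and rules are hypothetical.
questions = {
    "had_fda_inspection": "Has your site had an FDA inspection in the last 3 years?",
    "inspection_outcome": "What was the outcome of that inspection (NAI/VAI/OAI)?",
    "startup_time_weeks": "Typical startup time from receipt of the final protocol (weeks)?",
}

# A question is shown only if its display rule evaluates to True
display_rules = {
    "inspection_outcome": lambda answers: answers.get("had_fda_inspection") == "Yes",
}

def visible_questions(answers):
    """Return only the question IDs this respondent should see."""
    return [qid for qid in questions
            if display_rules.get(qid, lambda a: True)(answers)]

print(visible_questions({"had_fda_inspection": "No"}))   # follow-up is skipped
print(visible_questions({"had_fda_inspection": "Yes"}))  # follow-up is shown
```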

One strategy that we have found to be successful in helping sponsors meet timelines for study startup is to start feasibility and site identification activities under a consulting agreement during the RFP/bid-defense/contracting process. Once a CRO partner has been selected, the team can hit the ground running with site startup activities. This type of early feasibility effort can also facilitate protocol development by gathering site feedback on key operational parameters.

The take home message for feasibility? Spend a little time thinking critically about the key pieces of information that you need that are unique to your project and goals. This will help to hone your feasibility strategy so that you can ask the right questions using the most effective approach.

Protocol Design and Development Webinar: Follow-up Q&A

Posted by Brook White on Thu, Feb 04, 2016 @ 11:07 AM

Thank you to everyone who attended our recent webinar on protocol design and development.  During the webinar, we weren't able to get to all of the questions.  Below, Dr. Shoemaker and Dr. Kesler have answered the remainder of the questions.

If you didn't have an opportunity to attend the webinar, it is now available on demand.  

Watch Webinar

Why do you think the adoption of the PRM has been so long in the coming?

The pharmaceutical industry is notoriously slow to adopt novel techniques due to its siloed structure and because the current protocol development process has been in place for decades. Not until protocol authors understand the concept of CDISC and the importance of generating consistent data across their program will their methods change. That will only happen if protocol authors are responsible for writing marketing applications.

What are the major consequences of redundancy in the protocol?

Inefficiency, due to the need for redundant editing to ensure all instances are replaced, and ultimately the cost of amendments if the redundant information is not edited correctly.

How long does it take to properly develop a clinical protocol?

The time needed to develop a novel protocol for a new indication with a new molecular entity depends on coordinating the time of all the people whose input is required. Depending upon people's priorities and availability, it typically takes between one and two months.

If I am developing my drug as an add-on to an approved drug, why not conduct Phase I in patients (not healthy volunteers) taking stable doses of the approved drug? I want to know the safety of a range of doses of study drug when so administered. What are the pros and cons?

Pros are that you save time and money with this approach. Cons are that you won't know if a safety event is due to your product, the approved product, or the combination. You also won't know whether the patients' compromised condition contributed in any way to the safety event.

It is said that no amount of good monitoring can fix a bad protocol. Do you have an example of such a situation and what should the monitoring team look out for to avoid such a situation?

By the time the monitoring team starts reviewing the data at the site or in house, it is too late; the die has already been cast by the design of the clinical study. The monitors should endeavor to participate in protocol design to help avoid mistakes at this stage. Otherwise, they can only make recommendations to amend the protocol if they see that the data being generated are not answering the intended objectives of the study.

Is it advisable to write into the protocol the duration of acceptable periods during which study drug may be suspended without automatically discontinuing the subject?

If your study drug is planned to be titrated within subject (e.g., some hypertension drugs), then it is advisable to have not only a duration of suspension but also dose escalation/de-escalation processes. For other situations where study drug is being suspended due to concomitant events, like hospitalization, it is also advisable to have windows for the duration of acceptable suspension. If you don't have expected reasons for suspension and don't expect it to happen often, then it is probably a level of detail you don't need.

Do you have any template?

Yes, we have an internal protocol template that we provide to all our clients developing protocols.

Please remind us what data we need to provide for you to determine a sample size for a clinical trial.

It depends on the type of primary outcome. If it is dichotomous, you need to provide the expected percent responding in both the active and control arms. If it is continuous, you'll need to provide the expected mean and variance (or standard deviation) for each group, or the expected difference in means. Other types of outcomes (e.g., survival, multiple categories) require additional information. All studies need a Type I error level (alpha) specified as well as the desired power of the study. Estimates of the rate of dropout are also needed for most studies.
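As a rough illustration of the inputs described above, here is a minimal Python sketch of the standard normal-approximation sample size formulas for two-group comparisons, with an adjustment for dropout. The example numbers (response rates, alpha, power, dropout rate) are hypothetical, and a statistician would refine these estimates for a real study.

```python
# A minimal sketch of two-group sample size calculations using standard
# normal-approximation formulas. All inputs below are hypothetical examples.
import math
from scipy.stats import norm

def n_per_group_continuous(mean_diff, sd, alpha=0.05, power=0.80):
    """Subjects per arm for comparing two means (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sd / mean_diff) ** 2)

def n_per_group_proportions(p1, p2, alpha=0.05, power=0.80):
    """Subjects per arm for comparing two response rates (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

def inflate_for_dropout(n, dropout_rate):
    """Increase enrollment to offset expected dropout."""
    return math.ceil(n / (1 - dropout_rate))

# Example: 60% vs. 40% responders, 5% alpha, 80% power, 10% dropout
n = n_per_group_proportions(0.60, 0.40)
print(n, inflate_for_dropout(n, 0.10))
```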

We will be conducting another webinar on Thursday March 17th at 1 PM ET on Clinical Research Statistics for Non-statisticians.  We will go into more depth about sample size calculations during that webinar.

Register Now

Is this protocol process impacted if/when combo solutions are involved? Combo is defined as drug/sensor based, or subcutaneous drug-eluting solutions.

Not really. Obviously, you have to understand the combination product and its properties to the same extent that you understand your drug from a nonclinical and manufacturing perspective.

When is unblinded medical review warranted in Phase 2 studies?

There is a new guidance from FDA, as of December 2015, advocating the use of a Safety Assessment Committee to review unblinded safety data in the context of the totality of data on your product, and this should be implemented with the advent of controlled studies in Phase 2.

When can a multiple repeat-dose safety study be done with parallel dosing of multiple dose groups?

Never. Parallel dosing of multiple dose groups can be done for efficacy comparisons after safety has been demonstrated.

What is the proper endpoint for an oncology trial now? Is it overall response, tumor shrinkage, survival, or quality of life?

It depends on the type of tumor being studied, but overall response is the preferred SURROGATE clinical endpoint in most cases for accelerated approval, with follow-up measurement of survival used to validate this SURROGATE clinical endpoint. Quality of life is usually monitored with one of several patient reported outcomes (PROs) as a secondary clinical endpoint.

You mentioned that CDISC was advising avoidance of the use of "Day 0" terminology to describe the intervention date and that this would be required after a certain date. Can you please restate when this goes into effect?

Trials started after December 2016.

Do you know of any company that can offer to write a protocol for their product?

Rho provides protocol design and development services. You can learn more on our website or by contacting us.

Check out our other on-demand and upcoming webinars here.

David Shoemaker, Ph.D.
Senior Vice President R&D

Dr. David Shoemaker has more than 25 years of experience in research and pharmaceutical development.  He has served as a Program Leader or Advisor for multi-disciplinary program teams and has been involved with products at all stages of the development process. Dr. Shoemaker has managed the regulatory strategy for programs involving multiple therapeutic areas, including hematology, oncology, cardiology, pulmonology, infectious diseases, genetic enzyme deficiencies, antitoxins, and anti-bioterrorism agents.  He has extensive experience in the preparation and filing of all types of regulatory submissions including primary responsibility for four BLAs and three NDAs.  He has managed or contributed to more than two dozen NDAs, BLAs, and MAAs.  Dr. Shoemaker has moderated dozens of regulatory authority meetings for all stages of development.  His primary areas of expertise include clinical study design and regulatory strategy for development of novel drug and biological products.

Karen Kesler, Ph.D.
Assistant Vice President Operations

Dr. Karen Kesler earned both a Master's and a Doctoral degree in Biostatistics from the University of North Carolina at Chapel Hill and has over 20 years of experience in the industry.  Dr. Kesler currently serves as the Principal Investigator of the Statistics and Data Management Center for an NIH-sponsored coordinating center researching asthma, allergies, autoimmune disorders, and solid organ transplant.  Dr. Kesler is deeply involved in researching more efficient Phase II and III trials and has led many adaptive studies including sample size recalculations, pruning designs, Bayesian dose escalation studies, and adaptive randomizations.  She has given numerous professional presentations and has over 25 publications and manuscripts to her credit.

4 Top Trends in Drug Development: DIA 2015 Recap

Posted by Brook White on Tue, Jun 23, 2015 @ 11:04 AM

Last week, the Drug Information Association held its 51st Annual Meeting.  As one of the largest conferences in our industry, DIA covers a wide range of topics over the entire spectrum of drug development, and it would be nearly impossible to provide a comprehensive accounting of the meeting.  However, I will try to share the most notable trends and themes from the meeting.

Patient-centricity

Clearly, one of the biggest takeaways from DIA was patient-centricity. While for many of us patients have long been the motivation for the work we do, patients now are playing a central role throughout the drug development process. In addition to their roles as patients in clinical trials and eventual consumers, patients increasingly are participating in all aspects of development, from study design to advisory committee meetings. As we make this transition, patient advocacy groups can be powerful allies in reaching out to patients.

Some keys to patient outreach and involvement are relevance, logistics, and psycho-social components. For patients to be on board with trials, they need to understand why a study is relevant. Patients need to see the link between their participation now and improvements in treatment in the future. Often, patients are interested in the outcomes beyond their own participation. Keep them updated as trials complete and results become available.

Patients can also provide useful insights on logistics. What may seem like minor considerations to scientists and others involved in study design could be significant when it comes to patient participation. Logistics around scheduling, childcare, and uncomfortable procedures can be a study's downfall if patients aren't willing to sign up or eventually drop out because of inconveniences.

There are psycho-social issues that should be considered for certain patient populations and conditions. For example, it is likely that diabetics would have little concern over using an injectable treatment. Many have already used injectable products or have at least considered the need to use them in the future. On the other hand, patients used to oral dosing may have objections.

Finally, one suggestion made by a patient advocacy group at the meeting was to have all members of the clinical study team spend a day with a patient from the population being studied. Understanding their daily routines and struggles can provide important insights.

Social Media is Big—And Getting Bigger

Nearly every track featured at least one presentation on social media. We are now moving past theoretical uses to real world applications in patient recruitment, medical information, safety monitoring, and even regulatory agencies.

Use of social media is becoming commonplace in patient recruitment. In addition to being a more cost effective option when compared to traditional media like broadcast, radio, and newspaper ads, it also allows for better targeting and reporting. For example, social media allows you to show ads only to those within appropriate demographic groups. Even demographic groups previously considered poor targets for social media, like the elderly or lower income populations, are increasingly online in one way or another. Additionally, Sponsors and CROs have largely found ways to address regulatory and privacy concerns.

Medical Information is another area where social media use is increasingly common. With several FDA guidance documents now in place, Medical Information professionals’ perspectives on social media are changing. Patients making contact with pharmaceutical companies are being seen less as a risk to respond to and more as an opportunity to engage proactively. While companies should still be careful to present scientifically-based balanced information, social media can provide an opportunity to correct faulty information and even respond to questions about off-label use in a non-promotional way.

Safety monitoring is another area primed for growth in social media use. With the question of how to deal with adverse event reporting through social media largely handled—be prepared for it and treat it the same way you would treat reports coming in through traditional channels—product safety professionals are turning their attention to ways they can use social media to improve patient safety. Dr. Ran Balicer of the Clalit Research Institute is pioneering a system to identify safety signals in social media and compare it to information being reported by clinicians and to regulators.

Regulatory agencies are embracing social media as well. FDASIA Section 1138 instructs the FDA to create a communication plan to better inform and educate consumers with a focus on communicating with underserved sub-groups. The working group at FDA is relying on social and digital media to build the core of this program.

Outsourcing Trends

Despite incredible growth in outsourcing over the past 20 years, Sponsors still struggle with the right balance of outsourcing models. Both Sponsors and CROs report dissatisfaction with outsourcing relationships. Strategic relationships aren’t delivering the promised cost and time savings. The number of companies entering into strategic alliances and functional service provider relationships has been steadily growing over the past few years, yet virtually all Sponsors admit that they are still making outsourcing decisions (full service, FSP, or niche providers) on a study by study basis.

One major challenge identified by several speakers is the ability to select key performance indicators (KPIs) that both accurately measure the CRO's performance and can be supported by data the CRO is prepared to provide.  Existing KPIs have largely been selected because they are easy to measure and report on regularly, but they are often a better measure of the study design and the Sponsor's ability to manage the relationship than of the CRO's performance.  However, the ability to produce metrics and willingness to be transparent continue to be make-or-break for Sponsors when it comes to selecting preferred providers, entering into strategic alliances, and picking functional service providers.

eSource, eTMF, and Risk-Based Monitoring

There was no shortage of presentations on the perennial favorites: electronic trial master file (eTMF) solutions, eSource and electronic data capture (EDC), and risk-based monitoring. Although eTMF has been a hot topic for a number of years, adoption is still slow. Many companies are considering implementing an eTMF, but most are still using paper systems, network file systems, content management systems, or a combination. However, growing Sponsor expectations for remote access to TMF documents combined with improved audit readiness will continue to push CROs in this direction.

On the other hand, risk-based monitoring has become a reality for many studies. Companies have been investing in tools and processes and regulators continue to show support. As a result, more studies are taking advantage of the benefits of risk-based monitoring.

Although it has seen widespread (and in some instances near complete) adoption, EDC has left many feeling it hasn't lived up to expectations. It hasn't significantly reduced the cost or time of trials. Because much of the information clinicians need to record during study visits isn't captured directly in the CRF, information is still being recorded in one place and then transcribed into EDC. eSource—a combination of ePRO tools, EHR/EMR integration with EDC, and other electronic sources—offers new hope to deliver on the original promise of EDC.

Were you there? If so, use the comments to let me know what you thought were the most important take-aways from DIA this year.

Free Webinar: ePRO and Smart Devices

Q&A: Clinical Trial Inclusion & Exclusion Criteria Webinar

Posted by Brook White on Fri, Sep 12, 2014 @ 11:27 AM

On September 9th, we hosted a webinar featuring Senior Medical Officer Jack Modell about improving inclusion and exclusion criteria for your next clinical trial.  If you missed the webinar, you can click here to register and view the webinar on-demand.

Several questions came up during the webinar that Dr. Modell did not have time to address.  Additionally, during one part of the webinar participants were asked to submit possible inclusion and exclusion criteria that could be used for a particular scenario, and there were a couple of submitted criteria he did not have time to discuss.  Below, Dr. Modell has responded to the unanswered questions and provided some insight on additional inclusion and exclusion criteria that were submitted during the webinar.

Q: How can you extrapolate your intended patient population between different phases of the trials?

A: Generally, as drug development progresses and more safety information is available, the population can be expanded accordingly.  Thus, phase I trials are usually limited to healthy controls, pivotal trials are generally in a broad target population for whom the drug will be indicated, and phase 4 (post-marketing) trials often extend to previously untested populations (e.g., patients with comorbidities, other target diseases, etc.), safety permitting.

Q: What about adaptive trials, with a number of target groups?

A: Adaptive trials are fine as long as they are appropriately designed with adequate power to detect effects of interest in each group (and the existing safety database makes use in the different groups acceptable).  Cautious use beyond known safety is often appropriate (otherwise we couldn't test or progress new drugs at all); but as always, the potential risks vs. benefits for the research subjects and intended patient populations must be carefully assessed.

Q: Once a patient is in the study, and it is determined that the patient was entered into the study despite not meeting one of the I/E criteria, do you immediately terminate the patient from the study if it is not a safety issue?

A: Yes, I would generally terminate, because you are then studying a subject for whom the study wasn't approved and/or the drug wasn't intended.  These subjects should, however, generally be followed as long as necessary (usually as specified by the protocol) for safety assessments.  Of course, one might counter, "Well, what if the I/E criterion that the subject failed to meet wasn't really that consequential -- 'no big deal, really'?"  To that I would have to ask why an "inconsequential" I/E criterion was included in the first place (maybe it shouldn't have been?), but of course that's academic at this point.  Nonetheless, I/E criteria really shouldn't be second-guessed for subjects once the trial is underway (barring global reassessments and protocol amendments to deal with that), so I would still say that the subject should be discontinued from the study except for safety follow-up.

Q: Is it ethical for the PI to be a participant in the study?

A: Great question.  I'm not sure that it is necessarily "unethical," but the question is whether there's a good reason for the PI not to be a subject.  And for most studies, I think it would be inadvisable because the PI has a vested interest in the outcome and so it would be very difficult for him or her to be completely objective. He or she is hardly representative of the "random" populations that we usually need for drug development.  On the other hand, there may be circumstances, such as when risks are minimal and there is no way that the data can be affected by the PI's vested interest (e.g., a study of height, weight, or other information that is fixed and objective) where the PI's participation might be acceptable.  In any case, if the study is subject to IRB approval, as most are, the PI's involvement should be cleared with the IRB ahead of time.

Q: Many people - not only in EU - consume low to moderate levels of alcohol even during times when they are undergoing treatment. People with blood alcohol should not be excluded because it is reflective of the actual real world population. Any comment?

A: A good point.  But please note that I didn't suggest excluding any alcohol use at all, but rather, only those who could not commit to abstaining for 8 hours before screening and/or randomization (generally not too much to ask considering these 8 hours would usually be in the morning) and those whose BACs are above .015 when tested because this either means that they didn't abstain when they said they would (which raises a question of compliance) or that they had levels 8 hours previous that could only have been attained by very heavy drinking.  One of course could quibble with the ".015" (why not .010 or .020?), but the goal is to do your best to exclude those likely to be heavy drinkers, realizing that you'll lose a few who aren't, but this is generally better than over-including those who do have a problem.  As for the point of testing a drug in the "real world" population of heavy drinkers, if this is really the goal, a separate study (adequately powered and with appropriate precautions) would be a better way to do this.

Exclusion Criteria: Excessive user of depressant drugs such as alcohol - no more than 1 drink a day

Yes, this would be an appropriate exclusion criterion, although it would need to be more specific about the use of depressant drugs.  One drink a day is reasonable, especially since many subjects will underreport drinking.

Exclusion Criteria: No history of pre-syncope / syncope. Normal potassium / magnesium. QTcF < 450 msec

Reasonable given that there may be a QTc concern.  

If you have additional questions about the webinar, please submit them in the comments below.  Also, feel free to share additional inclusion/exclusion criteria based on the scenario shared in the webinar.

About the Speaker:

Jack Modell, M.D.
Senior Medical Officer 
Dr. Modell is a board-certified psychiatrist with 30 years of experience in clinical research, teaching, and patient care including 10 years of experience in clinical drug development (phases 2 through 4), medical affairs, successful NDA filings, medical governance, drug safety, compliance, and management in the pharmaceutical industry. His specialties and expertise include neuroscience, psychopharmacology, drug development, clinical research, medical governance, and clinical diagnosis and treatment. 

Dr. Modell has authored over 50 peer-reviewed publications in addiction medicine, anesthesiology, psychiatry, neurology, and nuclear medicine. He has led several successful development programs in the neurosciences. Dr. Modell is a key opinion leader in the neurosciences, has served on numerous advisory and editorial boards, and is nationally known for leading the first successful development of preventative pharmacotherapy for the depressive episodes of seasonal affective disorder.

Free Expert Consultation

Key Tips for Orphan Product Development

Posted by Brook White on Tue, Aug 27, 2013 @ 10:47 AM

Information for this article was contributed by David Shoemaker, Senior Vice President R&D. Dr. Shoemaker has over 25 years of experience in research and pharmaceutical development.  He has managed or contributed to dozens of INDs/CTAs and over a dozen successful NDAs, BLAs, and MAAs.  Dr. Shoemaker has authored or overseen dozens of Orphan Drug Designation applications, has developed several successful Accelerated Approval programs, and has secured several Priority Review applications.

Selecting a partner for drug development is tricky.  This is especially true when selecting a CRO to assist with orphan product development.  Finding a partner that has both the experience and expertise needed as well as being a good cultural fit for your company is critical to achieving your goals. 

  1. Work with CROs that have strong scientific, regulatory, and statistical expertise
    A strategic approach with a focus on key milestones is critical to gain approval as quickly as possible. Look for CROs whose strengths include the ability to conduct challenging clinical trials, knowledge of the regulatory process, and the scientific and statistical expertise to develop a plan for success at the outset and reach approval in an expedited manner. Your CRO should have previously obtained marketing approval for other orphan products. Marketing applications for orphan products require creative regulatory and statistical strategies to leverage the data obtained on populations much smaller than those typically seen by regulators.
  2. Know the “ins and outs” of the U.S. Food and Drug Administration’s approval mechanisms to help speed orphan drug approval
    Many orphan diseases represent serious or life-threatening conditions. Consequently, working with a development partner that understands each of the accelerated development pathways (i.e., Accelerated Approval, Priority Review, Breakthrough Therapy, and Fast Track) and the potential benefits or lack thereof is critical. Making an informed decision on the best mechanism at the start of the orphan drug approval process is the fastest path to approval.
  3. Apply for US and European Orphan Drug Designation Simultaneously
    There is a combined form that can be used to obtain orphan drug status simultaneously in the US and EU. It is an option that is not being used broadly, but can result in significant reduction of time and effort.
  4. Look for a CRO partner with experience working in small patient populations 
    Working with small patient populations requires building communities and developing close connections with research foundations, advocacy groups, patients and health care providers for a purpose-driven approach to product development. It will also be important to gain buy-in from Key Opinion Leaders.
  5. Validate your population
    Before investing time and energy in an orphan drug application, make sure you are eligible. Regulators are on the lookout for developers who try to "slice the salami," meaning that the proposed orphan population is really just a subset of a larger population from which it does not substantively differ.

Pharmaceutical and biotechnology companies can accelerate successful development of orphan products by partnering with product development service providers with a culture of solving challenges and the scientific and regulatory expertise to navigate complex trials and approval processes.

The National Organization for Rare Disorders reports nearly 7,000 orphan diseases affecting nearly 30 million Americans. As more drug companies search for new approaches after mass-market drug revenues are lost to generic competition, orphan drug development is gaining momentum.

At Rho, we share a passion for discovering new treatments and have experience successfully helping companies navigate the FDA's orphan product approval processes. But as with anything that sounds too good to be true, caution is warranted: sound product development decisions should stem from a keen understanding of the requirements and potential benefits of each approach. Selecting the right product development services partner can help deliver new treatments to improve and save lives as quickly as possible.

Click Here to View a Slideshow on Key Tips for Orphan Products

FDA Issues Draft Guidance on Developing Drugs for Treatment of Early Stage Alzheimer’s Disease

Posted by Brook White on Mon, Mar 25, 2013 @ 09:23 AM

The following article was contributed by our medical director, Herbert Harris, MD, PhD.

On February 7, the FDA issued a proposal designed to assist companies developing new treatments for patients in the early stages of Alzheimer’s disease, before the onset of noticeable (overt) dementia.

Although we have an enormous amount of information about the underlying molecular pathophysiology of Alzheimer’s disease, translating this knowledge into effective new treatments has been exceedingly difficult. Part of this difficulty arises from the slowly progressive nature of the disorder. We have known for many decades that the accumulation in the brain of a protein known as amyloid is a central part of this process. Abnormal accumulation of amyloid triggers many other biochemical processes that lead to neuronal cell death and dysfunction that cause cognitive deterioration characteristic of the disease. This understanding has led to the development of many drugs that have the potential to prevent or oppose the abnormal accumulation of amyloid. However, these new drugs have typically been tested in patients in whom cognitive impairments are already fairly far advanced. Yet in recent years, advances in imaging technology and neuropathology have indicated that amyloid accumulation may begin years, or even decades before the appearance of measurable cognitive deficits. Such findings imply that interventions targeting amyloid accumulation are unlikely to show significant clinical benefits if they are not used until cognitive deficits have manifested. Instead, medicines that target amyloid accumulation and other fundamental molecular processes should probably be introduced well in advance of the onset of cognitive changes in order to be optimally effective. This understanding has led to a fundamental rethinking of the methods and strategies for drug development in Alzheimer's disease. Recognizing these new challenges that face the field, the FDA has developed a draft guidance document for the development of drugs to treat early stages of Alzheimer's disease. The guidance identifies a number of critical drug development issues and has indicated potential solutions that could move the field forward. In an accompanying press release, Russell Katz, M.D., Director of the Division of Neurology Products at the FDA’s Center for Drug Evaluation and Research noted: “The scientific community and the FDA believe that it is critical to identify and study patients with very early Alzheimer’s disease before there is too much irreversible injury to the brain. It is in this population that most researchers believe that new drugs have the best chance of providing meaningful benefit to patients.”

Perhaps the most problematic issue is that of identifying appropriate patient populations to study. Conventional clinical trials involving Alzheimer therapeutics typically enroll patients who meet criteria for a mild to moderate level of dementia as measured by various cognitive tests. Currently, there are a number of diagnostic entities that have been defined so as to capture patient populations at an early stage. These include Mild Cognitive Impairment (MCI) and prodromal Alzheimer's disease. However, these diagnoses still depend on identification of some level of cognitive dysfunction. To identify patients at even earlier stages may require the use of genetic and other biomarkers. In developing its industry guidance, the FDA has acknowledged the potential importance of conducting trials in enriched populations defined by combinations of clinical findings and biomarkers. Unfortunately, to date, no biomarkers have been identified with sufficient predictive power. However, a great deal of progress is being made in this area.

The development of treatments for early stage Alzheimer's disease may also require the development of innovative outcome measures. Conventional studies of mild to moderate Alzheimer's disease typically employ cognitive testing used in combination with either a functional or global outcome measure as a co-primary endpoint. In the FDA guidance, it is acknowledged that in early stage Alzheimer's subjects, there may be little or no functional impairment. Therefore, it is recognized that in some cases the use of a co-primary outcome measure may be impractical. However, it is noted that as patients progress to later stages in which both functional and cognitive impairment begin to manifest, it may be appropriate to use composite scales that capture elements of function and cognition. The Clinical Dementia Rating Scale–Sum of Boxes score, which has been validated in patients whose level of impairment does not meet the threshold of frank dementia, is given in the guidance as an example of such a scale. In the draft guidance, the possibility was also raised that a treatment might obtain approval under the accelerated approval mechanism based on effects demonstrated on an isolated cognitive measure. It was noted that in this scenario a sponsor might be required to demonstrate sustained global effects as a post-marketing condition.

The draft guidance contains an extensive discussion of the topic of biomarkers as primary and secondary outcome measures. It is noted that the use of a biomarker as a primary efficacy endpoint is a theoretical possibility under the accelerated approval mechanism, but there is currently no biomarker for which there is sufficient evidence to justify its use as a proxy for clinical benefit in Alzheimer's disease. The draft guidance states that "until there is widespread evidence-based agreement in the research community that an effect on the particular biomarker is reasonably likely to predict clinical benefit, we will not be in a position to consider approval based on the use of a biomarker as a surrogate outcome measure in Alzheimer's disease (at any stage of illness)."

While many issues, such as the potential role of biomarkers, will have to await scientific developments within the field, the industry guidance document represents an important step that will focus the energies of the research community and enable much-needed progress in Alzheimer's research. The agency is currently seeking public comments on the draft guidance and is likely to begin finalizing the document next month. The FDA proposal is part of a U.S. Department of Health and Human Services initiative known as the National Plan to Address Alzheimer's Disease, which calls for both the government and the private sector to intensify efforts to treat or prevent Alzheimer's and related dementias and to improve care and services.

Free Expert Consultation

4 Types of Dose Finding Studies Used in Phase II Clinical Trials

Posted by Brook White on Mon, Mar 11, 2013 @ 12:07 PM

One of the key goals of phase II is to determine the optimal dose that you will use going into your phase III trials and that ultimately will be used on your product label submitted for approval as part of the new drug application (NDA).  The optimal dose is the dose that is high enough to demonstrate efficacy in the target population, yet low enough to minimize safety concerns and adverse events.  There are a number of strategies to determine the optimal dose, but here we will look at the four most common dose finding study designs.

Parallel Dose Comparison

Parallel dose comparison studies are the classical dose finding studies and are still one of the most common designs. In a parallel dose comparison study, several potential doses are selected and subjects are randomized to receive one of the doses or placebo for the entire study. At the end of the study, you can compare each treatment group to the control group and examine both safety and efficacy. Because all treatment groups, including the higher dose cohorts, are dosed at the same time, this design is best suited for situations where you have a good idea about the safety profile before the study starts. The design is also the basis for some adaptive studies (such as adaptive randomizations or pruning designs) that can reduce the number of subjects exposed to unsafe or ineffective doses.
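Here is a minimal simulation sketch in Python of the parallel dose comparison idea: subjects are randomized to placebo or one of several fixed doses, and each dose group is compared to placebo at the end. The doses, effect sizes, and sample sizes are made up for illustration and are not from any real trial.

```python
# A minimal sketch (hypothetical data) of a parallel dose comparison:
# subjects are randomized to placebo or one of several fixed doses,
# and each dose group is compared to placebo at the end of the study.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
doses = {"placebo": 0.0, "10 mg": 0.3, "20 mg": 0.5, "40 mg": 0.6}  # true effects (made up)
n_per_arm = 50

# Simulate an efficacy score for each subject in each arm
data = {arm: rng.normal(loc=effect, scale=1.0, size=n_per_arm)
        for arm, effect in doses.items()}

# Compare each dose group against placebo
for arm in ["10 mg", "20 mg", "40 mg"]:
    stat, p = ttest_ind(data[arm], data["placebo"])
    diff = data[arm].mean() - data["placebo"].mean()
    print(f"{arm} vs placebo: mean difference = {diff:.2f}, p = {p:.3f}")
```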

Cross-over

In a cross-over design, subjects are randomized to a sequence of investigational product (IP) and placebo. Specifically, they are given a dose of the IP and then switched to dosing with a placebo, or they start dosing with a placebo and are then switched to doses of IP. The difference between the subjects' response to placebo and IP is the result of interest, and by having different groups of subjects exposed to different doses, you can pick the optimal dose. The value of cross-over studies is that they can determine the efficacy of a dose within a subject because subjects act as their own control.  This reduces the variability and can therefore reduce the number of subjects you need to study. However, cross-over designs only work when the drug is quickly eliminated from the body. You need to be able to give a subject the treatment, wait for it to clear, and then give the second treatment in the sequence. It also requires a product that is designed to be used multiple times. For example, a product that is intended to be given once, such as a drug to lower blood pressure during heart surgery, can't be tested in a cross-over study because you won't do the surgery again just to give the second treatment in the sequence.
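The within-subject comparison can be illustrated with a small Python sketch. The numbers are hypothetical, and a real cross-over analysis would also account for period and sequence (carry-over) effects, but the paired analysis shows why using each subject as their own control reduces variability.

```python
# A minimal sketch (hypothetical numbers) of the cross-over idea: each subject
# receives both IP and placebo, so the treatment effect is estimated from
# within-subject differences, removing between-subject variability.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
n_subjects = 20
subject_baseline = rng.normal(10, 3, n_subjects)   # large between-subject variability

placebo_response = subject_baseline + rng.normal(0, 0.5, n_subjects)
ip_response = subject_baseline + 1.0 + rng.normal(0, 0.5, n_subjects)  # true effect = 1.0

# Paired analysis: each subject acts as their own control
stat, p = ttest_rel(ip_response, placebo_response)
mean_diff = (ip_response - placebo_response).mean()
print(f"mean within-subject difference = {mean_diff:.2f}, p = {p:.4f}")
```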

Dose Titration

In a dose titration study, you titrate to the maximum tolerated dose within a subject. This means that each subject will start at a low dose and receive incrementally higher doses until the maximum dose is reached.  In some studies, such as cancer chemotherapy studies, this dose is determined by the onset of side effects; this dose is called the Maximum Tolerated Dose (MTD).  In other studies where the product is less toxic, it may depend on the blood levels of the IP, a metabolite, or a maximum dose determined from preclinical studies.  Dose titration studies work well for treatments of chronic conditions where a drug will be used for a long period of time and where the dose is likely to be tailored to the subject's weight or reaction.  This design is also good for situations where it is likely that you will see significant differences in the way each subject reacts.  Chronic hypertension medications are a good example of products where dose titration is useful. There is a lot of variability in how individual patients respond to hypertension products, and by titrating the dose, you can give a lower dose to those who respond to it.

Dose Escalation

If you are unsure of your safety profile and want to start exposing subjects to lower doses first, consider a dose escalation study. In this type of study, you start with one group of subjects (often referred to as a cohort) and give them a low dose. You observe this group for a period of time, and, if no safety issues are noted, you enroll a new group of subjects and give them a higher dose. This process is repeated until either you reach the maximum tolerated dose or you reach the highest dose you plan to consider. This design increases patient safety because you can start by exposing a small number of subjects to the lowest dose possible. You are mitigating risk both by limiting the initial number of subjects and by limiting the exposure of each subject to study drug. You can also add control subjects to each cohort if you want to look at efficacy measures with an appropriate comparison group.
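Here is a minimal sketch in Python of the cohort-by-cohort escalation logic described above. The planned doses, cohort size, and stopping rule are hypothetical stand-ins; in a real study, escalation rules and any decision to stop or expand a cohort are specified in the protocol and made with medical review.

```python
# A minimal sketch (hypothetical thresholds) of cohort-by-cohort dose
# escalation: treat a small cohort, review safety, and only then enroll
# the next cohort at a higher dose.
planned_doses_mg = [5, 10, 25, 50, 100]

def cohort_is_safe(observed_toxicities, max_toxicities=1):
    """Hypothetical stopping rule: escalate only if toxicities stay at or under a limit."""
    return observed_toxicities <= max_toxicities

def run_escalation(observed):
    """`observed` maps dose -> toxicities seen in that cohort (a stand-in for real safety review)."""
    last_cleared_dose = None
    for dose in planned_doses_mg:
        if not cohort_is_safe(observed[dose]):
            print(f"Stopping: {dose} mg exceeded the safety limit.")
            break
        last_cleared_dose = dose
        print(f"{dose} mg cohort cleared; escalating.")
    return last_cleared_dose

# Example: toxicity counts that might be observed at each dose (made up)
print("Highest dose cleared:", run_escalation({5: 0, 10: 0, 25: 1, 50: 2, 100: 0}), "mg")
```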

There are other types of study designs and many variations on each of these study designs that may be useful in determining the optimal dose before heading into your phase III clinical trials. Interested in learning more? Check out this video where Dr. Karen Kesler talks about whether an adaptive design is right for your study.

View "Is Adaptive Design Right for You?" Video

Dr. Karen Kesler, Senior Statistical Scientist, and Dr. Andrea Mospan, Program Manager, contributed to this article.  Check out the video below where Dr. Kesler discusses the basics of adaptive design.