
Rho Knows Clinical Research Services

Age Diversity in Clinical Trials: Addressing the Unmet Need

Posted by Brook White on Tue, Jul 10, 2018 @ 09:23 AM

Ryan Bailey, MA, is a Senior Clinical Researcher at Rho.  He has over 10 years of experience conducting multicenter asthma research studies, including the Inner City Asthma Consortium (ICAC) and the Community Healthcare for Asthma Management and Prevention of Symptoms (CHAMPS) project. Ryan also coordinates Rho’s Center for Applied Data Visualization, which develops novel data visualizations and statistical graphics for use in clinical trials.

In a recent New York Times article, Paula Span raises the concern that elderly subjects are frequently omitted from clinical trials.  Consequently, physicians know very little about how a given treatment may affect their older patients.  Is a medication effective for the elderly?  Is it safe?  Without data, how is a physician to know?  

Span’s article is timely and aligns well with similar industry trends toward increased patient centricity and trial diversity.  Yet, expanding trials to include older patients poses a challenge for research teams because it brings two tenets of quality research into conflict with one another – representative study populations and patient safety.  

The fundamental assumption of clinical trials research is that we can take data from a relatively small, representative selection of subjects and generalize the results to the larger patient population.  If our sample is too constrained or poorly selected, we hinder the broad applicability of our results.  This is not merely a statistical concern, but an ethical one.  Unfortunately, our industry has long struggled with underrepresentation of important demographic groups, especially women, racial and ethnic minorities, and the elderly. 

At the same time, researchers are keenly concerned about protecting subject safety in trials.  Good Clinical Practice is explicit on this point: 

2.3 The rights, safety, and well-being of the trial subjects are the most important considerations and should prevail over interests of science and society.

Such guidance has engendered broad reluctance to conduct trials in what we deem “vulnerable populations,” namely children, pregnant women, and the elderly.  The risk of doing more harm than good in these patient groups often leads us to play it safe and exclude them from trials.  Span, however, provides an astute counterpoint: expecting providers to prescribe a medication to a group of patients who were not included in the original research is equally irresponsible.  

No case better illuminates the challenging catch-22 we face than the thalidomide debacle of the 1950s-60s.  Thalidomide, which was widely regarded as safe, was prescribed off-label for pregnant women to treat morning sickness.  Tragically, the drug was later linked to severe birth defects and banned for expecting mothers.

On one hand, the physicians prescribing thalidomide did so based on limited knowledge of the drug’s safety in pregnant women.  Had a trial been conducted that demonstrated the risk to children, they would clearly have known not to prescribe it to expecting mothers.  Yet, the very risk of such dangerous complications is why such trials are not conducted in vulnerable populations in the first place.  Risks for the elderly are different than for pregnant women, but the principle of protecting sensitive populations is the same.  

Span notes that even in studies that don’t have an explicit age cap, many protocols effectively bar elderly participants via strict exclusion criteria that prevent participation by people with disorders, disabilities, limited life expectancy, cognitive impairment, or those in nursing homes.  It must be stated, however, that the reason for such conditions is not to be obstinately exclusive but to reduce confounding variables and minimize risks to vulnerable patients.  In most cases, it would be patently unethical to conduct research on someone with cognitive impairment or in a nursing home where they may be unable to give adequate informed consent, or they may feel coerced to participate in order to continue receiving care.

So, how do we negotiate this apparent impasse?  Span offers a few general suggestions for increased inclusion, including restructuring studies and authorizing the FDA to require and incentivize the inclusion of older adults.  Changing the laws and enforcement can certainly drive change, but what can we do in the near term, short of legislative intervention?  

A few quick suggestions:

  1. Reconsider age limits and avoid an all-or-none mentality to enrolling geriatric subjects.  The mindset that older adults are, as a whole, too vulnerable to enroll is usually an overreach.  In most cases, age limits are imposed as a convenience for the study, not a necessity.  Instead, consider evaluating eligibility on a subject-by-subject basis, which will still allow exclusion of patients deemed too frail, risky, or comorbid for the trial.  
  2. Actively recruit older subjects. The lack of geriatric patients in our trials is a result of many years of both passively and actively excluding them, so effort is needed to reverse these trends.  Beyond recruitment for an individual trial, researchers and providers should seek to educate older adults about clinical research.  Many elderly patients may be research-naïve – unfamiliar with clinical trials and how to participate, or unaware of available trials in their area.  
  3. Learn from other efforts to recruit marginalized populations.  As we’ve shared previously, improving trial diversity starts with an effort to thoroughly understand your patient population and their needs, and reduce obstacles to their participation.  
  4. Engage patient advocacy groups that focus on elderly patients.  Ask how trials can be better designed to meet their needs and include them.  Partner with these groups to aid in information sharing and outreach.
  5. Learn what is already expected from agencies like the FDA and NIH when it comes to inclusivity. 
    1. Span alludes to a recent NIH policy revision (stemming from the 21st Century Cures Act) that will require new NIH grantees to have a plan for including children and older adults in their research.
    2. In 2012, the Food and Drug Administration Safety and Innovation Act (FDASIA) required the FDA to create an action plan to improve data quality and completeness for demographic subgroups (sex, age, race, and ethnicity) in applications for medical products. 
  6. Design studies to examine effectiveness (demonstrating that a treatment produces desired results in ‘real world’ circumstances) not just efficacy (demonstrating that a treatment produces desired results in ideal conditions).  This is probably the most labor intensive because it requires additional investment beyond the typical Phase III randomized controlled clinical trial.  Yet, it is becoming more common to explore effectiveness through pragmatic trials, Phase IV studies, and post-market surveillance.   

Site Investigator vs. Sponsor SAE Causality: Are they different?

Posted by Brook White on Thu, Jun 21, 2018 @ 11:25 AM

Heather Kopetskie, MS, is a Senior Biostatistician at Rho. She has over 10 years of experience in statistical planning, analysis, and reporting for Phase 1, 2, and 3 clinical trials and observational studies. Her research experience includes over 8 years focusing on solid organ and cell transplantation through work on the Immune Tolerance Network (ITN) and Clinical Trials in Organ Transplantation (CTOT) project.  In addition, Heather serves as Rho’s biostatistics operational service leader, an internal expert sharing biostatistical industry trends, best practices, processes, and training.

Hyunsook Chin, MPH, is a Senior Biostatistician at Rho. She has over 10 years of experience in statistical design, analysis, and reporting for clinical trials and observational studies. Her therapeutic area experience includes: autoimmune diseases, oncology, nephrology, cardiovascular diseases, and ophthalmology. Specifically, her research experience has focused on solid organ transplantation for over 8 years on the CTOT projects. She also has several publications from research in nephrology and solid organ transplantation projects. She is currently working on several publications.

An Adverse Event (AE) is any unfavorable or unintended sign, symptom, or disease temporally associated with a study procedure or use of a drug; the term does not imply any judgment about causality. An AE is considered Serious if, in the view of either the investigator or sponsor, the outcome is any of the following: 

  • Death
  • Life-threatening event
  • Hospitalization (initial or prolonged)
  • Disability or permanent damage
  • Congenital anomaly/birth defect
  • Required intervention to prevent impairment or damage
  • Other important medical event

When a serious adverse event (SAE) occurs, the site investigator immediately reports the event to the sponsor. Both the site investigator and the sponsor assess causality for every SAE. Causality is whether there is a reasonable possibility that the drug caused the event. The FDA believes the sponsor can better assess causality, as the sponsor has access to SAE reports from multiple sites and studies along with familiarity with the drug’s mechanism of action. When expedited SAE reports are delivered to the FDA, the sponsor’s causality assessment is reported instead of the site investigator’s.

Causality assessments may differ between the site investigator and sponsor. It is important to understand the difference in assessments to ensure proper reporting and conduct throughout a trial. For example, if stopping rules rely on causality, should the sponsor’s or the site investigator’s causality assessment be used? Which causality assessment should be used for DSMB and CSR reports? To better understand how to handle these situations, it’s important to understand the differences.

We reviewed over 1,400 SAEs from 76 studies over the last 6 years. Each SAE had causality assessed against an average of 3.8 study interventions (e.g., study medication 1, study procedure 1, etc.), for a total of over 5,300 causality assessments. “Related” causality included definitely, probably, and possibly related, while “Not Related” included unlikely related and unrelated. At the SAE level, an SAE was considered related if at least one study intervention was determined to be related.
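The roll-up rule described above (one related intervention makes the whole SAE related) is simple enough to sketch. The category labels mirror the ones named in the text; the function name is illustrative:

```python
# Sketch of the SAE-level roll-up rule: an SAE counts as related if at least
# one study intervention was assessed as related. Names are illustrative.
RELATED = {"definitely related", "probably related", "possibly related"}
NOT_RELATED = {"unlikely related", "unrelated"}

def sae_is_related(intervention_assessments):
    """Return True if any study intervention (e.g., study medication 1,
    study procedure 1) was assessed as related to the SAE."""
    return any(a.lower() in RELATED for a in intervention_assessments)

# One possibly-related intervention makes the whole SAE related.
print(sae_is_related(["unrelated", "possibly related", "unlikely related"]))  # True
print(sae_is_related(["unrelated", "unlikely related"]))                      # False
```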

Table 1: Causality Comparisons

                        Site Investigator   Sponsor
  Study Interventions
    Not Related               89%             81%
    Related                   11%             19%
  SAEs
    Not Related               78%             67%
    Related                   22%             33%

Sponsors deemed more SAEs to be related to study interventions than site investigators did. This relationship held when looking at the breakdown of SAEs by severity, with the sponsor determining a larger percentage of SAEs to be related to the study intervention. It also held for the majority of system organ classes reviewed. 

What actions can we take with this information when designing a trial?

  1. If any study stopping rules rely on causality, the study team may want to consider using the sponsor causality to ensure all possible cases are captured. The biggest hurdle with this transition would be acquiring the sponsor causality in real time, as it is not captured in the clinical database.
  2. For DSMB reports, if only the site investigator causality is reported, the relationship to SAEs may be underreported relative to the information the FDA receives. Given that the sponsor more often assesses SAEs as related, this is important information that should be provided to DSMB members when evaluating the safety of the study.
  3. For clinical study reports, both serious and non-serious adverse events are reported. The study team should determine what information to include. The sponsor safety assessments are not included in the clinical database, but they are what the FDA receives during the conduct of the trial. Additionally, since the sponsor more often assesses SAEs as related, a report based only on the site investigator assessment may underreport related SAEs.
Note that these findings are based on studies Rho has supported and may not be consistent with findings from other trials/sponsors.  Additionally, in some studies the site investigator may have changed the relationship of the SAE based on discussions with the sponsor and we do not have any information to quantify how often this occurs.

Cellular Therapy Studies: 7 Common Challenges

Posted by Brook White on Tue, May 15, 2018 @ 09:34 AM

Heather Kopetskie, MS, is a Senior Biostatistician at Rho. She has over 10 years of experience in statistical planning, analysis, and reporting for Phase 1, 2, and 3 clinical trials and observational studies. Her research experience includes over 8 years focusing on solid organ and cell transplantation through work on the Immune Tolerance Network (ITN) and Clinical Trials in Organ Transplantation (CTOT) project.  In addition, Heather serves as Rho’s biostatistics operational service leader, an internal expert sharing biostatistical industry trends, best practices, processes, and training.

Kristen Mason, MS, is a Senior Biostatistician at Rho. She has over 4 years of experience providing statistical support for studies conducted under the Immune Tolerance Network (ITN) and Clinical Trials in Organ Transplantation (CTOT). She has a particular interest in data visualization, especially creating visualizations within SAS using the graph template language (GTL). 

Cellular therapy is a form of treatment in which patients are injected with cellular material. Different types of cells can be utilized, such as stem cells (e.g., mesenchymal stem cells) and cells from the immune system (e.g., regulatory T cells (Tregs)), from either the patient or a donor. In many cases, these cells have been reprogrammed to carry out a new function that will aid in the treatment of a disease or condition. Cellular therapy has become increasingly popular largely because cells have the ability to carry out many complex functions that drugs cannot. When successful, cellular therapy can result in a more targeted and thus more effective treatment.

Rho is conducting several studies using cellular therapy to treat diseases such as systemic lupus erythematosus and pemphigus vulgaris and for various applications within organ transplantation.

Cellular therapy trials offer their own unique set of challenges. The following list presents some of these challenges encountered here at Rho.

  1. Cellular therapies require highly specialized laboratories to manufacture the investigational product, especially if the cells are being manipulated. Centralized manufacturers are commonly utilized, requiring logistical considerations if the trial has multiple study sites. These logistics may include proper packaging, temperature storage, shipping days, etc., all of which must be considered when shipping the product.
  2. It is critical to plan for and establish clear communication between the manufacturing lab, the study site, and the study team when working under time constraints. One common consideration is to ensure extracted cells will not arrive at the manufacturer on a Saturday or Sunday when lab personnel may not be available to immediately process cells. 
  3. Protocols usually require a minimum number of cells be available for infusion into the subject. The protocol must detail what steps to take when not enough viable cellular product is produced. Some questions to consider include: 
    • Is it possible to recollect cells for a second attempt? If so, does it work with the timing of the trial?
    • Are there leftover cells from the first attempt? 
  4. Potent drugs are sometimes paired with administration of the cellular product.  It is crucial to avoid administering these drugs unless a viable cellular product has been produced. Checks should be in place to ensure product is available before administering additional study drugs.
  5. Guidance exists limiting the amount of blood that can be collected from a single subject over an 8-week period. If the cellular product is manufactured from a blood donation, the amount of blood from any and all blood draws around the same time should be taken into consideration. If the blood donation occurs close to screening, when blood is often drawn for various baseline labs, pay close attention to the total amounts, as it can be easy to exceed the established limits. 
  6. The subject accrual for a study should be clearly outlined in the protocol. Is it X number of subjects that receive a minimum number of cells, X number of subjects that receive any cells, etc.?
  7. Cellular product may not be administered until several months into the study. Subjects may be evaluated for eligibility several times while waiting for the infusion, allowing multiple time points at which each subject may become ineligible. This, along with the potential for insufficient cellular product, can result in an unexpected length of time to administer cellular product to the target number of subjects. As such, this is an important factor when determining the duration and budget for a cellular therapy study. 
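The blood-volume bookkeeping in point 5 lends itself to a simple rolling-window check. The 550 mL limit below is a placeholder, not a citation of any actual guidance (real limits vary by guidance and subject body weight), and the function name is illustrative:

```python
from datetime import date, timedelta

# Hypothetical 8-week rolling limit; actual limits vary by guidance and weight.
LIMIT_ML = 550
WINDOW = timedelta(weeks=8)

def total_in_window(draws, as_of):
    """Sum blood volumes (mL) drawn in the 8 weeks ending at `as_of`.
    `draws` is a list of (draw_date, volume_ml) tuples."""
    return sum(v for d, v in draws if as_of - WINDOW < d <= as_of)

draws = [(date(2018, 5, 1), 30),    # screening labs
         (date(2018, 5, 15), 450),  # donation for cell manufacturing
         (date(2018, 6, 10), 50)]   # follow-up labs
print(total_in_window(draws, date(2018, 6, 10)))  # 530 -> under the hypothetical limit
```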
All in all, there are numerous opportunities for learning when using cellular therapies to treat disease. In many disease areas, this concept is still novel, and study teams are facing new challenges with each study. Understanding these challenges early can help in the development of a robust protocol that addresses them before they ever become an issue.

10-Step Commercial Clinical Protocol Authoring Guide

Posted by Brook White on Thu, Aug 31, 2017 @ 01:57 PM

Lauren Neighbours, PhD, RAC, is a Research Scientist at Rho. She leads cross-functional project teams for clinical operations and regulatory submission programs. Lauren partners with early-, mid-, and late-stage companies to develop and refine strategic development plans, design and execute clinical studies, lead regulatory submissions, and provide support for regulatory authority meetings and other consulting needs. She has over ten years of scientific writing and editing experience and has served as a lead author on clinical and regulatory documents for product development programs across a range of therapeutic areas.

Devin Rosenthal, PhD, RAC, works with companies at all stages of development to help them shape their product development programs. He has experience across the full drug development spectrum through his roles in small biotech, big pharma, and at Rho, with particular focus on oncology, CNS, gastrointestinal, and respiratory indications. In addition to pharmaceutical development, Devin is also involved in strategic alliance and business development activities at Rho.

Genna Kingon, PhD, RAC, is a Research Scientist at Rho involved in regulatory strategy and submission management from pre-IND to post-approval.  She also serves as a lead regulatory author on multiple programs for submissions to FDA and to various international regulatory authorities.  In particular, Genna focuses on rare disease programs and expedited approval pathways. 

A protocol is the most important document in a clinical study, as it is the foundation for subsequent operational, regulatory, and marketing objectives for the development program. 

 Developing a protocol is an extensive undertaking that requires a cross-functional team and consideration of the position and role of the study in the full product development program.  Before the protocol authoring process even begins, a variety of activities and decisions are necessary to establish a strategy for success.  The following steps provide concepts and considerations that are essential in formulating the details that will become the protocol synopsis and ultimately the clinical study protocol. 

Pre-Authoring

1.    Begin with the end in mind

Your program team should first prepare an Integrated Product Development Plan (IPDP). This plan, which is largely based upon the desired final Target Product Profile (TPP) and product labeling, maps out all activities through marketing application submission and clearly outlines the purpose, position, and necessity of each study in the product development program. Without these documents, you run the risk of completing a study that fails to advance your product’s development or is markedly less valuable to development than it otherwise could be.

Among other things, the IPDP should contain the clinically meaningful endpoint(s) for your studies that will be acceptable to regulators and support the desired marketing claims for the product. Additionally, the IPDP should include an assessment of the actual and potential competitive products likely to be on the market at or near the time of product launch. This information will be essential for optimal study design and conduct, and will therefore improve the chances of ultimate product success. Cross-functional input and buy-in from all key internal and external stakeholders for each study, as well as on the full development plan, is a necessity.

2.    Design the study

Before you start thinking about the protocol study procedures and visit schedule, you need to understand your overall goals for the study, and how the data that are collected will not only support your product development strategy but ultimately move your program forward. For studies in the early phase of development, consider first outlining the study objectives, as well as the endpoints that specifically address those objectives in a measurable and meaningful way. The design of the study should then flow from those objectives and endpoints, making sure the technical and logistical aspects of the protocol maintain a focus on the end goals.

For all studies, consider developing the statistical analysis plan (SAP) before drafting the protocol. During SAP development, the study objectives and endpoints are comprehensively considered and designed, along with the specific analytical methods needed to optimally interpret the data. Choose a sample size that has sufficient statistical power to reliably detect outcomes and differences of interest and that meaningfully contributes to accumulation of an adequate safety database for your product, but is also as practical as possible to enable successful study completion. Then, explore study design options with the protocol objective(s), SAP, and the TPP in mind.
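As a rough sketch of the power consideration above (a back-of-the-envelope check, not a substitute for a statistician's calculation), a normal-approximation sample size for comparing two means might look like:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm to detect a difference in
    means of `delta` with common standard deviation `sigma`."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance level
    z_beta = z(power)            # desired power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Example: detect a 5-point difference, SD of 12, 5% two-sided alpha, 80% power.
print(n_per_group(delta=5, sigma=12))  # 91 per arm
```

Note how quickly the required n grows as the detectable difference shrinks; this is one reason overly ambitious endpoints make studies impractical.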

In designing your study, take the following into account:

  1. Map out how key study measures will be assessed, with what frequency, and in what kind of study population.  Properly defining the study population is essential, particularly to ensure that the inclusion and exclusion criteria appropriately select for the eventual target population, as well as for optimal assessment of safety and efficacy in that population. 
  2. Be sure that existing animal toxicology data are adequate to support any proposed duration of dosing, dose levels, and specific subject eligibility criteria.
  3. Be mindful of manufacturing capacity and schedules for study drug to ensure that your study is feasible given the cost of goods and timelines for manufacturing.  You may have to adjust the dosing duration, dosage, number of dose levels, or your study timeline to accommodate manufacturing limitations.  Even after your drug is manufactured, you may want or need to develop specialized packaging such as blister packaging or cold-chain logistics to help ensure study success.
  4. Remember that the more complex the study design (e.g., number of arms, number of objectives and endpoints, number or complexity of assessments), the greater the chances for errors, omissions, data quality issues, and unexpected complications during study execution; and, therefore, the greater the chance for study failure.  Study design should be laser focused on what is required to produce only the information necessary for product labeling and/or to progress the compound to the next stage of development.  For this reason, it is also important to avoid the common temptation of adding “nice-to-have” but inessential study components during the course of protocol development. 

3.  Define technical details

Establish or obtain an International Conference on Harmonisation (ICH)-compliant protocol template and develop and maintain a style guide and/or list of writing conventions to ensure consistency and clarity within and between study documents.  Establish the appropriate reviewing processes, and identify cross-functional reviewers (editorial, regulatory, clinical, statistical, data management, medical, product safety, senior management, etc.).  Record all key decisions and their rationale throughout the development and writing process.  Failure to do so may result in frequently having to revisit issues, causing unnecessary delays and changes in the protocol or development plan.

4.  Draft the synopsis

Generate the study schedule of events, and draft the synopsis.  The synopsis should be no more than 10 pages total.  Obtain feedback from cross-functional subject matter experts, senior leadership from the sponsor/contract research organization (CRO), and potential clinical investigators and study site staff.  Revise and finalize the synopsis:  this is the foundation for the clinical study protocol.  

Protocol

5.  Define operational details

Consider essential operational logistics such as laboratory test results required to enroll and/or randomize subjects (e.g., will this require local labs as opposed to a central lab?), total blood volume drawn, equipment and space necessary for subject evaluation, availability of specialist(s) for nonstandard assessments, storage and shipping requirements for clinical specimens and investigational product, and scheduling limitations/conflicts for study visits.  Consult both sponsor and CRO operations staff and study sites as necessary to determine the feasibility of the proposed operational plan.  

6.  Minimize the potential for amendments

Consider what qualifies for inclusion in the protocol; detailed information that is not directly relevant to study conduct is usually better suited for operations manuals, which can be more easily updated throughout the study.  Avoid redundancy within the protocol; state everything once.  Use the synopsis as a tool to establish the foundation of the protocol.  At the completion of protocol development, the synopsis should be reviewed to ensure it accurately reflects the content of the final protocol (if it is intended to be appended to the protocol or used separately as an internal reference tool).  Continuously revising the synopsis while the protocol is being written is unnecessary and discouraged, as this invariably leads to errors in one document or the other, as well as in the resulting study.  Whether or not a synopsis is included in the final protocol itself is often a matter of sponsor preference. 

7.  Draft the protocol

Prepare the protocol draft by expanding on the detail in the synopsis regarding the investigational plan, study schedule, analysis plan, safety monitoring, and the other outlined provisions.  Much of the protocol should be derived from template language, which generally does not change from protocol to protocol, but rather, only changes periodically following revised regulatory requirements or other administrative preferences.  Obtain additional review from cross-functional subject matter experts (which may include patient advocacy groups, as applicable), the sponsor and/or CRO personnel, and select study investigators.

Download: Protocol Template

Concurrent and/or Post-Protocol

8.  Draft the informed consent form (ICF)

Using an established and compliant informed consent form (ICF) template, draft the ICF with finalized protocol information at the appropriate reading level for the intended study subjects, which is rarely greater than about an eighth-grade level.  Obtain cross-functional subject matter expert and sponsor/CRO/site feedback.  Revise and finalize the form, which may require site- and institutional review board (IRB)-specific information or even site/IRB specific template language.  While the consent must include all required regulatory elements, strive to make the consent form as short as possible and without repetition.  A consent form that is overly complicated or too long to be easily read and understood fails in its purpose.
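One crude way to sanity-check consent language against that reading-level target is a Flesch-Kincaid grade estimate. The vowel-group syllable counter below is a rough heuristic, not a validated readability tool, and the function names are illustrative:

```python
import re

def syllables(word):
    """Very rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59

# Plain consent-style sentences should score far lower than dense jargon.
print(round(fk_grade("You may stop taking part in this study at any time."), 1))
```

Dedicated readability tools in word processors or style checkers do this more reliably; the point is simply that the eighth-grade target is measurable, not a matter of taste.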

9.  Design case report forms (CRFs)

Capture data efficiently (fewer queries) with appropriate and reasonable CRF pages.  Be considerate of open-ended text boxes versus check boxes:  while an open-ended text box is preferable for describing unexpected, non-categorical events, check boxes are better for categorical items (e.g., ethnicity) to reduce the need for queries and to facilitate downstream data analysis.  The CRF should undergo interdisciplinary review by representatives from key functional areas (i.e., data management, biostatistics, programming, clinical operations, regulatory, safety, medical affairs) prior to finalization. 

10.  Design and compile operations manuals

The clinical sites will reference operations manuals for additional study information that is not specified in detail in the protocol (e.g., pharmacokinetic sampling procedures, shipping information, tissue collection procedures, investigational product preparation/dispensation, study contact information, etc.).  Use the manuals as an easily accessible reference for site study staff and a repository for information that has the potential to change during the study (e.g., shipping addresses if personnel/vendors are likely to change).


Why Depression Studies So Often Fail:  Don’t Blame “Placebo Response”

Posted by Brook White on Thu, Jun 29, 2017 @ 02:34 PM

Jack Modell, Vice President and Senior Medical Officer, is a board-certified psychiatrist with over 35 years of experience in clinical research, including 20 years conducting trials, teaching, and providing patient care in academic medicine, and 15 additional years of experience in clinical drug development (proof of concept through market support), medical affairs, successful NDA filings, medical governance, drug safety, compliance, and management within the pharmaceutical and CRO industries. Jack has authored over 50 peer-reviewed publications across numerous medical specialties and has led several successful development programs in the neurosciences. Jack is a key opinion leader in the neurosciences and is nationally known for leading the first successful development of preventative pharmacotherapy for the depressive episodes of seasonal affective disorder.

Prior to joining the pharmaceutical and contract research organization industries, I was in clinical practice for twenty years as a psychiatrist and medical researcher.  And something I noticed very early on among my patients with major mental illnesses, particularly those with severe depression and psychotic disorders, was that they did not generally get better – at least not for more than a day or two – by my simply being nice to them, treating them with ineffective medications (e.g., vitamins when no vitamin deficiency existed), seeing them weekly for office visits, or by providing other so-called supportive interventions that did not directly address the underlying illness.  To be clear, this is not to say that kindness and supportive therapy are not critical to the patient-physician relationship (“The secret of the care of the patient is in caring for the patient” [Frances Weld Peabody, 1927]), but rather that kindness and support alone rarely make a biologically based illness substantially improve or disappear. 

With this background, I vividly recall my surprise upon being asked shortly after I joined the pharmaceutical industry:  “Can you help us figure out how to decrease the nearly 50% placebo-response rate we see in antidepressant trials for major depressive disorder?”  “Fifty percent?” I replied, incredulously.  “There’s no way that 50% of patients in a true major depressive episode get better on placebos or just by seeing the doctor every couple of weeks!”  “Seriously?” was the reply, and they showed me voluminous data supporting their figure.

I spent the next few years trying to figure out this apparent paradox.  Not surprisingly, the answer turned out to be multifactorial.  After careful review of internal and external data, as well as published explanations for high “placebo response rates” in clinical depression trials (much of which also applies to clinical trials in general), the following three factors emerged as being of particular importance because they are easily mitigated by proper trial design, thorough research staff training, and meticulous oversight of study conduct.

(1)  Subjects being admitted into clinical trials often had depressive symptoms, but did not truly meet criteria for major depressive disorder.  Examples include subjects with personality disorders whose symptoms wax and wane considerably with external factors (e.g., family or job stress), subjects with depressive symptoms in response to a particular stressor (not of sufficient severity or duration to meet formal criteria for a major depressive episode and likely to abate with the passage of time), and subjects who – for various reasons – may feign or exaggerate symptoms for the purpose of seeking attention or gaining access to a clinical trial.  Unlike the patients I encountered in my clinical practice, subjects with these presentations often do improve with supportive interventions and placebo. 

Recruitment of truly depressed subjects is made even more difficult by the widespread availability of reasonably effective medication options. Patients in the throes of a major depressive disorder, who sometimes have difficulty even making it through the day, are rarely keen to commit to the additional efforts, uncertainties, and treatment delays involved with a clinical trial when an inexpensive prescription for an effective generic antidepressant can now be filled in a matter of minutes. Indeed, as more and more generally safe and effective medications have become approved and readily available for a variety of illnesses, the motivation for patients to join clinical trials in the hope of finding an effective treatment has correspondingly decreased.

(2) The second factor is somewhat difficult to discuss because it sometimes provokes an understandable defensive response in clinical investigators.  Consciously or unconsciously, many investigators and clinical raters inflate or deflate clinical ratings to enable the subject to gain entry into, or remain enrolled in, a clinical trial.  Most commonly, this is done by subtly – and sometimes not so subtly – coaching subjects on their answers, or when subject responses or findings seem to fall in between scale severity ratings, by rounding up or down to a rating that is more likely to qualify the subject for the trial. 

The effect of this practice is diagrammed in the following figures, specific examples of which can be seen in these references.1-3 In Figure 1, the white bell-shaped distribution is the expected distribution of severity rating scores in an unselected clinical population presenting for clinical trial participation, with a mean score shown at X̄n. Not uncommonly, what we see in clinical trials in which a certain scale severity score is required for study entry (depicted by the vertical light blue line, with a score to the right of it required for entry) is not the expected right half of this bell-shaped distribution, but rather a distribution like the orange curve: essentially the right half of the bell-shaped distribution, with a large proportion of subjects whose ratings fell short of the required severity (to the left of the blue line) “pushed” to the right, over the blue line, so that these subjects now qualify for study inclusion. The mean of those thus selected is shown at X̄s.

Figure 1

depression-fig-1.jpg

At the first follow-up visit, when raters (and subjects) have little incentive to influence rating scores to meet a pre-specified criterion, the scores of the entire included population are free to relax toward their true values and assume their pre-selection, more normally distributed pattern.  Moreover, subjects and investigators, expecting that the onset of treatment should coincide with at least some clinical improvement, may bias rating scores during this period to reflect this expectation even though the signs and symptoms of the illness have yet to show true change.  During this same time, any actual clinical improvement will also shift the rating score mean leftward (white arrow, Figure 2), but because the measured change from the initial X̄s of the selected population to the new mean (X̄n1; orange arrow, Figure 2) is generally much greater than a true treatment effect during this period, any real changes are obscured and the ability to detect a true drug-placebo difference may be lost.  While this early “improvement” in rating scores may appear to be a “placebo effect” and is often confused with one, it is instead the result of artificially inflated scale scores regressing back toward their original true distribution, combined with whatever actual treatment and placebo effects occurred during this time.  Unfortunately, the inclusion of non-qualified subjects and rater bias will continue to hamper detection of actual drug-placebo differences throughout the course of the study.

Figure 2

depression-fig-2.jpg
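The regression effect described above can be illustrated with a small simulation. This is a hypothetical sketch: the entry cutoff, the score distribution, and the rater "rounding" rule are all invented for illustration, not taken from any real trial.

```python
import random

random.seed(0)

ENTRY_CUTOFF = 20  # hypothetical severity score required for study entry

# True severity scores of an unselected clinical population (the white curve)
true_scores = [random.gauss(20, 5) for _ in range(10_000)]

def rated_at_screening(score):
    """Hypothetical rater bias: borderline subjects are rounded up to qualify."""
    if ENTRY_CUTOFF - 3 <= score < ENTRY_CUTOFF:
        return ENTRY_CUTOFF
    return score

# Enrolled subjects: everyone whose *rated* score clears the cutoff
enrolled = [(true, rated_at_screening(true)) for true in true_scores
            if rated_at_screening(true) >= ENTRY_CUTOFF]

mean_screen = sum(rated for _, rated in enrolled) / len(enrolled)
mean_true = sum(true for true, _ in enrolled) / len(enrolled)

# With no treatment effect at all, scores "improve" at follow-up simply by
# relaxing back toward their true values
print(f"screening mean:       {mean_screen:.1f}")
print(f"follow-up mean:       {mean_true:.1f}")
print(f"apparent improvement: {mean_screen - mean_true:.1f} points")
```

Even with zero treatment or placebo effect, the enrolled group's mean score drops at follow-up, because the inflated borderline scores regress to their true values, which is exactly the artifact the figures depict.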

(3) Finally, investigators and site staff often do not fully understand the true objective of the clinical trial:  it should never, for example, be “to show treatment efficacy” or to show that a product is “safe and well tolerated,” but rather, to test the null hypothesis of no treatment difference or to estimate likely treatment effect, as well as to faithfully and objectively record all adverse effects that may emerge during treatment.  Likewise, investigators and site staff often fail to understand the importance of complete objectivity and consistency in performing clinical ratings, the intention behind and importance of every inclusion and exclusion criterion (necessary for their proper interpretation and application), and the destructive effect on the outcome and scientific integrity of the trial that even well-intended efforts to include subjects who are not fully qualified can have.  

Each of these three factors can skew both drug and placebo trial populations and results, making it appear that subjects “improved” well beyond what would have resulted had there been strict adherence to protocol requirements and objective assessment of study entry and outcome measures.

What, then, can be done to prevent these problems from sabotaging the results of a clinical trial?  Foremost are thorough and meticulous investigator and rater education and training.  All too often, perfunctory explanations of the protocol and clinical assessment tools are provided at investigator meetings, and “rater training” takes the form of brief demonstrations of how the rating scales are used and scored, without actually testing raters to be certain that they fully understand how the scales are to be used and interpreted, including understanding scoring conventions, criteria, and necessary decision-making.4  Even seemingly sound training has marked limitations both immediately and as training effects deteriorate during conduct of the trial.4-7 

Training of the research staff must include not only a review of the protocol design and study requirements, but detailed explanations about why the trial is designed exactly as it is, the importance of strict adherence to study inclusion and exclusion criteria, and the necessity for complete honesty, objectivity, and consistency in conducting the clinical trial and in performing clinical assessments.  Detailed training on the disease under study is also important to ensure that all site staff have a complete understanding of the intended clinical population and disease being studied so that they can assess subjects accordingly.  And, as noted above, rater training must include not only education on the background, purpose, characteristics, and instructions for each scale or outcome measure used, but trainers, as well as investigators and raters, should be tested for adequate understanding and proficiency in use of each of these measures. 

Meticulous monitoring during the course of the study is also essential to ensure continued understanding of, and compliance with, protocol requirements, as well as accurate and complete documentation of study procedures and outcomes.  Study monitors and others involved with trial oversight should review data during the course of the trial for unexpected trends in both safety and efficacy data, not simply for identification of missing data or isolated outliers.  Unexpected trends in safety data include adverse event reporting rates at particular sites that are much higher or lower than median reporting rates, and vital signs that are relatively invariant or favor certain values over time.  Unexpected trends in efficacy data include changes in closely related outcome measures that are incongruent (for example, objective and subjective ratings of a similar outcome differing considerably in magnitude or direction), changes that are much larger or smaller at particular sites than those observed at most sites, changes that occur in relatively fixed increments, and changes that show unusually similar patterns or values across subjects.
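A simple way to operationalize one of these checks, flagging sites whose adverse event reporting rate deviates sharply from the median, can be sketched as follows. The site names, rates, and flagging thresholds here are hypothetical; real oversight plans define their own review criteria.

```python
from statistics import median

# Hypothetical adverse events reported per subject, by site
site_ae_rates = {"site_01": 0.42, "site_02": 0.45, "site_03": 0.05,
                 "site_04": 0.40, "site_05": 1.30, "site_06": 0.44}

med = median(site_ae_rates.values())

# Flag sites reporting at less than half, or more than twice, the median rate
flagged = {site: rate for site, rate in site_ae_rates.items()
           if rate < 0.5 * med or rate > 2 * med}

for site, rate in sorted(flagged.items()):
    print(f"{site}: {rate:.2f} AEs/subject vs median {med:.2f} -- review")
```

A site reporting far below the median may be under-ascertaining adverse events, while one far above may warrant a closer look at its procedures; either way the flag is a prompt for review, not a conclusion.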

(3) Finally, and perhaps most importantly, no matter how well-informed or well-intentioned investigators and raters might be, humans simply cannot match computers in objectivity and consistency, including the kind needed to make assessments based on subject responses to questions in clinical trials.  Unless programmed to do so, a computer cannot, for example, coach a subject on how to respond, nor will it inflate or deflate ratings based on feelings, expectations, response interpretations, or desired outcomes.  A computer faithfully asks the same questions every time, following the same algorithm, and records responses exactly as provided by the subject.  Indeed, several studies have shown that computerized assessments of entry criteria and outcome measures in clinical trials – in particular interactive voice response systems (IVRS) and interactive web response systems (IWRS) – provide data whose quality and signal-detection ability meet and often exceed those obtained by human raters.1,3,7,8,9  For these reasons, strong consideration should also be given to using IVR and/or IWR systems for assessing study entry criteria and endpoints that allow such use.  

The author acknowledges John H. Greist, MD, for his outstanding research and input regarding these important findings and considerations.

References

  1. Greist JH, Mundt JC, Kobak K.  Factors contributing to failed trials of new agents:  can technology prevent some problems?  J Clin Psychiatry 2002;63[suppl 2]:8-13.
  2. Feltner DE, Kobak KA, Crockatt J, Haber H, Kavoussi R, Pande A, Greist JH.  Interactive Voice Response (IVR) for Patient Screening of Anxiety in a Clinical Drug Trial.  NIMH New Clinical Drug Evaluation Unit, 41st Annual Meeting, 2001, Phoenix, AZ.
  3. Mundt JC, Greist JH, Jefferson JW, Katzelnick DJ, DeBrota DJ, Chappell PB, Modell JG.  Is it easier to find what you are looking for if you think you know what it looks like?  J Clinical Psychopharmacol 2007;27:121-125.
  4. Kobak KA, Brown B, Sharp I, Levy-Mack H, Wells K, Okum F, Williams JBW.  Sources of unreliability in depression ratings.  J Clin Psychopharmacol 2009;29:82-85.
  5. Kobak KA, Lipsitz J, Williams JBW, et al.  Are the effects of rater training sustainable?  Results from a multicenter clinical trial.  J Clin Psychopharmacol 2007;27:534-535.
  6. Kobak KA, Kane JM, Thase ME, Nierenberg AA.  Why do clinical trials fail?  The problem of measurement error in clinical trials:  time to test new paradigms?  J Clin Psychopharmacol 2007;27:1-5.
  7. Greist J, Mundt J, Jefferson J, Katzelnick D.  Comments on “Why Do Clinical Trials Fail?”  The problem of measurement error in clinical trials:  time to test new paradigms?  J Clin Psychopharmacol 2007;27:535-536.
  8. Moore HK, Mundt JC, Modell JG, Rodrigues HE, DeBrota DJ, Jefferson JJ, Greist JH.  An Examination of 26,168 Hamilton Depression Rating Scale Scores Administered via Interactive Voice Response (IVR) Across 17 Randomized Clinical Trials.  J Clin Psychopharmacol 2006;26:321-324.
  9. http://www.healthtechsys.com/publications/ivrpubs2.html 

Webinar: Selecting Inclusion/Exclusion Criteria for Your Next Trial

Practical Strategies to Simplify Patient Centricity: Part 1—Overview

Posted by Brook White on Tue, May 16, 2017 @ 11:20 AM

Shann Williams has over 10 years of experience managing clinical trials. She is the Director of Operations of the Statistical and Clinical Coordinating Center for the division-wide Consolidated Coordinating Center sponsored by the National Institute of Allergy and Infectious Diseases (NIAID). In addition, Shann serves as Rho's Project Management Operational Service Leader, an internal expert sharing project management best practices, processes and training.

This is the first in a series of blog posts on putting patient-centric principles into practice.

The term patient centricity is fraught with uncertainty for many.  It carries widely varying methods of practical application as well as theoretical disagreements among stakeholders in our industry. For example, an article in Applied Clinical Trials entitled FDA and Industry Share Perspectives on Patient Centricity contrasted the biopharmaceutical industry’s and the FDA’s perspectives on patient centricity and concluded that industry views patient centricity as patient engagement, whereas the FDA is focused primarily on developing outcomes that are clinically meaningful to patients.

Although it is understandable that we would be intimidated by the lack of regulatory guidance and the uncertainties of taking on risks in any relatively new area, I would argue that this isn’t rocket science. Some of the same practices we have known about for years that make for successful studies can be implemented to demystify patient centricity and provide a starting place. 

Let’s take the well-known adage: How do you eat an elephant?  One bite at a time, right? Specifically, how do we eat the elephant of patient centricity?  We can employ simple, actionable, “bite-sized” strategies that will move us closer to a more patient centric approach.

patien-centricity-elephant.png

I’m not planning to cover the entire elephant, but rather will focus on what we’ve learned that has proven successful. For additional information on “the whole elephant,” I encourage you to read the findings from this DIA/Tufts study. 

Two high-level concepts that we’ve learned from years of clinical and community-based research have positive impacts on studies and are bite-sized portions toward patient centricity: easing patient burden and effective communication.  And, as the DIA/Tufts study reported, these are well in line with what they called “study volunteer ease” which was found to have the biggest bang for a relatively small investment.

Easing Patient Burden

Implementing the same successful practices we’ve used to achieve high retention rates is a first step toward reducing patient burden and allowing for a more patient focused approach.

High participant retention is important for any clinical research trial. It is critical to our ability to achieve adequate statistical power for study analyses and is an indicator of overall study success. Those of you reading this post are likely from about the same socio-economic demographic with similar life circumstances: a busy career and a busy life outside of work.  How many of us would make the time to participate in clinical trials, even when our careers hinge on their success? Very few of us, it seems, even though it would take 1 in 6 Americans participating in clinical trials to fulfill the enrollment goals of the studies currently listed on ClinicalTrials.gov.

Why is participation so low? Besides the risks and the poor perception of our industry, it is likely also because the burden and inconvenience are too great. Patients choose to participate in studies when the benefit of their participation outweighs their perceived risk, burden, and general inconvenience. Some patients are being altruistic while others are hoping their participation will improve their health. We can help change the perception of clinical trials by making them less burdensome for patients overall.

The Urban Environment and Childhood Asthma (URECA) study is an observational birth cohort study currently in its 12th year, funded by the National Institute of Allergy and Infectious Diseases. URECA has 606 total patients enrolled. In its first two years, it had an 89% retention rate, and 461 of the 606 original patients are still enrolled (76% retention) 12 years into the study. That high retention is attributable to several patient-focused practices described in detail below.  Patient-focused practices start by looking at the study from the patient’s point of view. 

For example, we cannot expect patients to accommodate our schedules. We have to think about the logistics of what we’re asking them to do from their perspective.  Do they need to take off work? Will they need to miss school?  Are they going to have to deal with the complexities and stress of hospital and clinic parking decks?  How many of us who receive surveys in the mail ever complete and return them? How many of us are annoyed by the constant barrage of emails in our inboxes?  Are we more likely to respond to a text message or return a call left on our voice mail?  

If we don’t think through these very simple things carefully, we are setting ourselves up for enrollment challenges and the potential of multiple (and costly) protocol amendments, and we won’t be any closer to relieving the burden of clinical trial participation for our patients. Even if a protocol is designed with patient outcomes in mind, without implementing some basic principles we’ve failed to take the patient into consideration.

These are the eight patient-focused practices employed in the URECA study, discussed in detail in our 2010 publication in Clinical Trials:

  • Call hours
    • Consider conducting any reminder phone calls, follow-up questionnaires or recruitment and screening calls after regular business hours when patients are more likely to be available. 
  • Employ culturally competent staff
    • Employing culturally competent staff who speak the patient’s native language, understand the nuances of that specific population, and identify with challenges and considerations within that specific demographic and geographic location is imperative to putting patients first.
  • Flexible visit scheduling
    • Along the same lines as conducting calls after hours, is the clinic/site open on Saturdays? Consider scheduling changes that will allow patients to come in the early morning or evenings to avoid missing work and school.
  • Provide reimbursements for transportation and parking (no brainer!)
  • Host retention events
    • This is an example of something that worked for this specific population that may not work for others and points back to really knowing your population via hiring competent staff.  Since these were mothers and their young children, these events brought a sense of community.  Patients got to know other mothers and children that were in the same study and were able to create relationships and deepen relationships with study staff. 
  • Offering home visits
    • The success of this study hinged on our ability to meet the needs of this population. Young mothers with asthma who have babies with respiratory infections do not want to brave the winter in Boston to call a taxi to take them across town so that they can have a nasal lavage performed. This is not without risk though, and an option like this has additional implications that must be considered carefully.
  • Cell phone or texting reimbursements
    • This is especially important for those families who buy minutes. 
  • Distributing quarterly newsletters 
    • This is an easy way to make families aware of study status. The one from last December offered an update on the study retention rates and included indoor activities – making snowman cookies, connect-the-dots snowman for the children, and tips for staying safe in cold weather. These are more than just visit reminders or asthma educational materials, they are geared toward ensuring the patients feel engaged in the study. 

These practices take careful pre-planning, but by incorporating just one principle based on the time, schedule, and budget for each study, we can move forward in the right direction toward putting our patients first.

Look for the second post in this series which will share some specific patient stories that highlight the importance of our second patient centric principle: effective communication.


Not Just Tiny Humans: Considerations for Conducting Pediatric Clinical Trials

Posted by Brook White on Tue, Apr 25, 2017 @ 09:43 AM

Project Director Jamie Arnott, RN, BSN and Clinical Team Lead Caitlin Hirschman, RN, BSN have extensive experience in pediatric clinical research including recent studies in rare diseases and diseases with a seasonal component.

When it comes to the conduct of pediatric clinical trials, there are a number of things you need to consider in order to ensure a successful study. While we can’t predict the outcome, planning ahead for appropriate site and subject selection will take you one step closer. From study design to logistics to recruitment, there are real differences between studies conducted in pediatric populations and studies conducted in adult populations.

Patient Recruitment

While patient recruitment can be challenging in any study, there are additional challenges to recruiting pediatric patients.  Parents may be more risk averse about giving an unproven therapy to their child than about receiving it themselves.  To improve the chances of successfully enrolling a study, it is important to consider potential motivators for participation:

  • Therapeutic benefit: If you are working on a therapy for a rare disease or for an indication with no approved or effective product, parents may be motivated by the opportunity to receive a treatment that could improve their child’s condition, even though it is unproven and there is a chance their child will receive placebo.  When an approved, effective treatment is available, parents are likely to be reluctant to sign their child up to possibly receive placebo, a treatment whose effectiveness is unknown, or a treatment with unknown side effects and safety issues.
  • Financial incentives: Many studies offer financial incentives to participants, and this can be a motivating factor for some parents.  Additionally, patients may receive study related medications, assessments, or more routine care that could be cost prohibitive otherwise.
  • Research benefit: Particularly for studies in rare disease or orphan indications, parents may see the benefit in research that provides a better understanding of the disease or the prospect of better treatment options in the future even if their child does not receive a direct benefit in participation.

Understanding what motivates parents to allow their child to participate in a clinical research study will help you to determine how to advertise and recruit for your study.  Some recruitment tactics (with appropriate ethics committee approval) to consider include:

  • Directly reaching out to parents by calling or through email.
  • Advertising at family events or locations where children and parents are likely to attend.
  • Reaching out to healthcare providers who may be the patients’ first point of contact even if they are not the location where the study will be conducted.  For example, if you conduct a study where the sites and investigators are typically at specialty practices, you may still want to recruit through primary care providers.
  • Consider referral processes for these types of sites to ensure patients are considered in a timely manner, based on their indication/treatment needs.

Patient Retention

Getting pediatric patients enrolled in a study is great, but it is just as important to make sure most patients complete the study.  There are a number of factors that make this more difficult in a pediatric study:

  • Multiple schedules to coordinate: Each study visit requires both the parent and child to be available.  Studies with numerous visits can become a significant hassle for parents, which can lead to discontinuations.  Making sure that every visit is necessary and being as accommodating as possible with scheduling, such as including flexible visit windows, can mitigate this risk. (Remember: most parents still have to work, and kids still attend school.)
  • Parents don’t see the therapeutic benefit: If parents come to believe that their child is receiving placebo or that the treatment is ineffective, they may withdraw their child from the study. Providing clear information about what the trial is evaluating and encouraging frequent communication will help facilitate the parent voicing any concerns.
  • Discomfort of participation: No one likes long doctor visits or being stuck repeatedly with a needle, but these discomforts are even harder on pediatric patients and their parents.  Evaluate each assessment carefully during protocol development (even ones like blood pressure and temperature monitoring) to reduce the overall burden to the patient.

What can be done to improve retention? Encourage investigators to talk with parents about the importance of completing the study.  Consider what incentives may be appropriate to improve retention and work within the limitations of what the IRB will allow for your study. Cash incentives may be effective with older patients and with parents.  In some cases, we’ve seen study information or assessments loaded on a device like a tablet that the patient may keep at the end of the study.  Treats or fun activities such as coloring books or video games to play at study visits can be good incentives for younger patients.  Keep in mind that there may be limitations on what you can provide as incentives, and all incentives will require IRB approval. Finally, keep visits as short as possible, limit blood draws and invasive procedures, and ensure that every procedure and assessment is truly necessary to determine the safety or efficacy of the investigational product.

Informed Consent

Pediatric studies introduce several challenges when it comes to informed consent:

  • Typically, if patients are at least 7 years old, in addition to parental consent you will need assent from the patient.  Assent documents will need to be written at an appropriate reading level.
  • In pediatric studies, parents are likely to want to know which treatment their child received and the outcome of the study after the study is complete.  Information on whether this will be made available needs to be included in the consent document.
  • You will need to decide whether consent is required from both parents.  If not, and the parents are divorced, can either parent make the decision?  If you do need consent from both parents, this can be an additional hurdle to enrollment.

Other Considerations

In our experience there are a number of other considerations that require proper planning to ensure study success:

  • Pregnancy tests: In many cases, pregnancy tests will be needed for female patients.  Depending on the age of the child and the view of the parents, this may be a hurdle.  In many cases, these tests are required at an earlier age than parents anticipate—typically as young as 9 years old.  Parents and patients do need to be informed if the test is being done.
  • Objective outcomes: For studies with young patients, objective outcomes are highly preferable to outcomes that rely on the reporting of the patient or parent.  If patient-reported scales are used, staff will need to be trained to get answers from the patient rather than the parent.
  • Sibling bias: The protocol will need to specify whether siblings can participate in the study.  Allowing siblings to participate may be helpful for enrollment, but it can also introduce bias into the results and potentially create a risk if home treatment is required.  If siblings are in different treatment arms, the treatments could be mixed up at home, resulting in subjects receiving the incorrect treatment.
  • Dosing during school hours: If the protocol requires dosing during school hours, this may require extra paperwork/legwork on the part of the parent to gather supportive information allowing the school staff to give this investigational product (IP).  Many schools will not give IP to students and may not allow students to retain IP even if it is self-administered.
  • Missed school: Frequent visits during the school day or overnight visits may cause absenteeism issues for school-aged children.
  • Continuation: Consider whether participants will be able to continue receiving the product after the study, for example, through an open label extension of the study.  Even for a Phase III study, it may be several years before a product receives marketing approval.

Conducting clinical research in pediatric populations does introduce a unique set of challenges.  With proper planning, however, many of these challenges can be avoided or mitigated.

Download 5 Lessons Learned Conducting ADHD Trials

FDA Guidance on Non-Inferiority Clinical Trials to Establish Effectiveness

Posted by Brook White on Thu, Apr 20, 2017 @ 11:42 AM

Heather Kopetskie, MS, is a Senior Biostatistician at Rho. She has over 10 years of experience in statistical planning, analysis, and reporting for Phase 1, 2 and 3 clinical trials and observational studies. Her research experience includes over 8 years focusing on solid organ and cell transplantation through work on the Immune Tolerance Network (ITN) and Clinical Trials in Organ Transplantation (CTOT) project.  In addition, Heather serves as Rho’s biostatistics operational service leader, an internal expert sharing biostatistical industry trends, best practices, processes and training.

In November 2016, the FDA released final guidance on Non-Inferiority Clinical Trials to Establish Effectiveness, providing researchers guidance on when to use non-inferiority trials to demonstrate effectiveness, how to choose the non-inferiority margin, how to test the non-inferiority hypothesis, and how to provide interpretable results. The guidance does not provide recommendations for evaluating the safety of a drug using a non-inferiority trial design. This article provides background on the non-inferiority trial design along with its assumptions, advantages, and disadvantages.

Background

A non-inferiority trial is used to demonstrate that a test drug is not clinically worse than an active treatment (active control) by more than a pre-specified margin (the non-inferiority margin). There is no placebo arm in non-inferiority trials. A non-inferiority design is chosen when a placebo arm would not be ethical because an available treatment provides an important benefit, especially for irreversible conditions (e.g., death). Without a placebo arm to compare either the test drug or the active control against, it is important to determine that the active control had its expected effect in the non-inferiority trial. If the active control had no effect, the trial would provide no evidence that the test drug was effective.
The table below compares superiority and non-inferiority trials with respect to objective and hypotheses. The effect of the test drug is ‘T’, the effect of the active control is ‘C’, and the difference tested during analysis is C – T.

Superiority Trial
  • Objective: Determine whether one intervention is superior to another
  • Null hypothesis: There is no difference between the two interventions
  • Alternative hypothesis: One intervention is superior to the other

Non-inferiority Trial
  • Objective: Determine whether a test drug is not inferior to an active control by more than a preset margin
  • Null hypothesis: The test drug (T) is inferior to the active control (C) by the margin (M) or more (C – T >= M)
  • Alternative hypothesis: The test drug (T) is inferior to the active control (C) by less than M (C – T < M)

Selecting the non-inferiority margin is challenging but critical to a successful trial. The largest possible choice for the margin is the entire known effect of the active control compared to placebo, called M1. Using M1 as the margin, however, would only support a finding that the test drug has an effect greater than zero. More generally, the non-inferiority margin is set to some portion of M1, called M2, to preserve some of the active control’s effect, with the portion based on clinical judgment. For example, if a superiority trial demonstrated the active control to be 15% better than placebo, a clinician might set the non-inferiority margin at 9% (M1 = 15%, M2 = 9%). A test drug meeting this margin could be up to 9% worse than the active control, yet would still be at least 6% better than placebo.

Multiple results are possible in a non-inferiority trial as explained in the graphic below. The point estimate is indicated by the square and is the measure of C – T; the bars represent a 95% confidence interval; and ∆ is the non-inferiority margin.

[Figure: non-inferiority drug trial, interpretation of results]

  1. The point estimate favors the test drug, and both superiority and non-inferiority are demonstrated.
  2. The point estimate is 0, suggesting equal effect of the test drug and the active control. The upper bound of the 95% confidence interval is below the non-inferiority margin, so non-inferiority is demonstrated.
  3. The point estimate favors the active control. The upper bound of the 95% confidence interval is less than the non-inferiority margin, demonstrating non-inferiority. However, the point estimate is above zero (C – T > 0), indicating that the test drug is not as good as the active control even while meeting the non-inferiority standard.
  4. The point estimate is 0, suggesting equal effect, but the upper bound of the 95% confidence interval is greater than the non-inferiority margin, so non-inferiority is not demonstrated.
  5. The point estimate favors the active control, and the entire confidence interval is above the non-inferiority margin, so inferiority is demonstrated.
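The interpretation rules above can be sketched in code. The following Python helper is purely illustrative and not part of the FDA guidance; the function name, margin, and confidence interval values are assumptions chosen for demonstration.

```python
# Hypothetical sketch: classifying non-inferiority outcomes from a
# 95% confidence interval for C - T (control effect minus test effect).
# C - T > 0 favors the active control; C - T < 0 favors the test drug.

def classify_outcome(ci_lower, ci_upper, margin):
    """Classify a non-inferiority result given the 95% CI for C - T."""
    if ci_upper < margin:
        if ci_upper < 0:
            # Entire CI below zero: test drug is better than the control
            return "superiority demonstrated"
        # CI stays below the margin but may cross zero
        return "non-inferiority demonstrated"
    if ci_lower > margin:
        # Entire CI above the margin
        return "inferiority demonstrated"
    # CI crosses the margin: inconclusive
    return "non-inferiority not demonstrated"

# Scenario 2 from the figure: point estimate 0, CI within a margin of 0.09
print(classify_outcome(-0.05, 0.05, 0.09))  # non-inferiority demonstrated
# Scenario 5: entire CI above the margin
print(classify_outcome(0.10, 0.20, 0.09))   # inferiority demonstrated
```

The same helper reproduces the other scenarios: a CI entirely below zero maps to superiority, and a CI crossing the margin maps to "not demonstrated".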

Non-inferiority Margin

The selection of the non-inferiority margin is critical in designing a non-inferiority trial, and the majority of the FDA guidance focuses on it. The margin is selected by reviewing historical trials of the active control. The active control must be a well-established intervention with at least one superiority trial establishing benefit over placebo. If approval of the active control was based on a single study (not unusual when the goal is risk reduction of major events such as death, stroke, or heart attack), changes in clinical practice since that study should be evaluated. Using the lower bound of the 95% confidence interval for the historical effect provides a conservative estimate of the active control effect. If multiple historical trials exist, one assumption of the non-inferiority trial is that the effect is consistent between the historical studies and the non-inferiority trial; inconsistency among the historical studies therefore makes the active control effect difficult to estimate. Inconsistency can also steer researchers away from a non-inferiority trial altogether, especially if a historical trial did not demonstrate an effect. When multiple historical trials exist, careful review of all study results and a robust meta-analysis are crucial to selecting an appropriate non-inferiority margin.
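As a rough sketch of this conservative approach, the snippet below derives M1 from the lower bound of the 95% confidence interval of a historical control-versus-placebo effect and sets M2 as a clinically judged fraction of M1. The function name, effect size, standard error, and 50% preservation fraction are all illustrative assumptions, not values from the guidance.

```python
def conservative_margin(effect, se, fraction_preserved=0.5, z=1.96):
    """Derive non-inferiority margins from one historical trial.

    M1: lower bound of the 95% CI for the active control vs placebo
        effect (a conservative estimate of the control effect).
    M2: the margin actually used, retaining only part of M1 so the
        test drug is shown to preserve a fraction of the control effect.
    """
    m1 = effect - z * se                 # conservative control effect
    m2 = (1 - fraction_preserved) * m1   # portion set by clinical judgment
    return m1, m2

# Historical trial: control 15% better than placebo, SE of 3% (assumed)
m1, m2 = conservative_margin(effect=0.15, se=0.03)
print(round(m1, 4), round(m2, 4))  # 0.0912 0.0456
```

With these assumed inputs, the margin M2 (about 4.6%) is deliberately smaller than the full historical effect, so a drug meeting it must preserve at least half of the control's benefit.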

Assay Sensitivity and Constancy Assumption

Assay sensitivity is essential to non-inferiority trials as it demonstrates that, had the study included a placebo arm, the active control–placebo difference would have been at least M1. The guidance outlines three considerations when determining whether a trial has assay sensitivity.

  1. Historical evidence of sensitivity to drug effects
  2. The similarity of the new non-inferiority trial to the historical trials (the constancy assumption)
  3. The quality of the new trial (ruling out defects that would tend to minimize differences between treatments)

The constancy assumption in #2 above is that the non-inferiority study is sufficiently similar to the past studies with respect to the following design features.

  • The characteristics of the patient population
  • Important concomitant medications
  • Definitions and ascertainment of study endpoints
  • Dose of active control
  • Entry criteria
  • Analytic approaches

The presence of constancy is important to evaluate. For example, if a disease definition has changed over time or the methodology used in the historical trial is outdated, the constancy assumption may be violated and a non-inferiority design may not be appropriate. If all design features are similar except the patient characteristics, the estimate of the control effect can be adjusted, provided the effect size is known in the relevant patient subgroups.

Benefits of non-inferiority trials

  • A non-inferiority trial is useful when a placebo controlled trial is not appropriate.
  • A non-inferiority trial may also test for superiority without concern about inflating the Type I error rate, provided the order in which the hypotheses are tested is planned carefully. The reverse is not true; a superiority trial cannot claim non-inferiority.

Disadvantages of non-inferiority trials

  • Researchers must demonstrate that assay sensitivity and the constancy assumption hold. This is especially difficult when medical practice has changed since the historical superiority trials (e.g., the active control is now always used in combination with additional drugs).
  • When the active control is not well established or historical trials have shown inconsistent results, choosing a non-inferiority margin is difficult.
  • If the treatment effect of the active control is small, the sample size required for a non-inferiority study may not be feasible.

Download: Understanding Dose Finding Studies

The Rise of Electronic Clinical Outcome Assessments (eCOAs) in the Age of Patient Centricity

Posted by Brook White on Tue, Dec 06, 2016 @ 10:36 AM
Share:

Lauren Neighbours is a Research Scientist at Rho. She leads cross-functional project teams for clinical operations and regulatory submission programs and has over ten years of scientific writing and editing experience. Lauren has served as a project manager and lead author for multiple clinical studies across a range of therapeutic areas that use patient- and clinician-reported outcome assessments, and she worked with a company to develop a patient-reported outcome instrument evaluation package for a novel electronic clinical outcome assessment (eCOA).

Jeff Abolafia is a Chief Strategist for Data Standards at Rho and has been involved in clinical research for over thirty years. He is responsible for setting strategic direction and overseeing data management, data standards, data governance, and data exchange for Rho’s federal and commercial divisions. In this role, Jeff is responsible for data collection systems, data management personnel, developing corporate data standards and governance, and developing systems to ensure that data flows efficiently from study start-up to submission or publication. Jeff has also developed systems for managing, organizing, and integrating both data and metadata for submission to the FDA and other regulatory authorities.

With the industry-wide push towards patient-centricity, electronic clinical outcome assessments (eCOAs) have become a more widely used strategy to streamline patient data collection, provide real-time access to data (for review and monitoring), enhance patient engagement, and improve the integrity and accuracy of clinical studies.  These eCOAs comprise a variety of electronically captured assessments, including patient reported outcomes (PROs), clinician-reported and health-care professional assessments (ClinROs), observer reported outcomes (ObsROs), and patient performance outcomes administered by health-care professionals (PerfOs).  The main methods for collection of eCOA data include computers, smartphones, and tablets, as well as telephone systems.  While many companies have chosen to partner with eCOA vendors to provide these electronic devices for use in a clinical study, other sponsors are exploring “bring your own device (BYOD)” strategies to save costs and start-up time.  No matter what strategy is used to implement an eCOA for your clinical study, there are several factors to consider before embarking on this path.

Designing a Study with eCOAs

The decision to incorporate an eCOA into your clinical study design is multifaceted and includes considerations such as the therapeutic area, the type of data being collected, and study design, but the choice can first be boiled down to two distinct concepts: 1) the need for clinical outcome data from an individual, and 2) the need for this data to be collected electronically. Thus, the benefits and challenges to eCOAs can be aligned with either or both of these concepts.

Regarding the first concept, the need for clinical outcome data should be driven by your study objectives and a cost-benefit analysis of the optimal data collection technique. Using eCOAs to collect data is undoubtedly more patient-centric than an objective measure such as body mass index (BMI), as calculated by weight and height measurements. The BMI calculation does not tell you anything about how the patient feels about their body image, or whether the use of a particular product impacts their feelings of self-worth. If the study objective is to understand the subjective impact of a product on the patient or health-care community, a well-designed eCOA can be a valuable tool to capture this information. These data can tell you specific information about your product and help inform the labeling language that will be included in the package insert of your marketed product. Additionally, FDA has encouraged the use of PROs to capture certain data endpoints, such as pain intensity, from a patient population who can respond themselves (see eCOA Regulatory Considerations below). Of course, it’s important to note that the inherent subjectivity of eCOAs does come with its own disadvantages. The data are subject to more bias than objective measures, so it’s critical to take steps to reduce bias as much as possible. Examples of ways to reduce bias include single- or double-blind trial designs, wherein the patient or assessor is not aware of the assigned treatment, and building in a control arm (e.g., placebo or active comparator) to compare eCOA outcome data across treatment groups.

Another important concept is the process for identifying and implementing the electronic modality for eCOA data collection.  Many studies still use paper methods to collect clinical outcome data, and there are cases when it may make more sense to achieve your study objectives through paper rather than electronic methods (e.g., Phase 1 studies with limited subjects).  However, several types of clinical outcome data can be collected more efficiently, at lower cost, and at higher quality with electronic approaches (e.g., diary data or daily pain scores).  From an efficiency standpoint, data can be entered directly into a device and integrated with the electronic data management system being used to maintain data collection for the duration of the study.  This saves time (and cost) associated with site personnel printing, reviewing, interpreting, and/or transcribing data collected on paper into the electronic data management system, and it also requires less monitoring time to review and remediate data.  Additionally, paper data is often “dirty” data, with missing or incorrectly recorded data in the paper version, followed by missing or incorrectly recorded data entered into the data management system.  The eCOA allows for an almost instantaneous transfer of data that not only saves upfront data entry time but also saves time and cost down the road, as it reduces the effort required to address queries associated with the eCOA data.  Aside from efficiencies, eCOA methods allow for more effective patient compliance measures to be implemented in the study.  The eCOA device can be configured to require daily or weekly data entry and real-time review by site personnel prior to the next scheduled clinic visit.
Additionally, the eCOA system can send out alerts and reminders to patients (to ensure data is entered in a timely manner) and to health-care personnel (to ensure timely review and verification of data and subsequent follow-up with patients as needed).  The downsides to electronic data collection methods tend to be associated with the costs and time to implement the system at the beginning of the study.  It’s therefore essential to select an appropriate eCOA vendor early, one who will work with you to design, validate, and implement the clinical assessment specifically for your study.

eCOA Regulatory Considerations

In line with the industry push for patient-focused clinical studies, recent regulatory agency guidance has encouraged the use of eCOAs to evaluate clinical outcome data.  The fifth authorization of the Prescription Drug User Fee Act (PDUFA V), which was enacted in 2012 as part of the Food and Drug Administration Safety and Innovation Act (FDASIA), included a commitment by the FDA to more systematically obtain patient input on certain diseases and their treatments.  In so doing, PDUFA V supports the use of PRO endpoints not only to collect data directly from the patients who participate in clinical studies but also to actively engage patients in their treatment.  The 2009 FDA guidance for industry on Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims further underscores this idea by stating “[the use] of a PRO instrument is advised when measuring a concept best known by the patient or best measured from the patient perspective.”  The 2013 Guidance for Industry on Electronic Source Data in Clinical Investigations provides the Agency’s recommendations on “the capture, review, and retention of electronic source data” and is to be used in conjunction with the 2007 guidance on Computerized Systems Used in Clinical Investigations for all electronic data and systems used in FDA-regulated clinical studies, including eCOAs.  To support these efforts, the FDA has developed an extensive Clinical Outcome Assessment Qualification Program, which is designed to review and assess the design, validity, and reliability of a COA for a particular use in a clinical study.  Furthermore, the newly formed Clinical Outcome Assessment Compendium is a collated list of COAs that have been identified for particular uses in clinical studies.
The COA Compendium is further evidence of FDA’s commitment to patient-centric product development, and it provides a helpful starting point for companies looking to integrate these assessments into their clinical development programs. 

Before choosing an eCOA for your clinical development program, the following regulatory factors should be considered:

  • FDA holds COAs to the same regulatory and scientific standards as other measures used in clinical trials. Thus, it is advisable to refer to the Guidance for Industry on Patient-Reported Outcomes and the available information on the COA Assessment Qualification program and COA Compendium provided by the Agency when implementing eCOAs into your development program. If you plan to divert from currently available regulatory guidance, make sure to have a solid rationale and supporting documentation to substantiate your position.
  • The qualification of an eCOA often requires input from patients and/or health-care professionals to evaluate the effectiveness of the assessment. This input is necessary for the regulatory agency to determine whether the eCOA can accurately measure what it’s supposed to measure (validity) and to demonstrate it can measure the outcome dependably (reliability).
  • Data collected from qualified and validated eCOAs can be used to support product labeling claims. The key is to use an eCOA when it’s appropriate to do so and to make sure the eCOA supports your intended labeling claims because the instrument will be evaluated in relation to the intended use in the targeted patient population.
  • For cases where an instrument was developed for paper-based collection, or where an instrument is administered using multiple modes, it may be necessary to test for equivalence. This regulatory expectation often applies (especially for primary and secondary endpoints) to ensure that the electronic version of the instrument remains valid and that data collected with mixed modes are comparable.
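Where paper-versus-electronic equivalence must be shown, one common statistical approach is two one-sided tests (TOST) on paired differences. The sketch below is an illustrative, simplified version using a normal approximation; the function name, equivalence bounds, and sample data are assumptions, and a real analysis would typically use a t-distribution with pre-specified bounds.

```python
import math
from statistics import mean, stdev

def tost_equivalence(diffs, low, high, alpha=0.05):
    """Two one-sided tests on paired (paper - electronic) differences.

    Equivalence is declared only if BOTH one-sided nulls are rejected:
      H0a: mean difference <= low   vs  Ha: mean difference > low
      H0b: mean difference >= high  vs  Ha: mean difference < high
    A normal approximation is used here for illustration only.
    """
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)
    z_low = (mean(diffs) - low) / se
    z_high = (mean(diffs) - high) / se
    norm_cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    p_low = 1 - norm_cdf(z_low)   # p-value for the lower-bound test
    p_high = norm_cdf(z_high)     # p-value for the upper-bound test
    return max(p_low, p_high) < alpha

# Paired paper-minus-electronic score differences (illustrative data),
# tested against assumed equivalence bounds of +/- 0.5 points
diffs = [0.1, -0.1, 0.05, -0.05, 0.0, 0.1, -0.1, 0.02, -0.02, 0.0]
print(tost_equivalence(diffs, low=-0.5, high=0.5))  # True
```

The design choice here is that TOST controls the Type I error of falsely claiming equivalence: both one-sided tests must reject, so a wide or off-center confidence interval fails the procedure.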

A CRO Can Help with your eCOA Strategy

CROs partner with sponsor companies to develop and execute their product development strategies.  In some cases, this involves implementing clinical outcome measures into a development program and then facilitating the interactions between the company and regulatory authorities to ensure adequate qualification of the COA prior to marketing application submission.  Whether or not you choose to engage a CRO in your development plan, consider seeking outside consultation from the experts prior to establishing your eCOA strategy to give you and your company the best chance of success.  

CROs Can Help:

  • Determine endpoints where eCOA data is appropriate
  • Determine the cost/benefit of electronic vs paper data capture
  • Determine the best mode of electronic data capture
  • Recommend eCOA vendors when appropriate
  • Perform equivalence analysis
  • Facilitate discussions with regulatory authorities
  • Manage the entire process of eCOA implementation

Webinar: ePRO and Smart Devices

Craftsmanship in Clinical Trial Study Design

Posted by Brook White on Wed, Jul 06, 2016 @ 02:55 PM
Share:

Ryan Bailey, MA is a Senior Clinical Researcher at Rho.  He has over 10 years of experience conducting multicenter asthma research studies, including the Inner City Asthma Consortium (ICAC) and the Community Healthcare for Asthma Management and Prevention of Symptoms (CHAMPS) project. Ryan also coordinates Rho’s Center for Applied Data Visualization, which develops novel data visualizations and statistical graphics for use in clinical trials.

Shann Williams has 10 years of experience managing clinical trials. She is a Sr. Director of Operations and the program director of the statistical and clinical coordinating center of the Transplantation Group for the division-wide consolidated coordinating center sponsored by the National Institute of Allergy and Infectious Disease (NIAID).  In addition, Shann serves as Rho's project management operational service leader, an internal expert sharing project management best practices, processes and training.

My dad is an expert carpenter and handyman. Growing up, I spent hours watching him work - furniture, flooring, painting, plumbing, roofs, siding, decks - you name it, he did it. I learned a lot from watching him and helping him, but I never accomplished his level of expertise and proficiency. It is not for lack of knowledge or ability; rather, it is a lack of practice and experience. I'm capable. My father is masterful.

Clinical trials are not so different from construction and craftsmanship. To be a successful clinical researcher, you need to coalesce expertise across a variety of domains - statistics, data management, project management, clinical operations, product safety, regulatory, medical writing - as you design, prepare, execute, and troubleshoot throughout the trial. The CRO industry exists because we provide specialty expertise in these areas, and many pharmaceutical companies are glad to have a trusted partner to manage various aspects of this work.

Yet, regardless of which CRO services pharmaceutical companies seek out, one task they have been reluctant to entrust to their CRO partners is clinical trial design. According to a recent press release by Cutting Edge Information, as recently as 2014, no Top 50 pharmaceutical or medical device team surveyed reported sharing clinical trial design responsibilities with CROs.

This is not especially surprising. With their product on the line, pharmaceutical companies have a keen interest in retaining careful control over the study design. It is also a matter of practicality. Before seeking out a CRO to support management of your trial, it helps to have a well-constructed plan of how the trial should be executed. However, Cutting Edge Information reports that this trend is likely to change dramatically in coming years. By 2020, over 50% of companies they surveyed plan to share trial design responsibilities with CROs. Why the change?

In part, it's due to the need for craftsmanship. When it comes to clinical trials, CROs offer a level of end-to-end proficiency built on decades of extensive trial management experience and specialization. When your job is conducting hundreds of trials for a wide range of clients and diverse therapeutic areas, you naturally achieve a high degree of expertise: a deep knowledge base, valuable foresight, honed skills, improved efficiencies, and the ability to operate deftly in a complex and highly regulated environment. As the ones most often implementing the trials, overseeing the day-to-day project operations, and conducting analyses, CROs are in the best position to identify the strengths and weaknesses of clinical trial designs.

With the pace of new drug and device development lagging, costs increasing, and pressure to get efficacious products to market building, clinical trial designs have come under growing scrutiny. Bad blueprints lead to less-than-optimal functionality or worse: a complete do-over. In the same way, even simple trial design errors will lead to less-than-optimal results. The need for a do-over can be cost-prohibitive or just plain disastrous.

By incorporating CROs in the design process, pharmaceutical companies will foster closer partnerships, reduce costs by leveraging efficiencies, benefit from the extensive experience CROs have to offer, and craft more effective trials. Independently, pharmaceutical companies and CROs are capable. Together, we are masterful.

Free Webinar: Protocol Design