Rho Knows Clinical Research Services

What We Learned at PhUSE US Connect

Posted by Brook White on Tue, Jun 12, 2018 @ 09:40 AM

Ryan Bailey, MA, is a Senior Clinical Researcher at Rho. He has over 10 years of experience conducting multicenter asthma research studies, including the Inner City Asthma Consortium (ICAC) and the Community Healthcare for Asthma Management and Prevention of Symptoms (CHAMPS) project. Ryan also coordinates Rho's Center for Applied Data Visualization, which develops novel data visualizations and statistical graphics for use in clinical trials.

Last week, PhUSE hosted its first ever US Connect conference in Raleigh, NC. Founded in Europe in 2004, the independent, non-profit Pharmaceutical Users Software Exchange has been a rapidly growing presence and influence in the field of clinical data science. While PhUSE routinely holds smaller events in the US, including their popular Computational Science Symposia and Single Day Events, this was the first time they had held a large multi-day conference with multiple work streams outside of Europe. The three-day event attracted over 580 data scientists, biostatisticians, statistical programmers, and IT professionals from across the US and around the world to focus on the theme of "Transformative Current and Emerging Best Practices."

After three days immersed in data science, we wanted to provide a round-up of some of the main themes of the conference and trends for our industry.

Emerging Technologies are already Redefining our Industry

It can be hard to distinguish hype from reality when it comes to emerging technologies like big data, artificial intelligence, machine learning, and blockchain.  Those buzzwords made their way into many presentations throughout the conference, but there was more substance than I expected.  It is clear that many players in our industry (FDA included) are actively exploring ways to scale up their capabilities to wrangle massive data sets, rely on machines to automate long-standing data processing, formatting, and cleaning processes, and use distributed database technologies like blockchain to keep data secure, private, and personalized.  These technologies are not just reshaping other sectors like finance, retail, and transportation; they are well on their way to disrupting and radically changing aspects of clinical research.

The FDA is Leading the Way

Our industry has a reputation for being slow to evolve, and we sometimes use the FDA as our scapegoat. Regulations take a long time to develop, formalize, and finalize, and we tend to be reluctant to move faster than regulations. However, for those who think the FDA is lagging behind in technological innovation and data science, US Connect was an eye-opener. With 30 delegates at the conference and 16 presentations, the agency had a strong and highly visible presence.

Moreover, the presentations by the FDA were often the most innovative and forward-thinking. Agency presenters provided insight into how the offices of Computational Science and Biomedical Informatics are applying data science to aid in reviewing submissions for data integrity and quality, detecting data and analysis errors, and setting thresholds for technical rejection of study data. In one presentation, the FDA demonstrated its Real-time Application for Portable Interactive Devices (RAPID) to show how the agency is able to track key safety and outcomes data in real time amid the often chaotic and frantic environment of a viral outbreak. RAPID is an impressive feat of technical engineering, managing to acquire massive amounts of unstructured symptom data from multiple device types in real time, process them in the cloud, and perform powerful analytics for "rapid" decision making. It is the type of ambitious technically advanced project you expect to see coming out of Silicon Valley, not Silver Spring, MD.

It was clear that the FDA is striving to be at the forefront of bioinformatics and data science, and in turn, they are raising expectations for everyone else in the industry.

The Future of Development is "Multi-lingual"  

A common theme across all the tracks was the need to evolve beyond narrowly focused specialization in our jobs. Whereas 10-15 years ago developing deep expertise in one functional area or one tool was a good way to distinguish yourself as a leader and bring key value to your organization, a similar approach may hinder your career in the evolving clinical research space. Instead, many presenters advocated that the data scientist of the future specialize in a few different tools and have broad domain knowledge. As keynote speaker Ian Khan put it, we need to find a way to be both specialists and generalists at the same time. Nowhere was this more evident than in discussions around which programming languages will dominate our industry in the years to come.

While SAS remains the go-to tool for statistical programming and biostatistics, the general consensus is that knowing SAS alone will not be adequate in the years to come. The languages getting the most attention for data science are R and Python. While we heard plenty of debate about which one will emerge as the more prominent, there was broad agreement that the ideal scenario is to know at least one of R or Python in addition to SAS.

We Need to Break Down Silos and Improve our Teams

On a similar note, many presenters advocated for rethinking our traditional siloed approach to functional teams. As one vice president of a major Pharma company put it, "we have too much separation in our work - the knowledge is here, but there's no crosstalk." Rather than passing deliverables between distinct departments with minimal communication, clinical data science requires taking a collaborative multi-functional approach. The problems we face can no longer be parsed out and solved in isolation. As a multi-discipline field, data science necessarily requires getting diverse stakeholders in the room and working on problems together.

As for how to achieve this collaboration, Dr. Michael Rappa delivered an excellent plenary session on how to operate highly productive data science teams based on his experience directing the Institute for Advanced Analytics at North Carolina State University. His advice bucks the traditional notion that you solve a problem by selecting the most experienced subject matter experts and putting them in a room together. Instead, he demonstrated how artfully crafted teams that value leadership skills and motivation over expertise alone can achieve incredibly sophisticated and innovative output.

Change Management is an Essential Need

Finally, multiple sessions addressed the growing need for change management skills. As the aforementioned emerging technologies force us to acquire new knowledge and skills and adapt to a changing landscape, employees will need help to deftly navigate change. When asked what skills are most important for managers to develop, a VP from a large drug manufacturer put it succinctly, "our leaders need to get really good at change management."

In summary, PhUSE US Connect is helping our industry look to the future, especially when it comes to clinical data science, but the future may be closer than we think. Data science is not merely an analytical discipline to be incorporated into our existing work; it is going to fundamentally alter how we operate and what we achieve in our trials. The question for industry is whether we're paying attention and pushing ourselves to evolve in step to meet those new demands.

Webinar: Understanding the FDA Guidance on Data Standards

Could Your Drug Development Program Benefit from an NDA/BLA/PMA Gap Analysis?

Posted by Brook White on Wed, Aug 23, 2017 @ 09:37 AM

David Shoemaker, PhD, Senior Vice President R&D, has extensive experience in the preparation and filing of all types of regulatory submissions including primary responsibility for four BLAs and three NDAs.  He has managed or contributed to more than two dozen NDAs, BLAs, and MAAs and has moderated dozens of regulatory authority meetings.

Jack Modell, MD, Vice President and Senior Medical Officer, is a board-certified psychiatrist with 30 years of experience in clinical research, teaching, and patient care including 10 years of experience in clinical drug development (phases 2 through 4) and successful NDA filings. Dr. Modell is a key opinion leader nationally known for leading the first successful development of preventative pharmacotherapy for the depressive episodes of seasonal affective disorder.

Scott Burian, PhD, Senior Research Scientist, has contributed to the development of a diverse range of small molecule, biologic, and nanoparticle-based products.  He has participated in numerous FDA interactions, including pre-IND meetings, Type A meetings, and Advisory Committee meetings. He is fully versed in eCTD format and has authored a variety of CMC submissions, including numerous pre-IND meeting packages, INDs, NDAs, and IMPDs.

Here at Rho, we’ve helped many companies with their marketing application submissions. In fact, in the past six years, we’ve been a key service provider on 14 submissions, provided biostatistics support for 30 submissions, and prepared over 20 Integrated Summary of Safety (ISS) and Integrated Summary of Efficacy (ISE) SAPs. Over the course of working on these submissions, one common hurdle we see is that Sponsor companies often enter this stage without a strong understanding of what data they have and how that maps to a viable approval pathway.

Whether you plan to file a new drug application (NDA), a biologics license application (BLA), or a premarket approval application (PMA) with the FDA, or a marketing authorization application (MAA) with the European Medicines Agency, you’ll need an in-depth understanding of how the data you have from your clinical studies, nonclinical studies, and Chemistry, Manufacturing, and Controls (CMC)/Quality development map to the requirements of the application. These requirements can be specific to the therapeutic area or regulatory authority, and they are continually changing as science advances.

Discovering you don’t have all the data you need as you begin preparing your marketing application can lead to costly time delays. What can be done? We recommend undertaking a gap analysis following proof-of-concept in Phase II. This timing allows you to design your adequate and well-controlled studies to attain all necessary clinical data. Performing the gap analysis at this stage of development will also provide enough time to conduct additional nonclinical studies or CMC development that may be needed to support the application.
You need a cross-functional team of medical, regulatory, clinical, statistical, CMC, and toxicology experts with experience getting a product to market, ideally in the therapeutic area of interest. Many small to mid-size companies don’t have all of this expertise in-house, so the team will need to bring in outside support in the form of consultants or a contract research organization (CRO) that has this expertise.

A gap analysis starts with a detailed look at the existing data and regulatory communications. What is the format of the data? Anything you plan to submit will need to be in CDISC format, so if you need data from legacy studies initiated after December 2016, those data must be converted to CDISC format. Next, look at the label claims you plan to make. Do you have (or have a plan to collect) all the data needed to support those claims? This can be difficult to determine.
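To make the conversion work concrete, here is a minimal sketch in Python of the variable-renaming step of a legacy-to-CDISC conversion. The legacy names on the left are invented for illustration; real conversions also involve controlled terminology mapping, derived variables, and define.xml metadata, none of which is shown here.

```python
# Hypothetical legacy-to-SDTM mapping for a demographics (DM) domain.
# The legacy names are invented; the SDTM names on the right follow
# the SDTM Implementation Guide.
LEGACY_TO_DM = {
    "pt_id": "USUBJID",     # unique subject identifier
    "birth_dt": "BRTHDTC",  # date of birth (ISO 8601 in SDTM)
    "gender": "SEX",
    "ethnic": "ETHNIC",
}

def to_dm(record):
    """Rename one legacy demographics record to SDTM DM variable names.

    Unmapped variables are simply upper-cased as a placeholder; a real
    conversion would map every variable deliberately.
    """
    return {LEGACY_TO_DM.get(k, k.upper()): v for k, v in record.items()}
```

For example, `to_dm({"pt_id": "001-001", "gender": "F"})` yields `{"USUBJID": "001-001", "SEX": "F"}`. The renaming is the easy part; assessing whether the collected values can support the planned label claims is where the expert review comes in.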

Once you’ve determined the data you have and the data you’ll need, create a map that clearly identifies the deficiencies in your database. You may find that there are very few gaps and that the data you’ve collected and will collect in your pivotal studies will adequately support your marketing application. You may also realize that you don’t need all of the data from your legacy studies, which can save you time and money in CDISC conversion costs. Conversely, you may identify significant gaps in your database that require additional studies. That is still a good outcome: by performing the gap analysis, you have clearly identified what needs to be completed, and you will have sufficient time to gather the additional data. This could mean simply completing your Phase 3 studies, or it could mean performing additional clinical studies (e.g., food effect studies), nonclinical studies, or CMC development work, ensuring that upon completion of your Phase 3 studies you have a clear path to your marketing application submission.

So, is the additional time and expense of conducting a gap analysis worth it? Rho believes the answer is most definitely yes. However, we typically recommend waiting until proof-of-concept has been demonstrated to conduct this analysis. At that point, you should have convinced yourself that you have a viable product and have a general idea of its characteristics and potential value to patients. An experienced team of medical, nonclinical, CMC, regulatory, and statistical experts can conduct a gap analysis relatively quickly and at relatively limited cost. When compared to a significant delay between the end of Phase 3 and submission, or an unsuccessful marketing application submission, it is almost certainly worth it.

Download: Marketing Application Planning Tool

Protocol Design and Development Webinar: Follow-up Q&A

Posted by Brook White on Thu, Feb 04, 2016 @ 11:07 AM

Thank you to everyone who attended our recent webinar on protocol design and development.  During the webinar, we weren't able to get to all of the questions.  Below, Dr. Shoemaker and Dr. Kesler have answered the remainder of the questions.

If you didn't have an opportunity to attend the webinar, it is now available on demand.  


Why do you think the adoption of the PRM has been so long in coming?

The pharmaceutical industry is notoriously slow to adopt novel techniques, both because of its siloed structure and because the current protocol development process has been in place for decades. Methods will not change until protocol authors understand the concept of CDISC and the importance of generating consistent data across their program. That will only happen if protocol authors are responsible for writing marketing applications.

What are the major consequences of redundancy in the protocol?

Inefficiency, due to the need for redundant editing to ensure all instances are replaced, and ultimately the cost of amendments if the redundant information is not edited correctly.

How long does it take to properly develop a clinical protocol?

The time needed to develop a novel protocol for a new indication with a new molecular entity depends on coordinating the time of all the people whose input is required. Depending on people's priorities and availability, it typically takes between one and two months.

If I am developing my drug as an add-on to an approved drug, why not conduct Phase I in patients (not healthy volunteers) taking stable doses of the approved drug? I want to know the safety of a range of doses of study drug when so administered. What are the pros and cons?

Pros are that you save time and money with this approach. Cons are that you won't know if a safety event is due to your product, the approved product, or the combination. You also won't know whether the patients' compromised condition contributed in any way to the safety event.

It is said that no amount of good monitoring can fix a bad protocol. Do you have an example of such a situation and what should the monitoring team look out for to avoid such a situation?

By the time the monitoring team starts reviewing the data at the site or in house, it is too late; the die has already been cast by the design of the clinical study. Monitors should endeavor to participate in protocol design to help avoid mistakes made at this stage. Otherwise, they can only recommend amending the protocol if they see that the data being generated are not answering the intended objectives of the study.

Is it advisable to write into the protocol the duration of acceptable periods during which study drug may be suspended without automatically discontinuing the subject?

If your study drug is planned to be titrated within subject (e.g., some hypertension drugs), then it is advisable to specify not only a duration of suspension but also dose escalation/de-escalation processes. For other situations where study drug is suspended due to concomitant events, like hospitalization, it is also advisable to have windows for the duration of acceptable suspension. If you don't have expected reasons for suspension and don't expect it to happen often, then it is probably a level of detail you don't need.

Do you have any template?

Yes, we have an internal protocol template that we provide to all of our clients developing protocols.

Please remind us what data we need to provide for you to determine a sample size for a clinical trial.

It depends on the type of primary outcome. If it is dichotomous, you need to provide the expected percent responding in both the active and control arms. If it is continuous, you'll need to provide the expected mean and variance (or standard deviation) for each group, or the expected difference in means. Other types of outcomes (e.g., survival, multiple categories) require additional information. All studies need a Type I error level (alpha) specified, as well as the desired power of the study. Estimates of the dropout rate are also needed for most studies.
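To show how these inputs drive the calculation, here is a sketch using the standard normal-approximation sample size formulas. The function names are ours, not a standard library API, and a statistician would typically refine these with exact or t-distribution methods where appropriate.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_two_proportions(p1, p2, alpha=0.05, power=0.80, dropout=0.0):
    """Per-group sample size for comparing two proportions
    (two-sided test, normal approximation)."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)          # critical value for two-sided alpha
    z_b = z(power)                  # z corresponding to 1 - beta
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return ceil(n / (1 - dropout))  # inflate for expected dropout

def n_two_means(delta, sd, alpha=0.05, power=0.80, dropout=0.0):
    """Per-group sample size for comparing two means (two-sided z-test),
    given the expected difference in means and a common SD."""
    z = NormalDist().inv_cdf
    n = 2 * (sd * (z(1 - alpha / 2) + z(power)) / delta) ** 2
    return ceil(n / (1 - dropout))
```

For example, detecting a 60% vs. 40% response rate with 80% power at alpha = 0.05 works out to 97 subjects per group under this approximation, and an expected dropout rate inflates that figure further.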

We will be conducting another webinar on Thursday March 17th at 1 PM ET on Clinical Research Statistics for Non-statisticians.  We will go into more depth about sample size calculations during that webinar.


Is this protocol process impacted if/when combination products are involved? Combo is defined as drug/sensor-based, or subcutaneous drug-eluting solutions.

Not really. Obviously, you have to understand the combination product and its properties to the same extent that you understand your drug's nonclinical and manufacturing properties.

When is unblinded medical review warranted in Phase 2 studies?

There is a new guidance from FDA, as of December 2015, advocating the use of a Safety Assessment Committee to review unblinded data from the totality of data on your product; this should be implemented with the advent of controlled studies in Phase 2.

When can a multiple repeat-dose safety study be done with parallel dosing of multiple dose groups?

Never. Parallel dosing of multiple dose groups can be done for efficacy comparisons after safety has been demonstrated.

What is the proper endpoint for an oncology trial now? Is it overall response, tumor shrinkage, survival, or quality of life?

It depends on the type of tumor being studied, but overall response is the preferred surrogate clinical endpoint in most cases for accelerated approval, with follow-up measurement of survival used to validate this surrogate endpoint. Quality of life is usually monitored with one of several patient-reported outcomes (PROs) as a secondary clinical endpoint.

You mentioned that CDISC was advising avoidance of the use of "Day 0" terminology to describe the intervention date and that this would be required after a certain date. Can you please restate when this goes into effect?

Trials started after December 2016.

Do you know of any company which can offer to write a protocol for their product?

Rho provides protocol design and development services. You can learn more on our website or by contacting us.

Check out our other on-demand and upcoming webinars here.

David Shoemaker, Ph.D.
Senior Vice President R&D

Dr. David Shoemaker has more than 25 years of experience in research and pharmaceutical development.  He has served as a Program Leader or Advisor for multi-disciplinary program teams and has been involved with products at all stages of the development process. Dr. Shoemaker has managed the regulatory strategy for programs involving multiple therapeutic areas, including hematology, oncology, cardiology, pulmonology, infectious diseases, genetic enzyme deficiencies, antitoxins, and anti-bioterrorism agents.  He has extensive experience in the preparation and filing of all types of regulatory submissions including primary responsibility for four BLAs and three NDAs.  He has managed or contributed to more than two dozen NDAs, BLAs, and MAAs.  Dr. Shoemaker has moderated dozens of regulatory authority meetings for all stages of development.  His primary areas of expertise include clinical study design and regulatory strategy for development of novel drug and biological products.

Karen Kesler, Ph.D.
Assistant Vice President Operations

Dr. Karen Kesler earned both a Master’s and a Doctoral degree in Biostatistics from the University of North Carolina at Chapel Hill and has over 20 years of experience in the industry.  Dr. Kesler currently serves as the Principal Investigator of the Statistics and Data Management Center for an NIH-sponsored coordinating center researching asthma, allergies, autoimmune disorders, and solid organ transplant.  Dr. Kesler is deeply involved in researching more efficient Phase II and III trials and has led many adaptive studies, including sample size recalculations, pruning designs, Bayesian dose escalation studies, and adaptive randomizations.  She has given numerous professional presentations and has over 25 publications and manuscripts to her credit.

Late-Stage Biostatistics Submission Services

Posted by Brook White on Tue, Apr 21, 2015 @ 09:52 AM

Rob Woolson, Chief Strategist-Biostatistics and Data Standards for Regulatory Submissions, has led SDTM/ADaM dataset conversion projects in multiple therapeutic areas. He has held a leadership role in six CDISC-compliant regulatory submissions, having guided the creation of ISS/ISE statistical analysis plans; integrated analysis dataset design and production; integrated display design and production; and submission-related documentation development.

Bridging the Gap between Technical/Data Standards and Regulatory/Medical Writing in NDA Submissions

Many CROs currently offer two categories of services to sponsor companies preparing for an NDA or marketing application submission. The first category contains technical data standards services. These services include converting existing clinical datasets to CDISC compliant formats and generating the datasets for the integrated summary of safety (ISS) and integrated summary of efficacy (ISE). The second category includes regulatory and medical writing services associated with the marketing application submission.

There are a number of activities that need to take place that fall in between these two categories to ensure a successful marketing application submission. While some CROs offer these services on an ad-hoc basis, and some sponsors have the internal expertise and capacity to fill these gaps, there remains an unmet need to define and consistently deliver these services. We see these late-stage biostatistics submission services falling into two sub-categories—marketing application services and regulatory interaction support.

Marketing applications services include activities like analysis plan development; data standards plan development; supporting regulatory and medical writing during the submission process; pooled data assessment; and analyses to support key messages. For example, an analysis development plan would help a sponsor devise an analysis strategy to support the key messages in their submission, such as:

  • Defining the safety parameters of interest based on FDA expectations, a priori special interests/concerns, and safety concerns identified during data review.
  • Determining an analysis and presentation approach that gets at the true value of parameters of interest and allows for meaningful comparisons between the active and placebo/comparator groups.
  • Developing a strategy for combining and pooling data.  (For example, while you have a regulatory obligation to present all safety data, there is no requirement to pool all studies for analysis into a single group.)
  • Determining which subgroups should be explored for consistency of effect. 
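The pooling point above can be sketched in a few lines. This hypothetical Python snippet assigns each study to an analysis pool and tallies adverse event records per pool rather than lumping every study into a single group; the study IDs and pool labels are invented for illustration.

```python
# Hypothetical pooling scheme: the placebo-controlled studies are pooled
# for comparison, while the open-label extension is summarized separately.
POOLS = {
    "STUDY-101": "Pool A: placebo-controlled",
    "STUDY-102": "Pool A: placebo-controlled",
    "STUDY-201": "Open-label extension",
}

def ae_counts_by_pool(adverse_events):
    """Count adverse event records within each analysis pool."""
    counts = {}
    for ae in adverse_events:
        pool = POOLS[ae["studyid"]]
        counts[pool] = counts.get(pool, 0) + 1
    return counts
```

The design choice this illustrates is that the pooling scheme is an analytic decision documented in the SAP, not a property of the data: all safety data are still presented, but comparisons are made within pools that are scientifically sensible.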

Regulatory interaction support includes providing biostatistics input and analysis for both face-to-face meetings and written communications with FDA. In the period leading up to your submission and following the submission, it is likely you will have increasing interaction with FDA. These services prepare you to have the biostatistics and data standards expertise on-hand that you’ll need to effectively respond to FDA communications, and to recommend when you should initiate conversations with FDA over your data.

So, as you approach the end of your development program, make sure you have the statistical support in place that you need.

Watch Now: Understanding the New FDA Guidance on Data Standards Webinar

Are Your Data Submission Ready? 5 Questions You Should Be Asking

Posted by Brook White on Thu, Apr 09, 2015 @ 09:41 AM
Rob Woolson, Chief Strategist-Biostatistics and Data Standards for Regulatory Submissions, has led SDTM/ADaM dataset conversion projects in multiple therapeutic areas. He has held a leadership role in six CDISC-compliant regulatory submissions, having guided the creation of ISS/ISE statistical analysis plans; integrated analysis dataset design and production; integrated display design and production; and submission-related documentation development.

In December 2014, the Food and Drug Administration released two final guidance documents and one technical conformance guide related to the content and format of electronic submissions.  These documents require the electronic submission of standardized nonclinical and clinical study data for nearly all submissions to CDER and CBER, including INDs, NDAs, ANDAs, and BLAs, by 2016.  In other words, study data must comply with a number of standards enumerated by the FDA, most notably CDISC data standards (for example, SDTM and ADaM).  These guidance documents are binding: submissions that do not follow technical data conformance standards will not be filed or received by the FDA, meaning that if you don't comply, you could receive a refuse-to-file letter.  So, what do you need to do?  Ask yourself these five questions.

How much of my data currently conforms to these standards?

It may be the case that you’ve been working with CDISC standards from the start of development: all of your clinical datasets are formatted to SDTM and ADaM, and you plan to create your Integrated Summary of Safety (ISS) and Integrated Summary of Efficacy (ISE) datasets to conform as well.  It may also be the case that you know none of your existing data conforms and that extensive remediation will be necessary.  In our experience, however, many situations are more complex.  For example, some late-stage development work may conform to CDISC standards, with much of the earlier development work following a legacy standard (or no standard at all).  The first step is to figure out what you have.

For the datasets that don’t conform, do they all need to be converted to CDISC standards?

It is likely that if you are preparing for a submission now, at least some of your datasets will need to conform to CDISC standards.  However, that doesn’t necessarily mean that all of your legacy datasets will need to be converted.  By preparing a Data Standards Plan and submitting it to the FDA, you may be able to limit the amount of conversion and remediation work that is done, saving both time and money.  Getting feedback from the FDA at this stage is critical, so that there won’t be any surprises when you file.

Do I have the necessary expertise and capacity in house to make these assessments, perform needed remediation, and get the data submitted?

Moving quickly at this stage in development is critical.  Everyone wants to get their product to market as soon as possible, so once the final studies are finished, you want the data submission work to be fast and accurate.  If you don’t have the internal capacity to move at the rate you want, it may make sense to bring in an outside vendor.  Data submissions are also a fairly specialized area, and many companies may not have the internal expertise required; this is another reason to consider an outside vendor.

If I need a vendor to help with these activities, what should I be looking for?

It’s likely that you’ve used one or more CROs at some stage of development, but the CRO that performed your clinical studies may or may not be the best choice when it comes to getting your data ready for submission.  Here are some things to consider:

  • Experience
    • How many protocols have they mapped to SDTM or ADaM? Were any of them for products in the same or similar indication as yours?
    • How many marketing applications have they filed in the last five years?
    • What is their technical acceptance rate? Have any of their submissions resulted in a refusal to file?
    • What level of experience do the individuals assigned to your project have? You should expect a greater level of experience for statisticians and data standards staff working on a marketing application submission than you would need for working on a typical single study.
    • Does the vendor have experience communicating directly with the FDA on data standards plans?
    • Which standards do they have experience with? At a minimum, we recommend:
      • CDASH
      • SDTM
      • ADaM
      • Define-XML
      • CDISC controlled terminology
    • Which version of the standards have they used?  Are they familiar with the versions you are using and plan to use?  
  • Tools and Technology
    • What tools are they using to assess compliance? Are they the same as those in use at FDA?
    • Do they have tools for metadata mapping, creating define.xml files, and performing QC on clinical databases?
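As a minimal illustration of what such compliance tools check at the simplest structural level, the sketch below verifies that a dataset contains a handful of SDTM-required variables for its domain. The required-variable lists here are abbreviated for illustration only; production validators apply the full SDTM Implementation Guide rules plus controlled terminology and cross-domain consistency checks.

```python
# Abbreviated required-variable sets for two SDTM domains (illustrative
# subset only; the SDTM Implementation Guide defines the full lists).
REQUIRED = {
    "DM": {"STUDYID", "DOMAIN", "USUBJID", "SUBJID"},
    "AE": {"STUDYID", "DOMAIN", "USUBJID", "AESEQ", "AETERM"},
}

def missing_variables(domain, columns):
    """Return the required SDTM variables absent from a dataset's columns."""
    return sorted(REQUIRED[domain] - set(columns))
```

For example, an AE dataset submitted without a sequence variable would be flagged: `missing_variables("AE", ["STUDYID", "DOMAIN", "USUBJID", "AETERM"])` returns the missing `AESEQ`.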

What Types of Services Can I Expect a Vendor to Provide?

A high quality vendor should be able to look at your situation and make recommendations about the services you need.  Typical services include:

  • Assessing and reporting on the current state of clinical databases relative to external and FDA standards and requirements. If you don’t already have one, they should be able to provide a plan for preparing databases for submission.
  • Performing any needed data remediation.
  • Preparing and submitting a Data Standards Plan to the FDA.
  • Executing a test data transfer to obtain FDA reviewer feedback.

Be wary of vendors that provide a set package of services.  Each submission is unique, and these packages may include costly and time consuming services you don’t really need.


Addressing Diversity in Clinical Trials

Posted by Brook White on Mon, Nov 24, 2014 @ 09:55 AM

Shann Williams is the Project Director of the statistical and clinical coordinating center for the Clinical Trials in Transplantation (CTOT) program sponsored by the National Institute of Allergy and Infectious Diseases (NIAID).  In addition, Shann serves as Rho's project management operational service leader, an internal expert overseeing project management processes and training.

Michelle Walter, AVP of Operations, has considerable experience directing federally funded respiratory and allergy clinical research. Her background as a project director of multi-protocol, multi-site clinical programs has been a considerable asset over her fifteen-year tenure at Rho. For the last eight years, she has been the project director for the Statistical and Clinical Coordinating Center for the Inner City Asthma Consortium, an NIAID-funded network of clinical and mechanistic sites.

A recently released Food and Drug Administration (FDA) report encourages enrollment of more women and minorities in clinical trials. The report identifies three priorities to address this need: participation, data quality, and transparency. Clinical research organizations (CROs) can help guide sponsors in all of these areas to ensure that their drug and device applications not only meet all current requirements but are also at the forefront of the direction of the FDA and of biomedical research as a whole.

Overcoming Barriers to Participation:

Finding solutions to barriers that limit participation in clinical trials by demographic subpopulations, particularly those populations that are underserved and underrepresented, is key to diversifying enrollment.

There are several ways sponsors and sites can overcome these barriers. By using available census data, mapping technologies, and peer-reviewed publications to understand the demographic variability of the disease population by region, sponsors can create site-specific enrollment goals and strategies that ensure the percentages of subjects targeted and enrolled accurately reflect the available subpopulation at each site. Similarly, selecting new sites whose community make-up will help achieve enrollment goals can compensate if other sites encounter limitations.

Addressing the needs of non-native English speakers in the United States is immensely important to encouraging diversity. Sponsors can develop appropriate recruitment materials using the subpopulation’s native language and ensure these materials are accessible to the population of interest. Translation services and on-site translators also aid in ensuring informed consent forms are easily understood and thoroughly explained. Choosing translators with the same demographic background as subjects under study helps dispel mistrust and miscommunication. It is also important to administer informed consent forms in a format that is acceptable in that subpopulation. For example, providing adequate time and resources for all applicable family members to review information and offer guidance to the participant about whether or not they should consent may increase enrollment and help reduce drop out during the trial.

Identifying and addressing the motivations, cultural preferences, and common barriers to participation for various subpopulations may increase recruitment and retention. Providing sites with solutions to these barriers – including tools and training to address literacy problems, transportation or reimbursement for transportation, childcare options, flexible visit schedules to accommodate variable work schedules, and reimbursement options other than checks – helps to improve recruitment and retention and diversify enrollment.

Ensuring Data Quality:

As noted in the FDA report, data standards are integral to improving the completeness and quality of information on demographic subgroups. Identify a CRO partner that has demonstrated leadership in the development and application of data standards. For example, it is important to have subject matter experts who have worked closely with the FDA and its reviewers on numerous clinical development programs, with a track record of successful submissions of several marketing applications with extensive demographic subgroup analyses.

Free Webinar on Cost Effective Data Standards 



Transparency in Reporting:

Data transparency must be a seamless next step in the clinical trials process, both for submitting reports to the FDA and for informing trial participants of the outcomes. Although the FDA report focuses specifically on data transparency in reporting to the FDA and other data sharing venues, providing study results directly to trial subjects and their families helps to reinforce that these subjects have personally contributed to research in a disease area that affects them directly. Results can be presented in the form of newsletters, handouts, and tailored materials specific to the subpopulation of interest. This type of transparency further strengthens positive feelings about biomedical research as a whole and helps combat cultural and historical mistrust.


3 Key Questions When Developing the Integrated Summary of Safety (ISS)

Posted by Brook White on Tue, Jan 07, 2014 @ 02:01 PM

Rob Woolson is a senior biostatistician with 12 years’ experience in the analysis of complex data. He has conducted statistical analyses in all phases of drug development (Phase I through IV, including NDAs) and has led SDTM/ADaM dataset conversion projects in multiple therapeutic areas.  He has held a leadership role in six CDISC-compliant FDA submissions, having guided the creation of ISS/ISE statistical analysis plans; integrated analysis dataset design and production; integrated display design and production; and submission-related documentation development.

A new drug application (NDA) covers information about a product from inception through clinical trials.  The integrated summary of safety (ISS) is a section of the NDA that provides comprehensive safety information collected throughout the development program.  The goal of the ISS is to characterize the overall safety profile of the drug and to identify risks that should be included on the product label.  This article discusses three key questions to address as a part of your ISS analysis plan.

(1) What are the safety parameters of interest?

Safety parameters of interest typically include those specified in FDA guidance, those of a priori special interest or concern for the program or compound, and those identified during data review. Some examples of safety parameters are exposure, concomitant medications, deaths, adverse experiences (occurrence, relatedness, severity, seriousness, duration, timing, etc.), laboratory measures, and vital signs. A good test of whether one has selected the right parameters is to ask whether the summary of the estimates of these parameters sufficiently describes the overall drug safety profile.

(2) How does one present and analyze data to convey key safety messages and describe the overall safety profile?

Once safety parameters have been selected, one must decide the best way to present them. Some guiding principles we’ve come up with are:

  • The presentation should help a reviewer get at the true value of the parameter of interest.

  • The parameter should be presented with a high degree of confidence (maximum precision, minimum bias).

  • The presentation should permit the reviewer to make meaningful comparisons between active and placebo. An absolute value means little unless it can be compared to something.

There are a number of ways we can characterize these parameters, including proportions (i.e., a crude rate), incidence rate (per unit time), total incidence rate (events per unit time, which may be useful where there are multiple events per subject and different exposure times), time to event, and change from baseline.  One challenge frequently encountered in presenting safety data is that different trials have collected data differently, which leads to additional complexity in how data should be presented in the ISS.  For example, one may have to deal with several different follow-up times.  The solutions selected must convince a reviewer that they are being given an unbiased presentation of the data.  There are several methods that can be used for making between-group comparisons.  These methods include difference of proportions, ratio of proportions (mentioned in FDA guidance), difference in rates, ratio of rates, hazard ratio, survival curves, and difference in means.
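The distinction between a crude rate and a per-time incidence rate can be made concrete with a small sketch. The counts below are hypothetical, and this Python fragment is illustrative only; in practice these summaries would be produced in a validated statistical environment.

```python
# Hypothetical counts; illustrative only.
def crude_rate(events, n_subjects):
    """Proportion of subjects with at least one event (a crude rate)."""
    return events / n_subjects

def incidence_rate(events, person_years):
    """Events per unit of exposure time; useful when follow-up differs."""
    return events / person_years

p_active = crude_rate(24, 400)    # 0.06
p_placebo = crude_rate(12, 400)   # 0.03

risk_difference = p_active - p_placebo   # difference of proportions
risk_ratio = p_active / p_placebo        # ratio of proportions

# When exposure times differ between arms, per-time rates are more comparable:
r_active = incidence_rate(24, 350.0)   # events per person-year
r_placebo = incidence_rate(12, 390.0)
rate_ratio = r_active / r_placebo
```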

(3) Should safety data from all studies be pooled and, if so, how?

One of the first things to consider in characterizing the information is if and how data will be pooled. There is a regulatory obligation to present all safety data; however, there is no requirement that data from all studies be pooled.

It is important that whatever pooling strategy is taken, one is prepared to justify it to FDA reviewers and/or an advisory committee.  

  • A reason to pool data is that one may be able to provide more precise and more reliable estimates of safety parameters. Also, pooled data may allow conclusions to be drawn (e.g., about effects in subjects with comorbidities) that wouldn’t be seen by looking at the studies individually.

  • A reason not to pool data is that studies and the populations in those studies may be so different that it is difficult to make sensible comparisons. Additionally, analyses of pooled data can be time-consuming and expensive compared to summarizing and presenting the analyses that were performed as part of the individual studies. (Going too far down that path, however, may lead reviewers to believe the analyses are insufficient.)

In most cases, it will make sense to pool at least some data.  Some general principles for creating a pooling strategy:

  • Combine the data in the most valid way(s)
  • The safety message should drive the pooling strategy
  • Summaries should produce transparent results
  • No masking of safety signals
  • Estimates produced should be as unbiased as possible

Some factors to consider when looking at individual protocols to determine whether it makes sense to pool data from those protocols are:

  • Design similarity
  • Doses studied
  • Duration
  • Controlled/uncontrolled/choice of control
  • Region
  • Population

Keep in mind that differences in these factors introduce variation to the safety parameters of interest.
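A toy example (with made-up numbers) shows how those differences can distort a naive pooled comparison: if two studies differ in duration and allocation ratio, the active arm can have the lower event rate in every study yet the higher rate after crude pooling, a Simpson's-paradox effect. The Python sketch below is purely illustrative.

```python
# Made-up numbers for illustration only. Two studies with different
# durations and allocation ratios: within each, active has the LOWER rate.
studies = {
    # name: (active_events, active_n, placebo_events, placebo_n)
    "short, placebo-heavy study": (1, 100, 5, 400),
    "long, active-heavy study": (40, 400, 12, 100),
}

def rate(events, n):
    return events / n

for name, (a_ev, a_n, p_ev, p_n) in studies.items():
    assert rate(a_ev, a_n) < rate(p_ev, p_n), name  # active better in each study

# Naive pooling reverses the comparison:
ae = sum(v[0] for v in studies.values())
an = sum(v[1] for v in studies.values())
pe = sum(v[2] for v in studies.values())
pn = sum(v[3] for v in studies.values())
print(rate(ae, an), rate(pe, pn))  # 0.082 vs 0.034 -- active now looks worse
```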

In conclusion, it is important that the finished ISS tells FDA reviewers a clear and focused story about the drug’s safety profile.  Doing this effectively is necessary to move through the approval process efficiently.

Register for a free CDISC webinar

CDASH: Reduce Development Costs by Extending CDISC Standards to Clinical Data Management

Posted by Brook White on Tue, Dec 04, 2012 @ 09:42 AM

Jeff Abolafia-Rho CDISC ExpertThe following article was contributed by Jeff Abolafia, one of our resident CDISC experts. Jeff has more than 20 years of experience in clinical research and has successfully led multiple CTD/NDA submissions. He is the co-founder of the Research Triangle Park CDISC Users Group and a member of the CDISC ADaM and ADaM Metadata teams.

In recent years the FDA has clearly stated its preference for receiving both clinical and analysis data formatted in compliance with CDISC standards. This has been communicated through a series of guidance documents, correspondence with sponsors, and presentations at conferences. As a result, CDISC models have become the de facto standard for submitting data to the FDA.

Given the FDA’s preference for receiving CDISC data, many sponsors have begun to produce CDISC-compliant databases in order to meet FDA submission requirements. In the short term this has led to additional work and higher costs. However, when the standards are implemented properly, organizations have a tremendous opportunity for significant cost savings throughout product development.

As a CRO, Rho has had the opportunity to work with many sponsors on CDISC-related projects. Most of these sponsors have noted that producing CDISC-compliant deliverables has increased their costs. This has surprised many of them. Wasn’t producing standardized CDISC datasets supposed to reduce time and costs?

When it comes to implementing CDISC standards, perhaps sponsors are trying to solve the wrong problem. The problem that most sponsors are addressing is: how can we get the FDA what they want? Instead, we should be asking: how can implementing CDISC standards be part of a cost-effective product development strategy? The problem each organization chooses to tackle will determine its implementation strategy.

When the primary goal is meeting FDA requests, the focus tends to be on producing SDTM and ADaM databases and associated documentation. At this point in time, these are the CDISC related deliverables that the FDA has requested. Under this scenario most organizations choose one of the two following implementation strategies: 1) Legacy conversions – datasets are created in a proprietary format while studies are conducted. Data is converted to CDISC format before or while the submission database is being assembled; or 2) During the course of a study convert operational data to SDTM format. Using the SDTM database as input, create an analysis data database that is ADaM compliant. Both of these approaches will get the FDA what they want. However, they also lead to lots of additional work and increased costs.

So, how can we get the FDA what they want and also save time and money? A business case study on CDISC standards by Gartner found that implementing standards from the beginning can save up to 60% of non-subject-participation time and cost, and that about half of the value was gained in the startup stages. The study also reported that average study startup time can be reduced from around five months to three. The use of CDISC standards can be extended upstream to both the protocol and data collection.

The CDISC CDASH standard extends standards to clinical data management, with the goal of standardizing data collection. CDASH provides standard data streams and variables that are found in most clinical studies. CDASH was also designed to facilitate converting the operational database to SDTM.

CDASH provides a sponsor with a global library of data elements that are also the industry standard. The CDASH global library can be augmented by therapeutic specific libraries. CDASH Libraries can include entire forms for a given data stream, variables or data fields, controlled terminology for each variable, and pre-defined edit checks for each variable. These libraries can be utilized by the sponsor for all studies within and across product development projects.
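To make the idea of a global library concrete, here is a minimal Python sketch of what one library entry might hold. The form, field names, terminology values, and edit check below are hypothetical illustrations, not official CDASH content.

```python
# Hypothetical sketch of a global-library entry: a form with fields,
# per-field controlled terminology, and a pre-defined edit check.
vital_signs_form = {
    "form": "VS",
    "fields": [
        {
            "name": "VSTESTCD",
            "label": "Vital Signs Test Code",
            "terminology": ["SYSBP", "DIABP", "PULSE", "TEMP"],
        },
        {
            "name": "VSORRES",
            "label": "Result in Original Units",
            "edit_check": lambda v: v is None or float(v) > 0,
        },
    ],
}

def run_edit_checks(form, record):
    """Apply each field's edit check and terminology list to a record."""
    failures = []
    for field in form["fields"]:
        value = record.get(field["name"])
        check = field.get("edit_check")
        if check and not check(value):
            failures.append(field["name"])
        term = field.get("terminology")
        if term and value is not None and value not in term:
            failures.append(field["name"])
    return failures

print(run_edit_checks(vital_signs_form, {"VSTESTCD": "SYSBP", "VSORRES": "-1"}))
# flags VSORRES
```

Because the same library entries are reused across studies, CRF setup, terminology, and edit checks do not have to be re-specified for each new protocol.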

Extending standards to data collection provides many benefits. Using a global library of standardized data elements allows for cheaper and faster Clinical Data Management System (CDMS) setup. Business case studies by Gartner and Tufts have found that CDMS setup time can be reduced by as much as 50%. Using CDASH also facilitates converting operational data to SDTM. Standardized operational data combined with standardized programs, specifications, and tools can streamline the production of SDTM datasets. Producing the operational and SDTM databases can be packaged so that creating SDTM-compliant clinical databases is cost effective even for Phase I and Phase II studies. This is a significant benefit for sponsors whose business goal is taking their product to market or partnering, and for sponsors without many resources in the earlier product development stages. Finally, moving standards implementation upstream increases communication among business units: standards implementation is extended beyond programming and biostatistics to data management and clinical operations.

Cost effective standards implementation requires a change in philosophy. It entails re-defining what we are trying to accomplish by using CDISC standards. By integrating standards into the entire life cycle of product development and collecting standardized data instead of standardizing collected data, we can both get the FDA what they want and save time and money while doing so.


Rozwell et al., 2009, online at http://www.cdisc.org/business-case. A business case for standards by CDISC and Gartner.

Register for a free CDISC webinar

3 Benefits of Combining Clinical Data Management and Biostatistical Services

Posted by Jamie Hahn on Thu, Oct 25, 2012 @ 01:56 PM

In some cases, we're asked to provide services for just one piece of the biometrics component of a clinical trial project or program, such as clinical data management services OR biostatistical services. While in theory this set-up is perfectly acceptable, there are potential benefits that could be realized by having one contract research organization (CRO) support both the clinical data management and biostatistical components for your clinical trial project or program.

When one CRO provides both clinical data management and biostatistics services for a trial, you can benefit in the following ways:

1) Well-designed database and less re-work

Clinical data management and biostatistical experts collaborate from the earliest stages of study start-up. Early collaboration on CRF design, clinical database set-up, and the clinical data validation plan ensures that the clinical data will support your objectives and reduces the potential for costly statistical re-work associated with an unfamiliar or poorly designed database.

2) Cleaner data

Experienced clinical data managers can provide databases with error rates far below industry standards. Building quality into every clinical database from CRF design through database lock ensures that data issues, errors, and anomalies are minimized and that any errors that do occur are found early in the process. The earlier data errors are found, the less expensive they are to fix. When a clinical database has been designed well and the clinical data management process has been executed successfully, biostatisticians have far fewer data errors and anomalies to investigate and correct, saving you time and money.

3) Better traceability of data and potentially faster approval

When clinical data managers and biostatisticians collaborate early in the clinical trial process, they can focus on creating clinical, SDTM, and analysis databases in a manner that maximizes the traceability of data. Planning for the use of CDASH, SDTM, and ADaM standards from the start will increase traceability, facilitate FDA review, and potentially expedite approval timelines.

What benefits have you noticed when one contract research organization supports both the clinical data management and biostatistical services for your clinical trial project or program? 



Rho Adopts CDASH Standard

Posted by Brook White on Mon, Sep 24, 2012 @ 04:54 PM

In recent years, industry groups have been exploring ways to reduce medical research costs by making the routine parts of research more routine.  One of the more successful attempts, now strongly encouraged by the FDA, has been the CDISC family of standards. These standards encompass standard ways to describe the data collected and used in clinical trials and standard starting points for data that are routinely collected in most trials.  Rho staff have helped lead this trend, serving on teams developing the CDISC standards since before CDISC was an independent organization.  Rho has been using various versions of SDTM and ADaM for clinical and analysis datasets for nearly a decade, adopting these as formal standards a few years ago.  We are now taking this one step further by adopting CDASH, an industry-standard approach to data management forms, as our formal standard.


Adoption of CDASH will allow us to provide the same level of quality in data management services, faster, and in many cases at a lower cost than was previously possible. CDASH provides a global library of standard case report forms (CRFs) that can be used as a starting point for any study. Use of these forms eliminates the upfront design work of creating common forms and allows for easier data transfer and collaboration later on. CEO Russ Helms stated, “As a company committed to innovation and customer service, Rho’s adoption of the CDASH standard was a logical step. It allows us to be more efficient internally while providing faster results for our customers at equal or lower cost.”


Click here to learn more about our data management services.
