
Rho Knows Clinical Research Services

What We Learned at PhUSE US Connect

Posted by Brook White on Tue, Jun 12, 2018 @ 09:40 AM

Ryan Bailey, MA, is a Senior Clinical Researcher at Rho. He has over 10 years of experience conducting multicenter asthma research studies, including the Inner City Asthma Consortium (ICAC) and the Community Healthcare for Asthma Management and Prevention of Symptoms (CHAMPS) project. Ryan also coordinates Rho’s Center for Applied Data Visualization, which develops novel data visualizations and statistical graphics for use in clinical trials.

Last week, PhUSE hosted its first ever US Connect conference in Raleigh, NC. Founded in Europe in 2004, the independent, non-profit Pharmaceutical Users Software Exchange has been a rapidly growing presence and influence in the field of clinical data science. While PhUSE routinely holds smaller events in the US, including their popular Computational Science Symposia and Single Day Events, this was the first time they had held a large multi-day conference with multiple work streams outside of Europe. The three-day event attracted over 580 data scientists, biostatisticians, statistical programmers, and IT professionals from across the US and around the world to focus on the theme of "Transformative Current and Emerging Best Practices."

After three days immersed in data science, we wanted to provide a round-up of some of the main themes of the conference and trends for our industry.

Emerging Technologies are already Redefining our Industry

It can be hard to distinguish hype from reality when it comes to emerging technologies like big data, artificial intelligence, machine learning, and blockchain.  Those buzzwords made their way into many presentations throughout the conference, but there was more substance than I expected.  It is clear that many players in our industry (FDA included) are actively exploring ways to scale up their capabilities to wrangle massive data sets, rely on machines to automate long-standing data processing, formatting, and cleaning processes, and use distributed database technologies like blockchain to keep data secure, private, and personalized.  These technologies are not just reshaping other sectors like finance, retail, and transportation; they are well on their way to disrupting and radically changing aspects of clinical research.

The FDA is Leading the Way

Our industry has gotten a reputation for being slow to evolve, and we sometimes use the FDA as our scapegoat. Regulations take a long time to develop, formalize, and finalize, and we tend to be reluctant to move faster than regulations. However, for those who think the FDA is lagging behind in technological innovation and data science, US Connect was an eye-opener. With 30 delegates at the conference and 16 presentations, the agency had a strong and highly visible presence.

Moreover, the presentations by the FDA were often the most innovative and forward-thinking. Agency presenters provided insight into how the offices of Computational Science and Biomedical Informatics are applying data science to aid in reviewing submissions for data integrity and quality, detecting data and analysis errors, and setting thresholds for technical rejection of study data. In one presentation, the FDA demonstrated its Real-time Application for Portable Interactive Devices (RAPID) to show how the agency is able to track key safety and outcomes data in real time amid the often chaotic and frantic environment of a viral outbreak. RAPID is an impressive feat of technical engineering, managing to acquire massive amounts of unstructured symptom data from multiple device types in real time, process them in the cloud, and perform powerful analytics for "rapid" decision making. It is the type of ambitious, technically advanced project you expect to see coming out of Silicon Valley, not Silver Spring, MD.

It was clear that the FDA is striving to be at the forefront of bioinformatics and data science, and in turn, they are raising expectations for everyone else in the industry.

The Future of Development is "Multi-lingual"  

A common theme across all the tracks was the need to evolve beyond narrowly focused specialization in our jobs. Whereas 10-15 years ago developing deep expertise in one functional area or one tool was a good way to distinguish yourself as a leader and bring key value to your organization, a similar approach may hinder your career in the evolving clinical research space. Instead, many presenters advocated that the data scientist of the future specialize in a few different tools and have broad domain knowledge. As keynote speaker Ian Khan put it, we need to find a way to be both specialists and generalists at the same time. Nowhere was this more evident than in discussions around which programming languages will dominate our industry in the years to come.

While SAS remains the go-to tool for stats programming and biostatistics, the general consensus is that knowing SAS alone will not be adequate in years to come. The prevailing languages getting the most attention for data science are R and Python. While we heard plenty of debate about which one will emerge as the more prominent, it was agreed that the ideal scenario would be to know at least one, R or Python, in addition to SAS.

We Need to Break Down Silos and Improve our Teams

data miningOn a similar note, many presenters advocated for rethinking our traditional siloed approach to functional teams. As one vice president of a major Pharma company put it, "we have too much separation in our work - the knowledge is here, but there's no crosstalk." Rather than passing deliverables between distinct departments with minimal communication, clinical data science requires taking a collaborative multi-functional approach. The problems we face can no longer be parsed out and solved in isolation. As a multi-discipline field, data science necessarily requires getting diverse stakeholders in the room and working on problems together.

As for how to achieve this collaboration, Dr. Michael Rappa delivered an excellent plenary session on how to operate highly productive data science teams based on his experience directing the Institute for Advanced Analytics at North Carolina State University. His advice bucks the traditional notion that you solve a problem by selecting the most experienced subject matter experts and putting them in a room together. Instead, he demonstrated how artfully crafted teams that value leadership skills and motivation over expertise alone can achieve incredibly sophisticated and innovative output.

Change Management is an Essential Need

Finally, multiple sessions addressed the growing need for change management skills. As the aforementioned emerging technologies force us to acquire new knowledge and skills and adapt to a changing landscape, employees will need help to deftly navigate change. When asked what skills are most important for managers to develop, a VP from a large drug manufacturer put it succinctly, "our leaders need to get really good at change management."

In summary, PhUSE US Connect is helping our industry look to the future, especially when it comes to clinical data science, but the future may be closer than we think. Data science is not merely an analytical discipline to be incorporated into our existing work; it is going to fundamentally alter how we operate and what we achieve in our trials. The question for industry is whether we're paying attention and pushing ourselves to evolve in step to meet those new demands.


Heat Maps for Database Lock

Posted by Brook White on Tue, Aug 08, 2017 @ 11:50 AM

Kristen Mason, MS, is a Senior Biostatistician at Rho. She has over 4 years of experience providing statistical support for studies conducted under the Immune Tolerance Network (ITN) and Clinical Trials in Organ Transplantation (CTOT). She has a particular interest in data visualization, especially creating visualizations within SAS using the graph template language (GTL).

Heather Kopetskie, MS, is a Senior Biostatistician at Rho. She has over 10 years of experience in statistical planning, analysis, and reporting for Phase 1, 2 and 3 clinical trials and observational studies. Her research experience includes over 8 years focusing on solid organ and cell transplantation through work on the Immune Tolerance Network (ITN) and Clinical Trials in Organ Transplantation (CTOT) project. In addition, Heather serves as Rho’s biostatistics operational service leader, an internal expert sharing biostatistical industry trends, best practices, processes, and training.

Preparing a database for lock can be a burdensome process. It requires coordinated effort from an entire clinical study team, including, but not limited to, the clinical data manager, study monitor, biostatistician, clinical project manager, principal investigator, and medical monitor. The team must work together to ensure the accuracy and reliability of the data, but with so many sites, subjects, visits, case report forms (CRFs), and data points it can be difficult to stay on top of the entire process. 

Using existing metadata (see Mining Metadata for Clinical Research Activities for more information on metadata), graphics can be created to visually represent the overall status of each requirement for database lock. This is possible using a graphic called a ‘heat map’ that displays the CRF metadata. The resulting graphic is shown below.

heat map showing CRF metadata for database lock

The graphic has one row per subject and one column for each CRF collected at each visit. This results in one ‘box’ per subject per visit per CRF. Each box is colored and/or annotated to indicate the current status of each CRF. 

Broadly speaking, a quick glance at this graphic can show the clinical study team exactly how many CRFs have yet to be completed, where queries have not yet been closed, which CRFs have been source data verified, and whether or not an individual CRF has been locked. Better still, all of this information can be identified for a specific subject at a specific visit for a specific CRF.

Focusing on the details of our particular example, it is easy to see that no subject has yet initiated data entry for both Visit 4 and Visit 5. Additionally, three subjects have not started data entry for the Treatment Visit, ten for Visit 1, fifteen for Visit 2, and twenty-four for Visit 3. An open query remains for several subjects on the TRT form at the Treatment Visit, and for just subject 88528 on the PE form at the Screening Visit. A handful of forms have been source verified and no CRFs have been locked. Additionally, the graphic provides detail on the total number of subjects, visits, and CRFs for the study. This helps reveal specifics such as which visits are more burdensome with multiple CRFs and exactly how far along the subjects are in the study. 

Historically, this information has been conveyed through pages and pages of multiple listings, which can take minutes if not hours to decipher. Having all of the information in a single snapshot can help determine what steps need to be taken to get to database lock quickly and accurately. 
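For readers who want to prototype the idea outside of SAS, here is a minimal sketch of the data preparation behind such a heat map. It is written in Python for illustration only; the column names, status categories, and numeric coding are assumptions, and the post's actual graphic is produced in SAS with GTL.

```python
import pandas as pd

# Hypothetical status records: one row per subject per visit per CRF.
meta = pd.DataFrame({
    "subject": ["001", "001", "002", "002"],
    "form":    ["Visit 1:DM", "Visit 1:PE", "Visit 1:DM", "Visit 1:PE"],
    "status":  ["locked", "query_open", "entered", "not_entered"],
})

# Order the statuses so a sequential color ramp reads from
# "data entry not started" up to "CRF locked".
codes = {"not_entered": 0, "entered": 1, "query_open": 2,
         "source_verified": 3, "locked": 4}

# Pivot to the heat-map grid: subjects down the rows,
# one column per visit/CRF combination.
grid = (meta.assign(code=meta["status"].map(codes))
            .pivot(index="subject", columns="form", values="code"))
print(grid)
```

Each cell of `grid` then corresponds to one colored, annotated box in the graphic; the rendering itself can be done with any heat-map tool once the data are in this shape.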

Further instruction on how to implement this graphic within SAS will be available soon. 


Mining Metadata for Clinical Research Activities

Posted by Brook White on Wed, Jul 26, 2017 @ 09:48 AM

Derek Lawrence, Senior Clinical Data Manager, has 9 years of data management and analysis experience in the health care/pharmaceutical industry. Derek serves as Rho's Operational Service Leader in Clinical Data Management, an internal expert responsible for disseminating the application of new technology, best practices, and processes.

Metadata: An Underutilized Resource

As anyone involved in clinical database creation knows, considerable resources are devoted to the development and validation of electronic data capture (EDC) systems. Once these databases are live and clinical data begin coming in, various processes for setting up data cleaning programming, database quality review, and reporting are put into play. Unfortunately, most of these processes are manual and require the data managers, programmers, and biostatisticians to have a series of specific conversations about the database’s setup, structure, and dynamic behavior, which in turn affect how programming tasks are approached and how biostatistics should best approach the data.

The solution for decreasing the amount of time spent setting up these activities, and for increasing the accuracy of that setup, lies in effective use of the project’s metadata. This metadata, or “data about data”, spans all elements of the clinical database, including:

  • CRF metadata
    • Labels, formats, response options, entry requirements, field-level checks, etc.
  • Form metadata
    • Source data verification (SDV), signature participation, orientation (standard vs. log), etc.
  • Event metadata
    • Visit windows, associated CRFs, repeatability, access requirements, etc.
  • Query metadata
    • Current status, dates, resolutions, marking groups, etc.

Establishing Usable Datasets

The first step in mining the metadata is to create machine-readable datasets from the source in question. In the case of most commercially available EDC systems, the CRF and Event metadata contents of a project can be exported in a variety of formats (XML, Excel, etc.). During the nightly process by which clinical data are exported from our EDC studies and saved to the Rho network, we added a post-processing step in which a macro reads the exported study metadata files and produces working datasets. From here, these elements of the clinical database are machine-readable and available for use. Other standard EDC reports provide additional sources for Form and Query metadata. These data can be extracted from the system either directly using an API (application programming interface) or by creating reports using EDC system-specific tools, which can be scheduled and saved to the network automatically. The contents of these reports can also be converted to datasets for ease of use.
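As a rough sketch of that post-processing step, the fragment below flattens a simplified, ODM-style XML export into a working dataset. It is illustrative only: the element and attribute names here are assumptions, since real EDC exports vary by vendor, and Rho's actual step is implemented as a SAS macro.

```python
import xml.etree.ElementTree as ET

# A simplified, ODM-style metadata export (hypothetical structure).
xml_export = """<Study>
  <ItemDef OID="DM.BRTHDAT" Name="Birth Date" DataType="date" Mandatory="Yes"/>
  <ItemDef OID="VS.WEIGHT" Name="Weight" DataType="float" Mandatory="No"/>
</Study>"""

def parse_item_metadata(xml_text):
    """Flatten <ItemDef> elements into a list of records (a working dataset)."""
    root = ET.fromstring(xml_text)
    return [
        {"oid": item.get("OID"),
         "label": item.get("Name"),
         "type": item.get("DataType"),
         "required": item.get("Mandatory") == "Yes"}
        for item in root.iter("ItemDef")
    ]

items = parse_item_metadata(xml_export)
print(items[0]["oid"])  # DM.BRTHDAT
```

Once the metadata are flattened into records like these, they can be joined against the clinical data or fed into downstream reporting just like any other dataset.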

A Wide Variety of Applications

From this point, we can automate a number of tasks that traditionally required manual review, written specifications, and subject matter expertise to complete. From driving the database validation process to the creation of system performance metrics to the programming and configuration of statistical data checks, the now-accessible metadata allows us to more rapidly and accurately initiate a multitude of tasks with much of the manual component removed. We will cover some of the specific data monitoring and cleaning uses of study metadata in a series of future blog posts.
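As one hedged example of that automation, metadata that records a field's entry requirement and plausible range could drive machine-generated edit checks. The field names and limits below are hypothetical, and this is a sketch of the general technique rather than Rho's production implementation:

```python
# Hypothetical field metadata: entry requirement and plausible range per field.
field_meta = [
    {"field": "WEIGHT", "required": True,  "min": 30,  "max": 300},
    {"field": "HEIGHT", "required": False, "min": 100, "max": 250},
]

def run_checks(record, meta):
    """Apply metadata-driven edit checks to one CRF record; return query texts."""
    queries = []
    for m in meta:
        value = record.get(m["field"])
        if value is None:
            if m["required"]:
                queries.append(f"{m['field']}: required value missing")
            continue
        if not m["min"] <= value <= m["max"]:
            queries.append(f"{m['field']}: {value} outside {m['min']}-{m['max']}")
    return queries

print(run_checks({"WEIGHT": 500}, field_meta))  # ['WEIGHT: 500 outside 30-300']
```

Because the checks are generated from the same metadata that defines the database, adding or tightening a range in the EDC build automatically propagates to the cleaning logic, with no separate specification to keep in sync.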


The Rise of Electronic Clinical Outcome Assessments (eCOAs) in the Age of Patient Centricity

Posted by Brook White on Tue, Dec 06, 2016 @ 10:36 AM

Lauren Neighbours is a Research Scientist at Rho. She leads cross-functional project teams for clinical operations and regulatory submission programs and has over ten years of scientific writing and editing experience. Lauren has served as a project manager and lead author for multiple clinical studies across a range of therapeutic areas that use patient- and clinician-reported outcome assessments, and she worked with a company to develop a patient-reported outcome instrument evaluation package for a novel electronic clinical outcome assessment (eCOA).

Jeff Abolafia is a Chief Strategist for Data Standards at Rho and has been involved in clinical research for over thirty years. He is responsible for setting strategic direction and overseeing data management, data standards, data governance, and data exchange for Rho’s federal and commercial divisions. In this role, Jeff is responsible for data collection systems, data management personnel, developing corporate data standards and governance, and developing systems to ensure that data flows efficiently from study start-up to submission or publication. Jeff has also developed systems for managing, organizing, and integrating both data and metadata for submission to the FDA and other regulatory authorities.

With the industry-wide push towards patient-centricity, electronic clinical outcome assessments (eCOAs) have become a more widely used strategy to streamline patient data collection, provide real-time access to data (for review and monitoring), enhance patient engagement, and improve the integrity and accuracy of clinical studies. These eCOAs comprise a variety of electronically captured assessments, including patient-reported outcomes (PROs), clinician-reported and health-care professional assessments (ClinROs), observer-reported outcomes (ObsROs), and patient performance outcomes administered by health-care professionals (PerfOs). The main methods for collection of eCOA data include computers, smartphones, and tablets, as well as telephone systems. While many companies have chosen to partner with eCOA vendors to provide these electronic devices for use in a clinical study, other sponsors are exploring “bring your own device (BYOD)” strategies to save costs and start-up time. No matter what strategy is used to implement an eCOA for your clinical study, there are several factors to consider before embarking on this path.

Designing a Study with eCOAs

The decision to incorporate an eCOA into your clinical study design is multifaceted and includes considerations such as the therapeutic area, the type of data being collected, and study design, but the choice can first be boiled down to two distinct concepts: 1) the need for clinical outcome data from an individual, and 2) the need for these data to be collected electronically. Thus, the benefits and challenges of eCOAs can be aligned with either or both of these concepts.

Regarding the first concept, the need for clinical outcome data should be driven by your study objectives and a cost-benefit analysis on the optimal data collection technique. Using eCOAs to collect data is undoubtedly more patient-centric than an objective measure such as body mass index (BMI), as calculated by weight and height measurements. The BMI calculation does not tell you anything about how the patient feels about their body image, or whether the use of a particular product impacts their feelings of self-worth. If the study objective is to understand the subjective impact of a product on the patient or health-care community, a well-designed eCOA can be a valuable tool to capture this information. These data can tell you specific information about your product and help inform the labeling language that will be included in the package insert of your marketed product. Additionally, FDA has encouraged the use of PROs to capture certain data endpoints, such as pain intensity, from a patient population who can respond themselves (see eCOA Regulatory Considerations below). Of course, it’s important to note that the inherent subjectivity of eCOAs does come with its own disadvantages. The data are subject to more bias than objective measures, so it’s critical to take steps to reduce bias as much as possible. Examples of ways to reduce bias include single- or double-blind trial designs, wherein the patient or assessor is not aware of the assigned treatment, and building in a control arm (e.g., placebo or active comparator) to compare eCOA outcome data across treatment groups.

Another important concept is the process for identifying and implementing the electronic modality for eCOA data collection. Many studies still use paper methods to collect clinical outcome data, and there are cases when it may make more sense to achieve your study objectives through paper rather than electronic methods (e.g., Phase 1 studies with limited subjects). However, several types of clinical outcome data can be collected more efficiently, at lower cost, and at higher quality with electronic approaches (e.g., diary data or daily pain scores). From an efficiency standpoint, data can be entered directly into a device and integrated with the electronic data management system being used to maintain data collection for the duration of the study. This saves the time (and cost) associated with site personnel printing, reviewing, interpreting, and/or transcribing data collected on paper into the electronic data management system, and it also requires less monitoring time to review and remediate data. Additionally, paper data is often “dirty” data, with missing or incorrectly recorded data in the paper version, followed by missing or incorrectly recorded data entered into the data management system. The eCOA allows for an almost instantaneous transfer of data that saves upfront data entry time and also reduces the effort required down the road to address queries associated with the eCOA data. Aside from efficiencies, eCOA methods allow for more effective patient compliance measures to be implemented in the study. The eCOA device can be configured to require daily or weekly data entry and real-time review by site personnel prior to the next scheduled clinic visit.
The eCOA system can also send out alerts and reminders to patients (to ensure data is entered in a timely manner) and to health-care personnel (to ensure timely review and verification of data and subsequent follow-up with patients as needed). The downsides to electronic data collection methods tend to be associated with the costs and time to implement the system at the beginning of the study. It is therefore essential to select an appropriate eCOA vendor early, one who will work with you to design, validate, and implement the clinical assessment specifically for your study.

eCOA Regulatory Considerations

In line with the industry push for patient-focused clinical studies, recent regulatory agency guidance has encouraged the use of eCOAs to evaluate clinical outcome data. The fifth authorization of the Prescription Drug User Fee Act (PDUFA V), which was enacted in 2012 as part of the Food and Drug Administration Safety and Innovation Act (FDASIA), included a commitment by the FDA to more systematically obtain patient input on certain diseases and their treatments. In so doing, PDUFA V supports the use of PRO endpoints not only to collect data directly from the patients who participate in clinical studies but also to actively engage patients in their treatment. The 2009 FDA guidance for industry on Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims further underscores this idea by stating “[the use] of a PRO instrument is advised when measuring a concept best known by the patient or best measured from the patient perspective.” The 2013 Guidance for Industry on Electronic Source Data in Clinical Investigations provides the Agency’s recommendations on “the capture, review, and retention of electronic source data” and is to be used in conjunction with the 2007 guidance on Computerized Systems Used in Clinical Investigations for all electronic data and systems used in FDA-regulated clinical studies, including eCOAs. To support these efforts, the FDA has developed an extensive Clinical Outcome Assessment Qualification Program, which is designed to review and assess the design, validity, and reliability of a COA for a particular use in a clinical study. Furthermore, the newly formed Clinical Outcome Assessment Compendium is a collated list of COAs that have been identified for particular uses in clinical studies. The COA Compendium is further evidence of FDA’s commitment to patient-centric product development, and it provides a helpful starting point for companies looking to integrate these assessments into their clinical development programs.

Before choosing an eCOA for your clinical development program, the following regulatory factors should be considered:

  • FDA holds COAs to the same regulatory and scientific standards as other measures used in clinical trials. Thus, it is advisable to refer to the Guidance for Industry on Patient-Reported Outcomes and the available information on the COA Qualification Program and COA Compendium provided by the Agency when implementing eCOAs into your development program. If you plan to deviate from currently available regulatory guidance, make sure to have a solid rationale and supporting documentation to substantiate your position.
  • The qualification of an eCOA often requires input from patients and/or health-care professionals to evaluate the effectiveness of the assessment. This input is necessary for the regulatory agency to determine whether the eCOA can accurately measure what it’s supposed to measure (validity) and to demonstrate it can measure the outcome dependably (reliability).
  • Data collected from qualified and validated eCOAs can be used to support product labeling claims. The key is to use an eCOA when it’s appropriate to do so and to make sure the eCOA supports your intended labeling claims because the instrument will be evaluated in relation to the intended use in the targeted patient population.
  • In cases where an instrument was developed for paper-based collection, or where an instrument is administered across multiple modes, it may be necessary to test for equivalence. This is a common regulatory expectation (especially for primary and secondary endpoints) to ensure that the electronic version of the instrument remains valid and that data collected with mixed modes are comparable.

A CRO Can Help with your eCOA Strategy

CROs partner with sponsor companies to develop and execute their product development strategies.  In some cases, this involves implementing clinical outcome measures into a development program and then facilitating the interactions between the company and regulatory authorities to ensure adequate qualification of the COA prior to marketing application submission.  Whether or not you choose to engage a CRO in your development plan, consider seeking outside consultation from the experts prior to establishing your eCOA strategy to give you and your company the best chance of success.  

CROs Can Help:

  • Determine endpoints where eCOA data is appropriate
  • Determine the cost/benefit of electronic vs paper data capture
  • Determine the best mode of electronic data capture
  • Recommend eCOA vendors when appropriate
  • Perform equivalence analysis
  • Facilitate discussions with regulatory authorities
  • Manage the entire process of eCOA implementation


5 Reasons Your CRO is Putting Your Trial at Risk

Posted by Brook White on Mon, Mar 10, 2014 @ 09:44 AM

Information for this article was contributed by Alicia McNeil and Elizabeth Kelchner, Clinical Data Scientists at Rho with significant experience in rescue studies.

No one wants to be here: You went through a lengthy proposal process, endured the bid defense process, and thought you selected the best CRO for your study. The contract is signed and the study has started, but things just don’t seem right.

Here are five red flags to watch for if you suspect problems with your CRO, as well as tips for what to do if you decide your study needs to be rescued.

  1. You loved the team you met at the bid defense meeting, but once your study started you were assigned a completely new (and less experienced) team.
    Some new assignments are always a possibility. Maybe one of the team members left the company or a team member was unavailable because a study that they anticipated would be complete ran over. That said, you shouldn’t see substantial turnover on your team, and replacement team members should have similar experience and expertise to those you met at the bid defense.
  2. Project team members don’t return your calls or respond to email in a timely fashion.
    You can’t expect your project team to sit at their desks all day to answer the phone and check their email. After all, they need to be busy working on your project. So what is a reasonable amount of time to wait for a response? Hopefully, they’ve set clear expectations for response times during the kick-off meeting and in the project management plan, but one business day is a typical benchmark for non-urgent communications. There should also be a process and expectations in place for dealing with time-critical issues.
  3. You haven’t seen and haven’t been asked to sign off on important study documents.
    There are a variety of documents that the CRO should draft during study start-up (project management plan, clinical monitoring plan, data management plan, etc.). You should be given the opportunity to provide input, review, and sign off on these documents as they will set the direction for the execution of the study and ensure expectations are set on all sides.
  4. You aren’t receiving regular status reports.
    Status reports are another topic your project team should have covered at the kick-off meeting and in study documents so that you know what to expect in each area for which the CRO has contracted responsibility. They should be sending you status reports in a consistent format on a regular schedule. The reports should include enough detail that you can track the progress of important activities and gauge any significant risks to the study.
  5. There are signs data isn’t being collected or managed properly.
    In a study that is running smoothly, you should expect to see the following in terms of data collection and management:
    • Data is being collected in a system designed to handle clinical trial data. If data is being collected in spreadsheets or another unorthodox manner, this is a very significant problem. This may seem obvious, but more than once we’ve rescued studies where data was being collected into Excel spreadsheets.
    • You are given the opportunity to participate in User Acceptance Testing (UAT) if your study is using EDC. This is a good way to familiarize yourself with the specifics of how data is collected and what sites will experience, and it demonstrates transparency on the part of the CRO.
    • Queries are being sent and closed on a regular basis. As soon as sites start collecting data, you should be getting updates about queries sent and closed.

Sometimes, despite your best efforts to correct course with your CRO, you may decide that you need to change CROs mid-stream.  Here are some tips for interacting with your existing CRO before and during the transition as well as tips for selecting a CRO to rescue your study:

Tips for working with the incumbent CRO:

  • Keep copies of all documentation (study plans, annotated CRF, build specifications, decision logs, etc.) in case you need to transition to another CRO. Don’t rely on the incumbent CRO to do this. The more historical information available to the new CRO, the better.
  • Get regular data transfers.
  • Request and implement a communication plan. Know how to escalate issues if needed.
  • Have regular meetings with all team members to keep everyone on the same page. If possible, meetings that include both members of the incumbent CRO team and the rescue team can make the transition much smoother.
  • Avoid burning bridges with the incumbent CRO early in the process.
  • Negotiate the financial side of transitions carefully so that the incumbent CRO can work as necessary to complete tasks where practical given timeline constrains as opposed to relying on the new CRO to complete all tasks. This also allows appropriate communication between the incumbent and the new CRO.

rescue trialsTips for Selecting a CRO for the Rescue

  • They should have a thoroughly documented rescue process or rescue project plan template.
  • They’ve successfully implemented rescues between the applicable platforms (i.e., EDC to EDC, paper to EDC, EDC to paper).
  • They understand the need to minimize and manage process change for clinical trial sites.
Click me

Choosing the Right System for your Clinical Trial: Understanding the Differences between EDC and IVR/IWR

Posted by Jamie Hahn on Tue, Apr 09, 2013 @ 09:05 AM
Share:

headshot steve palmatierThe following article was contributed by Steve Palmatier, Rho's service leader for Interactive Response Technology (IxR) system configuration and development.

Sometimes it's difficult to determine the best tool for a job, especially when technologies are developed in parallel to handle similar tasks.  Take Interactive Response Technology (IxR) and Electronic Data Capture / Electronic Case Report Forms (EDC), for example.  Both technologies provide a method for electronic entry of important data.  Both can have data verification checks incorporated to minimize the potential for ambiguous or incorrect data entry.  Both commonly incorporate user roles to limit access of individual users to functionality that is appropriate.  So what are the differences that would provide insight on which technology to use when?  Several areas of differentiation are outlined below.

Purpose of the System

EDC - In short, EDC systems’ primary purpose is to electronically collect and validate participant data for eventual use in statistical analyses.  Collecting these data electronically makes them more quickly available to the study team than traditional paper CRFs, and therefore allows more informed and proactive decision making.

IxR – The goal of IxR in clinical trials is to perform specific tasks, such as randomization, study drug dispensation, study drug resupply requests, emergency unmasking, etc.  It is not the goal of IxR in most cases to be the primary place where participant data are entered and stored, though some data are required to perform the aforementioned tasks.

System Interface

EDC – Due to the sheer volume of data to be captured, EDC systems nearly always use a computer-based interface that allows users to easily navigate between forms and between different areas on the same form.  While swift entry of data into EDC systems is often desired so that study teams have accurate enrollment information, it is not usually operationally critical, so it is acceptable for a user to enter data in not-quite-real-time.  Moreover, most clinical sites in developed countries can be expected to have computers, so a computerized interface is acceptable the vast majority of the time.

IxR – IxR has two main interfaces: web and voice (IWR and IVR respectively).  Over the past 10 years or so, the prevalence of IVR systems has decreased significantly due to workstations, laptops, smartphones, and tablets becoming more widely available in the clinical setting.  However, there are still some instances in which the phone interface is beneficial, such as when entry of data for randomization is highly time-sensitive (e.g., in neonatal trials where randomization must occur very shortly after birth), and when the IxR will be used for patient-reported outcomes or diary entry, since study subjects may not have access to a computer at home.

Navigation Paradigm

EDC - Most EDC systems are form-based, and most of the data entry fields on any particular web page are static.  When a participant is enrolled in a trial, a set of forms is made available into which that participant’s data will be entered.  Whether these forms are necessary or not becomes apparent later.  For instance, if a participant withdraws consent early in the study, there may be many forms for visits later in the study that never have data associated with them.  In many cases, the order in which data are entered is not controlled since different data will become available at different times, though sometimes additional forms are generated as they become necessary (e.g., SAE forms).

IxR - IxR systems generally create data entry pages dynamically.  That is, the information and entry fields that appear on-screen or that are prompted over the phone are a result of previous selections and entries made by the user.   This both minimizes data entry by the user and provides a gating mechanism that forces things to happen in the correct order.  For instance, a user cannot skip to kit assignment prior to randomization, or randomization prior to entry of valid stratification data.

User Modification of Previously Entered Data

EDC - EDC forms can usually be revisited multiple times because all of the data that are to be entered on a form may not be available at once (e.g., lab values).  Often, entry of data that seems inaccurate or is in an incorrect format is accepted and stored but fires a query that must be resolved prior to database lock, and the user may return at a later time to correct or confirm the entry.  This is consistent with the primary purpose of EDC, to store data for use in data analysis that will take place at a later date.

IxR - Unlike EDC forms, entry of data and completion of a function in IxR usually triggers an action that is based on the entered data, so it is uncommon for a user to be able to return to the system to make corrections of previously missing or incorrectly entered data without support intervention.     Incorrect entry of stratification data prior to randomization has cascading impacts, so correcting the mistake often involves more than simply updating that one data point.

Validation Burden

EDC – Because there is an opportunity to correct mistakes between the initial entry and database lock, the importance of correct and complete data at the time of entry is not often assessed to be at the highest level.  Also, since the primary purpose of EDC is to store data rather than to perform actions, validation of the system can focus primarily on making sure that edit checks fire correctly and that the data is stored accurately.

IxR - Because IxR performs actions that impact the course of the study, IxR systems generally carry a higher risk than EDC systems.  Not only is it important for validation efforts to ensure that the entered data is correct; but it is also important to validate the logic that is exercised in order to make the decisions and perform the actions that are based on that data – assigning the correct treatment kits, requesting resupply of investigational product when appropriate, enforcing cohort caps, etc.  The result is that IxR systems (especially those that are highly configurable) generally require more extensive validation and a higher percentage of setup time allotted to validation activities.

In the next post in this series, we’ll use these distinctions to help determine the appropriate scope for IxR systems so that the technology can be used most advantageously.

Free Expert Consultation

CDASH: Reduce Development Costs by Extending CDISC Standards to Clinical Data Management

Posted by Brook White on Tue, Dec 04, 2012 @ 09:42 AM
Share:

Jeff Abolafia-Rho CDISC ExpertThe following article was contributed by Jeff Abolafia, one of our resident CDISC experts. Jeff has more than 20 years of experience in clinical research and has successfully led multiple CTD/NDA submissions. He is the co-founder of the Research Triangle Park CDISC Users Group and a member of the CDISC ADaM and ADaM Metadata teams.


In recent years the FDA has clearly stated its preference for receiving both clinical and analysis data formatted in compliance with CDISC standards. This has been communicated through a series of guidance documents, correspondence with sponsors, and presentations at conferences. As a result, CDISC models have become the de facto standard for submitting data to the FDA.


Given the FDA’s preference for receiving CDISC data, many sponsors have begun to produce CDISC-compliant databases in order to meet FDA submission requirements. In the short term this has led to additional work and higher costs. However, when the standards are implemented properly, organizations have a tremendous opportunity for significant cost savings throughout product development.


As a CRO, Rho has had the opportunity to work with many sponsors on CDISC related projects. Most of these sponsors have noted that producing CDISC compliant deliverables have increased their costs. This has surprised many sponsors. Wasn’t producing standardized CDISC datasets supposed to reduce time and costs?

When it comes to implementing CDISC standards, perhaps sponsors are trying to solve the wrong problem. The problem that most sponsors are addressing is: how can we get the FDA what they want. Instead, we should be asking: how can implementing CDISC standards be part of a cost effective product development strategy. The problem each organization chooses to tackle will determine its implementation strategy.


When the primary goal is meeting FDA requests, the focus tends to be on producing SDTM and ADaM databases and associated documentation. At this point in time, these are the CDISC related deliverables that the FDA has requested. Under this scenario most organizations choose one of the two following implementation strategies: 1) Legacy conversions – datasets are created in a proprietary format while studies are conducted. Data is converted to CDISC format before or while the submission database is being assembled; or 2) During the course of a study convert operational data to SDTM format. Using the SDTM database as input, create an analysis data database that is ADaM compliant. Both of these approaches will get the FDA what they want. However, they also lead to lots of additional work and increased costs.
So, how can we get the FDA what they want and also save time and money? A business case study on CDISC standards by Gartner found that implementing standards from the beginning can save up to 60% of non-subject participation time and cost and that about half of the value was gained in the startup stages. The study also reported that the average study startup time can be reduced from around five months to three months. The use of CDISC standards can be extended upstream to both the protocol and to data collection.


The CDISC CDASH standard extends standards to clinical data management, with the goal of standardizing data collection. CDASH provides standard data streams and variables that are found in most clinical studies. CDASH was also designed to facilitate converting the operational database to SDTM.


CDASH provides a sponsor with a global library of data elements that are also the industry standard. The CDASH global library can be augmented by therapeutic specific libraries. CDASH Libraries can include entire forms for a given data stream, variables or data fields, controlled terminology for each variable, and pre-defined edit checks for each variable. These libraries can be utilized by the sponsor for all studies within and across product development projects.
Extending standards to data collection provides many benefits. Using a global library of standardized data elements allows for cheaper and faster Clinical Data Management System (CDMS) setup. Business case studies by Gartner and Tufts have found that CDMS setup time can be reduced by as much as 50%. Using CDASH facilitates also converting operational data to SDTM. Standardized operational data combined with standardized programs, specifications, and tools can streamline producing SDTM datasets. Producing the operational and SDTM databases can be packaged so that creating SDTM compliant clinical databases is cost effective for Phase One and Phase Two studies. This is a significant benefit for sponsors whose business goal is taking their product to market or partnering and for sponsors without a lot of resources in the earlier product development stages. Also, moving standards implementation upstream also increases communication among business units. Standards implementation is extended beyond programming and biostatistics to data management and clinical operations.


Cost effective standards implementation requires a change in philosophy. It entails re-defining what we are trying to accomplish by using CDISC standards. By integrating standards into the entire life cycle of product development and collecting standardized data instead of standardizing collected data, we can both get the FDA what they want and save time and money while doing so.

References

Rozwell et al., 2009, online at http://www.cdisc.org/business-case. A business case for standards by CDISC and Gartner.

Register for a free CDISC webinar

3 Benefits of Combining Clinical Data Management and Biostatistical Services

Posted by Jamie Hahn on Thu, Oct 25, 2012 @ 01:56 PM
Share:

woman celebratingIn some cases, we're asked to provide services for just one piece of the biometrics component of a clinical trial project or program, such as clinical data management services OR biostatistical services. While in theory this set-up is perfectly acceptable, there are potential benefits that could be realized by having one contract research organization (CRO) support both the clinical data management and biostatistical components for your clinical trial project or program. 

When one CRO provides both clinical data management and biostatistics services for a trial, you can benefit in the following ways:

1) Well-designed database and less re-work

Clinical data management and biostatistical experts collaborate from the earliest stages of study start-up. Early collaboration on CRF design, clinical database set-up, and the clinical data validation plan ensures that the clinical data will support your objectives and reduces the potential for costly statistical re-work associated with an unfamiliar or poorly designed database.

2) Cleaner data

Experienced clinical data managers can provide databases with error rates far below industry standards. Focusing on building quality into every clinical database from CRF design through database lock will ensure that data issues, errors, and anomalies are minimized, and any data errors that do occur will be found early in the process. The earlier data errors are found, the less expensive these errors are to fix. When a clinical database has been designed well and the clinical data management process has been executed successfully, the biostatisticians have many fewer data errors and anomalies to investigate and correct, thus saving you time and money.  

3) Better traceability of data and potentially faster approval

When clinical data managers and biostatisticians collaborate early in the clinical trial process, they can focus on creating clinical, SDTM, and analysis databases in a manner that amplifies the traceability of data. Planning for the use of CDASH, SDTM, and ADaM standards from the start will increase traceability, facilitate FDA review, and potentially expedite approval timelines. 

What benefits have you noticed when one contract research organization supports both the clinical data management and biostatistical services for your clinical trial project or program? 

 

Click me