
Rho Knows Clinical Research Services

An Interactive Suite of Data Visualizations for Safety Monitoring

Posted by Brook White on Thu, Feb 23, 2017 @ 01:42 PM

This is the fourth in a series of posts introducing open source tools Rho is developing and sharing online. Click here to learn more about Rho's open source effort, here to read about our interactive data visualization library, Webcharts, and here to learn about SAS graphing tools we've developed.

Frequent and careful monitoring of patient safety is one of the most important concerns of any clinical trial. For the medical monitors and safety monitoring committees responsible for supervising patient well-being and ensuring product safety, this obligation requires continuous access to a variety of critical study data.

For trials with large participant enrollment, severe diseases, or complex treatments, study monitors may be tasked with reviewing thousands of data points and safety markers. Unfortunately, traditional reporting methods require monitors to comb through scores of static listings and summary tables. This method is inefficient and poses the risk that clinically-relevant signals will be obscured by the sheer volume of data common in clinical trials.

To improve safety monitoring, we created a suite of interactive data monitoring tools we call the Safety Explorer. Although the Safety Explorer can be configured to include a variety of charts specific to each study, the standard set-up includes six charts (click the links to learn more):

  • Adverse Events Explorer - dynamically query adverse event (AE) data in real time to go from study population view to individual patient records
  • Adverse Events Timeline - view interactive timelines for each participant showing when AEs occurred in a trial
  • Test Results Histogram - explore interactive histograms showing distribution of labs, vital signs, and other safety measures with linked data tables
  • Test Results Outlier Explorer - track patient trajectories over time for lab measures, vital signs, and other safety endpoints in line charts
  • Test Results Over Time - explore population averages for labs, vital signs, and other safety endpoints in box or violin plots
  • Shift Plot - monitor changes in lab measures, vital signs, and other safety endpoints between study events in a dot plot

The Safety Explorer utilizes common CDISC data standards to quickly create consistent charts for any project. Within a given chart, users can dynamically sort, filter, highlight, and drill down to data points of interest using controls familiar to anyone who has used a website.
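As a sketch of what standardization buys: because SDTM names the adverse event domain's variables the same way in every study (AEBODSYS for body system, AEDECOD for the dictionary-coded term), one small grouping routine can drive the same AE summary for any trial. The JavaScript below is illustrative only, with fabricated records; it is not code from the Safety Explorer itself.

```javascript
// Count adverse events by body system and coded term.
// Works unchanged on any SDTM-shaped AE dataset.
function countByTerm(aeRecords) {
  const counts = {};
  for (const ae of aeRecords) {
    const key = `${ae.AEBODSYS}: ${ae.AEDECOD}`;
    counts[key] = (counts[key] || 0) + 1;
  }
  return counts;
}

// Fabricated demo records using standard SDTM variable names.
const demo = [
  { USUBJID: "001", AEBODSYS: "Nervous system disorders", AEDECOD: "Headache" },
  { USUBJID: "002", AEBODSYS: "Nervous system disorders", AEDECOD: "Headache" },
  { USUBJID: "002", AEBODSYS: "Gastrointestinal disorders", AEDECOD: "Nausea" },
];
console.log(countByTerm(demo)); // headache counted twice, nausea once
```

Because the variable names come from the standard rather than the study, the same routine (and the same chart configuration) carries over from project to project.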

Interactive Histogram with Linked Table


Explore the distribution of test results (click here for interactive version)

Graphical representations of data give reviewers a systematic snapshot that helps tell the data's story. By adding interactive elements, reviewers can quickly examine the charts for patterns of interest and drill down to subject-level data instantly. This ability to quickly distinguish signal from noise gives monitors greater insight into their data and allows them to work much more efficiently.

It is common practice for us to create safety explorers for all full service projects and studies where Rho provides medical monitoring. All of the charts described here are open source and free to use, so please let us know if you have any feedback, or would like to contribute!

Interactive Box Plot Showing Results Over Time


Track changes in population test results through a study (click here for interactive version)


Ryan Bailey, MA is a Senior Clinical Researcher at Rho. He has over 10 years of experience conducting multicenter asthma research studies, including the Inner City Asthma Consortium (ICAC) and the Community Healthcare for Asthma Management and Prevention of Symptoms (CHAMPS) project. Ryan also coordinates Rho's Center for Applied Data Visualization, which develops novel data visualizations and statistical graphics for use in clinical trials.

5 Tips for Creating a Request for Proposal (RFP) for Clinical Trial Services

Posted by Brook White on Tue, Feb 14, 2017 @ 11:13 AM

If you're looking for a contract research organization (CRO) to provide clinical trial services, chances are you'll need to create a request for proposal (RFP). In the complicated world of outsourcing clinical trials, using RFPs to gather comparable bids from CROs can be incredibly challenging. The good news is that, with a little planning and time, you can create RFPs that will reduce inconsistencies among bidders and ultimately help you identify the CRO that is truly the right partner for the job.

Here are five tips for creating RFPs that will help you compare “apples to apples” and help the CROs better understand your needs, values, and selection criteria for your clinical trial services:

  1. Provide background information on your compound and program.  Information about other clinical studies completed or in progress, outcomes from preclinical work, regulatory strategy and even funding and marketing plans can provide context that will help a CRO understand your needs and give you a proposal that best addresses all of your concerns.
  2. Provide a protocol or protocol synopsis.  Details about the study, such as the number of clinical trial sites, number of subjects, and type and frequency of procedures and assessments, are important cost drivers, and providing them will help ensure a more accurate proposal.  An experienced CRO should also be able to make valuable recommendations based on your protocol.
  3. Provide detailed RFP information to get consistent costs. Be specific. Some examples might include:
    • Project specifications – What are the important details of your program? (Use our RFP specifications tool)
    • Project timelines – By when do you expect certain milestones to be met?
    • Responsibilities (CRO, sponsor, other vendors) – For which segments of your program do you need a CRO to provide clinical trial services?
  4. Provide additional details. The more details you can provide, the better.  It's also OK to ask questions of prospective CROs and to ask them to make recommendations. You can tell a lot about a CRO by the recommendations they make and how they make them.  However, if you ask CROs to make recommendations, be prepared for potential inconsistencies in the assumptions made and pricing offered between different CROs. The following are some additional details that might be helpful to bidders:
    • Provide site locations if you have already determined which sites you want to use.  If you aren’t sure, ask for recommendations based on your target enrollment and timelines.
    • If you’ve already determined which sites you’ll be using, it is helpful to know whether they will use a central or local lab and a central or local IRB. This can have an impact on timelines and costs.
    • Make note of any additional vendors you need such as specialty labs, Electronic Patient Reported Outcomes (ePRO), translations, meeting planners, or imaging services.
    • Will you be using paper or EDC? The vast majority of trials are now using EDC, but there may be some small studies or specific circumstances where paper still makes sense.
    • Do you want your data output in CDISC format? Based on the FDA’s guidance, new studies must be submitted in CDISC format, so it is strongly recommended.
    • If you are planning an interim analysis or will need support for a DSMB, make sure to include this information.
    • Will you use automated subject randomization (IVRS or IWRS)?
    • What are your plans for clinical supplies and distribution (IP management)?
    • Are you interested in risk-based monitoring strategies?  If so, include this information in your RFP. Incorporating remote monitoring or targeted SDV strategies could impact the budget.
    • Do you want the CRO to be responsible for the TMF?  If so, ask about whether they use an eTMF and if so which one.
    • If you know you want to use specific vendors (e.g., Medidata RAVE for EDC), be sure to include that information.
  5. Other items to request from CROs:
    • Project team CVs including the project manager, lead CRA, lead data manager, medical monitor, and lead statistician
    • Summary of team therapeutic experience and experience running similar trials
    • Relevant company information
Download: RFP Specifications Tool

 

Using SAS to Create Novel Data Visualizations

Posted by Brook White on Tue, Feb 07, 2017 @ 12:59 PM

Ryan Bailey, MA is a Senior Clinical Researcher at Rho. He has over 10 years of experience conducting multicenter asthma research studies, including the Inner City Asthma Consortium (ICAC) and the Community Healthcare for Asthma Management and Prevention of Symptoms (CHAMPS) project. Ryan also coordinates Rho's Center for Applied Data Visualization, which develops novel data visualizations and statistical graphics for use in clinical trials.

Shane Rosanbalm, MS, Senior Biostatistician, has over fifteen years of experience providing statistical support for clinical trials in all phases of drug development, from Phase I studies through NDA submissions.  He has collaborated with researchers in several areas including neonatal sepsis, RA, oncology, chronic pain, hypertension, and Parkinson’s disease.  He is the lead SAS developer in Rho’s Center for Applied Data Visualization, where he develops tools and publishes on best practices for visualizing and reporting data.

This is the third in a series of posts introducing open source tools Rho is developing and sharing online. Click here to learn more about Rho's open source effort.

In our last post, we introduced Webcharts, one of our many interactive web-based charting tools that uses D3. In addition to the many web-based tools that Rho has on GitHub, we also maintain a number of SAS®-based graphics repositories. In fact, our strong reputation for clinical biostatistics and expertise with SAS (and SAS graphing tools) long predated our development of web graphics.

A sampling of some of our SAS tools is provided below, but we invite you to visit GitHub and check out our full offering. You can use the "Find a repository..." search bar to search for "SAS"; all of our SAS repositories begin with "sas-".

Codebook


The SAS codebook macro is designed to provide a quick and concise summary of every variable in a SAS dataset. In addition to information about variable names, labels, types, formats, and statistics, the macro also produces a small graphic showing the distribution of values for each variable. This report is a convenient way to provide a snapshot of your data and quickly get to know a new dataset.
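To illustrate the idea (this is a conceptual sketch, not the SAS macro itself), here is what a codebook-style per-variable summary computes; the JavaScript and the field names below are invented for the example.

```javascript
// For each variable in a rectangular dataset, report its name,
// inferred type, number of missing values, and number of distinct values.
function codebook(rows) {
  const vars = Object.keys(rows[0] || {});
  return vars.map(name => {
    const values = rows.map(r => r[name]);
    const nonMissing = values.filter(v => v !== null && v !== undefined && v !== "");
    return {
      name,
      type: nonMissing.every(v => typeof v === "number") ? "numeric" : "character",
      nMissing: values.length - nonMissing.length,
      nDistinct: new Set(nonMissing).size,
    };
  });
}

// Fabricated three-row dataset for illustration.
const rows = [
  { id: 1, sex: "F", weight: 61.2 },
  { id: 2, sex: "M", weight: null },
  { id: 3, sex: "F", weight: 72.9 },
];
console.log(codebook(rows));
```

The SAS macro adds the per-variable distribution graphics on top of summaries like these, which is what makes the report useful for quickly getting to know a new dataset.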

Violin Plot


The SAS violin plot macro is designed to allow for a quick assessment of how the distribution of a variable changes from one group to another. Think of it as a souped-up version of a box and whisker plot. In addition to seeing the median, quartiles, and min/max, you also get to see all of the individual data points as well as the density curves associated with the distributions.
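The statistics a violin plot layers together can be sketched in a few lines. The JavaScript below is illustrative only and is not taken from the SAS macro; the Gaussian kernel and the bandwidth are arbitrary choices for the sketch.

```javascript
// Linear-interpolation quantile of a pre-sorted array.
function quantile(sorted, p) {
  const i = (sorted.length - 1) * p;
  const lo = Math.floor(i), hi = Math.ceil(i);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (i - lo);
}

// The box-and-whisker half of a violin plot: min, quartiles, median, max.
function boxStats(values) {
  const s = [...values].sort((a, b) => a - b);
  return { min: s[0], q1: quantile(s, 0.25), median: quantile(s, 0.5),
           q3: quantile(s, 0.75), max: s[s.length - 1] };
}

// The density half: a kernel density estimate at point x,
// averaging Gaussian kernels centered at each data point.
function density(values, x, bw = 1) {
  const k = u => Math.exp(-0.5 * u * u) / Math.sqrt(2 * Math.PI);
  return values.reduce((sum, v) => sum + k((x - v) / bw), 0) / (values.length * bw);
}

const labValues = [4.1, 4.4, 4.4, 4.9, 5.2, 5.6, 6.0, 7.3]; // fabricated
console.log(boxStats(labValues));
```

Evaluating `density` over a grid of x values and mirroring the curve around the group's axis gives the characteristic violin outline, drawn once per group so distributions can be compared side by side.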

Sankey Bar Chart


The SAS Sankey bar chart macro is an enhancement of a traditional stacked bar chart. In addition to showing how many subjects are in each category over time, this graphic also shows you how subjects transition from one category to another over time.
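The transition counting at the heart of such a chart is simple to sketch. The JavaScript below is an illustration with fabricated severity data, not code from the macro.

```javascript
// Count category-to-category transitions between consecutive visits.
// records maps each subject ID to their category at visits 1, 2, 3, ...
function transitionCounts(records) {
  const counts = {};
  for (const path of Object.values(records)) {
    for (let v = 0; v + 1 < path.length; v++) {
      const key = `visit ${v + 1}: ${path[v]} -> ${path[v + 1]}`;
      counts[key] = (counts[key] || 0) + 1;
    }
  }
  return counts;
}

// Fabricated symptom-severity trajectories for three subjects.
const severity = {
  "001": ["Mild", "Mild", "Moderate"],
  "002": ["Moderate", "Mild", "Mild"],
  "003": ["Mild", "Mild", "Mild"],
};
console.log(transitionCounts(severity));
```

A stacked bar chart shows only the per-visit totals; these transition counts are the extra information the Sankey links encode, drawn as bands whose widths are proportional to the counts.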

Other SAS graphics tools include a Beeswarm Plot (a strip plot with non-random jittering) and the Axis Macro for automating the selection of axis ranges for continuous variables. We are adding new SAS repositories frequently. We invite you to try the tools, share your feedback, and contribute to the development of the tools.

Visit Rho's Center for Applied Data Visualization

How Stable Is Your CRO?  And Does It Matter?

Posted by Brook White on Wed, Feb 01, 2017 @ 11:15 AM

Clinical trials are a costly business.  In 2013 alone, the biopharmaceutical industry spent nearly $10 billion on clinical trials.  Operational failures in clinical trials can increase those costs and create delays in time to market. So when you pick a CRO, you want to ensure that at best they are mitigating operational risks, and at worst aren’t introducing new ones. Stability can play a significant role in this.

Corporate stability

When looking to outsource to a CRO, there are several aspects of corporate stability that should be assessed:

  • How long has the CRO been in business? The newer the company, the more likely that they don’t have the overall experience you may need. Additionally, they have a shorter history with their clients, so it will be difficult to assess past customer satisfaction.
  • Are they financially stable? A company that isn’t profitable, has a lot of outside debt, or is under strain from shareholders or investors may be making short-term business decisions that aren’t in the best interests of your study or program. Worse yet, they could go out of business in the middle of your clinical study or run into cash flow issues that impact the success of your clinical trial.
  • Have they recently been involved in M&A activity, have they gone public, or are there plans to do so? This type of activity typically comes with significant organizational impacts that can bleed over into day-to-day activities. From changing SOPs mid-study to project team turnover, it rarely has a positive impact in the short-term for existing clients.

Consider incorporating questions about these topics into a request for information (RFI) or as part of the request for proposal (RFP) process.  These are valid questions that CROs are used to addressing, so be wary of companies that are reluctant to do so.

Client Stability

A good measure of how well you will be treated once the contract is signed is the satisfaction of existing and former clients.  You should look for a CRO that has long-standing relationships with many of its clients.  You’ll often see CROs with slides listing the logos of their clients or testimonial quotes from clients.  That’s all well and good, but you need to check for yourself.  Ask for references whose work is of similar scope and for similar services as those you are planning to outsource.  Contact references directly rather than relying on the CRO to act as an intermediary.

People Stability

Employee turnover rates across the CRO industry are incredibly high. In 2015, CRO turnover was 25%, despite the fact that team stability is a key factor in meeting sponsor expectations and conducting high-quality clinical research. Make sure you ask both about turnover and about the tenure of your assigned team. A team with multiple members who have recently joined the company should be a red flag. In addition to company turnover, find out what you can about project team stability. Keeping a team together over the course of a study means less re-training, fewer mistakes, and less re-work. There is a learning curve with each new sponsor and product. If you are considering outsourcing multiple trials, particularly within the same program, ask if the same team members can be assigned to future studies.

Learn more about selecting a CRO by checking out the outsourcing tips on our blog.

Download: RFP Specifications Tool

OHRP Announces Revisions to the Common Rule

Posted by Brook White on Thu, Jan 26, 2017 @ 11:40 AM

Shann Williams has 10 years of experience managing clinical trials. She is a Sr. Director of Operations and the program director of the statistical and clinical coordinating center of the Transplantation Group for the division-wide consolidated coordinating center sponsored by the National Institute of Allergy and Infectious Diseases (NIAID).  In addition, Shann serves as Rho's project management operational service leader, an internal expert sharing project management best practices, processes, and training.

On January 18, the Department of Health and Human Services (HHS) announced revisions to the Common Rule.  The intent of the revisions is to enhance protections for research participants while reducing the administrative burden associated with oversight.  This article provides background on the Common Rule, discusses the purpose of the revisions, summarizes the changes (including which proposals from the review period were not incorporated), and discusses some anticipated impacts of the final rule.  Stakeholders will need to be compliant by January 19, 2018.

Background

In 1991, the Federal Policy for the Protection of Human Subjects, generally referred to as the Common Rule, was released.  The original rule was based heavily on the 1979 Belmont Report, which sought to outline:

  1. The boundaries between biomedical/behavioral research and the accepted routine practice of medicine.
  2. The role of assessment of risk-benefit criteria in the determination of the appropriateness of research involving human subjects.
  3. Appropriate guidelines for the selection of human subjects for participation in such research.
  4. The nature and definition of informed consent in various research settings.

The last revision took place in 2005.  Notice of proposed revisions to the Common Rule was published in September 2015; more than 2,100 comments were received, and some proposed changes were not adopted in the final rule.

The Common Rule applies to research using human subjects conducted or supported by HHS or 15 other federal agencies.  Pharmaceutical industry studies not conducted or supported by HHS may still fall under similar and largely redundant FDA regulations (e.g. 21 CFR 50, 21 CFR 54, 21 CFR 56).  Further, studies conducted or supported by HHS that need to comply with IND regulation would be subject to both the Common Rule and FDA regulations (for example, a study that will support changes to the label for an approved product). 

Revision Purpose

According to the rule summary, the revision is “intended to better protect human subjects involved in research, while facilitating valuable research and reducing burden, delay, and ambiguity for investigators. These revisions are an effort to modernize, simplify, and enhance the current system of oversight.”

Research involving human subjects has grown and developed substantially over the last two decades. The revision cites examples of developments that include the increasing number and types of clinical trials, the use of electronic health data and other electronic data elements, and large datasets combined with more sophisticated analytical techniques.  These advances were not matched by a comprehensive oversight system.  Per the revision, “The sheer volume of data that can be generated in research, the ease with which it can be used to identify individuals were simply not possible, or even imaginable, when the Common Rule was first adopted.”  The revision also references the President’s Precision Medicine Initiative as a consideration for proposed changes, with the central tenet that participants in research should be active partners and not merely passive subjects.

Summary of Changes

The revision to the Common Rule includes the following:

  • Enacts new policies for what research subjects should be informed of during the informed consent process.  New requirements include:
    • A statement added to the consent form regarding whether a subject’s biospecimens – even if non-identified – may be used for commercial profit and whether the subject will or will not share in this profit.
    • Whether or not clinically relevant research results will be disclosed to subjects and, if so, how they will be disclosed.
  • Requires that one version of the informed consent form used during enrollment be posted for studies conducted or supported by the federal government on a federal website no later than 60 days after last patient, last visit. Responses to the revision suggested using Clinicaltrials.gov as the website to post these forms, but the federal website that will be used has not been determined.
  • Creates an expectation that informed consent forms include a succinct summary at the beginning of the document that includes information that would be most important to the person considering participation in the study such as potential benefits and risks, alternative treatments they should consider, and the purpose of the research.  According to Jerry Menikoff, Director of the Office of Human Research Protections (OHRP) at HHS, “Over the years, many have argued that consent forms have become these incredibly lengthy and complex documents that are designed to protect institutions from lawsuits, rather than providing potential research subjects with the information they need in order to make an informed choice about whether to participate in a research study. We are very hopeful that these changes and all the others that reduce unnecessary administrative burdens will be beneficial to both researchers and research participants.”
  • Allows obtaining a broad consent from subjects for unspecified future research as an option for investigators and requires that 12 elements be included (6 of these are specific to a broad consent).
  • Identifies new categories of research that may be exempt from a full IRB review based on the study’s risk profile. 
  • Requires that multi-center research conducted within the United States use a single IRB.  This will become effective three years after publication of the final rule, in 2020.
  • Enforces regulatory compliance for IRBs that are not operated by a Federalwide Assurance (FWA)-holding institution.  Previously, the rule held that institutions holding the FWA would be held responsible, which increased the reluctance of clinical sites to use a central IRB.
  • Alleviates the need for IRB continuing review of ongoing research for studies that were under expedited review, those studies that have completed study interventions and are in analysis, or are in observational follow-up combined with standard of care.

The final rule did not adopt several proposals being considered during the advanced notice of proposed rulemaking time period, including:

  • Requiring that non-identified biospecimens be subject to the Common Rule and that consent be obtained for such specimens, as well as imposing further restrictions on the conditions under which a waiver of consent could be obtained for biospecimen research.  The proposed changes that received the most comments were those regarding how the rule would impact biospecimens, including the expanded definition of “human subject”.
  • Requiring that this policy expand to cover clinical trials that are not federally funded
  • Enforcing standard privacy practices for identifiable health information and biospecimens
  • Adding a list of activities that would meet the definition of “minimal risk” to provide more clarity around this definition
  • Requiring that the study exemptions to IRB review be documented and determined in a specific way (e.g., using a specific tool)

Predictions, Questions, and Things that Remain to Be Seen

  • The Common Rule document includes projections for quantitative savings and qualitative benefits for the years 2017 – 2026. One theme of the NPRM public comments, which led to the abandonment of the more controversial proposed changes in the final rule, was the lack of details or, as the Administrative Procedure Act calls them, “terms or substance.”  The comments discussed a lack of quality deliverables as well as ambiguity around underlying tools, templates, and concepts.  The revision cites examples of these that were not available for public comment at the time of the proposed rule-making: 1) broad consent templates; 2) standards for privacy protection; 3) a list of eligible expedited procedures; and 4) a study exemption decision tool.  Time will tell whether the streamlined nature of the final rule, as compared to the proposed changes, will result in measurable deliverables and the projected quantitative and qualitative benefits.
  • In an ideal execution of the new rule, the requirement for multi-site U.S. research to use a single IRB would reduce costs and expedite the research.  Notably, the NIH also recently issued its policy on the use of a single IRB for these types of studies.  However, we’ve had first-hand experience following this recent mandate.  Some clinical sites have been reluctant or unable to relinquish control over their research to a central IRB authority.  We have some studies that have had to go through both a central IRB and institutional site IRBs, thereby increasing study costs and timelines in direct contradiction to the goal of the revision.  Since the requirement that central IRBs not operated by an FWA-holding institution comply with the rule will not take effect until next year, there seems to be high potential for increased costs and burden in the short term.
  • In the same vein, some clinical site IRBs may continue their current practices for full and continuing reviews regardless of whether those studies meet the revised definition of exempt. Hopefully the changes will provide long-term benefits as more clinical sites become comfortable implementing the revisions and amending their institutional guidelines.
  • It is possible that institutions that support Common Rule-only, IND-only, and Common Rule/IND-regulated projects will develop policies that generally assure compliance with both the Common Rule and FDA regulations, irrespective of which formally applies to any given study.
  • Providing study subjects their clinically relevant data has been a topic of discussion and an area of vested interest for some time now.  This is certainly a reasonable expectation for study subjects and one that the clinical research community wants to meet. The new rule does not require data to be provided to subjects, but the requirement to tell subjects whether they will receive this data may encourage the clinical research community to move in that direction more quickly.  However, there has been very little in the way of generally accepted methods or best practices in the absence of formal guidance. In order to draft the appropriate consent form language per the updated rule, study-specific decisions will need to be made earlier, during the protocol development phase, which will impact CRF and database development and potentially study design and randomization schema.  Discussions will need to take place around what data is deemed to be “clinically relevant” (e.g., should all data for a given subject be provided or just a subset?), how distributing this data to study subjects may impact blinding, the appropriate timing of data sharing (e.g., after LPLV or after that specific subject completes the study), HIPAA considerations, whether to provide the same data to subjects’ primary care providers, and the format and medium of the data to be shared, among many other factors. The research community will need to start discussing these impacts to ensure the statement included in the informed consent form accurately conveys the intent.
  • In addition, it is uncertain what opinions are held by the new presidential administration and what impact those opinions might have.   
  • With regard to informed consent changes, there is clearly a desire to make ICFs easier for subjects to read, so they fully understand the benefits and risks of participation.  While the final rule sets that expectation, implementation of these changes will determine whether they provide the intended benefit. 
As with all new regulations, we’ll know more as stakeholders implement the various changes.  What do you think about the final rule?  Share your thoughts in the comments below.

Webcharts: A Reusable Tool for Building Online Data Visualizations

Posted by Brook White on Wed, Jan 18, 2017 @ 01:39 PM

 

This is the second in a series of posts introducing open source tools Rho is developing and sharing online. Click here to learn more about Rho's open source effort.

When Rho created a team dedicated to developing novel data visualization tools for clinical research, one of the group's challenges was figuring out how to scale our graphics to every trial, study, and project we work on. In particular, we were interested in providing interactive web-based graphics, which run in a browser and allow for intuitive, real-time data exploration.

Our solution was to create Webcharts - a web-based charting library built on top of the popular Data-Driven Documents (D3) JavaScript library - to provide a simple way to create reusable, flexible, interactive charts.

Interactive Study Dashboard


Track key project metrics in a single view; built with Webcharts (click here for interactive version)

Webcharts allows users to compose a wide range of chart types, ranging from basic charts (e.g., scatter plots, bar charts, line charts), to intermediate designs (e.g., histograms, linked tables, custom filters), to advanced displays (e.g., project dashboards, lab results trackers, outcomes explorers, and safety timelines). Webcharts' extensible and customizable charting library allows us to quickly produce standard charts while also crafting tailored data visualizations unique to each dataset, phase of study, and project.
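To give a feel for configuration-driven charting, a declarative settings object for a "results over time" line chart might look like the sketch below. The property names here are assumptions made for illustration, loosely in the spirit of Webcharts' design; consult the wiki and API on GitHub for the library's actual configuration options.

```javascript
// Hypothetical declarative chart settings (illustrative, not Webcharts' API).
// Column names follow CDISC conventions: VISITN (visit number),
// AVAL (analysis value), USUBJID (unique subject ID), PARAM (measure).
const settings = {
  x: { column: "VISITN", type: "ordinal", label: "Visit" },
  y: { column: "AVAL", type: "linear", label: "Result" },
  marks: [{ type: "line", per: ["USUBJID"] }], // one line per participant
  filters: [{ column: "PARAM", label: "Measure" }], // interactive drill-down
};

// A renderer would pair this config with a dataset to draw the chart;
// swapping the config, not the code, yields a different chart.
console.log(settings.marks[0].type);
```

The point of such a design is reuse: the renderer stays fixed, and each new chart is just a new settings object, which is what lets one library cover everything from basic scatter plots to full safety dashboards.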

This flexibility has allowed us to create hundreds of custom interactive charts, including several that have been featured alongside Rho's published work. The Immunologic Outcome Explorer (shown below) was adapted from Figure 3 in the New England Journal of Medicine article, Randomized Trial of Peanut Consumption in Infants at Risk for Peanut Allergy. The chart was originally created in response to reader correspondence, and was later updated to include follow-up data in conjunction with a second article, Effect of Avoidance on Peanut Allergy after Early Peanut Consumption. The interactive version allows the user to select from 10 outcomes on the y-axis. Selections for sex, ethnicity, study population, skin prick test stratum, and peanut specific IgE at 60 and 72 months of age can be interactively chosen to filter the data and display subgroups of interest. Figure options (e.g., summary lines, box and violin plots) can be selected under the Overlays heading to alter the properties of the figure.

Immunologic Outcome Explorer



Examine participant outcomes for the LEAP study (click here for interactive version)

Because Webcharts is designed for the web, the charts require no specialized software. If you have a web browser (e.g., Firefox, Chrome, Safari, Internet Explorer) and an Internet connection, you can see the charts. Likewise, navigating the charts is intuitive because we use controls familiar to anyone who has used a web browser (radio buttons, drop-down menus, sorting, filtering, mouse interactions). A manuscript describing the technical design of Webcharts was recently published in the Journal of Open Research Software.

The decision to build for general web use was intentional. We were not interested in creating a proprietary charting system - of which there are many - but in building an extensible, open, generalizable tool that could be adapted to a variety of needs. For us, that means charts to aid in the conduct of clinical trials, but the tool is not limited to any particular field or industry. We also released Webcharts open source so that other users could contribute to the tools and help us refine them.

Because they are web-based, charts for individual studies and programs are easily implemented in RhoPORTAL, our secure collaboration and information delivery portal, which allows us to share the charts with study team members and sponsors while carefully limiting access to sensitive data.

Webcharts is freely available online on Rho's GitHub site. The site contains a wiki that describes the tool, an API, and interactive examples. We invite anyone to download and use Webcharts, give us feedback, and participate in its development.


Jeremy Wildfire, MS, Senior Biostatistician, has over ten years of experience providing statistical support for multicenter clinical trials and mechanistic studies related to asthma, allergy, and immunology.  He is the head of Rho’s Center for Applied Data Visualization, which develops innovative data visualization tools that support all phases of the biomedical research process. Mr. Wildfire also founded Rho’s Open Source Committee, which guides the open source release of dozens of Rho’s graphics tools for monitoring, exploring, and reporting data. 

Ryan Bailey, MA is a Senior Clinical Researcher at Rho. He has over 10 years of experience conducting multicenter asthma research studies, including the Inner City Asthma Consortium (ICAC) and the Community Healthcare for Asthma Management and Prevention of Symptoms (CHAMPS) project. Ryan also coordinates Rho’s Center for Applied Data Visualization, which develops novel data visualizations and statistical graphics for use in clinical trials.

Tips for Effective Enrollment Tracking

Posted by Brook White on Thu, Jan 05, 2017 @ 10:05 AM

Heather Kopetskie, MS, is a Senior Biostatistician at Rho. She has over 10 years of experience in statistical planning, analysis, and reporting for Phase 1, 2, and 3 clinical trials and observational studies. Her research experience includes over 8 years focusing on solid organ and cell transplantation through work on the Immune Tolerance Network (ITN) and Clinical Trials in Organ Transplantation (CTOT) projects. In addition, Heather serves as Rho’s biostatistics operational service leader, an internal expert sharing biostatistical industry trends, best practices, processes, and training.

It’s important to track enrollment over the course of a trial to make sure accrual goals are met, but the enrollment number alone isn’t enough to know how a study is progressing. Visualizing enrollment provides a quick overview of how the trial as a whole is progressing, how specific sites are performing, whether enrollment goals will be met, and key information about the enrolled population. Below are some graphics we have found valuable for tracking site performance.

Most trials activate sites on a rolling basis, since many factors affect when a site can be activated, and each site may have a different target enrollment for the trial. The Enrollment Over Time graph below accounts for each site’s activation date and target randomization rate, showing the target accrual trajectory alongside actual enrollment. In addition to overall study status, the graph can be subset to review a particular site’s status. The Overall Enrollment bar graph gives a quick overview of how many subjects have been screened, enrolled, and randomized at each site along with the target accrual, making it easy to see which sites are performing and which need additional follow-up.

enrollment metrics
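The target curve in an Enrollment Over Time graph can be derived from each site's activation date and target randomization rate. The snippet below is a simplified sketch of that bookkeeping, not Rho's actual reporting code; the site names, dates, and rates are invented for illustration.

```python
from datetime import date

# Hypothetical site records: activation date and target randomizations per month.
sites = [
    {"site": "001", "activated": date(2016, 1, 15), "target_per_month": 2.0},
    {"site": "002", "activated": date(2016, 3, 1),  "target_per_month": 1.5},
]

def months_active(activated, as_of):
    """Whole months a site has been open as of a given date (0 if not yet active)."""
    delta = (as_of.year - activated.year) * 12 + (as_of.month - activated.month)
    return max(delta, 0)

def target_enrollment(sites, as_of):
    """Cumulative target accrual across all sites, respecting rolling activation."""
    return sum(s["target_per_month"] * months_active(s["activated"], as_of)
               for s in sites)

# As of July 1, 2016: site 001 has been open 6 months, site 002 for 4.
print(target_enrollment(sites, date(2016, 7, 1)))  # 2.0*6 + 1.5*4 = 18.0
```

Evaluating `target_enrollment` at each month end produces the target line; plotting actual randomizations against it shows at a glance whether the study is on pace.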

In some studies, it’s important to track sub-groups of enrollment. This can be done by including sub-bars that show what percent of subjects are in each group.

 

enrollment by subgroup

Dropout is a concern in many trials, since the sample size calculation projects an expected dropout rate and shows how higher rates would erode the power of the primary analysis. This graph lets us confirm we are staying within the pre-specified dropout rates so we aren’t losing too much power to evaluate the primary endpoint.

study dropout tracking

Tracking enrollment overall and by site can help the study team manage the study and focus their efforts on sites that are lagging behind. Close monitoring of study dropouts is valuable so additional retention strategies can be put in place if needed before the number of dropouts has a detrimental effect on the power of a trial. 
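The power erosion that dropout monitoring guards against can be sketched numerically. The function below is an illustrative calculation only, not Rho's internal tooling: it approximates the power of a two-sample comparison with a normal approximation (a real analysis would typically use t-distribution quantiles) and discounts the per-arm sample size by the dropout rate. All numbers are invented.

```python
import math
from statistics import NormalDist

def power_two_sample(n_per_arm, effect_size, dropout=0.0, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test after dropout.

    Treats completers as n * (1 - dropout) per arm -- a simplification; real
    trials would model dropout mechanisms and analysis populations carefully.
    """
    nd = NormalDist()
    n_eff = n_per_arm * (1.0 - dropout)
    z_alpha = nd.inv_cdf(1.0 - alpha / 2.0)          # two-sided critical value
    ncp = effect_size * math.sqrt(n_eff / 2.0)       # noncentrality parameter
    return nd.cdf(ncp - z_alpha)

# 100 subjects/arm, standardized effect 0.4: power ~0.81 with no dropout,
# but it falls to roughly 0.72 if 20% of subjects drop out.
print(round(power_two_sample(100, 0.4), 3))
print(round(power_two_sample(100, 0.4, dropout=0.2), 3))
```

Running this across a range of dropout rates shows why staying inside the pre-specified rate matters: power degrades steadily as completers are lost.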

Download: 5 Tips for Conducting Feasibility for a New Clinical Trial

The Rise of Electronic Clinical Outcome Assessments (eCOAs) in the Age of Patient Centricity

Posted by Brook White on Tue, Dec 06, 2016 @ 10:36 AM

Lauren Neighbours is a Research Scientist at Rho. She leads cross-functional project teams for clinical operations and regulatory submission programs and has over ten years of scientific writing and editing experience. Lauren has served as a project manager and lead author for multiple clinical studies across a range of therapeutic areas that use patient- and clinician-reported outcome assessments, and she worked with a company to develop a patient-reported outcome instrument evaluation package for a novel electronic clinical outcome assessment (eCOA).

Jeff Abolafia is a Chief Strategist for Data Standards at Rho and has been involved in clinical research for over thirty years. He is responsible for setting strategic direction and overseeing data management, data standards, data governance, and data exchange for Rho’s federal and commercial divisions. In this role, Jeff is responsible for data collection systems, data management personnel, developing corporate data standards and governance, and developing systems to ensure that data flows efficiently from study start-up to submission or publication. Jeff has also developed systems for managing, organizing, and integrating both data and metadata for submission to the FDA and other regulatory authorities.

With the industry-wide push towards patient centricity, electronic clinical outcome assessments (eCOAs) have become a widely used strategy to streamline patient data collection, provide real-time access to data (for review and monitoring), enhance patient engagement, and improve the integrity and accuracy of clinical studies. eCOAs comprise a variety of electronically captured assessments, including patient-reported outcomes (PROs), clinician-reported and health-care professional assessments (ClinROs), observer-reported outcomes (ObsROs), and patient performance outcomes administered by health-care professionals (PerfOs). The main methods for collecting eCOA data are computers, smartphones, and tablets, as well as telephone systems. While many companies choose to partner with eCOA vendors to provide these electronic devices for use in a clinical study, other sponsors are exploring “bring your own device” (BYOD) strategies to save costs and start-up time. Whatever strategy you use to implement an eCOA for your clinical study, there are several factors to consider before embarking on this path.

Designing a Study with eCOAs

The decision to incorporate an eCOA into your clinical study design is multifaceted and includes considerations such as the therapeutic area, the type of data being collected, and the study design, but the choice can first be boiled down to two distinct concepts: 1) the need for clinical outcome data from an individual, and 2) the need for this data to be collected electronically. The benefits and challenges of eCOAs align with one or both of these concepts.

Regarding the first concept, the need for clinical outcome data should be driven by your study objectives and a cost-benefit analysis of the optimal data collection technique. Using eCOAs to collect data is undoubtedly more patient-centric than an objective measure such as body mass index (BMI), as calculated from weight and height measurements. The BMI calculation does not tell you anything about how the patient feels about their body image, or whether the use of a particular product impacts their feelings of self-worth. If the study objective is to understand the subjective impact of a product on the patient or health-care community, a well-designed eCOA can be a valuable tool for capturing this information. These data can tell you specific information about your product and help inform the labeling language that will be included in the package insert of your marketed product. Additionally, FDA has encouraged the use of PROs to capture certain data endpoints, such as pain intensity, from a patient population who can respond themselves (see eCOA Regulatory Considerations below). Of course, it’s important to note that the inherent subjectivity of eCOAs does come with its own disadvantages. The data are subject to more bias than objective measures, so it’s critical to take steps to reduce bias as much as possible. Examples of ways to reduce bias include single- or double-blind trial designs, wherein the patient or assessor is not aware of the assigned treatment, and building in a control arm (e.g., placebo or active comparator) to compare eCOA outcome data across treatment groups.

Another important concept is the process for identifying and implementing the electronic modality for eCOA data collection. Many studies still use paper methods to collect clinical outcome data, and there are cases when paper may make more sense for achieving your study objectives (e.g., Phase 1 studies with limited subjects). However, several types of clinical outcome data can be collected more efficiently, at lower cost, and at higher quality with electronic approaches (e.g., diary data or daily pain scores). From an efficiency standpoint, data can be entered directly into a device and integrated with the electronic data management system used to maintain data collection for the duration of the study. This saves the time (and cost) associated with site personnel printing, reviewing, interpreting, and/or transcribing data collected on paper into the electronic data management system, and it also requires less monitoring time to review and remediate data. Additionally, paper data is often “dirty” data, with missing or incorrectly recorded values on the paper form that are then carried into the data management system. An eCOA allows an almost instantaneous transfer of data, which saves upfront data entry time and also reduces the downstream effort required to address queries associated with the eCOA data. Aside from efficiencies, eCOA methods allow more effective patient compliance measures to be implemented in the study. The eCOA device can be configured to require daily or weekly data entry and real-time review by site personnel prior to the next scheduled clinic visit. Additionally, the eCOA system can send alerts and reminders to patients (to ensure data is entered in a timely manner) and to health-care personnel (to ensure timely review and verification of data and subsequent follow-up with patients as needed). The downsides of electronic data collection tend to be the cost and time required to implement the system at the beginning of the study. It’s therefore essential to select an appropriate eCOA vendor early, one who will work with you to design, validate, and implement the clinical assessment specifically for your study.

eCOA Regulatory Considerations

In line with the industry push for patient-focused clinical studies, recent regulatory agency guidance has encouraged the use of eCOAs to evaluate clinical outcome data. The fifth authorization of the Prescription Drug User Fee Act (PDUFA V), enacted in 2012 as part of the Food and Drug Administration Safety and Innovation Act (FDASIA), included a commitment by the FDA to more systematically obtain patient input on certain diseases and their treatments. In so doing, PDUFA V supports the use of PRO endpoints not only to collect data directly from the patients who participate in clinical studies but also as a way to actively engage patients in their treatment. The 2009 FDA guidance for industry on Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims further underscores this idea by stating that “[the use] of a PRO instrument is advised when measuring a concept best known by the patient or best measured from the patient perspective.” The 2013 Guidance for Industry on Electronic Source Data in Clinical Investigations provides the Agency’s recommendations on “the capture, review, and retention of electronic source data” and is to be used in conjunction with the 2007 guidance on Computerized Systems Used in Clinical Investigations for all electronic data and systems used in FDA-regulated clinical studies, including eCOAs. To support these efforts, the FDA has developed an extensive Clinical Outcome Assessment Qualification Program, designed to review and assess the design, validity, and reliability of a COA for a particular use in a clinical study. Furthermore, the newly formed Clinical Outcome Assessment Compendium is a collated list of COAs that have been identified for particular uses in clinical studies. The COA Compendium is further evidence of FDA’s commitment to patient-centric product development, and it provides a helpful starting point for companies looking to integrate these assessments into their clinical development programs.

Before choosing an eCOA for your clinical development program, the following regulatory factors should be considered:

  • FDA holds COAs to the same regulatory and scientific standards as other measures used in clinical trials. Thus, it is advisable to refer to the Guidance for Industry on Patient-Reported Outcomes and the available information on the COA Qualification Program and COA Compendium when implementing eCOAs in your development program. If you plan to deviate from currently available regulatory guidance, make sure to have a solid rationale and supporting documentation to substantiate your position.
  • The qualification of an eCOA often requires input from patients and/or health-care professionals to evaluate the effectiveness of the assessment. This input is necessary for the regulatory agency to determine whether the eCOA can accurately measure what it’s supposed to measure (validity) and to demonstrate it can measure the outcome dependably (reliability).
  • Data collected from qualified and validated eCOAs can be used to support product labeling claims. The key is to use an eCOA when it’s appropriate to do so and to make sure the eCOA supports your intended labeling claims because the instrument will be evaluated in relation to the intended use in the targeted patient population.
  • When an instrument was developed for paper-based collection, or an instrument is collected using multiple modes, it may be necessary to test for equivalence. Regulators often expect this (especially for primary and secondary endpoints) to ensure that the electronic version of the instrument remains valid and that data collected with mixed modes are comparable.
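Equivalence between paper and electronic modes is commonly assessed with two one-sided tests (TOST). The sketch below is illustrative only: the function name and data are invented, and it uses a normal approximation where a real submission would typically use a t-based TOST. It declares the modes equivalent when the mean paired difference is confidently inside a pre-specified margin.

```python
import math
from statistics import NormalDist, mean, stdev

def tost_paired(differences, margin, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of paired scores.

    `differences` are per-subject (electronic - paper) scores; `margin` is the
    pre-specified equivalence bound. Normal approximation for simplicity --
    a real analysis would use t-distribution quantiles. Returns True if both
    one-sided tests reject at level alpha (i.e., equivalence is concluded).
    """
    n = len(differences)
    d_bar = mean(differences)
    se = stdev(differences) / math.sqrt(n)
    nd = NormalDist()
    p_lower = 1.0 - nd.cdf((d_bar + margin) / se)  # H0: mean diff <= -margin
    p_upper = nd.cdf((d_bar - margin) / se)        # H0: mean diff >= +margin
    return max(p_lower, p_upper) < alpha

# Invented paired differences clustered near zero: equivalent within a
# generous margin, but not within an unrealistically tight one.
diffs = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15, -0.05, 0.1, -0.15, 0.05]
print(tost_paired(diffs, margin=0.5))
print(tost_paired(diffs, margin=0.01))
```

Note that in TOST the burden of proof is reversed: a nonsignificant difference test is not evidence of equivalence, which is why both one-sided tests must reject.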

A CRO Can Help with your eCOA Strategy

CROs partner with sponsor companies to develop and execute their product development strategies.  In some cases, this involves implementing clinical outcome measures into a development program and then facilitating the interactions between the company and regulatory authorities to ensure adequate qualification of the COA prior to marketing application submission.  Whether or not you choose to engage a CRO in your development plan, consider seeking outside consultation from the experts prior to establishing your eCOA strategy to give you and your company the best chance of success.  

CROs Can Help:

  • Determine endpoints where eCOA data is appropriate
  • Determine the cost/benefit of electronic vs paper data capture
  • Determine the best mode of electronic data capture
  • Recommend eCOA vendors when appropriate
  • Perform equivalence analysis
  • Facilitate discussions with regulatory authorities
  • Manage the entire process of eCOA implementation

Webinar: ePRO and Smart Devices

Embracing Open Source as Good Science

Posted by Brook White on Wed, Nov 30, 2016 @ 09:37 AM

Ryan Bailey, MA is a Senior Clinical Researcher at Rho. He has over 10 years of experience conducting multicenter asthma research studies, including the Inner City Asthma Consortium (ICAC) and the Community Healthcare for Asthma Management and Prevention of Symptoms (CHAMPS) project. Ryan also coordinates Rho’s Center for Applied Data Visualization, which develops novel data visualizations and statistical graphics for use in clinical trials.

Sharing. It's one of the earliest lessons your parents try to teach you - don't hoard, take turns, be generous. Sharing is a great lesson for life. Sharing is also a driving force behind scientific progress and software development. Science and software rely on communal principles of transparency, knowledge exchange, reproducibility, and mutual benefit.

The practice of open sharing, or open sourcing, has advanced both fields considerably.

We also feel strongly that the impetus for open sharing is reflected in Rho's core values - especially team culture, innovation, integrity, and quality. Given our values, and given our role in conducting science and creating software, we've been exploring ways that we can be more active in the so-called "sharing economy" when it comes to our work.

One of the ways we have been fulfilling this goal is to release our statistical and data visualization tools as freely-accessible, open source libraries on GitHub. GitHub is one of the world's largest open source platforms for virtual collaboration and code sharing. GitHub allows users to actively work on their code online, from anywhere, with the opportunity to share and collaborate with other users. As a result, we not only share our code for public use, we also invite feedback, improvements, and expansions of our tools for other uses.

We released our first open source tool - the openFDA Adverse Event Explorer - in June 2015. Now we have 26 team members working on 28 public projects, and that number has been growing rapidly. The libraries and tools we've been sharing have a variety of uses: monitor safety data, track project metrics, visualize data, summarize every data variable for a project, aid with analysis, optimize SAS tools, and explore population data.

Most repositories include examples and wikis that describe the tools and how they can be used. An example of one of these tools, the Population Explorer, is shown below.

Interactive Population Explorer

interactive population explorer, clinical trial graphics

Access summary data on study population and subpopulations of interest in real time.

One of over 25 public projects on Rho's GitHub page - available at: https://github.com/RhoInc/PopulationExplorer

Over the next few months, we are going to highlight a few of our different open source tools here on the blog. We invite you to check back/subscribe to learn more about the tools we're making available to the public. We also encourage you to peruse the work for yourself on our GitHub page: https://github.com/RhoInc.

We are excited to be hosting public code and instructional wikis in a format that allows free access and virtual collaboration, and hope that an innovative platform like GitHub will give us a way to share our tools with the world and refine them with community feedback. As science and software increasingly embrace open source code, we are changing the way we develop tools and optimizing the way we do clinical research while staying true to our core purpose and values.

If you have any questions or want to learn more about one of our projects, email us at: graphics@rhoworld.com

Big Data: The New Bacon

Posted by Brook White on Wed, Nov 16, 2016 @ 04:10 PM

David Hall is a bioinformatician with expertise in the development of algorithms, software tools, and data systems for the management and analysis of large biological data sets for biotechnology and biomedical research applications. He joined Rho in June 2014 and currently oversees capabilities development in the areas of bioinformatics and big biomedical data. He holds a B.S. in Computer Science from Wake Forest University and a Ph.D. in Genetics with an emphasis in Computational Biology from the University of Georgia.

Data is the new bacon, as the saying goes. And Big Data is all the rage as people in the business world realize that you can make a lot of money by finding patterns in data that let you target marketing to the most likely buyers. Big Data and a type of artificial intelligence called machine learning are closely connected: machine learning involves teaching a computer to make predictions by training it to find and exploit patterns in Big Data. Whenever you see a computer make predictions - how much a home is worth, the best time to buy an airline ticket, which movies you will like - Big Data and machine learning are probably behind it.
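As a toy illustration of the pattern-finding described above, the sketch below fits a line to invented (square footage, sale price) pairs with ordinary least squares and uses it to "predict" a new home's price. All numbers are made up; real home-price models use many more features and far more data.

```python
# Invented training examples: square footage and sale price (in $1000s).
sqft  = [1100, 1400, 1800, 2200, 2600]
price = [150, 185, 230, 275, 320]

n = len(sqft)
mean_x = sum(sqft) / n
mean_y = sum(price) / n

# Ordinary least squares: slope = covariance(x, y) / variance(x).
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sqft, price))
         / sum((x - mean_x) ** 2 for x in sqft))
intercept = mean_y - slope * mean_x

def predict(square_feet):
    """Predicted sale price (in $1000s) from the fitted line."""
    return intercept + slope * square_feet

print(round(predict(2000)))  # -> 252, i.e. about $252K for a 2000 sq ft home
```

Production systems differ mainly in scale, not in kind: more features, more examples, and more flexible models, but the same idea of learning parameters that fit observed patterns.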

However, Big Data and machine learning are nothing new to people in the sciences. We have been collecting big datasets and looking for patterns for decades. Most people in the biomedical sciences date the Big Data era to the early-to-mid 1990s, as various genome sequencing projects ramped up. The Human Genome Project wrapped up in 2003, took more than 10 years, and cost somewhere north of $500 million - and that was to sequence just one genome. A few years later came the 1000 Genomes Project, whose goal was to characterize genetic differences across 1,000 diverse individuals so that, among other things, we can predict who is susceptible to various diseases. The effort was partially successful, but we learned that 1,000 genomes is not enough.

The cost to sequence a human genome has fallen to around $1,000, and the ambition and scale of big biomedical data have increased proportionately. Researchers in the UK are undertaking a project to sequence the genomes of 100,000 individuals. In the US, the Precision Medicine Initiative will sequence 1 million individuals. Combining this data with detailed clinical and health data will allow machine learning and other techniques to more accurately predict a wider range of disease susceptibilities and responses to treatments. Private companies are undertaking their own big genomic projects and are even sequencing the “microbiomes” of research participants to see what role good and bad microbes play in health.

Like the growth in computing power predicted by Moore’s law, the amount of biomedical data we can collect is on a steep upward trajectory. Genomics data, combined with electronic medical records, data from wearables and mobile apps, and environmental data, will one day shroud each individual in a data cloud. In the not-too-distant future, medicine may involve feeding a patient’s data cloud to an artificial intelligence that has learned to make diagnoses and recommendations by looking through millions of other personal data clouds. It seems hard to conceive, but this is the trajectory of precision medicine. Technology has a way of sneaking up on us, and the pace of change keeps getting faster. Note that the management and analysis of all of this data will be very hard. I’ll cover that in a future post.
