
Rho Knows Clinical Research Services

Current Gene Therapy Landscape:  Overview, Challenges, and Benefits

Posted by Kristin Gabor on Wed, Jan 15, 2020 @ 11:22 AM

In 2017, the US Food and Drug Administration (FDA) approved its first gene therapy, a chimeric antigen receptor T-cell (CAR-T) product to fight acute lymphoblastic leukemia (Kymriah). That same year, another CAR-T therapy was approved to fight certain types of B-cell lymphoma (Yescarta). Since then, 4 additional products have been approved to treat serious diseases such as β-thalassemia, spinal muscular atrophy, a rare form of vision loss, and a rare form of primary immunodeficiency. There are currently over 450 gene therapy and gene-based medicine companies worldwide and around 800 gene therapy and gene-modified cell therapy clinical trials being conducted (ARM Q3 update).

Significant progress has been made in developing safe gene therapy products in recent years. Just over 20 years ago, the tragic case of 19-year-old Jesse Gelsinger, who died during a gene therapy trial, led to a temporary collapse of the gene therapy field. Since then, the industry has come a long way. In a joint statement made in January 2019, FDA Commissioner Scott Gottlieb and CBER Director Peter Marks noted that, based on an assessment of the current pipeline and the clinical success rates of gene therapy products, the FDA anticipates that by 2020 it will be receiving more than 200 INDs per year for cell-based or directly administered gene therapies, and that by 2025 it will be approving 10 to 20 cell and gene therapy products a year.

The basics – what is gene therapy?

Gene mutations can be inherited or can occur as cells age or are exposed to certain chemicals. Small changes to our genetic material can have large impacts on cellular function.

Gene therapy products mediate their effect through the transfer of genes or the alteration of human genetic sequences. Gene therapy can be thought of as the introduction, removal, or change of genetic material within a patient's cells – for instance, repairing a faulty gene in order to produce new or modified proteins, increase proteins that fight disease, or reduce proteins that cause disease. This can be achieved by gene replacement, gene silencing, gene addition, or gene editing.

The genetic material is typically transferred into patient cells using a specific type of delivery mechanism (described below). Once inside the cell, the gene makes a functional protein or targets the disease-causing gene.

Gene Delivery Mechanisms

Gene therapy administration can occur via ex vivo or in vivo mechanisms. With ex vivo gene therapy (eg, chimeric antigen receptor [CAR] T-cell therapies, T-cell receptor [TCR] therapies), target cells are extracted from the patient and genetically modified in vitro before being transferred back into the patient. With in vivo administration, the product is delivered directly to the patient using a viral or non-viral vector and the target cells remain in the body of the patient.

Vectors (carriers of the gene) can come in the form of viral vectors, which are genetically engineered viruses in which the viral genome is replaced with the therapeutic gene. Examples of viral vectors are retroviruses, lentiviruses, and adeno-associated virus (AAV; of note, the AAV method is believed by many to be the future of gene therapy). Plasmids (small DNA circles or “naked” DNA), engineered bacteria, and liposomes are other vectors for gene therapy administration, used either to carry and deliver a replacement gene or to augment or silence an existing gene.

Another type of gene therapy product is genome-edited cells, which can be generated using zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), or nucleases such as Cas9 and Cas12a derived from the Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR/Cas) system. For instance, one company is using a genome editing platform called ARCUS, derived from a natural genome editing enzyme called a homing endonuclease, in its CAR-T development programs.

Challenges and Benefits in Developing Gene Therapies

The unique benefits of gene therapy are that it targets the cause of the disease and has the potential to treat or perhaps even cure rare, debilitating diseases that have few to no treatment options, often with a single administration. Despite its high price tag, gene therapy could over time reduce or eliminate the need for a lifetime of expensive and/or painful ongoing treatments that many in this patient population would otherwise require.

Although gene therapy offers a number of unique benefits, several challenges make the development of gene therapies difficult as well as expensive. These include:

• Length of the clinical trial process – long-term safety follow-up is needed due to uncertainty surrounding the potential durability of effect and long-term safety effects or complications;
• Manufacturing is complex and time intensive, and many sponsors have faced capacity constraints due to the difficulty of mass production (of viral particles, for instance);
• Individual response variability;
• Some conditions have many mutations leading to a given disease, and one gene therapy approach may only work for a subset of affected individuals;
• Treatment may stabilize disease pathology but not actually reverse existing damage;
• The efficiency of gene transfer with vectors is variable, and there is limited control over where integrating viral vectors will insert;
• Vector doses may need to be large to reach target tissues, and vectors can be filtered out in the liver or cleared by an immune response;
• Unlike other therapeutics, once a gene therapy is administered there is no dose titration and no ability to control expression levels; gene therapy cannot be turned on or off. Repeat dosing is also very likely impossible due to immunity developed during the initial treatment.

An Advancing Field

Gene therapy offers the potential for significant breakthroughs, and a lot of exciting progress has been made with scientific advances in gene therapy and the recent FDA approvals. The complexity of developing these products should not be underestimated, however, as evidenced by the FDA’s release of 6 additional guidances in 2018 intended to help product developers. The gene therapy industry is poised for the future, but due to its complexity the development process needs to be thoughtfully planned and managed. Critical to the success of the field will be the capacity to scale up AAV and other gene therapy manufacturing, along with a streamlined regulatory landscape.

Rho Can Help

There are several unique aspects of gene therapy clinical trials. Rho is currently leading several gene therapy studies and has conducted over 20 gene/cellular therapy trials in over 1000 patients. One of our sponsors commented, “Our collaborative relationship with Rho has been instrumental in the implementation of a complex and rigorous first-in-human genetic medicine study, including strategic solutions for unique challenges faced in rare disease gene therapy trials. Rho has leveraged existing relationships with patient advocacy organizations and worked closely with a centralized biosafety review partner early in study startup to help identify and mitigate potential challenges.”

From both a clinical and regulatory standpoint, our experts can offer advice on your gene therapy trial from the preclinical phase through post-submission. For additional gene therapy considerations from our experts, please view our webinar, "Development Advice for Gene Therapy Products," and for regulatory considerations, David Shoemaker's article, "The Gene Therapy Product Development Process."

Need support designing and executing your next gene therapy trial? Ask our experts for help.

Kristin Gabor, PhD, RAC, Research Scientist, has experience in both regulatory submissions and clinical operations management, with over 10 years of experience in scientific writing and editing clinical and nonclinical documents, which includes numerous publications in peer-reviewed scientific journals. Dr. Gabor has led and participated in the authoring, review, and preparation of several regulatory and clinical documents, including protocols, clinical study reports, annual safety reports, modules of regulatory submissions (NDA, IND, etc.), and other regulatory documents in a variety of therapeutic areas at various stages of integrated product development programs. Her experience spans a spectrum of therapeutic areas, including cystic fibrosis, sickle cell disease, inflammation and immunology, infectious diseases, atopic dermatitis, multiple sclerosis, and rare diseases. Dr. Gabor earned an interdisciplinary PhD in Functional Genomics from the University of Maine and subsequently received an Intramural Research Training award from the NIH/NIEHS for her postdoctoral studies investigating the role of cholesterol metabolism and cell membrane perturbations in regulating the innate immune response in a rare genetic disease. Dr. Gabor received her Regulatory Affairs Certification from the Regulatory Affairs Professionals Society (RAPS) in 2018 and is a current member of RAPS and the North Carolina Regulatory Affairs Forum (NCRAF).

 

Accentuate the . . . Negative: The Importance of Publishing Negative Clinical Study Results

Posted by Brook White on Wed, Nov 28, 2018 @ 11:36 AM

Jamison Chang, MD, Medical Officer, is a board-certified internist with over 15 years of clinical experience with a broad range of disease entities in both the ambulatory and hospital settings. After completing his residency and chief residency at UNC Chapel Hill, he obtained additional training in nephrology as well as a master’s degree in clinical research (MS-CR). These experiences allow Dr. Chang to meld clinical pragmatism with scientific rigor to help plan and conduct high quality clinical trials.

“Stop being so negative about things” or “if you had a less negative attitude, things would go better for you.”  No matter the setting, there tends to be a strong distaste for negativity within our culture.  One notable exception is the pursuit of scientific progress.  Here, at least in theory, a negative finding (something does not work) should garner as much attention as a positive finding (something works).  Ideally, a scientific discipline then takes the balance of both supporting and refuting evidence to decide on the current state of knowledge.  This is science at its best.  When the scientific community falls short of this ideal by failing to consider the entirety of scientific evidence, the quality of scientific research can fall.  Lower quality research can result in poorly informed policies, wasted resources, and, in extreme cases, harm.  While not the only scientific discipline affected, clinical research has well-documented cases of not devoting equal attention to both negative and positive findings.  The remainder of this post will discuss supporting evidence for this claim, attempt to explain why this may be occurring, explain the effects of this phenomenon on the clinical research enterprise, and then offer a few solutions for how we can start righting the ship.

The asymmetry in reporting positive and negative outcomes was recently highlighted by Aaron Carroll, MD, a professor of pediatrics who published an editorial in the NY Times (Sept 24, 2018) titled “Congratulations Your Study Went Nowhere.”  Dr. Carroll cites a recent study in Psychological Medicine whose purpose was to explore possible biases in medical research related to antidepressants.  This group evaluated 105 antidepressant studies registered with the FDA. Half of these studies were “positive” and half were “negative” according to the FDA. Whether a study is declared positive or negative is generally based on whether the study achieves its primary outcome or primary goal.  In these depression studies, a common primary goal is to determine whether there is an improvement in depression on commonly accepted scales. Notably, of the 105 trials reviewed, 98% of the positive trials were published while only 48% of the negative trials were published.  Studies may also look at other outcomes, so-called secondary outcomes or endpoints. In the case of depression trials, examples of secondary outcomes may include hours of sleep or change in weight.  Clinical trials cannot provide the same level of statistical certainty for secondary outcomes as for primary outcomes; rather, secondary outcomes are used to generate hypotheses for future trials.   Despite this well-accepted convention, the study in Psychological Medicine noted that 10 of 25 trials considered negative by the FDA were reported as positive by researchers who appeared to shift the focus from a negative primary outcome to a favorable secondary outcome.   Dr. Carroll also cites a 2004 JAMA study in which researchers reviewed more than 100 trials approved by a scientific committee in Denmark that resulted in 122 publications and over 3000 outcomes.  Half of the outcomes on whether drugs worked were not reported, and two-thirds of cases of possible harm were not reported.

This is not to cast a negative light on scientists or entities that do not fully report outcomes in clinical trials.  While there may be instances of deliberately not reporting certain findings, underreporting of outcomes likely derives from a collective “Eh” regarding negative trials from the clinical research community.  Biomedical journals, grant funding agencies, and biomedical scientists seem to lean more favorably toward studies that demonstrate an effect versus those that don’t show one.  Pick up any major medical journal and this phenomenon will be readily apparent.  Go to any major medical conference and you will witness hundreds of posters showing positive results for every 10 showing negative results.  Journals often bolster their reputations by publishing breakthrough articles that demonstrate the effectiveness of a new therapy rather than a lack of effectiveness.  There is something inherently more attractive about reporting positive results than negative results in the current clinical research environment.  Unfortunately, this is doing a disservice to the quality of clinical research as a whole and potentially limiting our ability to further improve the health and well-being of patients.

Selective reporting has major implications for the current clinical research enterprise.  Starting with the most obvious: if more positive results are reported than negative results, new therapies or devices may actually be less effective in practice than the published literature suggests.  There are major financial implications to this, with insurers and payers utilizing resources on these therapies that might be better apportioned elsewhere.  Underreporting of negative trials and/or outcomes also greatly hinders one of the most critical aspects of scientific research: learning from both the past successes and failures of other scientists. If this knowledge is not widely available to the scientific community, we are more likely to repeat the same mistakes, utilizing scarce resources inefficiently and hindering future scientific progress.

Given the high stakes of underreporting the results of negative trials and outcomes, how might we go about addressing these issues?  Many governing bodies, including the Food and Drug Administration (FDA) in the US, have mandated registering clinical trials at ClinicalTrials.gov.  The requirements have evolved over the years, but most focus on disclosing details of the design of the trial, including the primary endpoints, secondary endpoints, and analysis plan.  The intent is greater transparency so that stakeholders can validate whether a trial was carried out properly.  Inadequately conducted clinical trials can lead to erroneous conclusions about the effectiveness (or lack thereof) of a product. In other words, trials may be falsely negative (the new therapy may be effective but errors in trial conduct obscured this effect) or falsely positive (the new therapy does not work but improper trial conduct makes it appear better than it actually is). The FDA further tightened reporting requirements in 2017 with the Final Rule under 42 CFR Part 11 (the National Institutes of Health (NIH) published similar regulations).  The results of these regulations are encouraging. From 2007 to 2017, reporting by major universities went from around 28% to 78%, with a 20-percentage-point improvement (58% to 78%) between 2015 and 2017 (1). Increased regulation has been helpful, but other needed changes are cultural.  Fundamentally, we need to celebrate and encourage the reporting of important negative results as we do positive results.  We should implore journals to publish negative results so that the clinical research community can learn from and improve upon what has been done previously.   Reporting negative results may seem less newsworthy and, to some, boring, but if that is the price of better science and better treatments for patients, maybe we should consider boring over bling?

The scientific method is an enduring achievement that continues to benefit us.  However, the magnitude of this benefit is significantly curtailed when the method is not employed as it was intended.  In the case of clinical research, there appears to be an inherent bias toward popularizing and publishing things that “work” over things that didn’t “work.”  This asymmetry of accentuating the positive not only potentially leads to erroneous conclusions about current therapies but also impacts the direction and success of future biomedical research.   We need to urge culture change within the clinical research space.   “Accentuating the negative and not just the positive” might be an appropriate mantra for this culture change as we move forward, emphasizing the need to place both positive and negative findings on equal footing.  In this way, we maximize our ability to obtain the best possible answers to our research questions and hopefully deliver the greatest benefits to patients.

Acknowledgements: Many thanks to Dr. Aaron Carroll whose editorial in the NY Times helped clarify my thinking on this issue.  Thank you to Dr. Jack Modell for further clarifying the important issues. 

  1. Piller C and Bronshtein T.  Faced with public pressure, research institutions step up reporting of clinical trial results. STAT Jan 19, 2018.  

Collaboration versus Concentration: The Office

Posted by Brook White on Wed, Nov 07, 2018 @ 09:47 AM

Quick quiz for fans of The Office: Can you remember where each employee sat in the Scranton Dunder Mifflin office?  Even if you can’t get it perfect, chances are you can close your eyes and envision the layout.  The office was open with very few physical boundaries between desks.  Employees could see each other face-to-face and hear one another at all times.  It was a set deliberately configured to create the awkward interactions and comedic conflict that made the series so popular.  The design was perfect for sitcom parody, but it was disastrous for productivity.  

The open office concept has gained popularity in recent years, even becoming a sort of corporate status symbol suggesting that a company values openness, collaboration, and innovation.  However, recent research suggests the open office has the exact opposite effect on employees – reducing in-person interactions, driving up email and IM use, and diminishing productivity.  Several reasons have been given for these results, including: offices are too noisy and distracting, employees feel a loss of privacy and more stress, and individuals prioritize “looking busy” over doing impactful work.  

To quote Michael Scott, “We don’t hate it.  We just don’t like it at all, and it’s terrible.”

The problem is not that places for open collaboration are bad, it’s that an office cannot be constructed around this virtue alone.  Employees also need time for distraction-free, heads-down concentration work.  In Deep Work, Cal Newport praises an alternative layout that maximizes the benefits of both “serendipitous encounters and isolated deep thinking,” which he dubs a hub-and-spoke design.  The concept is simple: quiet personal work areas that minimize distraction and interruption, connected to large common areas that facilitate teamwork, mutual inspiration, brainstorming, and idea sharing.

We took these lessons to heart when designing our new office space.  

The upper floors, which will house most employees’ work spaces, are built around the hub-and-spoke design.  Collaboration spaces (conference rooms, war rooms, huddle rooms, the pantry) are centrally located with cubes and offices spreading out from there.  Within the individual workspace areas, we alternate rows of cubes and offices, which will dampen sound and prevent large areas of noisy cubes.  We are also providing more spaces for quiet concentration away from your desk with Focus Rooms, offices equipped with treadmill desks for shared use, and Libraries.  On the first floor, in addition to the main conference room suite, there will be more opportunities for collaboration with a much larger Hub, an adjoining Game Room, and a larger Patio.

We are really excited about the new space, but at the end of the day, it is still just an office.  A building doesn’t make us special.  Our employees do.  The best our physical workspace can do is provide a structure conducive to good work, but the onus is on each of us to adopt and implement productive behaviors.  

A primary goal of our new headquarters is to build a workspace that makes people excited about where they work.  We look forward to seeing how the move to the new building supports Deep Work and improves our collective productivity.

Ryan Bailey, MA, is a Senior Clinical Researcher at Rho.  He has over 10 years of experience conducting multicenter asthma research studies, including the Inner City Asthma Consortium (ICAC) and the Community Healthcare for Asthma Management and Prevention of Symptoms (CHAMPS) project.  Ryan is also part of the team at Rho that encourages and facilitates Deep Work.

Rho Participates in Innovative Graduate Student Workshop for the 8th Consecutive Time

Posted by Brook White on Thu, Aug 09, 2018 @ 09:18 AM

Petra LeBeau, ScD (@LebeauPetra), is a Senior Biostatistician and Lead of the Bioinformatics Analytics Team at Rho. She has over 13 years of experience in providing statistical support in all areas of clinical trials and observational studies. Her experience includes 3+ years of working with genomic data sets (e.g. transcriptome and metagenome). Her current interest is in machine learning using clinical trial and high-dimensional data.

Agustin Calatroni, MS (@acalatr), is a Principal Statistical Scientist at Rho. His academic background includes a master’s degree in economics from the Université Paris 1 Panthéon-Sorbonne and a master’s degree in statistics from North Carolina State University. In the last 5 years, he has participated in a number of competitions to develop prediction models. He is particularly interested in the use of stacking models to combine several machine learning techniques into one predictive model in order to decrease variance (bagging) and bias (boosting) and improve predictive accuracy.

At Rho, we are proud of our commitment to supporting education and fostering innovative problem-solving for the next generation of scientists, researchers, and statisticians. One way we enjoy promoting innovation is by participating in the annual Industrial Math/Stat Modeling Workshop for Graduate Students (IMSM) hosted by the National Science Foundation-supported Statistical and Applied Mathematical Sciences Institute (SAMSI).  IMSM is a 10-day program that exposes graduate students in mathematics, statistics, and computational science to challenging and exciting real-world projects arising in industrial and government laboratory research.  The workshop is held in SAS Hall on the campus of North Carolina State University. This summer marked our 8th consecutive year as an IMSM Problem Presenter.  We were joined by industry leaders from Sandia National Laboratories, MIT Lincoln Laboratory, US Army Corps of Engineers (USACE), US Environmental Protection Agency (EPA), and Savvysherpa.


SAMSI participants 2018: Agustin Calatroni (first from left), Petra LeBeau (first from right), and Emily Lei Kang (second from right) with students from the SAMSI program.

Rho was represented at the 2018 workshop by investigators Agustin Calatroni and Petra LeBeau, with the assistance of Dr. Emily Lei Kang from the University of Cincinnati. Rho’s problem for this year was Visualizing and Interpreting Machine Learning Models for Liver Disease Detection. 

Machine learning (ML) interpretability is a hot topic. Many tools have become available over the last couple of years (including a variety of very user-friendly ones) that can create fairly accurate ML models, but the constructs that could help us explain and trust these black-box models are still under development.

The success of ML algorithms in medicine and multi-omics studies over the last decade has come as no surprise to ML researchers. This can be largely attributed to their superior predictive accuracy and their ability to work on both large volume and high-dimensional datasets. The key notion behind their performance is self-improvement. That is, these algorithms make predictions and improve them over time by analyzing mistakes made in earlier predictions and avoiding these errors in future predictions. The difficulty with this “predict and learn” paradigm is that these algorithms suffer from diminished interpretability, usually due to the high number of nonlinear interactions within the resulting models. This is often referred to as the “black-box” nature of ML methods.

In cases where interpretability is crucial, for instance in studies of disease pathologies, ad hoc methods leveraging the strong predictive nature of these models have to be implemented. These methods are used as aids for ML users to answer questions like: ‘why did the algorithm make certain decisions?’, ‘which variables were the most important in predictions?’, and/or ‘is the model trustworthy?’

The IMSM students were challenged with studying the interpretability of a particular class of ML methods called gradient boosting machines (GBM) for predicting whether or not a subject had liver disease. Rho investigators provided a curated data set and pre-built the model for the students. To construct the model, the open-source Indian Liver Patient Dataset was used, which contains records of 583 liver patients from North East India (Dheeru and Karra Taniskidou, 2017). The dataset contains eleven variables: a response variable indicating the disease status of the patient (416 with disease, 167 without) and ten clinical predictor variables (Age, Gender, Total Bilirubin, Direct Bilirubin, Alkaline Phosphatase, Alamine Aminotransferase, Aspartate Aminotransferase, Total Proteins, Albumin, Albumin and Globulin Ratio). The data were divided into 467 training and 116 test records for model building.
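For readers who want a concrete picture of the setup, here is a minimal sketch of how such a model could be fit in Python with scikit-learn. The file name, column names, encoding, and hyperparameters are illustrative assumptions on our part, not the exact model the students received.

```python
# Minimal sketch: fitting a gradient boosting classifier to the Indian Liver
# Patient Dataset. File name, column names, and hyperparameters are assumed
# for illustration only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical local copy of the UCI Indian Liver Patient Dataset (ILPD)
cols = ["Age", "Gender", "TotalBilirubin", "DirectBilirubin", "AlkalinePhosphatase",
        "AlamineAminotransferase", "AspartateAminotransferase", "TotalProteins",
        "Albumin", "AlbuminGlobulinRatio", "Disease"]
df = pd.read_csv("ilpd.csv", names=cols)

# Encode gender and the response (1 = liver disease, 0 = no disease)
df["Gender"] = (df["Gender"] == "Male").astype(int)
df = df.dropna()
X, y = df.drop(columns="Disease"), (df["Disease"] == 1).astype(int)

# Roughly reproduce the 467/116 training/test split described above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=116, stratify=y, random_state=0)

gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
gbm.fit(X_train, y_train)
print("Test AUC:", roc_auc_score(y_test, gbm.predict_proba(X_test)[:, 1]))
```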

The scope of work for the students was not to improve or optimize the performance of the GBM model but to explain and visualize the method’s intrinsic latent behavior.

The IMSM students decided to break interpretability down into two areas: global, where the entire dataset is used to interpret the model, and local, where a subset of the data is used to derive an interpretive analysis of the model. The details of these methods will be further discussed in two additional blog posts.
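As a rough illustration of that global/local distinction (assuming the fitted `gbm` model and test split from the sketch above, and not the students' actual methods), permutation importance gives one common global view, while perturbing a single record gives a crude local, per-patient view:

```python
# Sketch of global vs. local interpretation, assuming `gbm`, X_test, and
# y_test from the previous example. Illustrative only.
import numpy as np
from sklearn.inspection import permutation_importance

# Global: how much does shuffling each predictor degrade performance,
# averaged over the whole test set?
result = permutation_importance(gbm, X_test, y_test, scoring="roc_auc",
                                n_repeats=20, random_state=0)
for name, imp in sorted(zip(X_test.columns, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:>28s}: {imp:.3f}")

# Local: explain one prediction by perturbing one variable for one patient
# and watching the predicted probability change.
patient = X_test.iloc[[0]].copy()
baseline = gbm.predict_proba(patient)[0, 1]
patient["TotalBilirubin"] *= 2  # hypothetical what-if perturbation
print("baseline risk:", round(baseline, 3),
      "after doubling bilirubin:", round(gbm.predict_proba(patient)[0, 1], 3))
```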

Rho is honored to have the opportunity to work with exceptional students and faculty to apply state of the art mathematical and statistical techniques to solve real-world problems and advance our knowledge of human diseases.

You can visit the IMSM Workshop website to learn more about the program, including the problem Rho presented and the students’ solution.


With thanks to the IMSM students Adams Kusi Appiah1, Sharang Chaudhry2, Chi Chen3, Simona Nallon4, Upeksha Perera5, Manisha Singh6, Ruyu Tan7 and advisor Dr. Emily Lei Kang from the University of Cincinnati

1Department of Biostatistics, University of Nebraska Medical Center; 2Department of Mathematical Sciences, University of Nevada, Las Vegas; 3Department of Biostatistics, State University of New York at Buffalo; 4Department of Statistics, California State University, East Bay; 5Department of Mathematics and Statistics, Sam Houston State University; 6Department of Information Science, University of Massachusetts; 7Department of Applied Mathematics, University of Colorado at Boulder

References:
Dheeru, D. and Karra Taniskidou, E. (2017). UCI machine learning repository.

What We Learned at PhUSE US Connect

Posted by Brook White on Tue, Jun 12, 2018 @ 09:40 AM

Ryan Bailey, MA, is a Senior Clinical Researcher at Rho.  He has over 10 years of experience conducting multicenter asthma research studies, including the Inner City Asthma Consortium (ICAC) and the Community Healthcare for Asthma Management and Prevention of Symptoms (CHAMPS) project. Ryan also coordinates Rho’s Center for Applied Data Visualization, which develops novel data visualizations and statistical graphics for use in clinical trials.

Last week, PhUSE hosted its first ever US Connect conference in Raleigh, NC. Founded in Europe in 2004, the independent, non-profit Pharmaceutical Users Software Exchange has been a rapidly growing presence and influence in the field of clinical data science. While PhUSE routinely holds smaller events in the US, including their popular Computational Science Symposia and Single Day Events, this was the first time they had held a large multi-day conference with multiple work streams outside of Europe. The three-day event attracted over 580 data scientists, biostatisticians, statistical programmers, and IT professionals from across the US and around the world to focus on the theme of "Transformative Current and Emerging Best Practices."

After three days immersed in data science, we wanted to provide a round-up of some of the main themes of the conference and trends for our industry.

Emerging Technologies are already Redefining our Industry

It can be hard to distinguish hype from reality when it comes to emerging technologies like big data, artificial intelligence, machine learning, and blockchain.  Those buzzwords made their way into many presentations throughout the conference, but there was more substance than I expected.  It is clear that many players in our industry (FDA included) are actively exploring ways to scale up their capabilities to wrangle massive data sets, rely on machines to automate long-standing data processing, formatting, and cleaning processes, and use distributed database technologies like blockchain to keep data secure, private, and personalized.  These technologies are not just reshaping other sectors like finance, retail, and transportation; they are well on their way to disrupting and radically changing aspects of clinical research.

The FDA is Leading the Way

Our industry has gotten a reputation for being slow to evolve, and we sometimes use the FDA as our scapegoat. Regulations take a long time to develop, formalize, and finalize, and we tend to be reluctant to move faster than regulations. However, for those that think the FDA is lagging behind in technological innovation and data science, US Connect was an eye opener. With 30 delegates at the conference and 16 presentations, the agency had a strong and highly visible presence.

Moreover, the presentations by the FDA were often the most innovative and forward-thinking. Agency presenters provided insight into how the offices of Computational Science and Biomedical Informatics are applying data science to aid in reviewing submissions for data integrity and quality, detecting data and analysis errors, and setting thresholds for technical rejection of study data. In one presentation, the FDA demonstrated its Real-time Application for Portable Interactive Devices (RAPID) to show how the agency is able to track key safety and outcomes data in real time amid the often chaotic and frantic environment of a viral outbreak. RAPID is an impressive feat of technical engineering, managing to acquire massive amounts of unstructured symptom data from multiple device types in real time, process them in the cloud, and perform powerful analytics for "rapid" decision making. It is the type of ambitious, technically advanced project you expect to see coming out of Silicon Valley, not Silver Spring, MD.

It was clear that the FDA is striving to be at the forefront of bioinformatics and data science, and in turn, they are raising expectations for everyone else in the industry.

The Future of Development is "Multi-lingual"  

A common theme across all the tracks was the need to evolve beyond narrowly focused specialization in our jobs. Whereas 10-15 years ago developing deep expertise in one functional area or one tool was a good way to distinguish yourself as a leader and bring key value to your organization, a similar approach may hinder your career in the evolving clinical research space. Instead, many presenters advocated that the data scientist of the future specialize in a few different tools and have broad domain knowledge. As keynote speaker Ian Khan put it, we need to find a way to be both specialists and generalists at the same time. Nowhere was this more prevalent than in discussions around which programming languages will dominate our industry in the years to come.

While SAS remains the go-to tool for stats programming and biostatistics, the general consensus is that knowing SAS alone will not be adequate in years to come. The prevailing languages getting the most attention for data science are R and Python. While we heard plenty of debate about which one will emerge as the more prominent, it was agreed that the ideal scenario would be to know at least one, R or Python, in addition to SAS.

We Need to Break Down Silos and Improve our Teams

On a similar note, many presenters advocated for rethinking our traditional siloed approach to functional teams. As one vice president of a major Pharma company put it, "we have too much separation in our work - the knowledge is here, but there's no crosstalk." Rather than passing deliverables between distinct departments with minimal communication, clinical data science requires taking a collaborative multi-functional approach. The problems we face can no longer be parsed out and solved in isolation. As a multi-discipline field, data science necessarily requires getting diverse stakeholders in the room and working on problems together.

As for how to achieve this collaboration, Dr. Michael Rappa delivered an excellent plenary session on how to operate highly productive data science teams based on his experience directing the Institute for Advanced Analytics at North Carolina State University. His advice bucks the traditional notion that you solve a problem by selecting the most experienced subject matter experts and putting them in a room together. Instead, he demonstrated how artfully crafted teams that value leadership skills and motivation over expertise alone can achieve incredibly sophisticated and innovative output.

Change Management is an Essential Need

Finally, multiple sessions addressed the growing need for change management skills. As the aforementioned emerging technologies force us to acquire new knowledge and skills and adapt to a changing landscape, employees will need help to deftly navigate change. When asked what skills are most important for managers to develop, a VP from a large drug manufacturer put it succinctly, "our leaders need to get really good at change management."

In summary, PhUSE US Connect is helping our industry look to the future, especially when it comes to clinical data science, but the future may be closer than we think. Data science is not merely an analytical discipline to be incorporated into our existing work; it is going to fundamentally alter how we operate and what we achieve in our trials. The question for industry is whether we're paying attention and pushing ourselves to evolve in step to meet those new demands.

Webinar: Understanding the FDA Guidance on Data Standards

“This drug might be harmful!  Why was it approved?”  What the news reports fail to tell us.

Posted by Brook White on Thu, Apr 19, 2018 @ 08:39 AM

Jack Modell, MD, Vice President and Senior Medical Officer, is a board-certified psychiatrist with 35 years of experience in clinical research and patient care, including 15 years’ experience in clinical drug development. He has led successful development programs, is a key opinion leader in the neurosciences, has served on numerous advisory boards, and is nationally known for leading the first successful development of preventative pharmacotherapy for the depressive episodes of seasonal affective disorder.

David Shoemaker, PhD, Senior Vice President R&D, has extensive experience in the preparation and filing of all types of regulatory submissions including primary responsibility for four BLAs and three NDAs.  He has managed or contributed to more than two dozen NDAs, BLAs, and MAAs and has moderated dozens of regulatory authority meetings.

Once again, we see news of an approved medication* being linked to bad outcomes, even deaths, and the news media implores us to ask:  

“How could this happen?”
“Why was this drug approved?”
“Why didn’t the pharmaceutical company know this or tell us about it?”
“What’s wrong with the FDA that they didn’t catch this?”
“Why would a drug be developed and approved if it weren’t completely safe?”

And on the surface, these questions might seem reasonable.  Nobody, including the drug companies and FDA, wants a drug on the market that is unsafe, or for that matter, wants any patient not to fare well on it.  And to be very clear at the outset, in pharmaceutical development, there is no room for carelessness, dishonesty, intentionally failing to study or report suspected safety signals, exaggerating drug benefits, or putting profits above patients – and while there have been some very disturbing examples of these happening, none of this should ever be tolerated.  But we do not believe that the majority of reported safety concerns with medications are caused by any intentional misconduct or by regulators failing to do their jobs, or that a fair and balanced portrayal of a product’s risk-benefit is likely to come from media reports or public opinion alone.

While we are not in a position to speculate or comment upon the product mentioned in this article specifically, in most cases we know of where the media have reported on bad outcomes for patients taking a particular medication, the reported situations, while often true, have rarely been shown to have been the actual result of taking the medication; rather, they occurred in association with taking the medication.  There is, of course, a huge difference between these two, with the latter telling us little or nothing about whether the medication itself had anything to do with the bad outcome.  Nonetheless, the news reports, which include catchy headlines that disparage the medication (and manufacturer), almost always occur years in advance of any conclusive data on whether the medication actually causes the alleged problems; and in many cases, the carefully controlled studies that are required to determine whether the observed problems have anything directly to do with the medication eventually show that the medication either does not cause the initially reported outcomes, or might do so only very rarely.  Yet the damage has been done by the initial headlines:  patients who are benefiting from the medication stop it and get into trouble because their underlying illness becomes less well controlled, and others are afraid to start it, thus denying themselves potentially helpful – and sometimes lifesaving – therapy.  And ironically, when the carefully controlled and adequately powered studies finally do show that the medication was not, after all, causing the bad outcomes, these findings, if reported at all, rarely make the headlines. 

Medications do, of course, have real risks, some serious, and some of which might take many years to become manifest.  But why take any risk?  Who wants to take a medication that could be potentially harmful?  If the pharmaceutical companies have safety as their first priority, why would they market something that they know carries risk or for which they have not yet fully assessed all possible risks?  There’s an interesting parallel here that comes to mind.  I recently heard an airline industry representative say that the airlines’ first priority is passenger safety.  While the U.S. major airlines have had, for decades, a truly outstanding safety record, could safety really be their first priority?  If passenger safety were indeed more important than anything else, no plane would ever leave the gate; no passengers would ever board.  No boarding, no leaving, and no one could ever possibly get hurt.  And in this scenario, no one ever flies anywhere, either.  The airlines’ first priority has to be efficient transportation, though undoubtedly followed by safety as a very close second.  Similarly, the pharmaceutical industry cannot put guaranteed safety above all else, or no medications would ever be marketed.  No medications, and no one could ever get hurt.  And in this scenario, no one ever gets treated for illnesses that, without medications, often harm or kill.  In short, where we want benefit, we must accept risks, including those that may be unforeseeable, and balance these against the potential benefits.

OK then:  so bad outcomes might happen anyway and are not necessarily caused by medication, worse outcomes can happen without the medications, and we must accept some risk.  But isn’t it negligent of a pharmaceutical company to market a medication before they actually know all the risks, including the serious ones that might only happen rarely?  Well, on average, a new medicine costs nearly three billion dollars and takes well over a decade to develop, and it is tested on up to a few thousand subjects.  But if a serious adverse event did not occur in the 3000 subjects who participated in the clinical trials to develop the medicine, does this show us that the medicine is necessarily safe and unlikely to ever harm anybody?  Unfortunately, it does not.  As can be seen by the statistical rule of three**, this can only teach us that, with 95% confidence, the true rate of such an event is between zero and 1/1000.  And while it may be comforting that a serious event is highly unlikely to occur in more than 1/1000 people who take the medication, if the true rate of this event is, let’s say, even 1/2000, there is still greater than a 90% chance that a serious adverse event will occur in at least one person among the first 5000 patients who take the medication!  Such is the nature of very low frequency events over thousands of possible ways for them to become manifest.

So why not study the new medication in 10,000 subjects before approval, so that we can more effectively rule out the chances of even rarer serious events?  There is the issue of cost, yes; but more importantly, we would now be extending the time to approval for a new medicine by several additional years, during which time far more people are likely to suffer by not having a new and needed treatment than might ever be prevented from harm by detecting a few more very rare events.  There is a good argument to be made that hurting more people by delaying the availability of a generally safe medication to treat an unmet medical need in an effort to try to ensure what might not even be possible – that all potential safety risks are known before marketing – is actually the more negligent course of action.  It is partly on this basis that the FDA has mechanisms in place (among them, breakthrough therapy, accelerated approval, and priority review) to speed the availability of medications that treat serious diseases, especially when the medications are the first available treatment or if the medication has advantages over existing treatments.  When these designations allow for a medication to be marketed with a smaller number of subjects or clinical endpoints than would be required for medications receiving standard regulatory review, it is possible that some of these medications might have more unknown risks than had they been studied in thousands of patients.  In the end, however, whatever the risks – both known and unknown – if we as a society cannot accept them, then we need to stop the development and prescribing of medicines altogether.  

*Neither of the authors nor Rho was involved in the development of the referenced product.  This post is not a comment on this particular product or the referenced report, but rather a response to much of the media coverage of marketed drugs and biologics more broadly.

**In statistical analysis, the rule of three states that if a certain event did not occur in a sample with n subjects, the interval from 0 to 3/n is a 95% confidence interval for the rate of occurrences in the population.  https://en.wikipedia.org/wiki/Rule_of_three_(statistics)  

The probability that no event with a frequency of 1/2000 will occur in 5000 people is (1 − 0.0005)^5000, or about 0.082.
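For readers who want to check the arithmetic, here is a short back-of-the-envelope sketch (in Python, purely illustrative) of the rule-of-three bound and the 1-in-2000 example above:

```python
# Back-of-the-envelope check of the rule of three and the 1-in-2000 example.
n_trial = 3000                      # subjects exposed in the development program
upper_bound = 3 / n_trial           # rule of three: ~95% CI upper bound when 0 events seen
print(f"Rule-of-three upper bound: {upper_bound:.4f} (about 1 in {int(1/upper_bound)})")

true_rate = 1 / 2000                # hypothetical true rate of a rare serious event
n_post = 5000                       # first patients treated after approval
p_none = (1 - true_rate) ** n_post  # chance that no one among them has the event
print(f"P(no events in {n_post} patients) = {p_none:.3f}")       # ~0.082
print(f"P(at least one event)            = {1 - p_none:.3f}")    # ~0.918 (>90%)
```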

Free Webinar: Expedited Development and Approval Programs

The Future, Today: Artificial Intelligence Applications for Clinical Research

Posted by Brook White on Tue, Feb 13, 2018 @ 08:37 AM

Petra LeBeau, ScD, is a Senior Biostatistician and Lead of the Bioinformatics Analytics Team at Rho. She has over 13 years of experience in providing statistical support for clinical trials and observational studies, from study design to reporting. Her experience includes 3+ years of working with genomic data sets (e.g. transcriptome and metagenome). Her current interest is in machine learning using clinical trial and high-dimensional data.

Agustin Calatroni, MS, is a Principal Statistical Scientist at Rho. His academic background includes a master’s degree in economics from the Université Paris 1 Panthéon-Sorbonne and a master’s degree in statistics from North Carolina State University. In the last 5 years, he has participated in a number of competitions to develop prediction models. He is particularly interested in the use of stacking models to combine several machine learning techniques into one predictive model in order to decrease variance (bagging) and bias (boosting) and improve predictive accuracy.

Derek Lawrence, Senior Clinical Data Manager, has 9 years of data management and analysis experience in the health care / pharmaceutical industry. Derek serves as Rho’s Operational Service Leader in Clinical Data Management, an internal expert responsible for disseminating the application of new technology, best practices, and processes.

Artificial Intelligence (AI) may seem like rocket science, but most people use it every day without realizing it. Ride-sharing apps, airplane ticket purchasing aggregators, ATMs, recommendations for your next eBook or superstore purchase, or the photo library within your smartphone—all these common apps use machine learning algorithms to improve the user experience.

Machine learning (ML) algorithms make predictions and, in turn, learn from their own predictions, resulting in improved performance over time. ML has slowly been making its way into health research and the healthcare system due in part to an exponential growth in data stemming from new developments in technology like genomics. Rho supports many studies with large datasets including the microbiome, proteome, metabolome, and the transcriptome. The rapid growth of health-related data will continue, along with the development of new methodologies like systems biology (i.e. the computational and mathematical modeling of interactions within biological systems) that leverage these data. ML will continue to be a key enabler in these areas. The ever-increasing amounts of computational power, improvements in data storage devices, and falling computational costs have given clinical trial centers the opportunity to apply ML techniques to large and complex data in ways that would not have been possible a decade ago. In general, ML is divided into two main types of techniques: (1) supervised learning, in which a model is trained on known input and output data in order to predict future outputs, and (2) unsupervised learning, where instead of predicting outputs, the system tries to find naturally occurring patterns or groups within the data. In each type of ML, there are a large number of existing algorithms. Example supervised learning algorithms include random forest, boosted trees, neural networks, and deep neural networks, just to name a few. Similarly, unsupervised learning has a plethora of algorithms.
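As a toy illustration of the two types (on simulated data, not any real trial), a random forest is a typical supervised learner and k-means a typical unsupervised one; the specific algorithms and settings here are just examples:

```python
# Toy contrast of supervised vs. unsupervised learning on simulated data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: learn a mapping from known inputs to known outputs,
# then predict outputs for new inputs.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("supervised test accuracy:", clf.score(X_test, y_test))

# Unsupervised: no outputs are given; the algorithm looks for natural groups.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```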

Lately, it has become clear that in order to substantially increase the accuracy of a predictive model, we need to use an ensemble of models. The idea behind ensembles is that by combining a diverse set of models, one is able to produce a stronger, higher-performing model, which in turn results in better predictions. By creating an ensemble of models, we maximize the accuracy, precision, and stability of our predictions. The power of the ensemble technique can be intuited with a real-world example: In the early 20th century, the famous English statistician Francis Galton (who created the statistical concept of correlation) attended a local fair. While there, he came across a contest that involved guessing the weight of an ox. He looked around and noticed a very diverse crowd; there were people like him who maybe had little knowledge about cattle, and there were farmers and butchers whose guesses would be considered those of experts. In general, the diverse audience ended up giving a wide variety of responses. He wondered what would happen if he took the average of all these responses, expert and non-expert alike. What he found was that the average of all the responses was much closer to the true weight of the ox than any individual guess alone. This phenomenon has been called the “wisdom of crowds.” Similarly, today’s best prediction models are often the result of an ensemble of various models which together provide better overall prediction accuracy than any individual one would be capable of.
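A small sketch of that idea in code (simulated data; the three model families are arbitrary choices for illustration) shows how an averaged "crowd" of models can be compared against its individual members:

```python
# "Wisdom of crowds" in model form: average the predictions of several
# different learners and compare to the individual models. Simulated data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=1),
    "gradient boosting": GradientBoostingClassifier(random_state=1),
}
preds = {}
for name, model in models.items():
    preds[name] = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name:>20s} AUC: {roc_auc_score(y_te, preds[name]):.3f}")

# Simple ensemble: average the three probability estimates.
ensemble = np.mean(list(preds.values()), axis=0)
print(f"{'ensemble average':>20s} AUC: {roc_auc_score(y_te, ensemble):.3f}")
```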

As far as data management is concerned, the current clinical research model is centered on electronic data capture systems (EDC), in which a database is constructed that comprises the vast majority of the data for a particular study or trial. Getting all of the data into a single system involves a significant investment in the form of external data imports, redundant data entry, transcription from paper sources, transfers from electronic medical/health record systems (EMR/EHR), and the like. Additionally, the time and effort required to build, test, and validate complicated multivariate edit checks into the EDC system to help clean the data as they are entered is substantial, and these checks can only utilize data that currently exist in the EDC system itself. As data source variety increases, along with surges in data volume and data velocity, this model becomes less and less effective at identifying anomalous data.

At Rho, we are investing in talent and technology that in the near future will use ML ensemble models in the curation and maintenance of clinical databases. Our current efforts to develop tools that aggregate data from a variety of sources will be a key enabler. Similar to the way the banking industry uses ML to identify ‘normal’ and ‘abnormal’ spending patterns and make real-time decisions to allow or decline purchases, ML algorithms can identify univariate and multivariate clusters of anomalous data for manual review. These continually-learning algorithms will enable a focused review of potentially erroneous data without the development of the traditional EDC infrastructure, not only saving time performing data reviews but also identifying potential issues of which we would normally have been unaware.
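As a hypothetical example of what such a flag might look like (simulated vitals data, with variable names and the contamination threshold chosen purely for illustration), an isolation forest can score multivariate records and surface only the most anomalous ones for a data manager to review:

```python
# Sketch: flag multivariate outliers in simulated clinical records for manual
# review using an isolation forest. Names and thresholds are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
records = pd.DataFrame({
    "systolic_bp": rng.normal(120, 12, 1000),
    "heart_rate": rng.normal(72, 9, 1000),
    "weight_kg": rng.normal(80, 14, 1000),
})
# Inject a few implausible rows, e.g., decimal-shift or unit errors
records.loc[0, ["systolic_bp", "heart_rate"]] = [12.0, 720]
records.loc[1, "weight_kg"] = 8000

# contamination is the assumed share of anomalous records to surface
iso = IsolationForest(contamination=0.01, random_state=0).fit(records)
records["flag_for_review"] = iso.predict(records) == -1   # -1 marks an anomaly
print(records[records["flag_for_review"]].head())
```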

Webinar: ePRO and Smart Devices

Challenges in Clinical Data Management: Findings from the Tufts CSDD Impact Report

Posted by Brook White on Fri, Feb 09, 2018 @ 12:24 PM

Derek Lawrence, Senior Clinical Data Manager, has 9 years of data management and analysis experience in the health care/pharmaceutical industry.  Derek serves as Rho's Operational Service Leader in Clinical Data Management, an internal expert responsible for disseminating the application of new technology, best practices, and processes.

The most recent Impact Report from the Tufts Center for the Study of Drug Development presented the results of a study of clinical data management practices and experience at nearly 260 sponsor and CRO companies. A high-level summary of the findings included longer data management cycle times than those observed 10 years ago, delays in building clinical databases, a reported average of six applications to support each clinical study, and a majority of companies reporting technical challenges pertaining to loading data into their primary electronic data capture (EDC) system.

These findings represent the challenges those of us in clinical data management are struggling with given the current state of the clinical research industry and technological change. EDC systems are still the primary method of data capture in clinical research, with 100% of sponsors and CROs reporting at least some usage. These systems are experiencing difficulties in dealing with the increases in data source diversity. More and more clinical data are being captured by new and novel applications (ePRO, wearable devices, etc.), and there is an increased capacity to work with imaging, genomic, and biomarker data. The increases in data volume and data velocity have resulted in a disconnect with the EDC paradigm. Data are either too large or are ill-formatted for import into the majority of EDC systems common to the industry. In addition, there are significant pre-study planning and technical support demands when it comes to loading data into these systems. With 77% of sponsors and CROs reporting similar barriers to effective loading, cleaning, and use of external data, the issue is one with which nearly everyone in clinical research is confronted.

Related to the issues regarding EDC integration are delays in database build. While nearly half of the build delays were attributed to protocol changes, just over 30% resulted from user acceptance testing (UAT) and database design functionality. Delays attributed to database design functionality were associated with an LPLV-to-lock cycle time that was 39% longer than the overall average. While the Tufts study did not address this directly, it would be no great stretch of the imagination to assume that the difficulties related to EDC system integration are a significant contributor to the reported database functionality issues. With delays already associated with loading data, standard data cleaning activities that are built into the EDC system and need to be performed before database lock would most certainly be delayed as well.

Clinical data management is clearly experiencing pains adapting to a rapidly-shifting landscape in which a portion of our current practices no longer play nicely with advances in technology and data source diversity. All of this raises the question, “What can we do to change our processes in order to accommodate these advances?” At Rho, we are confronting these challenges with a variety of approaches, beginning with limiting the impulse to automatically import all data from external vendors into our EDC systems. Configuring and updating EDC systems requires no small amount of effort on the part of database builders, statistical programmers, and other functional areas. Potential negative impacts to existing clinical data are a possibility when these updates are made as part of a database migration. At the end of the day, importing data into an EDC system results in no automatic improvement to data quality and, in some cases, actually hinders our ability to rapidly and efficiently clean the data. By developing standard processes for transforming and cleaning data external to the EDC systems, we increase flexibility in adapting to shifts in incoming data structure or format and mitigate the risk of untoward impacts to the contents of the clinical database by decreasing the prevalence of system updates.

The primary motivation for loading data received from external vendors into the EDC system is to provide a standard method of performing data cleaning activities and cross-checks against the clinical data themselves. To support this, we are developing tools that aggregate data from a variety of sources and assemble them for data cleaning purposes. Much as the banking industry uses machine learning to distinguish ‘normal’ from ‘abnormal’ spending patterns and make real-time decisions to allow or decline purchases, comparable algorithms can identify univariate and multivariate clusters of anomalous data for manual review. These continually-learning algorithms enable a focused review of potentially erroneous data without building the traditional EDC infrastructure, saving time during data review and surfacing potential issues that would be missed had we relied on the existing EDC model. As the landscape of data sources and formats continues to broaden, an approach rooted in system agnosticism and sound statistical methodology will ensure we are always able to provide high levels of data quality.
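The learning-based methods themselves are beyond the scope of this post, but the underlying idea of flagging unusual values for review can be sketched with a far simpler univariate screen. The example below uses a robust z-score based on the median and median absolute deviation; it is a hypothetical stand-in for the approach described above, with invented data.

// Hypothetical sketch: flag values that sit far from the bulk of the data
// using a robust z-score (median and median absolute deviation). A simplified
// stand-in for the learning-based anomaly detection described above.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Return indices of observations whose robust z-score exceeds the cutoff.
function flagAnomalies(values, cutoff = 3.5) {
  const med = median(values);
  const mad = median(values.map(v => Math.abs(v - med))) || 1e-9; // guard against zero MAD
  return values
    .map((v, i) => ({ i, z: (0.6745 * (v - med)) / mad })) // 0.6745 scales MAD toward a standard deviation
    .filter(d => Math.abs(d.z) > cutoff)
    .map(d => d.i);
}

// Example: systolic blood pressure readings with one likely data entry error.
const sbp = [118, 122, 131, 125, 119, 127, 12.4, 124];
console.log(flagAnomalies(sbp)); // -> [6], queued for manual review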

Highlights from TEDMED 2017

Posted by Brook White on Tue, Nov 07, 2017 @ 04:49 PM
Share:

Last week I had the opportunity to attend TEDMED 2017 in Palm Springs and want to share some highlights of the experience.  This certainly isn’t a comprehensive summary, but rather a look at some of the themes that were most interesting to me.

Understanding the Brain

In order to make significant progress on mental illness and neurological disorders, we need a better understanding of how the brain works.  Several speakers shared progress and innovation in understanding the brain.  Geneticist Steven McCarroll discussed drop-seq, an innovative method for understanding which cell types have which molecules.  Using this technology, his team has been looking at what genetic variations in individuals with schizophrenia may tell us about the underlying biology of the disease.  Chee Yeun Chang and Yumanity Therapeutics are using yeast to better understand how improper protein folding relates to brain disease.  Dan Sobek of Kernel discussed how electrical stimulation may be used to “tune” the brain, both as a method for addressing brain diseases and for increasing performance in normal brains.  Guo-Li Ming talked about creating organoids, which are essentially mini-organs created using stem cells.  These organoids have been used to look at neural development and the Zika virus.  Jill Goldstein discussed sex differences in brain development and how that relates to disparities in the prevalence of some mental illnesses between sexes.  Collectively, it is amazing to see the progress being made on some very difficult diseases.

Delivering Healthcare on the Frontlines

Some of the most touching stories came from those on the frontlines of healthcare.  Dr. Farida shared her stories as the only OB-GYN left in Aleppo and what it meant to put herself and her family in danger to ensure women still had access to care.  Camilla Ventura is a Brazilian ophthalmologist who first connected ocular damage to Zika infection.  Dr. Soka Moses shared stories from the Ebola outbreak in Liberia and the challenges of delivering care with severe shortages of equipment, staff, and supplies.  Agnes Binagwaho returned to her home country of Rwanda following the genocide and told her story of rebuilding her country’s healthcare infrastructure.  Each of these stories was inspiring and a testament to humanity at its best. 

The Opioid Crisis

There were a number of talks as well as a discussion group focused on various aspects of the opioid crisis.  Perspectives were shared from law enforcement personnel, those working on harm reduction programs such as supervised injection sites, and treatment programs for addiction.  One of the most moving talks was given by Chera Kowalsky.  Chera is a librarian in the Kensington area of Philadelphia, an area that has been hit hard by the opioid crisis.  Her library has instituted an innovative program in which librarians are trained to administer naloxone, and she shared her personal story of using naloxone to help save the life of one of the library’s visitors.  Despite the challenges posed by the crisis, it was uplifting to see the range of solutions being proposed as well as the commitment of those working on them.

The Hive

One of the most interesting aspects of TEDMED was the Hive. Each year, a selection of entrepreneurs and start-ups come to TEDMED to share their innovations in healthcare and medicine. These companies were available throughout the conference to talk with attendees. There was also a special session on day 2 where each entrepreneur had two minutes to share their vision with the audience.

Finally, perhaps the most valuable part of the experience was all of the people I had a chance to meet, each of whom is playing a unique role in the future of healthcare.

Webcharts: A Reusable Tool for Building Online Data Visualizations

Posted by Brook White on Wed, Jan 18, 2017 @ 01:39 PM
Share:

This is the second in a series of posts introducing open source tools Rho is developing and sharing online. Click here to learn more about Rho's open source effort.

When Rho created a team dedicated to developing novel data visualization tools for clinical research, one of the group's challenges was to figure out how to scale our graphics to every trial, study, and project we work on. In particular, we were interested in providing interactive web-based graphics, which can run in a browser and allow for intuitive, real-time data exploration.

Our solution was to create Webcharts - a web-based charting library built on top of the popular Data-Driven Documents (D3) JavaScript library - to provide a simple way to create reusable, flexible, interactive charts.

Interactive Study Dashboard

Track key project metrics in a single view; built with Webcharts (click here for interactive version)

Webcharts allows users to compose a wide range of chart types, from basic charts (e.g., scatter plots, bar charts, line charts), to intermediate designs (e.g., histograms, linked tables, custom filters), to advanced displays (e.g., project dashboards, lab results trackers, outcomes explorers, and safety timelines). Webcharts' extensible and customizable charting library allows us to quickly produce standard charts while also crafting tailored data visualizations unique to each dataset, phase of study, and project.
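To give a sense of how a chart is composed, here is a minimal sketch of a config-driven scatter plot. The createChart/init entry points and the x/y/marks settings keys follow the pattern shown in the project's wiki examples, but treat the exact schema, along with the illustrative CDISC-style column names, as assumptions and consult the GitHub documentation for the current API.

// Minimal sketch of a config-driven Webcharts scatter plot.
// Assumes the D3 and Webcharts scripts are loaded on the page; the settings
// keys and entry points below are based on the project's wiki examples.
const settings = {
  x: { column: 'VISITNUM', type: 'linear', label: 'Visit' },      // illustrative column names
  y: { column: 'LBSTRESN', type: 'linear', label: 'Lab Result' },
  marks: [{ type: 'circle', per: ['USUBJID', 'VISITNUM'] }]
};

// Render into a container element, then initialize with an array of row objects.
const chart = webCharts.createChart('#chart-container', settings);

const data = [
  { USUBJID: '101-001', VISITNUM: 1, LBSTRESN: 34 },
  { USUBJID: '101-001', VISITNUM: 2, LBSTRESN: 38 },
  { USUBJID: '101-002', VISITNUM: 1, LBSTRESN: 41 }
];
chart.init(data);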

This flexibility has allowed us to create hundreds of custom interactive charts, including several that have been featured alongside Rho's published work. The Immunologic Outcome Explorer (shown below) was adapted from Figure 3 in the New England Journal of Medicine article, Randomized Trial of Peanut Consumption in Infants at Risk for Peanut Allergy. The chart was originally created in response to reader correspondence and was later updated to include follow-up data in conjunction with a second article, Effect of Avoidance on Peanut Allergy after Early Peanut Consumption. The interactive version allows the user to select from 10 outcomes on the y-axis. Filters for sex, ethnicity, study population, skin prick test stratum, and peanut-specific IgE at 60 and 72 months of age can be applied interactively to display subgroups of interest. Figure options (e.g., summary lines, box and violin plots) can be selected under the Overlays heading to alter the properties of the figure.

Immunologic Outcome Explorer

Examine participant outcomes for the LEAP study (click here for interactive version)

Because Webcharts is designed for the web, the charts require no specialized software. If you have a web browser (e.g., Firefox, Chrome, Safari, Internet Explorer) and an Internet connection, you can see the charts. Likewise, navigating the charts is intuitive because we use controls familiar to anyone who has used a web browser (radio buttons, drop-down menus, sorting, filtering, mouse interactions). A manuscript describing the technical design of Webcharts was recently published in the Journal of Open Research Software.

The decision to build for general web use was intentional. We were not concerned with creating a proprietary charting system - of which there are many - but an extensible, open, generalizable tool that could be adapted to a variety of needs. For us, that means charts to aid in the conduct of clinical trials, but the tool is not limited to any particular field or industry. We also released Webcharts open source so that other users could contribute to the tools and help us refine them.

Because they are web-based, charts for individual studies and programs are easily implemented in RhoPORTAL, our secure collaboration and information delivery portal which allows us to share the charts with study team members and sponsors while carefully limiting access to sensitive data.

Webcharts is freely available online on Rho's GitHub site. The site contains a wiki that describes the tool, an API, and interactive examples. We invite anyone to download and use Webcharts, give us feedback, and participate in its development.

View "Visualizing Multivariate Data" Video

Jeremy Wildfire, MS, Senior Biostatistician, has over ten years of experience providing statistical support for multicenter clinical trials and mechanistic studies related to asthma, allergy, and immunology.  He is the head of Rho’s Center for Applied Data Visualization, which develops innovative data visualization tools that support all phases of the biomedical research process. Mr. Wildfire also founded Rho’s Open Source Committee, which guides the open source release of dozens of Rho’s graphics tools for monitoring, exploring, and reporting data. 

Ryan Bailey, MA is a Senior Clinical Researcher at Rho.  He has over 10 years of experience conducting multicenter asthma research studies, including the Inner City Asthma Consortium (ICAC) and the Community Healthcare for Asthma Management and Prevention of Symptoms (CHAMPS) project. Ryan also coordinates Rho’s Center for Applied Data Visualization, which develops novel data visualizations and statistical graphics for use in clinical trials.