Q&A Session at PHUSE US Connect 2021

Aug 5, 2021

Background

The questions below were asked during a live session at PHUSE US Connect 2021, led by Ilan Carmeli, Co-Founder and Chief Product Officer at Beaconcure, and Hugh Donovan, former EVP of Clinical Research Services at Parexel.

The session was aimed at answering the question of how ML-driven automation technology can overcome the limitations of double programming.

Question

How would ML know the rules to calculate numbers in the table?

Answer

We have access to a large number of outputs. SMEs guide our clinical analysts on how to label and validate clinical tables. Once the labeling process was completed, we trained our algorithms on the labeled data, making sure to cover both common and unique scenarios during labeling.

Question

Have you seen cycle time reductions in a company because of this change?

Answer

The implementation of AI technology in this area can be very beneficial for the industry. Although we do not yet have an exact metric for time reduction, we see it in several key areas:

  1. Programmers and statisticians do not have to repeat validations that the system has already performed; the second programmers receive fully validated tables.
  2. Since cross-table checks are automated, statisticians do not have to perform them manually, freeing them to focus on higher-value tasks.
  3. Because validations are done automatically, the database-lock process is faster.
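A cross-table check of the kind described in point 2 can be sketched very simply. The following Python example is illustrative only, not Beaconcure's actual implementation; the table structures and the rule (subject counts per treatment arm must agree between a demographics table and a disposition table) are assumptions chosen for demonstration:

```python
# Hypothetical cross-table consistency check: the number of subjects
# reported per treatment arm in a demographics table should match the
# count reported for the same arm in a disposition table.

def cross_table_check(demographics: dict, disposition: dict) -> list:
    """Return a list of discrepancy messages (empty if the tables agree)."""
    discrepancies = []
    for arm, n_demog in demographics.items():
        n_disp = disposition.get(arm)
        if n_disp is None:
            discrepancies.append(f"Arm '{arm}' missing from disposition table")
        elif n_demog != n_disp:
            discrepancies.append(
                f"Arm '{arm}': demographics N={n_demog} != disposition N={n_disp}"
            )
    return discrepancies

# Example: the placebo counts disagree, so one discrepancy is flagged.
demog = {"Placebo": 120, "Drug 10mg": 118}
disp = {"Placebo": 121, "Drug 10mg": 118}
print(cross_table_check(demog, disp))
```

In practice such rules would be applied across hundreds of table pairs per study, which is exactly the repetitive work the answer above says automation removes from statisticians.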

Question

How much time and resource does it take to set up your product ‘Verify’ for a study?

Answer

‘Verify’ does not have to be set up before every study. During the implementation process, we receive outputs from the customer and train our algorithms on them. Once implementation is complete, the customer can use the software for every study (drug or vaccine), in any therapeutic area, and at every phase.

Question

Is there a comparison of efficiency between traditional double programming and ML-driven validation?

Answer

With ML validation, the output is programmed only once, while in double programming it is programmed twice by two different programmers. With ‘Verify’, the output produced by the first programmer passes through the automated validation process. If the automated validation finds discrepancies, the programmer fixes the output accordingly until no discrepancies remain.

Question

How can we implement a correct/valid double programming?

Answer

There are technical solutions available that improve independence; however, they are not reliable enough, since programmers can still communicate with each other. The approach we recommend is a combination of automating the QC process using ML and working with SMEs. An ML solution reduces effort and improves accuracy and consistency.

Question

The example shown in the presentation is a cross-check between tables. Does it also check the source datasets?

Answer

Currently, we do not go upstream to the source data. However, we plan to develop this feature in the near future. Initially, we will check the data listings for discrepancies. We will also use the listings to create targeted listings that identify the discrepant data, which will simplify the resolution process. Ultimately, we plan to access the raw data to perform checks.

Question

Can you please explain what type of ML algorithms you are using, and into which format (XML, etc.) you convert the table data?

Answer

The ML algorithms are part of our intellectual property. We work with outputs in HTML, RTF, PDF, DOCX, and other formats, and convert the data into a database format.

Question

Do we need to buy your product in order to use it? Do you need to train your ML for every customer?

Answer

It is a SaaS, cloud-based solution. The customer buys a yearly license. We are responsible for implementing the system and training the relevant team. The implementation phase usually takes up to three months, during which we train and configure our software according to the customer’s data.

Question

The challenges mentioned in the session around double programming seem to be more of an issue with the validation plan. Programming specifications should be reviewed and approved by the programmer, the validation programmer, and the statistician. Can process changes address the issues you highlighted?

Answer

Process changes can reduce some of the challenges we identified, for example, using a third programmer to reconcile any differences. Such a change would require adherence to the process for every deliverable, and based on our experience, the same level of checking is not applied to all deliverables. Toward the end of a project, time pressure increases, last-minute changes are requested, and processes are not always followed. An automated approach takes considerably less time and fewer resources, and creates consistency across every deliverable. Another aspect to consider is that double programming does not apply between outputs.

Question

What is your opinion on standardized macros vs. ML with regard to double programming? Please provide the pros and cons of both.

Answer

The introduction of standard macros is a step in the right direction, but in our experience macros are modified frequently, as new rules must be added whenever new scenarios are encountered. Furthermore, macros do not apply to all tables, and you must start from scratch whenever a non-standard table is needed. Our discussions with pharma companies and CROs indicate that non-standard tables may account for 20%-40% of all tables in a study. For macros to succeed, outputs, formats, nomenclature, etc. must be highly standardized, which unfortunately is not the case in real life.

Question

It looks like we can use ‘Verify’ for safety data displays. How difficult is it to implement such NLP algorithms for efficacy-related outputs?

Answer

It is more challenging than for safety outputs; however, this capability has already been established, and we are successful in validating efficacy outputs as well. The advantage of ML over standard macros is that it can be applied across many different types of outputs. For example, a table of cure rates requires the same types of checks as a demographics table. There are other checks that, although not necessarily relevant to safety, can be applied to a number of different efficacy outputs, for example, verifying that a cumulative rate of treatment effectiveness increases monotonically over time.
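The monotonicity check mentioned above is easy to express as a rule. This Python sketch is illustrative only (the function name and example rates are assumptions, not the product's code); it flags any visit at which a cumulative rate drops below the preceding value:

```python
# Hypothetical check: a cumulative rate (e.g. cumulative treatment
# effectiveness over successive visits) must never decrease.

def check_monotone_nondecreasing(rates):
    """Return indices where the cumulative rate drops below its predecessor."""
    return [i for i in range(1, len(rates)) if rates[i] < rates[i - 1]]

# Example: the value at index 3 (58.9) is lower than the one before it.
cumulative_rates = [12.5, 30.1, 59.0, 58.9, 71.4]
print(check_monotone_nondecreasing(cumulative_rates))  # → [3]
```

Because the rule depends only on the shape of the output (a cumulative series), the same check applies to many different efficacy tables without table-specific macros, which is the point the answer makes.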

Question

Double programming does not necessarily mean visual validation (although it could be a part of it). How can ML be applied to this process?

Answer

In theory, ML could be applied to compare the two outputs, but PROC COMPARE is a more suitable tool if you are using double programming. Our goal is to supplement double programming in order to overcome its deficiencies. We believe the need for double programming can be greatly reduced, perhaps eliminated, through a combination of ML and the involvement of an SME in the specification and review of outputs. From our experience in the software development space, specifications can be understood differently by two different programmers. PROC COMPARE is too sensitive to be applied between outputs.
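One way to see why exact comparison is "too sensitive" between outputs: two tables may report the same statistic at different rounding precision, so an exact match fails even though the values agree. A minimal Python sketch of a tolerance-aware comparison (the function and values here are illustrative assumptions, not a description of PROC COMPARE or of ‘Verify’):

```python
# Hypothetical tolerance-aware comparison of statistics reported in two
# different outputs: treat the values as consistent if they agree once
# both are rounded to the coarser precision shown in the tables.

def values_consistent(a: float, b: float, decimals: int = 1) -> bool:
    """True if two reported statistics agree at the given precision."""
    return round(a, decimals) == round(b, decimals)

# The same mean reported as 12.34 in one table and 12.3 in another:
print(values_consistent(12.34, 12.3))  # → True
print(12.34 == 12.3)                   # → False (exact comparison flags it)
```

An exact, cell-by-cell tool works well when comparing two datasets produced from identical specifications, but cross-output checks need this kind of looser, rule-based notion of agreement.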

Question

We generally focus on ADaM for our double programming – can the metadata approach support this?

Answer

We are planning to develop features for validating ADaM datasets. It is too early to say whether the metadata approach will support this.