One of the most painful parts of electronic data capture (EDC) is validating case report forms (CRFs). CRFs must be rigorously tested, and the standard used by most governing bodies, including the FDA, is IQ/OQ/PQ. Here is what each means in the CRF validation context:
- IQ – Installation Qualification. Does the form capture the intended data? For the most part, this can be checked against the study protocol.
- OQ – Operational Qualification. When data is entered into the form and submitted, does it actually reach the database, and does it arrive 100% accurate? (See the round-trip sketch after this list.)
- PQ – Performance Qualification. This has to do mostly with edit checks. Is the submitted data clean and free of errors? Are the defined edit checks actually firing and forcing the user to make corrections when necessary?
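To make OQ concrete, here is a minimal sketch of what an automated round-trip check might look like. Everything in it is an assumption for illustration only: the field names, the `submit_to_database` and `read_from_database` helpers, and the in-memory SQLite stand-in for the study database. A real EDC system exposes its own submission and query interfaces.

```python
import sqlite3

# Hypothetical stand-in for the study database; a real EDC system
# would provide its own submission and retrieval interfaces.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vitals (subject_id TEXT, field TEXT, value TEXT)")

def submit_to_database(subject_id, form_data):
    """Simulate submitting a completed CRF page to the database."""
    for field, value in form_data.items():
        conn.execute(
            "INSERT INTO vitals VALUES (?, ?, ?)", (subject_id, field, str(value))
        )
    conn.commit()

def read_from_database(subject_id):
    """Read the stored record back, exactly as the database holds it."""
    rows = conn.execute(
        "SELECT field, value FROM vitals WHERE subject_id = ?", (subject_id,)
    ).fetchall()
    return dict(rows)

# OQ round trip: what went in must come back out, unchanged.
entered = {"systolic_bp": 120, "diastolic_bp": 80, "pulse": 72}
submit_to_database("SUBJ-001", entered)
stored = read_from_database("SUBJ-001")

for field, value in entered.items():
    assert stored[field] == str(value), f"{field} was altered in transit"
print("OQ round-trip check passed for SUBJ-001")
```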
IQ is probably the easiest of the three qualifications. Unfortunately, OQ and PQ are not so easy.
To test OQ, you must enter every possible combination of data into the form and confirm that it resides in the database exactly as it was entered. That is not an easy check, which can make you wonder whether this requirement has ever truly been satisfied. Imagine a study with thousands of data points containing hundreds of coded fields and large numeric ranges. Testing all combinations of a study like that could take months and many FTEs, driving the price of the study up considerably. Even if the qualification is never completely satisfied, it takes weeks just to make a good-faith effort at testing the validity of the data being entered. This explains why traditional EDC is so expensive.
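To put rough numbers on that claim, here is a back-of-the-envelope calculation. The field counts and timing below are assumptions chosen only to illustrate the scale; your study will differ.

```python
import math

# Assumed study shape, purely for illustration.
coded_fields = 200        # e.g., yes/no, severity scales, unit pickers
codes_per_field = 4       # average number of allowed codes
numeric_fields = 100      # labs, vitals, doses
values_per_numeric = 500  # distinct in-range values worth distinguishing

# Exhaustive OQ: every combination of every field value.
exhaustive = (codes_per_field ** coded_fields) * (values_per_numeric ** numeric_fields)
print(f"Exhaustive combinations: ~10^{int(math.log10(exhaustive))}")

# A far more modest goal: exercise each value of each field at least once,
# one field at a time.
per_field = coded_fields * codes_per_field + numeric_fields * values_per_numeric
print(f"One-value-at-a-time entries: {per_field:,}")

# At an optimistic 30 seconds of manual entry and verification per value:
hours = per_field * 30 / 3600
print(f"Roughly {hours:,.0f} person-hours, before documenting any of it")
```

Even the stripped-down, one-field-at-a-time pass lands in the hundreds of person-hours under these assumptions, which is why "weeks of effort" is the floor, not the ceiling.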
PQ has to do with the performance of the form. Does it clean the data? Are users forced to take corrective action when they enter data that does not jibe with the protocol? In short, do your edit checks work and properly cleanse the data?
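As an illustration of the kind of rule PQ exercises, here is a sketch of a simple range-style edit check and the scenarios a tester would run against it. The field name, limits, and query wording are invented for the example; a real system defines these in its own edit-check builder.

```python
# A hypothetical edit check: systolic blood pressure must fall within a
# protocol-defined range, otherwise a query fires and the value is held
# until the site corrects or confirms it.
def systolic_bp_check(value):
    low, high = 70, 200  # assumed protocol limits
    if value is None:
        return "Query: systolic BP is required at this visit."
    if not (low <= value <= high):
        return f"Query: systolic BP {value} is outside the expected range {low}-{high}."
    return None  # no query; the data is accepted as clean

# PQ asks: does the check fire when it should, and stay quiet when it shouldn't?
test_cases = {250: True, 120: False, None: True, 69: True, 70: False}
for entered, should_fire in test_cases.items():
    fired = systolic_bp_check(entered) is not None
    assert fired == should_fire, f"Edit check misbehaved for input {entered!r}"
print("All edit-check scenarios behaved as specified.")
```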
All of this testing and validation must be logged and made available to a potential auditor, whether that auditor comes from the sponsor or the FDA. That documentation can easily exceed 1,000 pages. How can an auditor get to a specific field or edit check?
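One way to keep that volume navigable, sketched below with invented structures and placeholder data, is to record every test execution as a structured entry keyed by form, field, and check, so an auditor can pull the evidence for a single item instead of paging through a monolithic document.

```python
from dataclasses import dataclass

@dataclass
class ValidationRecord:
    form: str
    field: str
    check: str        # e.g., "range", "required", "OQ round-trip"
    input_value: str
    expected: str
    observed: str
    passed: bool
    tester: str
    timestamp: str    # ISO 8601

# Hypothetical log built up as tests run; values are placeholders.
log = [
    ValidationRecord("Vitals", "systolic_bp", "range", "250",
                     "query fires", "query fired", True,
                     "tester01", "2024-03-01T14:05:00Z"),
    ValidationRecord("Vitals", "systolic_bp", "OQ round-trip", "120",
                     "120 stored", "120 stored", True,
                     "tester01", "2024-03-01T14:07:00Z"),
]

def evidence_for(form, field):
    """Pull every test record for one field, the way an auditor would ask for it."""
    return [r for r in log if r.form == form and r.field == field]

for record in evidence_for("Vitals", "systolic_bp"):
    print(record.check, "PASS" if record.passed else "FAIL", record.timestamp)
```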
Now the truly painful part: after a CRF has been validated, what happens when you make a change? Does the whole process start over? If the answer is "No", how can you possibly test the potential dependencies of the change you just made? In almost all cases, the CRF validation must be redone.
Now let’s put this in the context of DIY EDC. Does the DIY vendor provide tools that at least help with the validation process? If not, do they at least supply SOPs for how to do it? Has the CRO factored validation into the cost of the study?
Now you see why validation is such a painful process.
TrialKit automates the entire process. In our next post, we’ll explain how TrialKit virtually eliminates the cost of CRF validation.