A new note from Sheila Minor: Hi George! Thank you for responding to my question in regard to using an intermediate sensitivity PTT reagent for general screening and heparin management and a LAC-sensitive reagent like PTT–LA for LAC detection.
I have one more question, about reporting DRVVT screen/confirm and ratio results. I was reading the ISTH 2009 and CLSI 2014 publications, and both recommend reporting the results as normalized ratios. ISTH 2009: “Results should be expressed as ratios (normalized) of patient to NPP for all procedures. Results for screen to confirm ratios should be reported as a percent correction: (screen − confirm)/screen × 100.” CLSI H60: “Results should be expressed in ratios (normalized) of patient to the mean of the RI for each assay (where applicable).” H60 clarifies that the ratio of screening and confirmatory test results for paired tests may be reported either as a normalized ratio or as a percent correction of normalized ratios. Our lab currently does not report results in this manner. To be compliant with the latest recommendations, what would you suggest we do? Thanks.
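For readers who want to see the arithmetic, here is a minimal sketch of the two calculations the quoted recommendations describe. All clotting times below are invented for illustration; they are not reference values from either document.

```python
# Sketch of the ISTH/CLSI calculations quoted above, using hypothetical
# DRVVT clotting times in seconds (illustrative numbers only).

def normalized_ratio(patient_screen, npp_screen, patient_confirm, npp_confirm):
    """Normalized screen/confirm ratio:
    (patient screen / NPP screen) / (patient confirm / NPP confirm)."""
    return (patient_screen / npp_screen) / (patient_confirm / npp_confirm)

def percent_correction(screen_ratio, confirm_ratio):
    """Percent correction: (screen - confirm) / screen x 100,
    applied here to the normalized screen and confirm ratios."""
    return (screen_ratio - confirm_ratio) / screen_ratio * 100

# Hypothetical patient: screen 55 s, confirm 40 s;
# normal pooled plasma (NPP): screen 36 s, confirm 34 s.
screen_ratio = 55 / 36    # normalized screen ratio
confirm_ratio = 40 / 34   # normalized confirm ratio

ratio = normalized_ratio(55, 36, 40, 34)
correction = percent_correction(screen_ratio, confirm_ratio)
print(f"normalized ratio {ratio:.2f}, percent correction {correction:.1f}%")
```

The same patient values are used in both calculations, so the two reporting styles carry the same underlying information in different forms.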
Hi, Sheila, thanks for your question. The ISTH and CLSI recommendations are opinions from recognized experts, and do not require compliance. I’ve attached McGlasson DL, Fritsma GA. Comparison of six dilute Russell viper venom time lupus anticoagulant screen/confirm assay kits. Semin Thromb Hemost 2013;39:315–9. We computed ratios for six competing reagents normalizing on the pooled normal control plasma value, and then recomputed normalizing on the mean of the normal range. We compared both computations for all six reagents with non-normalized ratios. Out of 24 comparisons we found only one discrepant result, and in that case the normalized ratio misidentified an LA-positive sample as negative, while the “raw” ratio gave the correct interpretation.
At some recent international meetings, Dave McGlasson and I asked two laboratory directors whether their (large and prominent) labs normalize their DRVVT results, and both said no. Based on our findings, we hold the contrary opinion that the inherent manual computation errors made while attempting to normalize results are likely to create more erroneous interpretations than normalization is likely to correct. Here is the article:
My answer is provocative and is likely to draw the attention of the authors of the ISTH and CLSI documents, so watch this space for additional comments.
One thing I would like to point out is that in our study we used “real world” specimens. In other words, none of our specimens were “spiked”; the assays were performed on specimens from actual patients who had been previously identified by two confirmatory assays as having a lupus anticoagulant. The specimens were not manipulated in any way by lyophilizing or mixing with any adulterants. Many of the other studies on this issue were performed not on actual patient specimens but on mixed specimens. Dave McGlasson.
From Geo: I’d like us to address the potential for mathematical errors in this discussion. I lack an authoritative reference, but have heard that the most careful transcriptionists produce on average a 3% error rate. I assume that someone performing manual mathematical calculations may err at the same or a greater rate. There are no automated normalization formulas available from reagent distributors, so most of us would find it necessary to compute manually. I contend a 3% error rate would be unacceptable and would create more clinical errors than would be generated by the theoretical error rate associated with non-normalized ratios.
From Dave McGlasson: Dr. Favaloro always states his case beautifully and eloquently. The problem with comparing the normalization of the DRVVT method to the INR is that there are INR standards with the ability to locally calibrate the method, and the ISI is a standardized value for each and every PT reagent. We have no such standards for clottable LA testing, so using a contrived ratio of a ratio to come to some kind of standard consensus is confusing to me. When we compared three different ways of computing the DRVVT/DRVVC ratio, the old method of simply dividing DRVVT by DRVVC versus normalizing to the mean of the reference interval (MRI) or to the daily PNP, we still found no difference in the results. When I personally quizzed Bob Gosselin, Dr. Marlar, and Dr. Friedman on how they performed this step in the determination of the ratio, each had a different answer. Looking at the previous studies, I still say: if you don’t have a physical reference standard plasma, how can you have a standardized calculation? I still don’t see a difference that justifies requiring the laboratorian to take the extra step of another mathematical calculation when reporting the test result.
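The three computations Dave describes can be sketched side by side. The clotting times and the 1.2 positivity cutoff below are hypothetical, chosen only to illustrate the comparison, not taken from his study.

```python
# Three ways of obtaining the DRVVT screen/confirm ratio, as described
# above: raw division, normalization to the mean of the reference
# interval (MRI), and normalization to the daily pooled normal plasma
# (PNP). All clotting times and the cutoff are hypothetical.

patient_screen, patient_confirm = 55.0, 40.0   # seconds (invented)
mri_screen, mri_confirm = 36.0, 34.0           # mean of reference interval
pnp_screen, pnp_confirm = 35.5, 34.5           # today's PNP results

raw = patient_screen / patient_confirm
mri_norm = (patient_screen / mri_screen) / (patient_confirm / mri_confirm)
pnp_norm = (patient_screen / pnp_screen) / (patient_confirm / pnp_confirm)

CUTOFF = 1.2  # assumed positivity cutoff, for illustration only
for label, r in [("raw", raw), ("MRI", mri_norm), ("PNP", pnp_norm)]:
    print(f"{label}: {r:.2f} -> LA {'positive' if r > CUTOFF else 'negative'}")
```

With these numbers all three methods land on the same side of the cutoff, which is the pattern the study reports for nearly all of its comparisons.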
More from Dr. Thomas Exner, Haematex: Hi George,
Thanks for your provocative comments on normalization of DRVVT test results. I agree that LA screening tests should be expressed simply as clotting times relative to a reference interval. But if the result is beyond the upper limit (+3 SD) and a confirmatory test is carried out, then a screen/confirm ratio would be useful, especially if it is normalized.
Normalization simply refers everything back to an index of 1.00 and enables better comparisons of inhibitor potency or test system sensitivity across different lots of reagent. This is standard policy for most current reagent or instrument changeovers. It need not be as complex a calculation as some labs currently do. It can be as simple as multiplying the raw screen/confirm ratio by a constant. This constant is the ratio of mean normal confirm result to the mean normal screen result and should not vary much beyond 0.9–1.1.
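As a sketch, the constant-multiplier calculation Dr. Exner describes looks like this. The mean normal clotting times are assumed values for illustration only.

```python
# Sketch of the simple normalization described above: multiply the raw
# screen/confirm ratio by a constant equal to the mean normal confirm
# result divided by the mean normal screen result. Hypothetical values.

mean_normal_screen = 36.0    # mean normal DRVVT screen, seconds (assumed)
mean_normal_confirm = 34.0   # mean normal DRVVT confirm, seconds (assumed)

# The constant should typically fall in roughly the 0.9-1.1 range.
constant = mean_normal_confirm / mean_normal_screen

raw_ratio = 55.0 / 40.0              # patient screen / patient confirm
normalized = raw_ratio * constant    # normalized screen/confirm ratio
print(f"constant {constant:.3f}, normalized ratio {normalized:.2f}")
```

Algebraically, multiplying by this constant gives the same number as dividing the two patient/mean-normal ratios, (patient screen / mean normal screen) ÷ (patient confirm / mean normal confirm), so the simpler form loses nothing.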
I cannot add anything more to what Emmanuel has already eloquently pointed out.
Thanks again for the challenge. Best wishes from Tom Exner.
Continuing the discussion (go grab a coffee; it’s a long blog):
Perhaps George may want to edit Bob G’s statement to soften the political impact before posting: “During CLSI document meeting, when I asked about the evidence for performing the normalizing, the room went kinda silent as there was no supporting evidence (published) that indicated this was necessary.”
I have been on many consensus groups and I even wrote a ‘blog’ on the process once with a solid phase APL colleague: Wong RCW, Favaloro EJ. A consensus approach to the formulation of guidelines for laboratory testing and reporting of antiphospholipid antibody assays. Semin Thromb Hemost 2008;34:361–72. Admittedly this was for solid phase APL assays and written before the last set of LA guidelines came out, but the sentiments expressed there still hold true more broadly, to my mind including LA testing. And I quote: “… because of the paucity of good-quality published evidence, there is a heavy reliance on expert opinion, and thus the existing consensus guidelines for aPL testing and reporting are largely eminence based rather than evidence based. This may potentially bias recommendations to reflect the personal preferences of those who have the greatest influence during the guideline formulation process.” and “Given a heavy reliance on expert opinion, such “eminence” based recommendations largely rely on a “consensus” process. It is also very important to recognize that such consensus processes are not foolproof and usually result in broad (e.g., 70 to 90%) rather than complete (100%) agreement on most of the more contentious issues.”
In relation to the question of LA normalisation: we still do it. However, I would support the previous comments that this is not necessary for lab ‘diagnosis’ of LA in experienced hands, so I don’t think what Bob, George and Dave have said and put into practice reflects inappropriate practice. They have assessed their lab testing practice and in their hands they see no benefit from the extra step of assay normalisation. However, let’s take the broader view. We know that normalisation reduces the inter-lab variation in test results. We know this from published work with lab networks, for example: Pradella P, Azzarini G, Santarossa L, Caberlotto L, Bardin C, Poz A, D’Aurizio F, Giacomello R. Cooperation experience in a multicenter study to define the upper limits in a normal population for the diagnostic assessment of the functional lupus anticoagulant assays. Clin Chem Lab Med 2013;51:379–85. And also from external quality assessment (EQA) practice (e.g., Favaloro EJ. Variability and diagnostic utility of antiphospholipid antibodies including lupus anticoagulants. Int J Lab Hematol 2013;35:269–74). I think of this the same way I look at warfarin monitoring, albeit not at the same level of concern. We can use the prothrombin time (PT) to monitor warfarin. Provided a lab has assessed its own practice appropriately and the patient only ever gets tested at that laboratory, everything should be OK, and they can use the PT. However, differences in PT reagents and the fact that patients move around mean that we can’t really use the PT more broadly, so we use the INR. OK, there are not so many differences among LA reagents compared with PT reagents; however, this is essentially fortuitous and stems from the marketing of essentially the same reagent by many different manufacturers under their different labels.
I believe this is mostly due to the efforts of my previous work colleague Dr Exner, who so successfully produced the basic DRVVT formulation many years ago that it was subsequently taken up by many manufacturers. But will this always be the case? In the study by George and Dave, one manufacturer had departed from the tried and tested formulation (with adverse outcome!). I know that other manufacturers have now also departed from this formulation. So onwards to the future.
By now, you should have finished the coffee!
Emmanuel J Favaloro
Editor In Chief
Seminars in Thrombosis and Hemostasis
And a response from Dave: I will put in more of my two cents on this issue. If normalization made no difference across six companies’ reagents (which obviously were not the same lot numbers), using three different methods of obtaining the ratio, why would there be that much lot-to-lot variability? Another thing we found in the study was that one reagent was not performing as it should have with two different lots, making it unsuitable for use. So normalizing two bad lots makes no difference either, at least with the lots we obtained for testing.
From FF technical advisor Bob Gosselin: During the CLSI document meeting, when I asked about the evidence for performing the normalizing, the room went kinda silent, as there was no published evidence indicating this was necessary. However, Mark Triscott (from IL) offered an excellent opinion: that normalizing may help in transitioning between lots of reagents (I would agree, but from the belly, not from data). Normalizing may also help decipher data from different reagents, although that too is theory. My two pennies…
A provocative response from co-author Dave McGlasson: As George wrote, we did it a third way, simply using the old method of dividing the DRVVT by the DRVVC and reporting that ratio. We compared all three approaches: normalizing to the mean of the reference interval, normalizing to the daily pooled normal plasma, and the raw method I just stated, and found no clinical or statistical difference among the three across all six reagent/instrument combinations. If you can prove it is not necessary, you don’t have to do it.
We had data, not just opinion. At the THSNA 2014 conference I asked the moderator of the LA session point blank if his lab was going to do it, and he replied no. Many coagulation analyzers can’t handle the calculation anyway, so it has to be done manually, which raises the possibility of faulty results being tabulated. This is one time I say, why fix it if it “ain’t broke”?