Study Objectives: To assess the accuracy of warfarin dosing decisions and the degree of numeric bias between two point-of-care devices using a local reference laboratory's international normalized ratio (INR) as the standard measure, and to determine the relationship between dosing decisions and INR values obtained with the point-of-care devices.
Design: Prospective study.
Setting: Outpatient anticoagulation clinic.
Subjects: Two hundred two patients taking oral warfarin and 10 control subjects.
Interventions: For the two point-of-care devices, AvoSure and ProTime, capillary blood samples were collected from each subject by the finger-stick method. At the same visit, one venous blood sample was collected from each subject for laboratory analysis.
Measurements and Main Results: Dosing agreement was assessed as the proportion of agreement between each device and the laboratory in terms of maintenance dosage adjustments (increase, decrease, or no change). The level of agreement between each device and the laboratory was evaluated by dosing agreement analysis, bias analysis, and concordance coefficient analysis. In the dosing agreement analysis, 78% of INR values from the AvoSure device would have resulted in the same dosing decision as that with the laboratory INR values compared with 66% from the ProTime device (p<0.001). The mean bias for the ProTime device (0.5 ± 0.4 INR units) was significantly higher (p=0.005) than that for the AvoSure device (0.4 ± 0.5 INR units). The ProTime device overestimated low INR values to a greater extent than did the AvoSure device. Concordance between the laboratory measurement and each device was similar (rc = 0.82 for ProTime, rc = 0.76 for AvoSure).
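Two of the three analyses above reduce to simple computations on paired INR readings: mean bias is the average of the per-sample differences (device minus laboratory), and the concordance coefficient rc is Lin's concordance correlation coefficient. The sketch below is illustrative only and is not the study's actual statistical code; the sample values in the usage note are invented.

```python
import statistics

def mean_bias(device, lab):
    """Mean and SD of per-sample differences (device INR minus laboratory INR)."""
    diffs = [d - l for d, l in zip(device, lab)]
    return statistics.mean(diffs), statistics.stdev(diffs)

def concordance_coefficient(x, y):
    """Lin's concordance correlation coefficient:
    rc = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2),
    using n-denominator (population) moments."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

Unlike an ordinary correlation, rc penalizes both scatter about the line of best fit and systematic departure from the line of identity, so a device that tracks the laboratory but consistently overestimates will score lower.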
Conclusions: Assessing dosing decisions yielded distinct, useful clinical information. The AvoSure device showed less systematic bias and a higher degree of clinical agreement with our reference laboratory measurement than did the ProTime device.
Point-of-care testing devices are commonly used for measuring the international normalized ratio (INR) in patients who are taking oral anticoagulation therapy. These devices are less invasive than traditional laboratory methods that require a venous blood sample and provide nearly instantaneous results. This allows for immediate assessment of the INR and adjustment of the anticoagulant dosage, if necessary, during the patient encounter.
Whether a point-of-care device is interchangeable with (i.e., leads to the same dosing decisions as) a standard method is important to assess. This is especially true when the point-of-care device is used to make dosing decisions for drugs with narrow therapeutic indexes, such as warfarin. To make this assessment, accuracy (or the degree to which the point-of-care device and the standard agree) has been analyzed in various ways. Common assessments involve visual analysis of bias plots and a determination of whether the laboratory and the point-of-care measurements agree that a patient's INR is within a given therapeutic range or an expanded therapeutic range. Although some of these analyses include clinical assessments, they rely on mathematic formulas as criteria for clinical evaluations. Little consistency exists among these analytic methods, however, and the various formulas that determine clinical significance have not been validated. Therefore, whether these mathematic methods of assessing clinical agreement are reasonable estimates of clinical decision making is unknown.
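The therapeutic-range agreement assessment described above can be sketched as follows: each paired reading is classified as below, within, or above a therapeutic range, and agreement is the proportion of pairs placed in the same category by both methods. This is a hypothetical illustration; the 2.0–3.0 range bounds are assumptions, not values from the study.

```python
def range_category(inr, low=2.0, high=3.0):
    """Classify an INR relative to a therapeutic range (bounds are illustrative)."""
    if inr < low:
        return "below"
    if inr > high:
        return "above"
    return "in"

def range_agreement(device, lab, low=2.0, high=3.0):
    """Proportion of paired readings that both methods place
    in the same therapeutic-range category."""
    pairs = list(zip(device, lab))
    same = sum(range_category(d, low, high) == range_category(l, low, high)
               for d, l in pairs)
    return same / len(pairs)
```

Note what this metric cannot capture: two readings can fall in the same category yet still prompt different maintenance-dose decisions, which is why formula-based agreement is not a validated surrogate for clinical decision making.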
Furthermore, the relationship between measurement error and clinical decisions as a function of the INR value is not known. It is widely reported that accuracy and precision decrease as the INR increases. Therefore, much of the focus in previous studies has been on large discrepancies between standard methods and point-of-care testing devices at high INR values. The impact of smaller differences in INR measurements at the lower end of the INR range, however, remains largely unexplored.
No known published studies directly measure whether a point-of-care device leads anticoagulation providers to make the same dosing decision as they would if using a reference standard INR value, or whether differences in dosing decisions occur systematically as a function of the INR value obtained with the point-of-care testing device. We assessed the accuracy of warfarin dosing decisions and the degree of numeric bias between two point-of-care devices using a local reference laboratory's measurement of INR as the standard measure. In addition, we sought to determine the relationship between dosing decisions and INR values obtained with the point-of-care devices.