WHAT IS TEST UNCERTAINTY RATIO IN CALIBRATION?

Under ISO/IEC 17025, accredited calibration laboratories are required to calculate the Test Uncertainty Ratio (TUR). TUR is the ratio of an instrument’s allowed tolerance to the measurement uncertainty achieved during calibration. It is used to evaluate an instrument’s measurement risk and to confirm that the calibration method chosen is adequate. In other words, TUR indicates how much confidence a calibration gives in an instrument’s accuracy and precision, including the repeatability of its measurements.
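As a minimal sketch of the ratio itself (conventions differ between labs and standards; this assumes a symmetric ±tolerance and an expanded uncertainty reported at k = 2, and the numbers are hypothetical):

```python
def test_uncertainty_ratio(tolerance, expanded_uncertainty):
    """TUR = allowed tolerance of the instrument / expanded uncertainty of the calibration."""
    return tolerance / expanded_uncertainty

# Hypothetical example: a gauge with a +/-0.5 degC tolerance,
# calibrated with an expanded uncertainty of 0.12 degC (k = 2).
print(f"TUR = {test_uncertainty_ratio(0.5, 0.12):.1f}:1")  # -> TUR = 4.2:1
```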

HOW TEST UNCERTAINTY RATIO WORKS

During calibration, technicians calculate the uncertainty of each measurement taken on an instrument and then determine the TUR for each reading. The uncertainty reported by the lab represents how far the measured value could reasonably differ from the true value, and it is built from multiple components and contributors, including:

- the uncertainty of the reference standard used,
- the resolution of the unit under test,
- the repeatability of the readings,
- environmental conditions such as temperature and humidity,
- the technician and the measurement method.
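A minimal sketch of how such contributors are commonly combined, following the root-sum-of-squares approach of the GUM; the budget entries and values below are hypothetical, not from the article:

```python
import math

def combined_standard_uncertainty(contributions):
    """Combine individual standard uncertainties by root-sum-of-squares (GUM method)."""
    return math.sqrt(sum(u ** 2 for u in contributions))

# Hypothetical uncertainty budget (standard uncertainties, same unit as the measurement):
budget = {
    "reference standard": 0.030,
    "resolution of unit under test": 0.029,  # e.g. 0.1 / (2 * sqrt(3)) for a digital display
    "repeatability of readings": 0.020,
    "environment (temperature drift)": 0.010,
}

u_c = combined_standard_uncertainty(budget.values())
U = 2 * u_c  # expanded uncertainty at k = 2 (~95 % coverage)
print(f"combined u_c = {u_c:.3f}, expanded U = {U:.3f}")
```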

Each contributor and component affects the TUR. A widely accepted target is a TUR of 4:1: at that ratio, the calibration uncertainty is small enough relative to the tolerance that the probability an item passed as in tolerance really is in tolerance stays close to 100% over the widest range of readings.
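A rough illustration of that effect (assumptions are mine, not the article’s: a normally distributed measurement error, expanded uncertainty U = tolerance / TUR at k = 2, and a reading sitting halfway to the tolerance limit):

```python
import math

def prob_actually_out_of_tolerance(reading, tolerance, tur):
    """Chance that a reading inside the tolerance limit is really out of tolerance,
    assuming normally distributed error with expanded uncertainty U = tolerance / TUR (k = 2)."""
    sigma = (tolerance / tur) / 2          # standard uncertainty
    margin = tolerance - abs(reading)      # distance from the reading to the limit
    # One-sided tail probability beyond the tolerance limit
    return 0.5 * (1 - math.erf(margin / (sigma * math.sqrt(2))))

for tur in (1, 2, 4, 10):
    risk = prob_actually_out_of_tolerance(reading=0.25, tolerance=0.5, tur=tur)
    print(f"TUR {tur}:1 -> {risk:.4%} chance the 'in tolerance' call is wrong")
```

As the TUR rises from 1:1 to 4:1 in this sketch, the chance of a wrong in-tolerance call drops from roughly 16% to a small fraction of a percent.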

TEST UNCERTAINTY RATIO VS. TEST ACCURACY RATIO

It is important to be aware of the difference between TUR and TAR, the Test Accuracy Ratio. Both ratios can be used to indicate whether a calibration process was reliable enough to support a statement of compliance, but they are calculated differently. TAR is the ratio of an instrument’s accuracy (its tolerance) to the stated accuracy of the standard used to report that instrument’s error. TUR, by contrast, compares the same tolerance to the full measurement uncertainty of the calibration process, which includes more than the standard’s specification. Because of this, TUR is the more rigorous indicator and is used more frequently during calibration.
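The difference is easiest to see side by side; the numbers below are hypothetical and assume the same unit under test in both calculations:

```python
def test_accuracy_ratio(uut_tolerance, standard_accuracy):
    """TAR: unit-under-test tolerance vs. the reference standard's stated accuracy only."""
    return uut_tolerance / standard_accuracy

def test_uncertainty_ratio(uut_tolerance, expanded_uncertainty):
    """TUR: the same tolerance vs. the full uncertainty of the calibration process."""
    return uut_tolerance / expanded_uncertainty

tolerance = 0.5              # +/- tolerance of the unit under test
standard_accuracy = 0.05     # +/- accuracy spec of the reference standard
expanded_uncertainty = 0.15  # full uncertainty incl. resolution, repeatability, environment (k = 2)

print(f"TAR = {test_accuracy_ratio(tolerance, standard_accuracy):.1f}:1")        # 10.0:1
print(f"TUR = {test_uncertainty_ratio(tolerance, expanded_uncertainty):.1f}:1")  # 3.3:1
```

In this sketch the TAR looks comfortable at 10:1, while the TUR shows the real margin is only about 3.3:1 once every uncertainty contributor is counted.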

Should you be sending your equipment out to be calibrated? Or can you do it in-house?
Find out in our guide.