A calibration interval is a defined period of time between calibrations of testing and measuring equipment. Calibration intervals are established to ensure that measuring equipment operates within its specified tolerance limits throughout that interval.
Initial calibration intervals are based on numerous factors, such as manufacturer recommendations, industry standards and regulations, or the typical 'annual' interval. As calibrations are performed over time, adjustments to the initial calibration interval may be required to ensure that the equipment is capable of producing reliable measurement results. It may be found that a piece of equipment is not used as expected, has a tendency to drift, or is not as reliable as its use requires. The equipment may also be used in a harsh environment or used for highly accurate and critical measurements.
Optimal Calibration Interval
An optimal calibration
interval is one that balances the cost of the calibrations, the downtime
associated with the calibration process, the ability to meet the stated
specifications, and the quality risks that come with instruments performing outside
of their specifications.
If an interval is too short, it could lead to higher calibration costs and increased equipment downtime. If the interval is too long, it could lead to out-of-tolerance measurements, risk of recall, reduced confidence in the measurements, and unscheduled downtime. Determining the optimal calibration interval between successive calibrations for all equipment should be one of the goals of every calibration program. It can lead to significant cost savings within an organization, streamline calibration scheduling, and minimize the risk of nonconforming products.
Calibration Interval Adjustment Methods
A wide range
of methods are available for reviewing and adjusting the calibration intervals.
Adjustments can be either upward, if the measuring equipment is found to be in-tolerance, or downward, if it is found to be out-of-tolerance.
Records of the method used and the adjusted interval values should be retained for further analysis. To be effective, an organization must have the proper policies and procedures in place to ensure that the established calibration intervals are enforced.
Simple/Automatic Adjustment Method
In the Simple/Automatic adjustment method, the interval is adjusted after every calibration or series of calibrations, bounded between maximum and minimum values. The amount of each adjustment can be a fixed value, such as 3 months, or a fraction of the existing interval, such as one-half. One problem with this method is that the interval rarely stays the same, and finding the 'optimal' interval is not guaranteed.
Certain modifications to
this method can minimize the sequence of fluctuating adjustments such as
shortening the adjustment value at each successive calibration or making
adjustments only after the equipment has been either in-tolerance or out-of-tolerance
during a specific number of calibrations.
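As a concrete sketch, the simple/automatic rule described above can be expressed in a few lines of Python. The step fraction and the 3-to-24-month bounds used here are illustrative assumptions, not values from any standard.

```python
def adjust_interval(interval_months, in_tolerance,
                    step_fraction=0.5, min_months=3, max_months=24):
    """Lengthen the interval after an in-tolerance calibration,
    shorten it after an out-of-tolerance one, clamped to bounds.
    Fraction and bounds are illustrative, not prescribed values."""
    if in_tolerance:
        new_interval = interval_months * (1 + step_fraction)
    else:
        new_interval = interval_months * (1 - step_fraction)
    return max(min_months, min(max_months, new_interval))

# Example: a gauge on a 12-month interval
print(adjust_interval(12, in_tolerance=False))  # 6.0
print(adjust_interval(12, in_tolerance=True))   # 18.0
```

Note how repeated applications oscillate between values rather than settle, which is the fluctuation problem the modifications above are meant to dampen.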
Reliability/Performance Based Methods
Reliability- and performance-based methods take successive calibration results and calculate a 'reliability' number based on the percentage of times the item meets or fails to meet its specification. For example, if an item has been in-tolerance in 9 out of 10 calibrations, its reliability is calculated to be 90%.
A simple table can be used to determine the interval based on the reliability
number. This method establishes more frequent calibrations for items with
questionable reliability to ensure that they are able to meet the required
specifications. Using a percentage format smooths out the interval adjustments,
especially over time, and can assist in identifying an optimal interval.
Statistical Analysis Methods
Statistical analysis methods are similar to reliability- and performance-based methods, except that they make adjustments by performing a technical analysis of critical measurement data, instead of using only calibration pass/fail criteria. The measurement data used can be obtained through calibrations, intermediate checks, interlaboratory comparisons, or proficiency tests.
These methods require substantial amounts of data, and time, for analysis. Typically, twenty measurements are required to complete a control chart and thirty measurements are used for most R&R or Cpk studies. For equipment on a
yearly calibration cycle, it could take a decade or more to obtain the
statistical data required. Implementation is more difficult and some statistical
expertise is required, although statistical analysis software can be used.
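As a rough illustration of the control-chart portion of this approach, the following Python sketch computes 3-sigma individuals-chart limits from twenty readings using only the standard library. The gauge value and the readings are invented for illustration; a real analysis would use the equipment's actual check data.

```python
import statistics

def control_limits(measurements, sigma=3):
    """Mean +/- sigma * sample standard deviation: the control
    limits for a simple individuals chart."""
    mean = statistics.fmean(measurements)
    s = statistics.stdev(measurements)
    return mean - sigma * s, mean + sigma * s

def points_out_of_control(measurements, limits):
    """Readings falling outside the control limits."""
    lo, hi = limits
    return [m for m in measurements if m < lo or m > hi]

# Twenty hypothetical intermediate-check readings of a nominal
# 10.000 mm gauge block (invented data)
readings = [10.001, 9.999, 10.002, 10.000, 9.998, 10.001, 10.003,
            10.000, 9.999, 10.002, 10.001, 10.000, 9.998, 10.002,
            10.001, 9.999, 10.000, 10.002, 10.001, 10.000]

lo, hi = control_limits(readings)
print(f"limits: {lo:.4f} to {hi:.4f}")
print(points_out_of_control(readings, (lo, hi)))
```

If successive readings stay well inside the limits and show no trend, that is statistical evidence supporting a longer interval; points outside the limits, or a visible drift, argue for a shorter one.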