Proficiency Testing is an assessment that provides calibration laboratories with an objective
means to demonstrate competence to perform specific calibration processes
and achieve the proper measurement results within the claimed measurement
uncertainty. Proficiency testing is also referred to as ‘Interlaboratory
Comparisons’ because the measurement results of an
individual laboratory are compared with the results obtained by other
laboratories. The primary objective of proficiency testing is to provide laboratories with
the information needed to identify
issues and implement corrective actions to
improve the quality of their measurements and to provide greater
confidence and reliability for the laboratory’s customers. Participation in
proficiency testing can also validate that the laboratory’s calibration
procedures, technical competence, traceability, and uncertainty budgets yield
results that are within the range of other laboratories.
Proficiency Testing Requirements
In the 2005 revision of the ISO/IEC 17025 standard,
proficiency testing was one of the recommended means of ensuring the validity
of laboratory results, but it was not required. The 2017 revision of
the standard specifically requires laboratories to participate in
proficiency testing or other interlaboratory comparisons.
The requirements for
proficiency testing vary between accrediting bodies; however, most of them
follow the recommendations published by ILAC, requiring laboratories to
participate in a minimum of two proficiency tests per year and to cover all major
sub-disciplines on the laboratory’s scope every four years. Each laboratory is
also required to maintain a Proficiency Testing Plan that documents the
schedule of the proficiency tests that will be performed to cover the
disciplines on their scope during the four-year period. Documented evidence must
also be kept of all proficiency testing results, including the procedure used,
data obtained, and submitted uncertainty budgets.
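The scheduling rules above (at least two proficiency tests per year, with every major sub-discipline on the scope covered within the four-year period) can be sketched as a simple plan check. This is a minimal illustration, assuming a hypothetical plan layout of (year, sub-discipline) entries; all names and numbers are illustrative:

```python
from collections import Counter

# Hypothetical PT plan: (year, sub-discipline) entries. Names are illustrative,
# not a real laboratory scope.
plan = [
    (2021, "DC voltage"), (2021, "gage blocks"),
    (2022, "mass"), (2022, "temperature"),
    (2023, "DC voltage"), (2023, "pressure"),
    (2024, "torque"), (2024, "mass"),
]
scope = {"DC voltage", "gage blocks", "mass", "temperature", "pressure", "torque"}

def check_plan(plan, scope, years):
    """Return (meets_annual_minimum, uncovered_sub_disciplines)."""
    per_year = Counter(year for year, _ in plan)
    # ILAC-based recommendation: at least two proficiency tests per year.
    annual_ok = all(per_year.get(y, 0) >= 2 for y in years)
    # Every sub-discipline on the scope must appear within the four-year window.
    uncovered = scope - {d for _, d in plan}
    return annual_ok, uncovered

ok, missing = check_plan(plan, scope, range(2021, 2025))
print(ok, missing)  # → True set()
```

A real plan would also track the PT provider, the artifact, and the completion status of each entry to satisfy the documented-evidence requirement.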
Proficiency Testing Process
A Proficiency Testing
provider first sends a test artifact for a specific calibration discipline to an
accredited reference laboratory to determine the reference value for the
artifact. The Proficiency Testing
provider will then send the artifact to a participating laboratory. The
laboratory will measure the artifact according to a given set of instructions
and report its measurement results and associated uncertainty to the provider.
The Proficiency Testing
provider will compare the results reported by the laboratory to the reference
value for the artifact and issue a preliminary report of the comparison to the laboratory. This initial report allows the laboratory to
review the results and confirm that the correct data has been recorded. Any
discrepancies can be corrected at this time.
After a sufficient number of laboratories have submitted their results to the PT
provider, the laboratory can request the final report, in which its results are
compared with those of the other participants. The laboratory can see its own
results, but the other participants are identified only by a number to protect
their anonymity. The laboratory will then forward
the artifact to the next participating laboratory. After each participating
laboratory has completed testing, the artifact is returned to the provider.
A technical advisor for the specific proficiency
test will monitor each laboratory’s results throughout the course of the
proficiency test and, if needed, adjust the reference value using
statistical evaluations of the data.
Proficiency Testing Results
Proficiency testing results are commonly evaluated using two performance statistics: En and the Z-score.
En, or Normalized Error,
is a statistical evaluation used to compare proficiency testing results and the
reported measurement uncertainty between the participant and the reference
laboratory. It is the primary evaluation used to determine whether a
participant’s results are satisfactory or unsatisfactory. If the value of En is
between -1 and +1, the results are considered satisfactory. If the value is
outside this range, the results are considered unsatisfactory.
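The normalized error can be computed directly from the two measurement results and their expanded uncertainties. A minimal sketch using the commonly cited formula En = (x_lab − x_ref) / √(U_lab² + U_ref²); the numeric values are illustrative:

```python
import math

def normalized_error(x_lab, U_lab, x_ref, U_ref):
    """En = (x_lab - x_ref) / sqrt(U_lab^2 + U_ref^2),
    where U_lab and U_ref are the expanded measurement uncertainties
    reported by the participant and the reference laboratory."""
    return (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

# Illustrative numbers: a nominal 10 V artifact measured as 10.003 V.
En = normalized_error(x_lab=10.003, U_lab=0.005, x_ref=10.000, U_ref=0.004)
print(f"En = {En:.2f}")  # En ≈ 0.47, within ±1 → satisfactory
```

Note that En penalizes an understated uncertainty: shrinking U_lab while keeping the same measurement error pushes En outside the ±1 band.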
The Z-score is a statistical
measure of a participant’s result compared to the assigned value of the
reference artifact, normalized by a performance statistic of all participants’
results, such as the standard deviation about the population mean. The Z-score is
used to identify outliers in the measurement results of all participants, and that
data may be removed from the proficiency testing results. If the absolute value
of the Z-score is less than 2, the results are considered satisfactory; if it
is greater than 2, the results are considered questionable or unsatisfactory.
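A minimal sketch of the Z-score evaluation under the common convention z = (x − assigned value) / σ, using the standard deviation of the participants' results as σ; all values are illustrative:

```python
import statistics

def z_score(x, assigned_value, sigma):
    """z = (x - assigned_value) / sigma, where sigma is a measure of the
    dispersion of all participants' results (here, the standard deviation)."""
    return (x - assigned_value) / sigma

# Illustrative participant results for the same nominal 10 V artifact.
results = [10.001, 10.003, 9.998, 10.005, 10.020]
assigned = 10.000                   # reference value for the artifact
sigma = statistics.stdev(results)   # spread of participant results

for x in results:
    z = z_score(x, assigned, sigma)
    flag = "satisfactory" if abs(z) < 2 else "questionable/unsatisfactory"
    print(f"{x}: z = {z:+.2f} ({flag})")
```

In this sketch the last result (10.020) is flagged as an outlier, illustrating how data outside the |z| < 2 band may be excluded from the final comparison.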
If a laboratory has any
unsatisfactory results, it is required to investigate the discrepancy
and document the investigation through its corrective action process.
The investigation should determine the cause of the discrepancy and
whether the results may have affected any customer calibrations. If a laboratory
fails to perform these tests, or if it continually receives unsatisfactory results,
the specific calibration discipline could be removed from its accredited
scope, or its measurement uncertainty may be adjusted to encompass the results.