It is a requirement for laboratories accredited to the ISO 15189 standard that Measurement Uncertainty (MU) is established for all tests that are provided. Furthermore, limits of acceptable performance must also be defined against the MU of each measurement procedure, thereby providing a convenient way to establish if a particular test is ‘fit for purpose’.

## Preamble

When we developed the mLabs point-of-care accreditation system at POCD we needed to ensure the software met the requirements for MU reporting and, more importantly, could determine whether a test was fit for purpose. Because we were designing the system to minimise the operational burden of such tasks, we automated the ongoing calculation of MU and continually assessed it against acceptance limits every time a Quality Control (QC) test was performed. Outliers were excluded using the 3-sigma rule, and staff were automatically alerted if there was a problem.
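As a minimal sketch of the 3-sigma rule mentioned above (the function name and QC target values here are invented for illustration, and this is not the mLabs implementation — QC limits in practice come from your established baseline mean and SD):

```python
def is_outlier(result, target_mean, target_sd):
    """Flag a QC result lying more than 3 standard deviations
    from the established QC target mean."""
    return abs(result - target_mean) > 3 * target_sd

# Assumed QC target: mean 5.0, SD 0.1 (illustrative values only)
print(is_outlier(5.2, 5.0, 0.1))   # False - within 3 SD, kept
print(is_outlier(9.8, 5.0, 0.1))   # True  - excluded before MU calculation
```

Results flagged this way are excluded from the running MU calculation so that a single aberrant QC test does not inflate the uncertainty estimate.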

Given the prevalence of technology today you’d expect this approach to be standard practice, but performing the calculations on a minimum schedule just to meet the requirements is more the norm.

Along the way we developed a thorough understanding of how MU values are calculated, what criteria they should be assessed against, and why. What follows is a distillation of that knowledge into a concise treatise on the subject, which we use to train GPs who are learning to become the Approved Pathology Providers of mLabs facilities – we’re publishing it here in the hope that others in the scientific industry find it useful too.

## Introduction

Uncertainty of measurement is the doubt that exists about the result of any measurement. You might think that well-made laboratory analysers should be trustworthy and give the right answers without fail. But for every measurement – even the most careful – there is always a degree of doubt.

In everyday speech, this might be expressed as ‘give or take’… *e.g.* a stick might be 2 metres long ‘give or take a centimetre’. Since there is always a margin of doubt about any measurement, we need to ask ‘How big is the margin?’ and ‘How bad is the doubt?’

Thus, two numbers are really needed in order to quantify an uncertainty. One is the width of the margin, or interval. The other is a confidence level, and states how sure we are that the ‘true value’ is within that margin.

For example, to be more exact we might say that the length of a certain stick measures 20 centimetres plus or minus 1 centimetre, at the 95 percent confidence level. This result could be written as follows:

**20 cm ±1 cm, at a level of confidence of 95 %.**

In other words, this tells us there is 95 % certainty that the stick is between 19 centimetres and 21 centimetres long.

## Calculating MU

For labs in Australia, the National Pathology Accreditation Advisory Council (NPAAC) has provided guidelines on how to estimate Measurement Uncertainty (MU), and this is the convention followed by the mLabs system. The following provides a brief appraisal of the process. If you have any trouble understanding the terms, consider reading our post on statistics first and then picking up here where you left off.

The uncertainty of a result provides a measure of the confidence in the value that is reported, so it’s calculated using historical QC results by the mLabs system according to the following equation:

*CV*_{A} = (100 × SD) ÷ Average

This analytical CV (CV_{1} in the equations below) is then combined with the uncertainty from any reagent calibration CV_{2} (*e.g.* thromboplastin used in INR tests) and the bias measurement CV_{3}, calculated from your lab’s QAP data (published in the end-of-cycle reports):

*u*_{c} = [ (CV_{1})^{2} + (CV_{2})^{2} + (CV_{3})^{2} ]^{0.5}

The combined uncertainty is then expanded by a coverage factor of 1.96 to yield the expanded relative uncertainty for a test:

*U* = *u*_{c} × 1.96

For mLabs users the analytical CV_{1} used in the calculations is automatically updated when QC tests are performed, so the uncertainty value can be used to calculate an up-to-date figure based on the magnitude of the test result. The other values, CV_{2} and CV_{3}, don’t change very often, so they are entered periodically as data becomes available (but are still employed in all uncertainty calculations).
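The three equations above can be sketched in a few lines of Python (a minimal illustration, not the mLabs implementation; the sample QC results and the calibrator/bias CVs are made-up figures):

```python
from statistics import mean, stdev

def analytical_cv(qc_results):
    """CV_A: relative standard deviation (%) of historical QC results."""
    return 100 * stdev(qc_results) / mean(qc_results)

def expanded_uncertainty(cv1, cv2=0.0, cv3=0.0, k=1.96):
    """Combine the relative CVs in quadrature (u_c), then expand by
    the coverage factor k to give the expanded uncertainty U."""
    u_c = (cv1**2 + cv2**2 + cv3**2) ** 0.5
    return k * u_c

# Illustrative figures: analytical CV from five QC results, plus assumed
# calibration (CV_2) and bias (CV_3) contributions of 0.8 % and 1.0 %.
cv_a = analytical_cv([98.0, 102.0, 100.0, 101.0, 99.0])   # ~1.58 %
U = expanded_uncertainty(cv_a, 0.8, 1.0)
# A result x would then be reported as x +/- (U / 100) * x  (~95 % level)
```

Because *U* is a relative (percentage) figure, the absolute uncertainty interval scales with the magnitude of each reported result.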

If you don’t have a fancy system, it’s easy to do it the old way: set up these calculations in a spreadsheet and perform a discrete precision study at the frequency required by your laboratory policy.

## Fitness for Purpose

As part of the measurement uncertainty calculations, it is convention to assess whether a test is fit for use by determining if the analytical precision (CV_{A}) is less than 75 % of the intra-individual biological variation (CV_{I}). In simpler terms, if the analytical variation in a result is close to (or greater than) the estimated biological variation, then the test probably isn’t very useful!

Conveniently, the Westgard website has published a biological variation database, which collates the values found throughout the literature. The decision points for assessing fitness for purpose are as follows:

- CV_{A} < 0.25 CV_{I} → **Optimum**
- 0.25 CV_{I} < CV_{A} < 0.50 CV_{I} → **Desirable**
- 0.50 CV_{I} < CV_{A} < 0.75 CV_{I} → **Minimum**
- 0.75 CV_{I} < CV_{A} → **Unfit**
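These decision points can be expressed as a small classifier (a sketch only; the function name and the example CV figures are illustrative, not values from the biological database):

```python
def fitness_for_purpose(cv_a, cv_i):
    """Classify analytical precision (CV_A, %) against
    intra-individual biological variation (CV_I, %)."""
    ratio = cv_a / cv_i
    if ratio < 0.25:
        return "Optimum"
    if ratio < 0.50:
        return "Desirable"
    if ratio < 0.75:
        return "Minimum"
    return "Unfit"

print(fitness_for_purpose(cv_a=1.6, cv_i=4.9))   # "Desirable" (ratio ~0.33)
print(fitness_for_purpose(cv_a=4.5, cv_i=4.9))   # "Unfit" (ratio ~0.92)
```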

Whenever measurement uncertainty is calculated, the CV_{I} values from the biological variation database should be compared with your lab’s analytical CV_{A}, which may be calculated using QC data. If the precision fails to meet the minimum threshold, the underlying cause of the imprecision should be investigated.
