Many quality standards require calibration to reduce measurement risk. But how does calibration reduce measurement risk? How do you know if your calibration provider has been successful at reducing measurement risk?
Measurement risk is the probability of making an incorrect pass/fail test decision. The measurement process has four possible outcomes: a correct pass, a correct fail, a false pass, and a false fail. A correct pass enables you to move forward in your design or manufacturing process. A correct fail occurs when your measurement system correctly identifies a problem. A false pass sends a flawed product forward; it can lead to additional warranty costs, product recalls, or even loss of life. A false fail halts the process unnecessarily. False fails can delay time to market or create additional scrap and rework costs.
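The four outcomes can be sketched in code. This is a minimal illustration, assuming a single upper specification limit; the function and variable names are my own, not from the article:

```python
def classify(true_value, measured_value, spec_limit):
    """Classify a pass/fail decision against a single upper specification limit.

    The true value determines what the decision *should* be; the measured
    value (true value plus measurement error) determines what it actually is.
    """
    truly_good = true_value <= spec_limit
    measured_good = measured_value <= spec_limit
    if truly_good and measured_good:
        return "correct pass"   # process moves forward
    if not truly_good and not measured_good:
        return "correct fail"   # the measurement system caught a real problem
    if not truly_good and measured_good:
        return "false pass"     # a flawed product escapes: warranty, recalls
    return "false fail"         # unnecessary halt: scrap, rework, delay
```

The false pass and false fail branches are exactly the cases where measurement error pushes the measured value across the limit that the true value did not cross.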
Measurement risk directly relates to the consistency, accuracy, and repeatability of your measurements. In metrology, measurement uncertainty helps us calculate risk. Measurement uncertainty takes all the possible errors and combines them to produce a standard deviation. Figure 1 is an example of how you can then use statistics to calculate risk.
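The kind of statistical calculation Figure 1 illustrates can be sketched as follows. This is an illustrative example, not the article's exact method; it assumes normally distributed measurement error and a single upper specification limit:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def false_fail_probability(true_value, spec_limit, u):
    """Probability that a truly good unit (true_value <= spec_limit) measures
    above the limit, given a standard measurement uncertainty u."""
    return 1.0 - normal_cdf(spec_limit, mu=true_value, sigma=u)

# A unit whose true value sits two standard uncertainties below the limit
# still has roughly a 2.3% chance of failing the test.
risk = false_fail_probability(0.8, spec_limit=1.0, u=0.1)
```

The closer the true value sits to the specification limit relative to the measurement uncertainty, the higher the probability of an incorrect decision.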
Measurement error comes from four general sources: intrinsic, environmental, installation, and operational. Limitations or inaccuracies of the measurement instrument cause intrinsic errors, while environmental errors come from the instrument's surroundings. Installation error covers all errors caused by everything hooked up to your instrument. Finally, operational errors are measurement issues caused by the engineer or technician operating the equipment. You need to consider both short-term and long-term measurement uncertainty contributions for each source.

Intrinsic instrument performance has three primary components: short-term repeatability, instrument wear, and component aging. You can easily measure short-term repeatability, the statistical dispersion of the instrument, by making a series of measurements and calculating the standard deviation. Usage causes instrument wear, and the analog electrical components in the test equipment will drift. Voltage and power measurements often drift in a Gaussian random walk, which means performance can change rapidly. Crystal oscillators often drift in a linear fashion.

Environmental error includes temperature, altitude, and humidity. Temperature is usually the most significant source of error. To reduce the impact of fluctuations in temperature, keep electronic instruments within ±5°C unless noted otherwise. For other test equipment, such as gauge blocks, temperature is the primary source of measurement uncertainty.

Installation error includes all connected accessories, such as cables, connectors, probes, and switch boxes. Power spikes or fluctuations in your electrical circuit will also erode your margins.

Operational, or human, error occurs during the testing process. Standardized test software, training, and your measurement quality system mitigate the reproducibility problems you might see from human error.
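The short-term repeatability measurement described above is straightforward to sketch. The readings below are made-up illustrative data for a nominal 10.000 V source:

```python
import statistics

# Hypothetical repeated readings of the same 10.000 V source (illustrative data)
readings = [10.002, 9.998, 10.001, 9.999, 10.003, 10.000, 9.997, 10.002]

# Short-term repeatability: the sample standard deviation of repeated measurements
repeatability = statistics.stdev(readings)
mean_reading = statistics.fmean(readings)
```

A larger sample of readings gives a more reliable estimate of the standard deviation, and repeating the exercise over days or weeks begins to expose the longer-term drift terms.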
Figure 2 shows how the intrinsic performance of instruments can change over time. As the uncertainty surrounding that performance grows, your ability to make consistent, accurate, and repeatable measurements shrinks. Proper calibration accurately measures the test equipment's performance. When you measure your instrument's performance, the intrinsic uncertainty becomes the uncertainty of the calibration process. The basic definition of calibration is measuring the performance of a test asset against international metrology standards. It does not always include pass/fail decisions or adjustments. Different calibrations have different deliverables.
How do you evaluate the quality of the calibration service performed on your instrument? There are four areas to examine when you receive your instrument back from a calibration provider: the extent of testing, the data provided, the test accuracy, and the provider's guardbanding and adjustment practices. Checking these areas ensures that you can continue to make consistent, accurate, and repeatable measurements with your instrument as intended.
Different calibration providers offer different extents of coverage, as indicated by the number of tests run and points tested. The more thoroughly you test an instrument, the greater the confidence in the calibration result. The original equipment manufacturer (OEM) will suggest the parameters to test and the test points. No calibration standards specify the extent of testing required, so it is common for a calibration supplier to limit the number of tests and points tested to reduce costs. Calibration providers skip tests for one of three reasons:

• The supplier does not have the equipment to perform the test.
• The tests are complex, require specialized skills, or take extensive time to complete or to develop the test methods.
• The contractual price does not allow time to perform all tests.
You can estimate the measurement uncertainty of an instrument in one of three ways. The easiest is to have the calibration provider generate it for you. Another is to estimate the instrument's measurement uncertainty based on the standards used: you can treat the instrument's specification as a rectangularly (uniformly) distributed error and divide it by √3 to obtain a standard uncertainty. This conversion accounts for the assumed rectangular distribution of the specifications, and the estimate also includes some of the uncertainty between the device under test and the instrument's connector. Finally, you can estimate the measurement uncertainty based on the company's scope of accreditation. Figure 4 shows that some suppliers do not use the equipment audited for their scope of accreditation when performing regular calibrations, so you need to check the traceability report to confirm the use of proper standards.
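A minimal sketch of the specification-based estimate, using the standard rectangular-distribution conversion (dividing a ± specification by √3); the example numbers are illustrative:

```python
import math

def standard_uncertainty_from_spec(spec):
    """Convert a ± specification limit to a standard uncertainty, assuming
    the errors are uniformly (rectangularly) distributed within ±spec."""
    return spec / math.sqrt(3.0)

# Illustrative example: a ±0.003 V accuracy specification
u = standard_uncertainty_from_spec(0.003)   # about 0.00173 V
```

Because a rectangular distribution is more conservative than assuming the specification is already a one-sigma value, this conversion is a common Type B evaluation in uncertainty budgets.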
Guardbanding is one technique to reduce measurement risk. The guardband is the offset between the acceptance limit and the specification. In most cases, the acceptance limit is tighter than the specification to reduce the risk of false passes. The trade-off is an increase in false failures. Adjustments alter the instrument’s performance to be as close to nominal as possible. Adjustments for legacy analog instruments simply require turning a potentiometer with a screwdriver. Today’s adjustments require very accurate instruments, automated procedures, and access to internal field-programmable gate arrays. Not every calibration supplier can provide these sophisticated adjustments. If the supplier cannot provide an adjustment, any specification considered out of tolerance requires repair.
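The guardbanding trade-off can be illustrated numerically. This sketch assumes normally distributed measurement error and an illustrative guardband of two standard uncertainties; all values are made up:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def false_pass_probability(true_value, acceptance_limit, u):
    """Probability that a truly out-of-spec unit (true_value above the spec)
    measures at or below the acceptance limit and is wrongly passed."""
    return normal_cdf(acceptance_limit, mu=true_value, sigma=u)

spec = 1.0                     # upper specification limit (illustrative units)
u = 0.05                       # standard measurement uncertainty
guardband = 2 * u              # illustrative choice of guardband width
acceptance = spec - guardband  # acceptance limit tighter than the specification

# A marginally bad unit sitting just above the specification:
risk_without_guardband = false_pass_probability(1.01, spec, u)
risk_with_guardband = false_pass_probability(1.01, acceptance, u)
```

Tightening the acceptance limit sharply reduces the false-pass probability for marginal units, at the cost of failing more units whose true values sit just inside the specification.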