Understanding Accuracy Statements

An accuracy statement defines the accuracy of a device. Several conditions must be met for the device to operate at its published specification. These restrictions are not always clearly disclosed and include:

• Stability over time
• Pressure range
• Compensated temperature range

An accuracy statement must include all potential effects of linearity, hysteresis, repeatability, temperature, and stability. If any of these effects is missing, it must be accounted for separately to form an overall assessment of the device.
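One common way to combine independent error components into a single figure is root-sum-square (RSS). The sketch below uses hypothetical spec-sheet values for illustration; some manufacturers sum terms linearly or publish a single all-inclusive number instead, so always check how a given statement was derived.

```python
import math

def combined_accuracy_rss(components):
    """Combine independent error components (all in % of full scale)
    by root-sum-square. Illustrative only: this assumes the components
    are independent and expressed in the same units."""
    return math.sqrt(sum(c ** 2 for c in components))

# Hypothetical spec-sheet values, in % of full scale:
errors = {
    "linearity": 0.05,
    "hysteresis": 0.03,
    "repeatability": 0.02,
    "temperature": 0.04,
    "stability": 0.05,
}

total = combined_accuracy_rss(errors.values())
print(f"Combined (RSS): +/-{total:.3f}% of full scale")
```

If a published statement omits one of these terms (stability, for instance), its contribution must be added before comparing devices.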

Accuracy: The closeness of the agreement between the result of a measurement and the (conventional) true value of the measurand. [NCSL RP-1].
Measurand: A quantity subject to measurement. As appropriate, this may be the measured quantity or the quantity to be measured. [NCSL RP-2].
Precision: The closeness of the agreement between repeated measurements of the same quantity under the same conditions. Often used interchangeably with repeatability. [NCSL RP-2].
Repeatability: The degree of agreement between measured values of the same quantity or parameter under the same conditions. [NCSL RP-3]
Linearity: The closeness of a calibration curve to a specified line. Linearity is expressed as the maximum deviation of any calibration point from a specified straight line, during any one calibration cycle. [ANSI/ISA-S37.1-1975, R1982].
Hysteresis: A lag effect in a body when the force acting on it is changed. [NBS TN 625]
Stability: The magnitude of the response of a measurement attribute to a given stress (for example, energization, shock, time, etc.) divided by the magnitude of its tolerance limit(s). Roughly stated, it’s the tendency of an attribute to remain within tolerance. [NCSL RP-1]

Stability over Time

Every pressure device allows some measurement drift over time. A key design requirement is to limit the amount of drift for a specific period after calibration. This period is called Stability over Time – the interval for which the gauge maintains the accuracy specified in the Accuracy Statement. An easy way to inflate a product’s performance is to shorten this interval, or refrain from publishing it, thereby obscuring the accuracy degradation that occurs over time. While shorter periods and more frequent calibration may be acceptable for some applications, repeated calibrations should be factored into the total cost of ownership. In cases where Stability over Time is not part of the Accuracy Statement, asking the manufacturer about the “one-year accuracy” of a device will provide a basis for comparison to other devices.

Pressure Range

Inside the operating pressure range, the device retains its stated accuracy. Outside this range – either higher or lower – readings have an unknown error. Operating a device outside its pressure range can also lead to gauge damage.

Some devices warn users against taking readings outside the pressure range, with a flashing display or a blinking indicator. In extreme cases, where damage occurs, the gauge prevents the user from taking a reading at all.

In other devices, sensor damage is not apparent. These products continue reporting incorrect readings without any warning. This is especially common in analog gauges, which are sensitive to over-pressure and have no self-diagnostics to check for damage.

Some products with piezo-resistive silicon sensors can handle extreme over-pressure several times greater than their maximum rating. This feature is important if the potential exists for water hammer or other extreme over-pressure conditions.

All Crystal products warn against over-pressure, contain sensor self-diagnostics, and feature high over-pressure capability.

Compensated Temperature Range

Some products specify a narrow compensated temperature range, but make allowance for a wider operating temperature range. This distinction is important because the compensated range indicates the temperatures between which the device corrects for temperature changes.

Many devices report excellent performance in a narrow band around room temperature, with a small adder for every degree of temperature outside that band. While this adder may seem insignificant, it can quickly overwhelm the base specification at working temperatures users commonly experience. See our explanation of temperature effects for more details.
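A quick calculation shows how fast a per-degree adder grows. The figures below are hypothetical, chosen only to illustrate the effect:

```python
def accuracy_with_temp_adder(base_pct_fs, adder_pct_per_degC,
                             band_halfwidth_C, operating_temp_C,
                             reference_temp_C=21.0):
    """Total error for a spec that applies only within a narrow band
    around a reference temperature, with a per-degree adder outside it.
    All figures are hypothetical, in % of full scale."""
    deviation = abs(operating_temp_C - reference_temp_C)
    excess = max(0.0, deviation - band_halfwidth_C)
    return base_pct_fs + excess * adder_pct_per_degC

# A hypothetical gauge: +/-0.05% FS within 21 +/- 3 C, plus
# 0.01% FS for each degree C outside that band. At 40 C the
# adder alone contributes 0.16% FS -- over three times the base spec.
print(f"+/-{accuracy_with_temp_adder(0.05, 0.01, 3.0, 40.0):.2f}% FS at 40 C")
```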

% of Full Scale vs. % of Reading

Pressure measurement devices are commonly specified as percent of full scale or percent of reading, and the difference is significant. If an accuracy statement simply names a percentage (e.g., 0.1 percent) without qualification, it is normally specifying a percent-of-full-scale device. See our explanation of % of reading vs. % of full scale for more details.
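The practical difference is easy to see with numbers. In this sketch, two hypothetical 1000 psi gauges are both labeled "0.1%", but one is percent of full scale and the other percent of reading:

```python
def error_full_scale(full_scale, pct):
    """% of full scale: a fixed error band, the same at every reading."""
    return full_scale * pct / 100.0

def error_of_reading(reading, pct):
    """% of reading: the error band shrinks with the measured value."""
    return reading * pct / 100.0

# Two hypothetical 1000 psi gauges, both labeled "0.1%", at 100 psi:
print(error_full_scale(1000, 0.1))   # -> 1.0 psi, anywhere on the scale
print(error_of_reading(100, 0.1))    # -> 0.1 psi at this reading
```

At 10% of scale, the percent-of-full-scale gauge carries ten times the error of the percent-of-reading gauge, even though the two labels look identical.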

Factory Calibration

The original factory calibration documents how the gauge was performing when it left the factory. The quality of this calibration varies widely between products. The best include measurements at several pressures and temperatures, documented by a NIST-traceable certificate from an ISO 17025 accredited lab.

Resolution, Sensitivity, & Displayed Units

There are two resolution-related issues that can diminish the effective accuracy of a gauge.

First, the last displayed digit – called the least significant digit – may not change in increments of one on some gauges. It may instead change in increments of 2, 3, or even 5. This occurs when the analog-to-digital converter has inadequate sensitivity, and is especially noticeable in finely graduated units – such as millimeters of mercury or metric scales like kPa.

Second, the resolution of the gauge must be adequate to display the accuracy of the gauge. For example, if a certain gauge claims an accuracy of ± 0.02 psi, then the gauge display must also have a sufficient number of digits to show changes of ± 0.02 psi. If the gauge lacks the resolution to display the advertised accuracy, the user should reduce the accuracy to match the resolution of the device.
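This derating rule can be sketched in a few lines. The helper below is hypothetical, simply taking the larger of the claimed accuracy and the display resolution as the practical limit:

```python
def usable_accuracy(claimed_accuracy, display_resolution):
    """If the display cannot resolve the claimed accuracy, the
    resolution becomes the practical accuracy limit.
    Both arguments are in the same units (e.g., psi)."""
    return max(claimed_accuracy, display_resolution)

# A gauge claiming +/-0.02 psi but displaying only 0.1 psi steps:
print(usable_accuracy(0.02, 0.1))   # -> 0.1 (resolution-limited)

# The same claim with a 0.01 psi display is fully usable:
print(usable_accuracy(0.02, 0.01))  # -> 0.02
```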

Conclusion 

A wide variety of issues exist related to the performance and accuracy of a pressure gauge. Above all, the most important consideration is to match the specifications of the gauge to its intended application. Installing a gauge with inadequate accuracy leads to flawed measurement data, while installing a gauge with excessively high accuracy increases the cost to purchase, calibrate, and maintain that gauge. While manufacturers usually make pertinent information available, the burden remains on the user to make educated decisions on their required accuracy and the devices they use.