Why do we need to measure ambient conditions?
Ambient conditions are the prevailing conditions of air temperature, the moisture content of the air (relative humidity), and the temperature at which condensation will occur (dew point). Most coating specifications have set requirements for monitoring and documenting results for surface and air temperature, relative humidity and dew point. These conditions are to be measured and recorded in the specific areas where surface preparation and coating application will occur, then compared to the specified ranges and/or the coating manufacturer’s restrictions listed on the product data sheet.
Most coating specifications require that the surface temperature be at least 5°F above the dew point temperature before and during coating application. While theoretically a surface temperature only slightly above the dew point temperature would preclude condensation, the 5°F safety factor accounts for instrument inaccuracies and changing or varying conditions.
You should not rely on prevailing conditions reported by a local weather service or found on the internet, as conditions at the project site and in the specific work area can vary considerably, and surface temperature will not be reported at all. Ambient conditions should be measured where the work will occur and recorded prior to start-up of operations and at 4-hour intervals thereafter, unless conditions appear to be changing, in which case more frequent checks may be required.
Using Instruments for Assessing Prevailing Conditions
Whirling (Sling) Psychrometer: When discussing the measurement of ambient conditions using a whirling psychrometer (ASTM E337, Standard Test Method for Measuring Humidity with a Psychrometer (the Measurement of Wet- and Dry-Bulb Temperatures)), you hear the terms wet bulb temperature and dry bulb temperature used on a regular basis, but how are these terms defined? Wet bulb temperature is an indication of the latent heat loss caused by water evaporating from a wetted sock or wick on the end of a bulb thermometer mounted in the psychrometer housing. While the instrument is whirled away from the body in 20-30 second increments, water evaporates from the wetted sock into the air, cooling the thermometer and causing a decrease in temperature. This process is repeated until two consecutive readings from the wet bulb thermometer are within 0.5°F of one another. The depression of the wet bulb is the calculated difference between the dry bulb (air) temperature and the stable wet bulb temperature. For example, a dry-bulb temperature of 70°F and a wet-bulb temperature of 60°F nets a difference of 10°F; this is known as the wet-bulb depression.
Psychrometric tables are used to look up the relative humidity and dew point temperature. First, choose the table of interest (relative humidity or dew point temperature), then select the table corresponding to the prevailing barometric pressure for the project's geographical location. Intersect the dry bulb (air) temperature with the difference between the dry and wet bulb temperatures, known as the depression of the wet bulb, to determine the relative humidity or dew point temperature. A separate thermometer is used to measure the temperature of the surfaces to be prepared and/or coated. The temperatures and the relative humidity can then be compared to the requirements listed in the specification to determine conformance.
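The psychrometric tables encode standard physics, and the lookup can be approximated in a few lines of code. The sketch below uses the Magnus approximation for saturation vapor pressure and the classic psychrometer equation with typical textbook coefficients; the function names are illustrative, and the coefficients are approximations, not the published table values, so the tables referenced in ASTM E337 remain the authoritative source.

```python
import math

def saturation_vp_kpa(temp_c):
    """Saturation vapor pressure (kPa) via the Magnus approximation."""
    return 0.6112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def rh_and_dew_point(dry_bulb_c, wet_bulb_c, pressure_kpa=101.325):
    """Estimate relative humidity (%) and dew point (deg C) from dry- and
    wet-bulb temperatures using the psychrometer equation
    e = es(Tw) - A * P * (Tdry - Twet), with A ~ 6.62e-4 per deg C
    (a common textbook value for an aspirated psychrometer)."""
    e = saturation_vp_kpa(wet_bulb_c) - 6.62e-4 * pressure_kpa * (dry_bulb_c - wet_bulb_c)
    rh = 100.0 * e / saturation_vp_kpa(dry_bulb_c)
    ln_ratio = math.log(e / 0.6112)           # invert Magnus to get dew point
    dew_point = 243.12 * ln_ratio / (17.62 - ln_ratio)
    return rh, dew_point

# The 70 deg F dry bulb / 60 deg F wet bulb example above (21.1 / 15.6 deg C)
rh, dp = rh_and_dew_point(21.1, 15.6)
print(f"RH ~ {rh:.0f}%, dew point ~ {dp:.1f} C ({dp * 9 / 5 + 32:.0f} F)")
```

For the 10°F wet-bulb depression in the example, this approximation lands near the mid-50s percent relative humidity and a dew point in the low 50s °F, consistent with what the standard sea-level tables report.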
Digital Psychrometer: The use of a digital psychrometer for assessing prevailing ambient conditions and surface temperature is a much simpler process compared to the use of a whirling psychrometer, psychrometric tables and a surface temperature thermometer. Most digital psychrometers will display the relative humidity, air temperature, surface temperature, dew point temperature and the difference (spread) between surface temperature and dew point temperature. Data are constantly updated and displayed simultaneously for easy recognition. This eliminates the need to use psychrometric tables to determine the relative humidity and dew point temperature, as well as any need for a separate surface temperature thermometer. The data can be auto-logged and uploaded to cloud-based software or downloaded to a device using USB or Bluetooth®.
Which Method Wins the Duel?
Whirling psychrometers were first invented in the 1600s, and the US Weather Bureau Psychrometric Tables were first published in 1941. So, one may conclude that newer technology wins the duel. Not so fast! Digital psychrometers also have limitations, and without user knowledge they too can produce erroneous data.
While having all the ambient conditions and the temperature of the surface readily displayed is a great benefit, there are important steps that must be followed when using these electronic instruments. It is very important that the digital psychrometer be allowed to ‘stabilize’ to the atmospheric conditions where the work is occurring. This can take anywhere from 20 to 30 minutes; that is, accurate readings are not possible immediately after departing an air-conditioned vehicle and walking onto the jobsite. Additionally, the humidity sensor used by most instrument manufacturers has a tendency to dry out during periods of inactivity, resulting in falsely low humidity readings. To re-saturate the sensor, the manufacturers recommend placing the probe of the digital psychrometer in a re-sealable plastic bag or sealed container with a damp (not wet) cotton cloth for 24 hours. This will extend the life of the sensor and help ensure representative readings. Most instrument manufacturers also recommend annual calibration.
Whirling psychrometers also have their limitations and potential for misuse. These instruments cannot be used in freezing temperatures, and proper use (thorough saturation of the wick with deionized water and reading the wet-bulb temperature after several 20-30 second increments of whirling until the wet bulb temperature stabilizes) is very important.
Despite the availability and apparent convenience of the digital psychrometers, many quality control and quality assurance personnel still rely on older “tried and true” technology. Both will work well when used properly.
Wet film thickness, or WFT, is the measured thickness of an applied liquid-based coating while it is still wet. A wet film thickness gage should be used by the applicator as the coating is being applied to ensure that the measurement is representative of the calculated wet film before significant solvent evaporation occurs. Even slight delays in taking wet film thickness measurements can result in falsely low readings, since the solvents may have evaporated from the film before the measurements are acquired, which is why a WFT gage is largely regarded as an applicator's tool rather than an inspector's gage.
Why is WFT important? Measuring the WFT of a coating enables the applicator to adjust the spray gun speed, number of spray passes and to make spray gun adjustments (when possible) or select other spray tips to apply the correct amount of coating to achieve the specified dry film thickness.
What is the relationship between WFT and production? Time is money. The need to apply a build-up coat or even worse the need to reduce thickness (by sanding) can impede production and reduce profitability. While under-thickness can frequently be corrected by adding more coating, excessive thickness can cause solvent entrapment, runs and sags and, if uncorrected can lead to adhesion problems. Each of these events can negatively impact a project schedule. Perhaps more importantly, the performance properties of most coatings are based on achieving the specified dry coating thickness, and applying the correct wet film thickness can help to meet this requirement. Measuring wet film thickness during application immediately identifies the need for in-process adjustments by the applicator.
How is the WFT calculated? The coating manufacturer may indicate the range of wet film thickness to be applied to achieve the desired dry film on the product data sheet (PDS). However, many manufacturers only list the recommended DFT, since the amount of thinner that will be added by the contractor is unknown and that amount affects the target WFT. Specifications typically list the desired end-result (the DFT) and not the means/methods of achieving it (the WFT). The wet film thickness target (or range) can be calculated. The equations for calculating the WFT, both with and without thinner addition, are shown. The dry film thickness range (per coat) is extracted from the specification or the coating manufacturer’s PDS (whichever is the governing document), and the volume solids content is listed on the PDS.
What are volume solids in paint? The volume solids content of a coating is an expression of the film-forming ingredients, or the material left behind after the solvents have evaporated from the applied coating. On a very basic level paint contains solvent, resin, pigments, and additives. The volume solids content is the percentage of the formulation that is non-volatile and will remain on the surface after the coating dries and cures.
Wet Film Thickness (WFT) = Dry Film Thickness (DFT) ÷ Percent Solids by Volume (expressed as a decimal)
Specified Dry Film Thickness = 3 – 5 mils
Volume Solids Content = 65% (0.65)
WFT = 3 ÷ 0.65 = 4.6 mils; 5 ÷ 0.65 = 7.7 mils
Based on this example, provided the applicator applies between approximately 4.6 and 7.7 mils WFT, the specified DFT of 3-5 mils should be achieved.
The equation for calculating the target WFT with thinner added in the shop or field requires that the volume solids content of the coating (as manufactured) be adjusted based on the volume of thinner added, as a percentage of the total volume of coating.
WFT = DFT ÷ [Volume solids content ÷ (100% + % of thinner added)]
Specified Dry Film Thickness = 3 – 5 mils
Volume Solids = 65%
Thinner by Volume = 20% (e.g. added 2 gallons of thinner to 10 gallons of mixed coating)
Step 1: (0.65 ÷ 1.20) = 0.54, or 54% adjusted volume solids based on 20% thinner addition
Step 2: 3 ÷ (0.54) = 5.6 mils 5 ÷ (0.54) = 9.3 mils
Based on this example, provided the applicator applies between approximately 5.6 and 9.3 mils WFT, the specified DFT of 3-5 mils should be achieved, even with 20% thinner, which is part of the wet film but not the dry film.
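Both worked examples above reduce to a single small calculation, sketched below in Python (the function name and argument conventions are illustrative, not from a standard):

```python
def target_wft(dft_mils, volume_solids, thinner_fraction=0.0):
    """Target wet film thickness (mils) for a desired dry film thickness.

    volume_solids: as-manufactured solids by volume as a decimal (0.65 = 65%).
    thinner_fraction: thinner added as a fraction of the mixed coating volume
    (0.20 = 2 gallons of thinner per 10 gallons of mixed coating).
    """
    adjusted_solids = volume_solids / (1.0 + thinner_fraction)
    return dft_mils / adjusted_solids

# Unthinned: 3-5 mils DFT at 65% volume solids
print(round(target_wft(3, 0.65), 1), round(target_wft(5, 0.65), 1))  # 4.6 7.7

# With 20% thinner added (adjusted solids = 0.65 / 1.20, about 54%)
print(round(target_wft(3, 0.65, 0.20), 1), round(target_wft(5, 0.65, 0.20), 1))
```

Note that carrying full precision gives 5.5 and 9.2 mils for the thinned case; the worked example above rounds the adjusted solids to 0.54 first, which yields 5.6 and 9.3. Either way, the practical target range is the same.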
How is WFT measured? Wet film thickness gages are used to monitor the thickness of the applied wet coating to achieve a specified dry film thickness. They measure all types of wet organic coatings, such as paint, varnish, and lacquer on flat or curved, smooth surfaces. The units of measure for these gages are typically micrometers (microns) or mils. Wet film thickness is measured according to ASTM D4414, Standard Practice for Measurement of Wet Film Thickness by Notch Gages. Pictured are gages commonly used to measure WFT.
How is a WFT gage used? Using a WFT gage is quite simple. First, verify that the notches (teeth) are clean and free of any dry paint. Immediately after application, insert the end of the gage perpendicularly into the wet coating. The two end teeth will penetrate down to contact the underlying surface and will be wetted with coating. Withdraw the gage and read the highest wetted step. If none of the numbered notches contain wet coating, rotate the head of the gage to a lower WFT range and remeasure. If all of the numbered notches contain wet coating, rotate the head of the gage to a higher WFT range and remeasure. A diagram describing the proper use of a WFT gage is shown. The teeth of the gage must be wiped off after every individual reading.
What units are used when measuring WFT? Typical units of measure are mils and microns. A mil is a unit of length equal to one thousandth (10⁻³) of an inch (0.0254 millimeter). A micron is a metric unit of measure for length equal to 0.001 mm, or about 0.000039 inch. Its symbol is µm. 25.4 microns is equal to 1 mil.
Conclusion: Calculating and properly measuring wet film thickness can reduce rework, improve productivity, and help ensure a properly applied coating. The proper use of a WFT gage by the applicator is critical to achieving the desired dry film thickness.
Coating thickness measurement is one of the most common quality assessments made during industrial coating applications. SSPC-PA 2, Procedure for Determining Conformance to Dry Coating Thickness Requirements is frequently referenced in coating specifications. As SSPC-PA 2 has evolved over the past four decades, a number of procedures and measurement frequencies have been referenced in both the mandatory portions of the standard and in the non-mandatory appendices. While the measurement frequencies were never intended to represent a statistical sampling process, it is helpful to understand the statistical implications of the measurement process, and to know what coating thickness variability is reasonable. This brief article explores how scanning probe technology can help to acquire a larger number of measurements (in a relatively short period of time) to better assess the consistency of the applied coating thickness, particularly on larger, more complex structures.
Scanning Illustration, courtesy of Elcometer Ltd.
There are two industry standards that are widely specified for measurement of coating thickness. These include ASTM D7091, Standard Practice for Nondestructive Measurement of Dry Film Thickness of Nonmagnetic Coatings Applied to Ferrous Metals and Nonmagnetic, Nonconductive Coatings Applied to Non-Ferrous Metals and SSPC-PA 2, Procedure for Determining Conformance to Dry Coating Thickness Requirements. The ASTM standard focuses on gage use, while the SSPC standard focuses on the frequency and acceptability of coating thickness measurements. The standards are designed to be used in conjunction with one another. In 2012, all references to measurement frequency were removed from the ASTM standard so that it did not conflict with SSPC-PA 2.
The frequency of coating thickness measurements is defined by gage readings, spot measurements and
FXS Probe designed to withstand rough surfaces, courtesy of DeFelsko Corporation
area measurements. A minimum of three (3) gage readings is obtained in a 1.5” diameter circle and averaged to create a spot measurement. Five spot measurements are obtained in a 100-square-foot area. The number of areas to be measured is determined by the size of the coated area. If less than 300 square feet are coated (i.e., during a work shift), then each 100-square-foot area is measured (maximum of three areas, each composed of five spot measurements with a minimum of three gage readings in each spot). If the size of the coated area is between 300 and 1,000 square feet, three 100-square-foot areas are selected and measured. If the size of the coated area exceeds 1,000 square feet, three areas are measured in the first 1,000 square feet, with one additional area measured in each additional 1,000 square feet, or portion thereof. For example, if the size of the coated area is 4,500 square feet, seven 100-square-foot areas are measured (a total of 35 spot measurements and a minimum of 105 gage readings).
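The counting rules above can be sketched in a few lines of code. This is a reading of the frequency described in this article, not a substitute for the standard itself; SSPC-PA 2 remains the governing document, and the function name is illustrative.

```python
import math

def pa2_measurement_counts(coated_sq_ft):
    """Return (areas, spot measurements, minimum gage readings) for a
    coated area, per the SSPC-PA 2 frequency described above:
    - up to 300 sq ft: each 100-sq-ft area (or portion), max of three
    - 300 to 1,000 sq ft: three areas
    - over 1,000 sq ft: three areas in the first 1,000 sq ft, plus one
      per additional 1,000 sq ft or portion thereof."""
    if coated_sq_ft <= 300:
        areas = min(3, math.ceil(coated_sq_ft / 100))
    elif coated_sq_ft <= 1000:
        areas = 3
    else:
        areas = 3 + math.ceil((coated_sq_ft - 1000) / 1000)
    spots = areas * 5                 # five spot measurements per area
    readings_min = spots * 3          # at least three gage readings per spot
    return areas, spots, readings_min

print(pa2_measurement_counts(4500))   # (7, 35, 105), matching the example above
```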
Other measurement frequencies are included in non-mandatory appendices to SSPC-PA 2, including Appendix 2 & 3 for steel beams, Appendix 4 & 5 for test panels, Appendix 6 for measurement of coating thickness along edges and Appendix 7 for pipe exteriors.
Gauge display containing scanned data, courtesy of Elcometer Ltd.
The number of gage readings, spot measurements and area measurements prescribed by SSPC-PA 2 was never intended to be based on a statistical process. Rather, the frequency of measurement was based on what was reasonable in the shop or field to adequately characterize the thickness of the coating without unduly impeding production. Consider the impact of checking the thickness of a previous day’s application to 4,000 square feet of steel if every 100 square feet needed to be measured: that’s 40 areas, 200 spot measurements and a minimum of 600 gage readings. And that frequency may not be considered a statistically significant sampling either. Further, obtaining additional measurements above the number prescribed by SSPC-PA 2 (when invoked by contract) may be considered “over inspection.”
Using Scanning Technology to Acquire Higher Volumes of Data
Several manufacturers of electronic coating thickness gages have incorporated “scanning probe” technology and the associated support software into the data acquisition process. This newer technology enables the gage operator to obtain large sets of coating thickness data in a relatively short time frame. For example, on an actual bridge recoating project, a certified coatings inspector obtained 12 batches of readings (nearly 600 readings) in just under 8 minutes (measurement time only) on bridge girders across four panel points. So it may be possible to obtain a more representative sampling of the coated area without impeding production. However, there are concerns with acquiring such large data sets, such as management of the data, handling outliers, determining the statistical significance of the data (i.e., what is an acceptable standard deviation or coefficient of variation), applicability of the Coating Thickness Restriction Levels 1-5 in SSPC-PA 2, etc. The scanning probe set-up on the gage itself is relatively easy to perform, and the software is capable of handling the large volume of data coming into the gages.
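One way to summarize a large scanned data set is with the mean, standard deviation, and coefficient of variation, which the Python standard library handles directly. The readings below are hypothetical, and SSPC-PA 2 does not currently define acceptance criteria for these statistics, so this is a sketch of the analysis, not a conformance method.

```python
import statistics

# Hypothetical batch of scanned dry film thickness readings, in mils
readings = [5.2, 5.6, 4.9, 6.1, 5.4, 5.8, 5.1, 6.3, 5.5, 5.0,
            5.7, 5.3, 6.0, 4.8, 5.9, 5.2, 5.6, 5.4, 6.2, 5.1]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)      # sample standard deviation
cov = 100.0 * stdev / mean              # coefficient of variation, in percent

print(f"n={len(readings)}  mean={mean:.2f} mils  "
      f"stdev={stdev:.2f} mils  CoV={cov:.1f}%")
```

A low coefficient of variation suggests a uniformly applied film; what threshold constitutes "acceptable" uniformity is exactly the open question the committee work described below would need to address.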
The SSPC Committee on Dry Film Thickness Measurement may consider adding a 10th non-mandatory appendix to SSPC-PA 2 to give the specifier the option of acquiring a much larger data set of coating thickness measurements without impeding production. In this manner, an owner may gain greater confidence regarding the uniformity and consistency of the applied coating film.
Surface profile is defined as a measurement of the maximum peak-to-valley depth generated by abrasive blast cleaning and impact-type power tools. These operations effectively increase the surface area and provide an “anchor” for the applied coating system. The surface profile depth must be compatible with the total coating system thickness; typically, the thicker the coating system, the deeper the surface profile. For example, a 3-coat 15 mil system may require a 2-3 mil surface profile, while a 40-mil coating system may require a 4-5 mil surface profile. The maximum achievable surface profile is generally 6-7 mils (in steel) using a G10 or G12 abrasive.
Abrasive blast cleaned and power tool cleaned steel surfaces are routinely checked to verify the specified surface profile has been achieved. Industry standards such as ASTM D4417, Standard Test Methods for Field Measurement of Surface Profile of Blast Cleaned Steel, NACE International SP0287, Standard Practice for Field Measurement of Surface Profile of Abrasive Blast-Cleaned Steel Surfaces Using Replica Tape, and SSPC: The Society for Protective Coatings PA 17, Procedure for Determining Conformance to Steel Profile/Surface Roughness/Peak Count Requirements describe the procedures for performing these measurements, as well as the recommended frequency of measurements and acceptability of the values. However, the standards assume smooth steel was prepared; little is written about measurement of surface profile on rough or irregular surfaces such as pitted steel, weathering steel, or cast iron surfaces. This brief article describes a few methods that may be considered for measuring surface profile on these types of irregular surfaces.
Many steel structures that have been in service for relatively long periods of time may have irregular, rough surfaces due to corrosion. Often this results in steel thickness loss (section loss) and may even require modification or replacement. But when it is determined that not enough metal loss has occurred to warrant repairs to the steel substrate, the applicator is faced with complying with contract requirements for cleanliness and profile generation on surfaces whose roughness often exceeds the surface profile requirements of the contract. Likewise, other steel surfaces such as cast iron and weathering steel (ASTM A588, A242, A606-4, A847, and A709-50W) typically have a rougher surface than abrasive blast cleaned ASTM A36 steel after weathering from atmospheric exposure, and may yield a higher surface profile than the specification allows, with an ensuing nonconformance.
Measuring surface profile on rough or pitted surfaces can often lead to false high readings, since the measurements are indicative of the depth of the pits or the inherent roughness of the steel versus the surface profile generated by the abrasive or impact-type power tool itself. This begs the question, “how do you verify the surface profile on these types of surfaces with any degree of accuracy?” There are a few alternatives that can be considered; however, they should be discussed, and an approach negotiated during the preconstruction meeting rather than during in-process measurement, when possible.
The first alternative is to obtain measurements in an adjacent area (that is not rough) using whichever method has been selected/specified (depth micrometer or replica tape). However, this may not be feasible when the pitting or rough steel is uniform. Of the three methods listed in the referenced standards, the depth micrometer (Method B in ASTM D4417) is generally considered optimum in these situations because a measurement of a single valley can be obtained, and the upper range of the instrument is higher (20 mils) than the maximum value that can be reasonably measured using replica tape (5 mils). Multiple measurements (a minimum of ten) are made in an area and the average surface profile is calculated.
Another option is to rely on a visual comparator and a reference disc. The comparator is a lighted magnifier (typically 5-10x power) that enables the user to closely examine the surface roughness and compare it to replica discs containing varying degrees of roughness (5 segments per disc). The appropriate reference disc that represents the abrasive employed (grit/slag or shot) is placed on the prepared steel and the user selects the segment that most closely matches the surface profile of the steel.
A third option is to measure the surface profile on a companion piece of steel such as a test plate that is abrasive blast cleaned with the same abrasive and pressure being used on the rough steel. This procedure has been accepted in the nuclear power industry for many years when painting cast iron motor housings.
Lastly, the abrasive manufacturer can be consulted regarding the typical surface profile values produced by the type and size abrasive being used. Some abrasive manufacturers can provide a Certificate of Conformance that states the measured range for a given lot under laboratory conditions. Note that surface hardness greatly influences surface profile depth, so the abrasive manufacturer’s data may be misleading.
The important point to remember is that when the surface is rough or irregular, one or more of these methods can be used to more accurately determine the surface profile depth. Further, rough surfaces may require the application of a thicker coat, or additional coating layers to help ensure corrosion protection. The coating manufacturer should be engaged when making these decisions.
According to SSPC-SP 11 and SP 15, verification of the minimum 1-mil surface profile created by power tools can only be measured using Method B (depth micrometer) described in the ASTM D4417 standard.
SSPC-PA 17-2012 addresses measurement of surface profile on pitted steel in Appendix C (Section C2.5.3).
ASTM D4417 instructs the operator to report the maximum of the ten measurements; however, this is not recommended on rough/pitted surfaces. The standard does allow averaging of the readings.