This is Part 2 of a 2-part series describing the instrumentation used to inspect the quality of cleaning and painting. Part 1 described the instruments used for determining the quality of cleaning and painting. Part 2 addresses moisture detection. The moisture content of the concrete should be determined prior to painting. In the author’s experience, moisture within the substrate is a leading cause of coating failures on concrete. If the moisture is elevated, the source(s) should be identified and corrected before paint is applied. Once the paint is applied, the continuity of the film should also be determined to confirm that the applied coating is free of flaws that would allow air, and therefore moisture, to pass through the film and dampen the substrate in the future.
A number of instruments and techniques are used to determine the moisture content of concrete substrates. Some (calcium chloride and RH probes) are primarily used on floors, while others (radio frequency, conductivity, electrical impedance, and plastic sheet) are suitable for any concrete substrate. An instrument used to determine that the coating is free of flaws that could lead to future wetting of the substrate is based on creating a pressure differential across the film to locate defects. All of the aforementioned tests and methods are described in this article.
Radio Frequency
The instrument described below utilizes radio frequency to assess and monitor the relative moisture content in concrete to a depth of ~1 inch. It provides readings on a relative scale of 0 to 999, displayed as both a color and a number. The green zone is from 0 to 145 units and signifies “safe air-dry conditions.” The yellow zone is between 146 and 230 units and signifies “moisture levels are higher than normal but not critical; further investigation is recommended.” The red zone is greater than 230 units and represents “excessive moisture levels.” See Photo 1.
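The zone boundaries lend themselves to a simple lookup. As a rough illustration (this is not the manufacturer’s software, just the classification described above expressed in code):

```python
def rf_zone(reading: int) -> str:
    """Classify a reading from the 0-999 relative scale into the color zones."""
    if not 0 <= reading <= 999:
        raise ValueError("reading must fall on the 0-999 relative scale")
    if reading <= 145:
        return "green: safe air-dry conditions"
    if reading <= 230:
        return "yellow: higher than normal; further investigation recommended"
    return "red: excessive moisture levels"

print(rf_zone(120))  # green: safe air-dry conditions
```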
Step 1 – Press the top button to turn the gage on and set the instrument to the prevailing weather conditions by pressing the lower “arrow” button for 3 seconds until the word “nul” shows. Nul will flash and when it disappears, the gage is ready for use under the current ambient conditions. If the instrument is being used on the exterior of a building, but you move to the interior, repeat this step when inside the building.
Step 2 – Hold the instrument (gage) flush to the concrete substrate with your fingers on the black plastic perimeter of the gage body. Do not allow your fingers to extend to the front of the gage beyond the black. The gage requires a firm 2-point contact to take a reading (the front nose of the gage and the protruding rounded base).
Step 3 – The instrument will give an audible signal when the reading stabilizes. Record the value from the digital display and note the color.
Electrical Resistance (Conductivity)
The instrument described below utilizes conductivity to determine moisture content. Two contact pins on the end of the instrument are pushed against the surface to measure the conductivity (relative moisture content) of the material between the pins. Masonry nails can also be driven into the surface about ¼ inch in depth to assess moisture content below the surface. The pins of the probe are touched to the heads of the nails.
Depending on the model, the instrument uses either an analog or digital scale. When using the analog instrument, readings can be taken from 2 scales. The scale for concrete is a relative scale from 0 to 100. See Photo 2.
Caution: When using the analog scale, many users inadvertently record readings from the “wood” scale, which is a percentage. The percentage on the wood scale has no relationship to the moisture in concrete. The concrete scale can be interpreted as follows:
- Green <85 units (<2% moisture content)
- Yellow 85 to 95 units (2% to 4% moisture content)
- Red >95 units (>4% moisture content)
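For readers who log readings electronically, the interpretation above can be sketched in a few lines of Python (an illustration only; the gage itself provides no such output):

```python
def concrete_scale_zone(units: float) -> str:
    """Interpret a reading from the analog 'reference' (concrete) scale, 0-100 units."""
    if units < 85:
        return "green: <2% moisture content"
    if units <= 95:
        return "yellow: 2% to 4% moisture content"
    return "red: >4% moisture content"

print(concrete_scale_zone(90))  # yellow: 2% to 4% moisture content
```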
Step 1 – Turn the instrument on and check the calibration. For the analog instrument, press the button with the “√.” The needle should read 20 on the wood scale. For the digital model, press the Read Button (a moisture droplet insignia is printed on the button) and the Calibration (check) button simultaneously. It should display 12% (+/- 0.2). If the readings are not in the above ranges, change the battery.
Step 2 – For the digital instrument, press the scale button (*) and set the scale to “2” for concrete. For the analog instrument, nothing needs to be set, but make sure you are using the “reference” scale for concrete. A very common mistake is to read the “wood” scale, which will provide incorrect results.
Step 3 – Press the probe firmly against the surface, making certain that both pins are in intimate contact with the concrete.
Step 4 – For the analog instrument, press the button with the “moisture droplet” insignia and record the number from the “reference” scale. For the digital instrument, push the “moisture droplet” button and a digital reading will be displayed.
Step 5 – To obtain readings below the surface, drive concrete nails into the surface and hold the pins of the instrument probe on the nail heads (1 pin on each nail head).
Electrical Impedance
The instrument described below utilizes electrical impedance to determine moisture content to a depth of ~1 inch. The electrical impedance is measured by creating a low frequency electrical field between the electrodes on the bottom of the unit. The moisture readings are displayed on a moving coil meter ranging from 0% to 6%. See Photo 3.
Step 1 – Press the on/off button to power up the instrument. The lower LED will flash. If both lights flash, replace the battery.
Step 2 – Hold the instrument flush to the concrete substrate. All of the spring-loaded feet of the gage should be in full contact with the surface.
Step 3 – Read the percentage from the top 0 to 6% scale.
Plastic Sheet Test
The plastic sheet test is a qualitative method for determining the presence of moisture within the substrate. The test is addressed in ASTM D4263-83 (2012), Standard Test Method for Indicating Moisture in Concrete by the Plastic Sheet Method.
Step 1 – Cut a sheet of clear plastic approximately 18 in x 18 in size. Note that when used on concrete block (CMU), in order to get a good seal with tape in Step 2, it may be necessary to cut the plastic in the shape of the block(s) so that the outside perimeter falls onto mortar joints.
Step 2 – Firmly tape the perimeter of the plastic to the surface to create a continuous seal.
Step 3 – Allow the plastic to remain in place for a minimum of 16 hours.
Step 4 – At the end of the exposure time, examine the underside of the sheet and surface of the concrete for the presence of moisture. See Photo 4.
Calcium Chloride Test
This method is addressed in ASTM F1869-11, Standard Test Method for Measuring Moisture Vapor Emission Rate of Concrete Subfloor Using Anhydrous Calcium Chloride. The test requires exposing the concrete slab to anhydrous calcium chloride for a given length of time (Photo 5). The results are expressed as the moisture vapor emission rate (MVER), reported in pounds of moisture over a 1,000 square foot area during a 24-hour period.
The testing should be conducted at the same temperature and humidity that is expected during normal use. If this is not possible, the ambient conditions should be controlled to 75°F ± 10°F and 50% ± 10% relative humidity for 48 hours prior to testing and during the test. An exception to testing at the expected service temperature/relative humidity involves floors that operate at temperature or humidity extremes (e.g., cold storage rooms). In these cases, the temperature/humidity criteria listed above should be maintained.
ASTM F1869 recommends a test frequency of 3 locations for the first 1,000 square feet, with an additional test location for each 1,000 square feet of floor area, or fraction thereof.
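The recommended test count is easy to compute. A minimal sketch of the ASTM F1869 frequency rule:

```python
import math

def f1869_test_locations(floor_area_sqft: float) -> int:
    """3 locations for the first 1,000 sq ft, plus one for each
    additional 1,000 sq ft of floor area or fraction thereof."""
    if floor_area_sqft <= 1000:
        return 3
    return 3 + math.ceil((floor_area_sqft - 1000) / 1000)

print(f1869_test_locations(2500))  # 5 locations
```

The same frequency rule applies to the in-situ RH probes of ASTM F2170, discussed later.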
Step 1 – Lightly abrade a 20 in x 20 in section of the concrete surface by grinding to produce a slight profile equal to ICRI CSP-1 to CSP-2 and to remove the thin layer of finished concrete without exposing large aggregate. If floor coverings or coatings were removed from the test area, the concrete must be exposed for 24 hours after grinding before initiating the test. If the concrete was not covered, or the coverings have been removed for more than 30 days, the 24-hour waiting period is not required and testing can begin immediately after grinding and cleanup.
Step 2 – Remove all dust from the surface.
Step 3 – Weigh the sealed plastic container containing the anhydrous calcium chloride to the nearest 0.1 gram.
Step 4 – Place the container on the prepared concrete and carefully remove the tape and lid to expose the calcium chloride. Store the lid and tape for reuse when the test is complete.
Step 5 – Cover the container of calcium chloride with the transparent dome provided by the manufacturer. Press firmly to completely seal the gasket material to the concrete around the perimeter of the dome.
Step 6 – Allow the container to remain in place for no less than 60 hours, nor longer than 72 hours.
Step 7 – At the completion of the test period, place the lid on the container and firmly tape it in place with the same tape that was originally on the container.
Step 8 – Reweigh the container using the same scale used for the pre-test weighing.
Step 9 – Insert the pre and post weights and exposure time into the formula supplied with the test kit to determine the MVER, reported as pounds/1000 sq ft/24 hours.
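The formula supplied with the kit governs. As a hedged illustration of the underlying arithmetic, the MVER is simply the measured weight gain converted to pounds and normalized to 1,000 square feet and 24 hours; the contact area enclosed by the dome is taken from the kit documentation:

```python
GRAMS_PER_POUND = 453.592

def mver(pre_weight_g: float, post_weight_g: float,
         exposure_hours: float, contact_area_sqft: float) -> float:
    """Moisture vapor emission rate in lb / 1,000 sq ft / 24 hours."""
    gain_lb = (post_weight_g - pre_weight_g) / GRAMS_PER_POUND
    return gain_lb / (contact_area_sqft * exposure_hours) * 1000 * 24

# Hypothetical example: a 5 g gain over 66 hours with a 0.5 sq ft dome area
print(round(mver(100.0, 105.0, 66, 0.5), 1))  # ~8.0 lb/1000 sq ft/24 h
```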
Relative Humidity Probes
This method is addressed in ASTM F2170-11, Standard Test Method for Determining Relative Humidity in Concrete Floor Slabs Using in situ Probes. This is a destructive test that requires drilling small holes in the slab, inserting hollow sleeves, and, after a given waiting period, inserting probes into the sleeves to determine the relative humidity. The results are directly displayed as relative humidity; no conversions are needed (see Photo 6).
The slab should be at service temperature and the occupied air space above the floor should be at the service temperature and relative humidity for at least 48 hours prior to testing. The hole depth for the probes is based on a percentage of the slab thickness. If the slab is drying from the top only (e.g., slab on grade, or slab on a metal deck), the hole is drilled to a depth of 40% of the total thickness of the slab; for a 4 inch thick slab, the hole depth is 1.6 inches. If the slab dries from both the top and bottom (e.g., an elevated reinforced slab not on a metal deck), the hole is drilled to a depth of 20% of the total thickness of the slab; for a 4 inch thick slab, the hole depth is 0.8 inches.
ASTM F2170 recommends a test frequency of 3 locations for the first 1,000 square feet, with an additional location for each 1,000 square feet of floor area, or fraction thereof. For on-grade and below-grade slabs, one location is to be within 3 feet of each exterior wall.
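The hole-depth rule is straightforward arithmetic, sketched below (slab thickness in inches):

```python
def probe_hole_depth_in(slab_thickness_in: float, dries_both_sides: bool) -> float:
    """ASTM F2170 hole depth: 40% of slab thickness for one-sided drying,
    20% for slabs that dry from both the top and bottom."""
    return slab_thickness_in * (0.2 if dries_both_sides else 0.4)

print(probe_hole_depth_in(4.0, dries_both_sides=False))  # 1.6
print(probe_hole_depth_in(4.0, dries_both_sides=True))   # 0.8
```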
Step 1 – Drill the hole using a hammer drill and drill bit (dry). The diameter of the hole is established by the manufacturer.
Step 2 – Vacuum the dust from the hole, use a round wire brush sized to the hole diameter to thoroughly scour the hole to remove any loose material and vacuum again.
Step 3 – Some manufacturers require the insertion of a sleeve into the hole; others use a pre-installed sensor. In either case, the hole containing the sleeve or sensor is capped and allowed to remain undisturbed for 72 hours prior to testing to achieve equilibrium.
Step 4 – Follow the manufacturer’s instructions to obtain a reading. First allow the instrument to reach equilibrium in the test environment. Depending on the instrument being used, a probe attached to an RH gage is inserted into the sleeved hole to obtain a reading, or a reader is attached to the pre-installed sensor.
Paint Film Continuity – Air Leak Detector
ASTM E1186, Air Leakage Site Detection in Building Envelopes and Air Barrier Systems, describes a number of methods that are used to determine whether air barriers installed on buildings are effective. Some of the methods test the entire enclosure, while others test specific locations, such as coatings, joints, penetrations, and junctions. The instrument described below (Photo 7) is used to test specific locations on a structure to determine if the surface is properly sealed. While the instrument can be used during commissioning, it should be used during construction to confirm that the installation practices are creating an effective non-leaking air barrier. Random areas are tested to confirm that the paint application techniques are suitable for creating a continuous film.
Step 1 – Adjust the leak detector to the specified pressure differential limit and rate of depressurization. Common test parameters are 500 Pa and 25 Pa/sec, respectively.
Step 2 – Clean the area around the detail to be tested.
Step 3 – Apply a specially formulated liquid test solution to the surface.
Step 4 – Place the test chamber over the test area.
Step 5 – Start the instrument and carefully observe the test area for the formation of bubbles. Bubbles indicate the presence of a leak and poor film continuity. If no bubbles are present, the test area is free of air leaks.
Step 6 – Clean the test area to remove the test solution.
While inspections to confirm the quality of surface preparation and paint application are the most visible and obvious part of the painting process, an equally important aspect of the process involves the detection of moisture within the substrate and continuity of the film. These aspects are often “invisible” and therefore not fully appreciated. Common methods for detecting moisture and film continuity have been discussed in this article. See Part 1 of this Series for a discussion of instruments used for cleaning and painting.
When you spend money on a product or service, you expect quality, regardless of the cost. If you purchase the most inexpensive Chevrolet that is made, you still expect quality. While the braking system in the Chevrolet may be less sophisticated than in a Mercedes, the brakes had better work and exhibit quality commensurate with the vehicle’s design. You expect the parts and installation to meet all of the standards imposed by the manufacturer for that class of vehicle.
Expectations of quality for cleaning and painting commercial buildings are no different. Owners expect the paint in the can to meet the quality standards established by the manufacturer, and the installation to meet the requirements of the specification, whether it involves a sophisticated fluorourethane on a highly visible entrance awning, or a low cost acrylic on a back wall that is hidden from view.
But how is the quality of cleaning and painting determined? For many, it simply involves rubbing a hand across the surface after cleaning is finished and after the application of each coat. Whatever rubbing the surface accomplishes, the hand cannot determine whether the levels of moisture within the substrate are acceptable, whether the ambient conditions and surface temperature are suitable, or whether each coat is applied to the proper thickness. When coatings are required to resist penetration from wind-driven rain or serve as an air barrier, verification of proper workmanship at each stage of the installation is critical.
Many standards and instruments are available for verifying the quality of cleaning and painting. Not only must the appropriate instruments be selected, but they must be used properly. This article describes the operation of some of the common instruments used to evaluate the quality of cleaning and painting. Part 2 of this series addresses instruments and methods used for the detection of moisture.
Surface Cleanliness – Steel
SSPC: The Society for Protective Coatings (SSPC) has published standards that describe different degrees of cleaning when using hand or power tools, dry and wet abrasive blast cleaning, and water jetting. In addition to the written words, photographic guides are also available to depict the appearance of the different grades of cleaning. Some of the SSPC work was done in cooperation with NACE International (NACE).
The visual guides that depict surface cleanliness are (Photo 1):
- SSPC-VIS 1, Guide and Reference Photographs for Steel Surfaces Prepared by Dry Abrasive Blast Cleaning
- SSPC-VIS 3, Guide and Reference Photographs for Steel Surfaces Prepared by Power and Hand Tool Cleaning
- SSPC-VIS 4/NACE VIS 7, Guide and Reference Photographs for Steel Surfaces Prepared by Waterjetting
- SSPC-VIS 5/NACE VIS 9, Guide and Reference Photographs for Steel Surfaces Prepared by Wet Abrasive Blast Cleaning
SSPC-VIS 3 is described below as the example for using the guides. All four are used in the same manner.
Step 1 – Identify the initial condition of the steel so that the correct series of photographs is selected for the assessment of the quality of cleaning. The initial conditions in SSPC-VIS 3 are:
- Condition A – not painted – adherent mill scale
- Condition B – not painted – mill scale and rust
- Condition C – not painted – 100% rusted
- Condition D – not painted – 100% rusted with pits
- Condition E – painted – light colored paint, spots or rust over blasted steel
- Condition F – painted – zinc rich paint over blasted steel
- Condition G – painted – heavy paint over mill scale
Step 2 – Determine the degree of cleaning required by the project specification. The degrees of cleaning depicted in SSPC-VIS 3 are:
- SSPC-SP2, Hand Tool Cleaning (hand wire brush cleaning depicted)
- SSPC-SP3, Power Tool Cleaning (both power wire brush and sanding disc cleaning depicted)
- SSPC-SP15, Commercial Grade Power Tool Cleaning (needle gun/rotary peening cleaning depicted)
- SSPC-SP11, Power Tool Cleaning to Bare Metal (needle gun/rotary peening cleaning depicted)
Step 3 – Locate the reference photograph for the degree of cleaning over the initial substrate condition. For example, the photograph of power tool cleaning (sanding disc) of a coating that exhibits light rust before cleaning is photo E SP3/SD (E represents the initial condition; SP3/SD represents power tool cleaning with a sanding disc). See Photo 2.
Step 4 – Compare the prepared surface with the photograph to determine if the degree of cleaning has been met.
Surface Profile – Steel
The surface profile (roughening) of the steel is commonly determined using a depth micrometer or replica tape. The methods for measuring surface profile are described in ASTM D4417, Standard Test Methods for Field Measurement of Surface Profile of Blast Cleaned Steel. Method B describes the use of a depth micrometer and Method C describes the use of replica tape.
Surface Profile Depth Micrometer (Method B of ASTM D4417)
The depth micrometer described in the ASTM standard contains a spring-loaded, 60° cone-shaped pin that projects from the base of the instrument. The base of the instrument rests on the peaks of the surface profile and the pin projects into the valleys. The distance that the cone projects into the valleys is displayed in 0.1 mil increments; readings can also be displayed in micrometers (µm).
Step 1 – Zero the instrument on the piece of plate glass supplied with the gage (the plate glass has been ground smooth to remove waviness), then place a horseshoe-shaped shim (also supplied with the gage) on the plate glass. Measure the thickness of the shim to verify the accuracy of the gage.
Step 2 – Hold the gage just above the probe and firmly push it against the surface to be measured. Record the reading. Readings can also be stored in memory and uploaded or printed later.
Step 3 – Pick the gage up and reposition it on the surface to take another reading. Do not drag it across the surface as dragging can blunt the tip.
Step 4 – Take a minimum of 10 readings at each test location. The maximum value of 10 readings (removing obvious outliers) represents the profile at that location.
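A minimal sketch of the Method B data reduction described above, assuming the inspector has already screened out obvious outliers:

```python
def profile_at_location(readings_mils: list[float]) -> float:
    """Surface profile at a test location: the maximum of at least
    10 depth-micrometer readings (obvious outliers already removed)."""
    if len(readings_mils) < 10:
        raise ValueError("take a minimum of 10 readings per location")
    return max(readings_mils)

print(profile_at_location([2.1, 2.4, 1.9, 2.2, 2.6, 2.3, 2.0, 2.5, 2.2, 2.4]))  # 2.6
```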
Surface Profile Replica Tape (Method C of ASTM D4417)
The tape is used to create a replicate of the surface profile that is measured using a light spring-loaded micrometer. One instrument manufacturer has also developed an attachment for a digital gage to read the replica tape and store the results electronically. The directions below apply to the use of the spring micrometer to measure the replica tape.
Step 1 – Select the replica tape that covers the expected profile range. The tape is most accurate mid-range:
- Coarse – 0.8 to 2.5 mils
- X-Coarse – 1.5 to 4.5 mils
- X-Coarse Plus – 4.0 to 5.0 mils
Step 2 – Prepare the area to be tested by removing surface dust or contamination. This can be done by brushing.
Step 3 – Remove the paper backing from the tape. The measuring area consists of the 2.0 mil thick film of Mylar® (a polyester film) that holds a thin layer of compressible foam. The foam conforms to the depth and shape of the surface profile.
Step 4 – Attach the replica tape to the surface and burnish the back of the white Mylar circle (3/8” diameter) with a burnishing tool. See Photo 3.
Step 5 – Remove the tape and place it in the anvils of the micrometer. The surface profile is the total reading less 2.0 mils (the thickness of the Mylar that holds the compressible foam). Alternatively, if the micrometer is set to −2.0 mils prior to inserting the tape into the anvils, the displayed reading is a direct indication of surface profile. Two readings are taken at each location and averaged to determine the surface profile.
Note – If the surface profile measured with the Coarse tape is 1.5 to 2.5 mils, the same area must be measured with the X-Coarse tape. If that reading is also between 1.5 and 2.5 mils, average the two values to determine the surface profile depth. If the second reading with the X-Coarse tape is >2.5 mils, record that value as the surface profile.
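The 2.0-mil subtraction and the Coarse/X-Coarse overlap rule can be captured in a short sketch (inputs in mils):

```python
MYLAR_MILS = 2.0  # incompressible polyester backing of the tape

def net_profile(total_reading_mils: float) -> float:
    """Surface profile = spring-micrometer reading minus the Mylar thickness."""
    return total_reading_mils - MYLAR_MILS

def resolve_overlap(coarse_mils: float, x_coarse_mils: float | None = None) -> float:
    """Apply the Note above. Inputs are net profile values (Mylar subtracted)."""
    if not 1.5 <= coarse_mils <= 2.5:
        return coarse_mils                    # Coarse reading stands on its own
    if x_coarse_mils is None:
        raise ValueError("re-measure the same area with X-Coarse tape")
    if x_coarse_mils > 2.5:
        return x_coarse_mils                  # record the X-Coarse value
    return (coarse_mils + x_coarse_mils) / 2  # both in the overlap: average them

print(resolve_overlap(2.0, 2.2))  # 2.1
```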
Surface Profile – Concrete (ICRI 310.2R-2013)
ICRI Guideline No. 310.2R-2013, Selecting and Specifying Concrete Surface Preparation for Sealers, Coatings, and Polymer Overlays, and Concrete Repair, describes methods of surface preparation used on concrete in written text and through tactile concrete surface profile (CSP) coupons, which are replicas of the type of profile (surface roughness) created by the various methods of surface preparation. While much of the standard addresses the roughness of floor surfaces, some of the methods apply to surfaces other than floors. The coupons range in texture from very smooth, typical of pressure washing (CSP1), to very rough, typical of jack-hammering (CSP10):
- Detergent scrubbing – CSP1
- Low-pressure water cleaning – CSP1
- Grinding – CSP1-CSP2
- Acid etching – CSP1-CSP3
- Needle scaling – CSP2-CSP4
- Abrasive blasting – CSP2-CSP7
- Shotblasting – CSP2-CSP9
- High/ultra-high pressure water jetting – CSP3-CSP10
- Scarifying – CSP4-CSP7
- Rotomilling – CSP6-CSP9
- Scabbling – CSP7-CSP9
- Handheld concrete breaker – CSP7-CSP10
Step 1 – Identify the method of surface preparation required by the specification or manufacturer’s requirements.
Step 2 – Select the concrete surface profile coupon(s) that represents the texture or range of textures that can be expected to be created based on the 310.2R-2013 guideline. See Photo 4.
Step 3 – Compare the prepared surface with the coupon(s) to determine if the degree of roughening is acceptable.
Ambient Conditions
For our purposes, the term “ambient conditions” encompasses air and surface temperatures, relative humidity, and the dew point temperature. See Photo 5. If the ambient conditions are outside the limits of the specification or the coating manufacturer’s requirements, coating adhesion and film formation can be compromised, leading to reduced performance or failure. The measurements must be obtained where the work is being performed because conditions can vary at different parts of a building (e.g., in the direct sun versus the shade). The least expensive way to measure ambient conditions is through the use of a sling or whirling psychrometer and a contact surface temperature thermometer. More expensive methods involve the use of digital or electronic psychrometers that contain a sensor exposed to the environment to determine air temperature, dew point temperature, and relative humidity. A separate probe is touched to the surface, or a non-contact infrared sensor is used to measure the surface temperature. Many different electronic models are available and the operating instructions are straightforward.
The instructions below apply to the most inexpensive method – the sling psychrometer and surface contact thermometer.
Sling Psychrometer and Surface Temperature Thermometer
Step 1 – The sling psychrometer contains two identical tube thermometers. The end of one is covered with a wick or sock (called the “wet bulb”). The other is uncovered (called the “dry bulb”). Saturate the wick of the wet bulb with clean water.
Step 2 – Whirl the instrument through the air for 20 to 30 seconds and take a reading of the wet bulb temperature.
Step 3 – Whirl the instrument again (without re-wetting) for another 20 seconds and take a reading of the wet bulb.
Step 4 – Continue whirling and reading until the wet bulb remains unchanged (or within 0.5°F) for 3 consecutive readings. Record the stabilized wet bulb temperature and the dry bulb temperature.
Step 5 – Plot the dry bulb temperature and the difference between the dry and wet bulb temperatures (delta) in the Psychrometric Tables or charts to determine the relative humidity and dew point temperature.
Step 6 – Attach a contact thermometer to the surface and allow it to stabilize for a minimum of 2 minutes to determine the surface temperature.
Step 7 – Compare the results with the specification requirements for air and surface temperature, relative humidity and the spread between the surface temperature and dew point temperature (typically the surface temperature must be at least 5°F above the dew point temperature before painting proceeds).
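The psychrometric tables referenced in Step 5 remain the authoritative source. For readers who prefer a calculation, the sketch below approximates the same lookup using the Magnus formula and a standard psychrometric relation; the constants are textbook values, not taken from the tables themselves:

```python
import math

def saturation_vp_hpa(t_c: float) -> float:
    """Saturation vapor pressure over water (hPa), Magnus approximation."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def rh_and_dew_point(dry_bulb_f: float, wet_bulb_f: float,
                     pressure_hpa: float = 1013.25) -> tuple[float, float]:
    """Approximate relative humidity (%) and dew point (deg F)
    from sling psychrometer dry/wet bulb readings."""
    t = (dry_bulb_f - 32) / 1.8   # convert to Celsius
    tw = (wet_bulb_f - 32) / 1.8
    # Psychrometric relation: actual vapor pressure from the wet-bulb depression
    e = saturation_vp_hpa(tw) - 6.6e-4 * (1 + 0.00115 * tw) * pressure_hpa * (t - tw)
    rh = 100 * e / saturation_vp_hpa(t)
    ln_ratio = math.log(e / 6.112)
    dew_c = 243.12 * ln_ratio / (17.62 - ln_ratio)
    return rh, dew_c * 1.8 + 32

rh, dp = rh_and_dew_point(75, 65)
print(f"RH ~ {rh:.0f}%, dew point ~ {dp:.0f} F")  # roughly 58% and 59 F
```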
Wet Film Thickness (ASTM D4414)
Measurement of the wet film thickness of the coating during application provides assurance that the proper amount of coating is being applied. The coating manufacturer can stipulate the range of wet film thickness to be applied to achieve the desired dry film, or the required wet film thickness can be calculated as follows:
Wet film thickness = Specified dry film thickness ÷ Volume solids content of the paint
The volume solids content will be shown on the can label or on the product data sheet. If the solids by volume is 60% and the specified dry film thickness is 3 mils, the target wet film thickness is 5 mils (3 mils ÷ 60% = 5 mils), as 40% of the applied wet film will evaporate into the air, while 60% of the applied wet film will remain on the surface.
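The same arithmetic in code form, for checking targets in the field:

```python
def target_wet_film_mils(dry_film_mils: float, volume_solids_pct: float) -> float:
    """Target WFT = specified DFT divided by the volume solids fraction."""
    return dry_film_mils / (volume_solids_pct / 100)

print(target_wet_film_mils(3.0, 60))  # 5.0 mils
```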
Wet film thickness is measured in accordance with ASTM D4414, Standard Practice for Measurement of Wet Film Thickness by Notch Gages.
Step 1 – Make sure the tips of the numbered notches (or teeth) of the wet film thickness gage are clean and free of any paint.
Step 2 – Immediately after the paint is applied, push the gage into the paint, making certain the end points of the gage make firm contact with the underlying surface (substrate or previously applied coating layer). See Photo 6.
Step 3 – Withdraw the gage and examine the numbered “teeth” that are wetted with paint. If none of the teeth are wetted, use a different face of the gage that displays lesser thickness. If all of the teeth are wetted, use a different face that displays greater thickness.
Step 4 – Determine the wet film thickness by looking at the numbers on the gage (in mils or micrometers). The wet film thickness is indicated by the highest wetted tooth or step.
Step 5 – Wipe all paint off the gage before it dries.
NOTE: The surface being measured has to be smooth in order to avoid irregular wetting of the teeth. For example, the gage cannot be used on split-faced block, but it could be used on the adjacent mortar joints.
Dry Film Thickness – Ferrous and Non-Ferrous Metallic Substrates (ASTM D7091 and SSPC-PA2)
There are a number of instruments available for the measurement of coating thickness on metallic substrates that are based on both electromagnetic induction and eddy current principles. The use of the instruments is covered in two standards:
- ASTM D7091, Standard Practice for Nondestructive Measurement of Dry Film Thickness of Nonmagnetic Coatings Applied to Ferrous Metals and Nonmagnetic, Nonconductive Coatings Applied to Non-Ferrous Metals
- SSPC-PA2, Procedure for Determining Conformance to Dry Coating Thickness Requirements
All of the instruments are calibrated by the gage manufacturer or an accredited calibration laboratory, but the accuracy of the instrument must be verified each time it is used, and the instrument may require an adjustment to compensate for substrate roughness. The specific instructions of the manufacturer need to be followed, but the following steps apply to all of the Type 2 (electronic) instruments:
Step 1 – Use certified coated standards in the intended range of use to verify that the instrument is operating properly (known as verification of accuracy).
Step 2 – Place a certified or measured shim (in the intended range of use) onto the prepared, uncoated metal surface and adjust the instrument (as necessary) to closely match the value indicated on the shim. This step effectively removes any influence of the base metal (metallurgy, roughness, curvature, etc.) on the coating thickness readings (Step 3).
Step 3 – After verifying the accuracy of the instrument and adjusting it for substrate properties, take a minimum of 3 measurements within a 1.5” diameter circle and average them together to determine the thickness at the specific location. See Photo 7. This is known as a spot measurement. Multiple clusters of spot measurements are taken in 100 square foot areas to determine the thickness of the applied coating.
NOTE: When measuring the thickness of a coating over existing paint or galvanize, the thickness of the existing paint or galvanize must be measured and subtracted from the total reading (i.e., the gages measure the cumulative thickness of all coats that are present on the substrate). One instrument manufacturer provides a gage that will measure the cumulative thickness of the galvanize-coating layers, then display the thickness of each layer separately.
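A minimal sketch of the spot and area averaging described in Step 3:

```python
from statistics import mean

def spot_thickness(gage_readings_mils: list[float]) -> float:
    """A 'spot' is the average of at least 3 gage readings
    taken within a 1.5 in. diameter circle."""
    if len(gage_readings_mils) < 3:
        raise ValueError("a spot measurement requires at least 3 gage readings")
    return mean(gage_readings_mils)

def area_thickness(spot_values_mils: list[float]) -> float:
    """A 100 sq ft area is characterized by the average of its spot
    measurements (SSPC-PA2 calls for 5 spots per area)."""
    return mean(spot_values_mils)

spots = [spot_thickness([4.8, 5.1, 5.0]), spot_thickness([5.2, 5.0, 5.4])]
print(round(area_thickness(spots), 2))  # 5.08
```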
Dry Film Thickness – Cementitious Substrates, Plaster, and Drywall (ASTM D6132 and SSPC-PA 9)
The dry film thickness of coatings applied to cementitious substrates is often estimated by measuring the wet film thickness at the time of application, calculating coverage rates, using a Tooke Gage (a destructive in-situ technique described later), or removing chips of the dry coating for microscopic measurement of cross-sections. If a sample of the coating can be removed with none of the substrate attached (although being able to remove such a sample could itself indicate a problem), a micrometer can be used. There is also one relatively new technique available for the non-destructive measurement of dry film thickness: a special instrument that measures thickness using ultrasound principles. See Photo 8. The technique is addressed in ASTM D6132, Standard Test Method for Nondestructive Measurement of Dry Film Thickness of Applied Organic Coatings Using an Ultrasonic Gage; the frequency of measurement and the acceptability of the measurements are addressed in SSPC-PA 9, Measurement of Dry Coating Thickness on Cementitious Substrates Using Ultrasonic Gages.
The specific methods for using the instrument should be followed according to the manufacturer’s instructions, but the following basic steps apply:
Step 1 – Allow the probe to reach ambient temperature in the same environment where the readings will be taken by holding the probe in the air and pressing “zero” in the main menu. This helps the gage to compensate for temperature extremes and the effects of wear on the probe.
Step 2 – Verify the accuracy of the gage using known reference standards. For polymer coatings, place a plastic shim (reference standard) on the bare substrate, apply a drop of couplant on the surface of the shim, and place the probe on shim through the couplant to measure the thickness of the shim. The couplant carries the ultrasound signal from the probe to the coated surface (the shim in this case). Adjust the gage to register the thickness of the shim.
Step 3 – Set the “gates,” which are the minimum and maximum range of thickness expected to be measured.
Step 4 – To measure the thickness of the coating, apply a drop of couplant to the surface of the coating and firmly place the probe on the coating through the couplant. A second reading in the same area can be taken without reapplying the couplant. But when moving to a new location, couplant must be reapplied to take a reading.
Dry Film Thickness (Destructive) – Any Substrate (ASTM D4138)
The Tooke Gage is a destructive microscope technique (50x ocular) for the measurement of coatings applied to essentially any substrate (all metals, cementitious substrates, wood, plaster, drywall, plastics). See Photo 9.
The Tooke Gage is used in accordance with ASTM D4138, Standard Practices for Measurement of Dry Film Thickness of Protective Coating Systems by Destructive, Cross-Sectioning Means. In addition to total thickness, the Tooke Gage allows for the measurement of the thickness of each coat in multi-coat systems. The gage requires the use of special cutting tips to make a precision incision through the coating layer(s) at a known angle, and the thickness is determined using basic trigonometry: because the angle of the cut is known, the depth (thickness) of the coating can be determined by measuring the width of the scribe. However, knowledge of mathematics is not required to use the instrument; all of the conversions are built into the reticule readings.
Step 1 – Select the cutting tip that is in the range of coating thickness to be measured. Three cutting tips are available; the difference between them is the cutting angle provided by the tip:
- 10x tip – for thickness <3mils
- 2x tip – 3 to 20 mils
- 1x tip – 20 to 50 mils
Step 2 – Create a benchmark on the coating using a marker. Use the selected cutting tip to make an incision approximately 1” in length through the coating in the area of the benchmark. The instrument requires 3-point contact when making the cut (two legs and the cutting tip). Pull the cutting tip toward you to make an incision with the legs leading the tip.
NOTE: For the readings to be accurate, the incision must be made perpendicular to the surface. Because of this, the area being measured must be smooth. If the surface is irregular, the cutting angle will not be consistent and the results invalid.
Step 3 – View the incision through the 50x ocular with the reticule perpendicular to the incision. See Photo 10.
Step 4 – Count the number of divisions of the reticule that line up with each coat to be measured. The correlation between the number of divisions and thickness depends on the model of microscope supplied with the gage because 2 different oculars with reticules have been available. Verify the conversion with the instructions supplied with the instrument, but it will be one of the following:
| Tip | Microscope A (typically purchased before 2011) | Microscope B, Universal ocular (typically purchased after 2011) |
|-----|-----|-----|
| 1x | 1 division = 1 mil | 1 division = 2 mils |
| 2x | 1 division = 0.5 mil | 1 division = 1 mil |
| 10x | 1 division = 0.1 mil | 1 division = 0.2 mil |
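The table reduces to a simple lookup; a sketch:

```python
# mils per reticule division, keyed by (microscope, tip)
DIVISION_VALUE = {
    ("A", "1x"): 1.0, ("A", "2x"): 0.5, ("A", "10x"): 0.1,
    ("B", "1x"): 2.0, ("B", "2x"): 1.0, ("B", "10x"): 0.2,
}

def coat_thickness_mils(divisions: int, microscope: str, tip: str) -> float:
    """Thickness of one coat from the number of reticule divisions it spans."""
    return divisions * DIVISION_VALUE[(microscope, tip)]

print(coat_thickness_mils(6, "A", "2x"))  # 3.0 mils
```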
There are a variety of standards and instruments available for verifying the quality of cleaning and painting. The tests are easy to conduct, but specific steps are required to make certain that the instruments are used properly and that the results are valid. A few tests and inspections during the work can make the difference between successful coating performance and premature coating failure.
See Part 2 of this Series for a discussion of instruments and methods used for the detection of moisture in cementitious substrates.
ABOUT THE AUTHOR
Kenneth Trimber is the president of KTA-Tator, Inc. He holds a Bachelor of Science degree from Indiana University of Pennsylvania, is an SSPC Protective Coatings Specialist, is certified at a Level III coating inspection capability in accordance with ANSI N45.2.6, is a NACE-certified Coating Inspector and an SSPC-C3 Competent Person. Trimber has more than 40 years of experience in the industrial painting field, is a past president of SSPC, chairman of the Committee on Surface Preparation, chairman of the Visual Standards Committee, chairman of the Task Group on Containment and chairman of the SSPC Commercial Coatings Committee. He is also past chairman of the ASTM D1 Committee on Paints and Related Coatings, Materials, and Applications. Trimber authored The Industrial Lead Paint Removal Handbook and co-authored Volume 2 of the handbook, Project Design. He was the recipient of the John D. Keane Award of Merit at the SSPC National Conference in 1990 and is a former technical editor of JPCL. In 2009 and 2012 he was named by JPCL as one of the 25 Top Thinkers in the coatings and linings industry and in 2015 was the recipient of the SSPC Honorary Life Member Award.
Most industrial and marine protective coatings rely on a mechanical bond to the substrate to remain attached while in service. This bond is generally provided by a surface profile or anchor pattern that is imparted into the surface prior to application of the coating system and effectively increases the surface area of the substrate (e.g., steel). A surface profile is typically generated by abrasive blast cleaning, although some rotary impact-type power tools can also create a surface texture. Without this mechanical “tooth,” the coating system may become detached as the substrate and coating system expand and contract (e.g., due to temperature fluctuations and/or service loading/unloading) while in service. Coating specifications frequently invoke a minimum and maximum surface profile depth (e.g., 2-4 mils), but rarely invoke a minimum peak count or peak density.
The Significance of Peak Density
While surface profile depth is important, the number of peaks per unit area is also a significant factor that can improve long-term coating system performance. According to a study conducted in the early 2000s, a high peak count helps resist undercutting corrosion when the coating system becomes damaged in service, and provides the coating system with better adhesion to the prepared substrate. More recent research conducted by the DeFelsko Corporation confirmed that a greater peak density (pd) promotes coating system adhesion. So, while a specification typically invokes a maximum peak height (to prevent pinpoint rusting resulting from unprotected rogue peaks), there is little concern over too many peaks. The more peaks there are within a given area, the greater the surface area; the greater the surface area, the better the adhesion. Note that this is the primary reason why thermal spray coatings (metallizing) cannot be applied to steel surfaces prepared with steel shot. While the surface profile depth may be adequate (i.e., 3-4 mils), the peak density of a peened surface will not provide the necessary surface area for proper adhesion, and disbonding will likely occur.
Peak Density Versus Peak Count
Peak density and peak count are similar but differ slightly in how they are reported. According to ASTM, relative peak count (Rpc) is defined as “the number of peak/valley pairs, per unit of length, extending outside a ‘deadband’ centered on the mean line,” and is typically reported in peaks/cm. Peak density (pd) is the number of peaks present within a given surface area, and is typically reported in peaks/mm².
Governing Industry Standards
Surface profile or anchor pattern is quantified/semi-quantified according to one of the three methods described (comparator, depth micrometer, replica tape) in ASTM D4417, Standard Test Methods for Field Measurement of Surface Profile of Blast Cleaned Steel, and peak count is quantified according to the method described in ASTM D7127, Standard Test Method for Measurement of Surface Roughness of Abrasive Blast Cleaned Metal Surfaces Using a Portable Stylus Instrument. The frequency and acceptability of the acquired measurements is described in SSPC-PA 17, Procedure for Determining Conformance to Steel Profile/Surface Roughness/Peak Count Requirements.
Quantifying Peak Count and Peak Density
Peak count is quantified using a portable stylus-type instrument. According to ASTM D7127, the apparatus consists of a portable skidded or non-skidded electronic surface roughness measurement instrument capable of measuring Rpc in compliance with ASME B46.1. The apparatus should have a vertical range of at least 300 μm and permit a sampling length of 2.5 mm and an evaluation length of 12.5 mm. The apparatus should include a stylus with a tip radius of 5 μm, and permit recording of Rpc up to 180/cm. Surface deviations are sensed by the stylus and converted to electrical signals within the device. Internal processing converts these signals into standard surface characterization parameters, which are then displayed or printed. ASTM D7127 recommends obtaining a minimum of five (5) traces per area to characterize the surface. Many of the stylus-type instruments that will measure peak count were designed for manufacturing and/or the machine finishing industry rather than for field use. When used in the field, extreme care is necessary as the tip of the stylus can easily be damaged.
Peak density can be quantified using replica tape; however, this procedure requires the use of a slightly different version of the tape (called Optical Grade) than is traditionally used to measure surface profile depth per ASTM D4417, Method C (Coarse, X-Coarse and X-Coarse Plus). While the burnishing procedures are the same, the type of tape and the way that the tape is read differs. Both peak height and peak density are measured and reported using the Optical Grade replica tape and a Replica Tape Reader (RTR). ASTM recommends obtaining two measurements per area to characterize the surface.
Use of Optical Grade Replica Tape to Determine Peak Density
The model RTR-P incorporates a digital camera and light source. Light is passed through the replica tape and imaged by the camera; because the intensity of light that passes through the tape is inversely proportional to the thickness of the compressed foam, peaks appear as bright spots in the image captured by the PosiTector RTR’s digital image sensor. The photographs below of a back-lit piece of replica tape reveal light areas of higher compression (peaks) and dark areas of lower compression (valleys). An illustration using an image from a US coin is also provided below to demonstrate how the camera distinguishes higher and lower compression areas. All images are courtesy of DeFelsko Corporation.
Since peak density can now be readily measured in the field (simultaneously with peak height, using the same replica tape), it is likely that specifications will start requiring measurements of peak density, especially for materials such as metallizing that rely on mechanical bonding. Not so fast… simply requiring the measurement of peak density will accomplish little without establishing a minimum acceptance criterion, just as specifying the measurement of coating thickness without an acceptable range is of little value. The minimum peak density required for proper bonding of the coating system will need to be established, and will likely vary depending on the coating system. In addition, the steps required to increase peak density without adversely affecting peak height will also need to be investigated.
References
1. Hugh J. Roper, Raymond E.F. Weaver and Joseph H. Brandon, “The Effect of Peak Count of Surface Roughness on Coating Performance,” Journal of Protective Coatings & Linings, Vol. 21, No. 6, June 2005.
2. David Beamish, “Replica Tape – Unlocking Hidden Information,” Journal of Protective Coatings & Linings, Vol. 31, No. 7, July 2015.
ANSI/ASHRAE/IES Energy Standard 90.1-2010 and the 2012 International Energy Conservation Code require that building envelopes be designed to limit uncontrolled air leakage into and out of buildings. Air leakage in buildings is controlled at the time of construction by installing air barrier systems. ASTM E1186-17, “Standard Practices for Air Leakage Site Detection in Building Envelopes and Air Barrier Systems” defines an air barrier system as “a system in building construction that is designed and installed to reduce air leakage either into or through a building envelope.”
ASTM E1186 describes seven practices for detecting air leakage sites in building envelopes to determine if a functional air barrier system has been installed. Five of the methods test the entire building and are performed at the completion of construction. The remaining two methods are designed for localized testing, and are used when depressurizing or pressurizing the entire building is impractical.
One of the two ASTM E1186 methods of localized testing, Chamber Depressurization in Conjunction with Leak Detection Liquid, is particularly useful during construction to examine the effectiveness of the installation in configurations that typically create the greatest challenges for air tightness, such as joints between materials, penetrations through membranes (e.g., brick ties), and the seams of roof membranes. Also, since the IECC recognizes paint as an air barrier when applied to concrete masonry units (CMU), this method can be used to assess the continuity of the coating to make certain the porous masonry surface is completely sealed. Although not addressed in ASTM E1186, the instrument can also be used to detect locations of water leaks.
The PosiTest® AIR Leak Tester meets the requirements of ASTM E1186, Chamber Depressurization in Conjunction with Leak Detection Liquid. The requirements of this method, as stated in ASTM E1186, are as follows (the text in the brackets and photos have been added by KTA – they are not part of the standard):
Leak Detector Liquid in Conjunction with Depressurized Chambers Practice—This practice relies on the principle that a pressure differential across a liquid film at an air leakage site will form bubbles in the film. The film is located on the low pressure side of the specimen within a transparent test chamber to allow visual observation of the test specimen during the test.
7.8.1 Background—This practice is suitable for locating air leakage sites at specific details when depressurizing or pressurizing the entire building envelope is impractical, and enables the testing of penetrations and joints in rigid air barrier materials such as metal liners or membranes supported by rigid substrates. The practice subjects a test specimen and the surrounding area to a desired pressure differential which is limited by the structural capacity of the specimen.
Photo 2 – Penetration being tested during construction to verify that the installation process is creating an air-tight seal.
7.8.2 Test Chamber—The test chamber consists of a well-sealed, transparent chamber which is capable of resisting the pressure differentials of the test. The chamber must be sufficient in size to enclose the test specimen. A pressure tap may be installed to allow the measurement of the pressure differential across the specimen during the test with a manometer.
[Photo 3 shows a test chamber designed for examining seams in roofing. The pressure differential is programmed directly into the instrument. It does not require the use of an external manometer.]
7.8.3 Leak Detector Solution—A leak detector liquid which can be easily applied over the test specimen surface may be used. The viscosity should be sufficient so that the liquid remains in an even coat on the test specimen during the test. Bubbles should not form in the liquid during application.
[Photo 4 shows the application of a leak detector liquid to the test area.]
7.8.4 Air Exhaust System—The air exhaust system consists of a fan which is able to provide sufficient airflow to achieve the desired pressure differential across the test specimen. A means of increasing the airflow at a rate of approximately 25 Pa/s or less enables the bubbles to form gradually without breaking at large air leakage sites.
7.8.5 Details—The leak detector liquid is applied evenly over the surface of the test specimen and the test chamber is fitted over the specimen and sealed to the surrounding air barrier system. Care must be taken so that bubbles are not formed in the liquid by the application technique. The fan is used to extract air from the test chamber until the desired pressure differential across the specimen is reached. Bubbles or visible distention of the leak detector liquid indicates the existence of air leakage sites through the air barrier system. An estimate of the relative size of the leak can be made based on the size and speed with which the bubbles form.
[Photo 5 shows the instrument display and the formation of bubbles at a leakage site. Pressure differential can be set up to 900 Pascals (Pa) in 100 Pa increments, with the rate of depressurization set from 5 Pa/sec to 30 Pa/sec in 5 Pa/sec increments. The instrument will run until the preset pressure differential is reached. Common values are a pressure differential of 500 Pa and a rate of depressurization of 25 Pa/sec, which translates to a test time of 20 seconds. Note that 500 Pa is not a great pressure differential; it is approximately 10 pounds per square foot. The report section of the standard makes documentation of the pressure differential mandatory, which is recorded directly from the display on the instrument.]
7.8.6 Limitations—A knowledge of potential air leakage sites is necessary to limit the search area using this practice. This practice is only suitable when the air barrier system is accessible and has sufficient rigidity that it is not pulled into the test chamber during the test. Care must be taken during the test that air leaks at the seal between the test chamber and the air barrier system are not confused with air leakage sites through the test specimen.
Even if one of the “whole building” tests will be used upon completion of the structure, localized testing during construction helps identify processes that need to be improved while the work is in progress, avoiding the need to repair joints or seams, or repaint a CMU wall after the fact. It is also a valuable tool for use during periodic audits to complement visual inspections and adhesion testing. Photo 6 shows the identification of an incomplete joint between materials; by identifying it during construction, changes can be made. The instrument can also be used on a CMU wall to determine if the application process is sound.
ANSI/ASHRAE/IES Energy Standard 90.1-2010 and the 2012 International Energy Conservation Code require that building envelopes be designed to limit uncontrolled air leakage into and out of buildings. ASTM E1186 provides seven methods for verifying that these mandates have been achieved. One of the methods, Leak Detector Liquid in Conjunction with Depressurized Chambers, is an excellent means of verifying the quality of installation while the work is being performed, so that changes to the installation process can be made to achieve compliance. The PosiTest® AIR Leak Tester complies with this method. The following link contains a video that describes the instrument and demonstrates its use: https://www.youtube.com/watch?v=zQnDhwgpM7M
Chemical contaminants on a surface can include chlorides, ferrous ions, sulfates and nitrates, among other types of soluble salts. Chloride may come from deicing materials or marine/coastal environments, ferrous ions are a by-product of corrosion, sulfates can be airborne, particularly in industrial environments (e.g., coal-fired power plants) and nitrates may come from the soil (e.g., fertilizers). These chemicals are deposited onto surfaces while the structure is in service, or during transportation of new steel to the fabrication shop, or from the shop to the field. They can typically be removed from surfaces by pressure washing or water jetting using clean water or water with the addition of a proprietary salt removal-enhancing solution. The effectiveness of the pressure washing step is dependent on the condition of the surface. That is, contamination is relatively easy to remove from smooth surfaces, but may be more challenging if the surfaces are pitted or are configured with difficult-access areas, as contamination will tend to concentrate and become trapped in these areas. If the salts are not detected or are not adequately dissolved and rinsed from the surfaces, they can become trapped beneath a newly-installed coating system. Provided there is a sufficient quantity of water in the service environment, and the concentration of the water-soluble contaminant trapped beneath the coating system is high enough, water can be drawn through the coating film by a process known as “osmosis.” This drawing force can be quite powerful, and will continue until the concentration of salt in water is the same on both sides of the coating film (the concentration reaches equilibrium). This process creates a build-up of water and pressure beneath the coating film, oftentimes enough to cause blistering of the coating (known as osmotic blistering), underfilm corrosion and premature coating failure.
It is for these reasons that many specifications require inspection of surfaces for chemical contaminants after surface preparation operations are complete, but before application of the primer. Because this type of contamination cannot be detected visually, the surface must be sampled and the “surface extraction” tested for the contaminant(s) of concern. SSPC Guide 15, “Field Methods for Retrieval and Analysis of Soluble Salts on Steel and Other Nonporous Surfaces” describes common methods for sampling and analysis of soluble salt contamination, with the intent of assisting the user in selecting an extraction and analysis procedure. Guide 15 is contained in Volume 2 of the SSPC Steel Structures Painting Manual, “Systems and Specifications.” A copy of the Guide is available from SSPC (www.sspc.org).
Common methods of extracting soluble salts from surfaces for analysis include surface swabbing, latex patches/cells (ISO 8502, Part 6) and latex sleeves. Common methods of analysis of the extracted soluble salts include ion-specific test strips/tubes for chloride, ferrous ion and nitrate salts; drop titration for chloride; and turbidity meters for sulfate ion detection. Each of these methods of analysis is considered “ion-specific.”
Except when chemical additives are employed, the methods of reducing the surface concentrations (i.e., pressure washing [low or high pressure], steam cleaning or other methods) are not ion-specific. Since the method of removal typically addresses all soluble salts, consideration may be given to performing the analysis of the extracted solution using a non-ion-specific method known as conductivity (ISO 8502, Part 9), rather than conducting multiple ion-specific tests on the extracted sample(s). In this case, a sample is extracted from the surface using any of the methods listed above (swab, latex patch/cell or latex sleeve) with distilled or deionized water. Once the extraction is complete, the solution is placed directly onto a conductivity meter (verified for accuracy first; see below) that will accommodate small samples and that automatically compensates for the temperature of the liquid (temperature compensation is very important for the proper use of conductivity meters).
The conductivity meter displays the concentration of the ionic contamination in millisiemens/cm (mS/cm) or microsiemens/cm (µS/cm). To convert from mS/cm to µS/cm, multiply mS/cm by 1000 (e.g., 0.35 mS/cm is 350 µS/cm). Note that for the values from the conductivity meter to have any meaning, the area of the surface being sampled and the volume of water used in the extraction must also be known, which will be the case when using the sampling methods listed above, particularly ISO 8502, Part 6 and Part 9. The conductivity meter will not reveal the type of ionic contamination; that is, it will remain unknown whether the conductivity reading is due to chloride, ferrous ion, nitrate, sulfate or other soluble salts. All that is known is that there is ionic contamination in the extracted test sample. Naturally the conductivity of the extraction solution (the distilled or deionized water) should be tested (known as a “blank”) and any conductivity reading of the water deducted from the reading of the surface extraction sample(s). For example, if the conductivity of the surface extraction is 354 µS/cm and the conductivity of the distilled/deionized water is 3 µS/cm, the reported conductivity is 351 µS/cm.
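Since the blank deduction and the mS/cm-to-µS/cm conversion are simple arithmetic, they are easy to script for record-keeping. Below is a minimal sketch in Python; the function names are illustrative and not part of any standard:

```python
def ms_to_us(value_ms_cm: float) -> float:
    """Convert millisiemens/cm to microsiemens/cm."""
    return value_ms_cm * 1000.0

def net_conductivity(sample_us_cm: float, blank_us_cm: float) -> float:
    """Deduct the blank (extraction water) reading from the surface extraction reading."""
    return sample_us_cm - blank_us_cm

# Worked example from the text: 354 uS/cm extraction, 3 uS/cm blank
print(net_conductivity(354.0, 3.0))  # 351.0 uS/cm reported
print(ms_to_us(0.35))                # ~350 uS/cm
```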
Many specifications have established thresholds for the maximum amounts of surface salt contamination based on the type of salt (e.g., 7 µg/cm2 chloride; 10 µg/cm2 nitrate and 17 µg/cm2 sulfate). If conductivity testing is substituted for ion-specific testing, then the specifier will need to establish thresholds based on conductivity values (in µS/cm). For example, the US Navy has established thresholds of 70 µS/cm for atmospheric (non-critical) service and 30 µS/cm for immersion (critical) service.
There can be considerable cost savings associated with changing from ion-specific testing to conductivity measurements, since each ionic contaminant of interest must otherwise be analyzed using a different method. And none of the kits contains re-usable supplies, so contractors must purchase many kits for each project. Naturally these costs are passed on to the owner as part of the contractor’s bid. By performing conductivity instead of ion-specific analyses, the costs are reduced, since the conductivity meter can be used for thousands of readings, as long as it remains accurate and within the manufacturer’s tolerance. Most portable conductivity meters come with a standard solution (known as a buffer solution) with a known conductivity for verifying the accuracy of the meter. Verification of accuracy before each use is recommended.
Finally, it is worth mentioning that there are a few devices on the market that perform both extraction and analysis of the surface, and display the surface salt concentrations in ppm, mS/cm, µS/cm or µg/cm2. Similar to the conductivity meter, these instruments are not ion-specific, but they are typically more costly than a portable conductivity meter. They do not use any expendable supplies (other than distilled water) and they too compensate for temperature.
The hardness of a coating material is a relative property that may be interpreted in a variety of ways by different industries that use coating/lining materials. While hardness tests are frequently used to assess the degree of cure, they can also be used to measure hardness properties formulated into coatings. The absolute hardness of a coating is not always the ultimate goal of the formulation, and an increase in hardness can be accompanied by brittleness or a decrease in the flexibility of the coating. The balance of hardness with other final film properties is determined by the end use of the product.
When comparing hardness values listed on a coating manufacturer’s product data sheet, the information is frequently considered an indication of the coating’s degree of cure and its inherent performance characteristics. An uncured coating that remains soft can sustain damage while in service; for example, backfilling a ditch too quickly can damage a newly-applied pipe coating that has not achieved its full cure, which can adversely impact pipeline integrity.
There are two common test methods described in this article: the indentor-type tester (Durometer) and pencil hardness, which is more of a shear hardness test. The selection of the test method is frequently dictated by the thickness of the coating material, although variations in the thicknesses tested are permitted if useful information can be obtained. The hardness of thick film coatings (the method indicates “thick film” is considered a minimum of 6 mm or 240 mils) is typically measured using an indentor-type tester that measures the resistance to indentation under a specific spring force load, while thin film coatings (a reference to thickness is not included in the method) are frequently assessed for degree of hardness using the pencil hardness test. Note that there is no correlation between the hardness testing methods described herein, and there is no pass/fail criterion indicated in the respective ASTM standard test methods. The project specification should indicate the minimum acceptable hardness value prior to placing the coating system into service. The minimum acceptable hardness value is often established by the manufacturer of the coating.
Measuring Durometer Hardness
Durometer hardness testing is performed according to the procedure described in ASTM D2240, Standard Test Method for Rubber Property – Durometer Hardness. This standard covers twelve types of measurement devices, including Types A, B, C, D, DO, O, OO, OOO, OOO-S and R. This article describes the use of a Shore D Durometer, since many of the thick film, chemically resistant coatings fall into the hardness range that a Shore D durometer can accurately measure. For softer, thick film coating materials, a Shore A durometer may be more useful since it has a lower spring force. According to the ASTM standard, Durometer hardness values less than 20 and greater than 90 are not considered reliable, and it suggests not recording readings below 20 or above 90 for either the Shore A or Shore D instruments.
A Shore D Durometer is a small hand-held device (Figure 1) that is used to measure the indentation hardness of various materials like hard rubber, plastics, soft metals, epoxies and other coating materials. A small cone-shaped indentor protrudes from the presser foot (the base of the tester). The durometer contains a calibrated spring that is used to apply perpendicular force to the indentor. Naturally, a cured, hardened coating will provide resistance to the indentor under the force of the applied load. This resistance to indentation is displayed on the gage dial or digital display as a hardness value.
The surface of the coating to be tested should be clean and smooth. Any inherent surface roughness may produce erroneous hardness values.
Since temperature and humidity can influence the hardness value, the surface temperature of the coated surface and the relative humidity of the surrounding air should be measured and recorded prior to testing. While the temperature and humidity data are required to be reported by the ASTM standard, there is no correction of the hardness values based on the prevailing ambient conditions.
Digital durometers with separate (remote) probes (Figure 2) are becoming increasingly popular. Their use is more amenable to curved surfaces because of the relatively small diameter of the test foot compared to the base of the standard Durometer; however, the full measuring surface of the probe must sit flush on the surface without rocking to obtain a reliable reading.
After verifying accuracy using the test blocks, the remote probe is pressed into the coating until the presser foot is in full, flat contact with the surface, and held in place. After the durometer emits a single audible signal, it will display a symbol indicating a reading is in the process of being obtained. The durometer will emit a double signal and display the measurement value.
Verifying Durometer Operation
Durometers should be calibrated annually by the manufacturer or their authorized service center. Some will even provide a 10-point calibration certification traceable to a National Metrology Institution like the National Institute of Standards and Technology (NIST). The operator cannot calibrate a durometer, but should verify proper operation prior to each period of use.
Test blocks are used to verify proper operation. The set shown (Figure 3) represents hardness values of 25, 46 and 75 on the D scale. A measurement is taken on each test block and compared to the hardness value displayed on the durometer. If the value displayed by the durometer does not conform to the tolerance of the test block value (for example 25 +/- 5, which means that the displayed hardness value obtained on the test block can range from 20-30), the durometer should not be used to measure the hardness of a coating and should be returned to the manufacturer or service center for repair and calibration.
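As an illustration of this accept/reject decision, here is a minimal Python sketch of the tolerance check described above. The function name is hypothetical, and the +/-5 tolerance is simply the example value from the text; the test block certificate governs in practice:

```python
def verify_durometer(displayed: float, block_nominal: float, tolerance: float = 5.0) -> bool:
    """Return True if the displayed hardness falls within the test block tolerance
    (e.g., a nominal-25 block with +/-5 tolerance accepts readings from 20 to 30)."""
    return abs(displayed - block_nominal) <= tolerance

# Example: check a Shore D reading of 27 against the nominal-25 test block
print(verify_durometer(27, 25))  # True  -> durometer may be used
print(verify_durometer(32, 25))  # False -> return for repair and calibration
```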
Measuring Pencil Hardness
Pencil hardness testing is performed according to the procedure described in ASTM D3363, Standard Test Method for Film Hardness by Pencil Test. Pencils containing lead centers with various hardnesses (shown in Figure 4) are prepared then used to attempt to scratch or gouge the coating. The coating’s inherent resistance to scratch or gouge damage is an indication of hardness. This test can be used on thin film coatings that cannot typically be evaluated using the indentor-type hardness testers.
To conduct the pencil hardness test, a series of 14 drafting pencils representing various hardnesses of lead (from 6B to 6H; per Figure 4) and a piece of emery cloth or extra-fine sandpaper are required.
Remove the wood from the end of the pencil to expose a minimum of 1/8″ of the lead. Blunt the tip of the lead to create a 90° cylinder (rather than a conical shape ending in a point) using the sandpaper or emery cloth (Figure 5). Hold the pencil at a 45° angle to the coated surface and attempt to push the edge of the blunted lead “cylinder” into the coating film (Figure 6). One of three outcomes will occur:
- The edge of the pencil lead will scratch but not gouge the coating film;
- The edge of the pencil lead will gouge the coating film; or
- The edge of the pencil lead will bevel or break, indicating the coating is harder than the pencil lead.
Any time a pencil lead is re-used, it first needs to be “re-dressed” using emery cloth or sandpaper to regenerate the cylindrical-shaped end of the lead.
The pencil hardness test method indicates that the testing should be conducted at temperature and humidity conditions of 23 ± 2°C (73.5 ± 3.5°F) and 50 ± 5% relative humidity. Shop/field conditions will rarely conform to these ranges; therefore, hardness measurements obtained outside of these conditions should be reported with the actual conditions and noted as being obtained at conditions not listed in the method.
Certified pencils are available for purchase, but calibration of the pencils is not normally performed since the cost of calibration frequently exceeds the cost of purchasing new pencils. Similar to durometer hardness, the results of this method can be altered by texture or defects on the surface, and notation of any surface irregularities is suggested.
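For record-keeping, the outcomes can be reduced to a reported grade. One common convention is to report the hardest pencil that did not damage the film; the following Python sketch assumes that convention, and the names and data are illustrative only:

```python
# Pencil scale from softest to hardest, per Figure 4 (14 pencils, 6B to 6H)
SCALE = ["6B", "5B", "4B", "3B", "2B", "B", "HB", "F",
         "H", "2H", "3H", "4H", "5H", "6H"]

def hardest_not_damaging(results: dict) -> str:
    """results maps a pencil grade to True if it damaged (gouged or scratched)
    the film. Returns the hardest grade that did NOT damage the coating."""
    passing = [p for p in SCALE if results.get(p) is False]
    return passing[-1] if passing else "softer than 6B"

# Hypothetical data: every pencil up to 2H left the film intact; 3H gouged it
gouge_results = {p: False for p in SCALE[:SCALE.index("3H")]}
gouge_results["3H"] = True
print(hardest_not_damaging(gouge_results))  # "2H" -> reported gouge hardness
```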
What is Surface Profile and Why is it Necessary?
Surface profile is defined as a measurement of the maximum peak-to-valley depth created by abrasive impingement against a surface during abrasive blast cleaning operations, or by an impact-type power tool. During abrasive blast cleaning, the mass of the abrasive and the velocity of the abrasive movement created by compressed air generates kinetic energy (the abrasive can reach speeds of over 600 miles per hour as it exits the blast nozzle). When the abrasive impacts the surface, it cuts into the surface (angular abrasives) or peens the surface (round abrasives) and creates a series of peaks and valleys in the surface.
The creation of this peak-valley pattern in the surface effectively increases the surface area, providing an anchor for the coating system. Both the structure and the coating system protecting the structure will move while in service. This movement may be caused by expansion and contraction of the substrate due to temperature fluctuation, or by live loads placed onto a structure; for example, traffic crossing a bridge. The surface profile must be compatible with the coating system; typically, the greater the coating thickness, the greater the required surface profile depth. Peak density (the number of peaks per unit area) also plays a key role in maintaining adhesion of the coating system and provides greater resistance to corrosion undercutting when the coating system is damaged while in service.
Standards for Measurement of Surface Profile
There are currently four primary standards for measurement of surface profile on steel surfaces. Note that ASTM D4417 and SSPC-PA 17 also address measurement of surface profile generated by impact-type power tools. The four standards are:
- ASTM D4417, Standard Test Methods for Field Measurement of Surface Profile of Blast Cleaned Steel;
- ASTM D7127, Standard Test Method for Measurement of Surface Roughness of Abrasive Blast Cleaned Metal Surfaces Using a Portable Stylus Instrument;
- NACE SP0287, Field Measurement of Surface Profile of Abrasive Blast-Cleaned Steel Surfaces Using a Replica Tape; and
- SSPC-PA 17, Procedure for Determining Conformance to Steel Profile/Surface Roughness/Peak Count Requirements.
How is Surface Profile Depth Quantified?
ASTM D4417 contains three methods of measuring surface profile depth: Method A describes the proper use of a comparator; Method B describes the use of a depth micrometer; and Method C addresses the use of replica tape (as does NACE SP0287). Today, Method B and Method C are the most commonly used, so they are the focus of this article.
ASTM D4417, Method B
Method B in the ASTM standard describes the use of a depth micrometer. The surface profile depth micrometer measures the depth of the valleys of the surface profile relative to the height of the peaks using a 60° cone-shaped point protruding from the base of the gage. The base of the instrument rests on the peaks of the surface profile while the cone-shaped point projects into the valley. The depth is displayed on the gage in mils (0.001”) or micrometers (0.001mm; there are 25.4 micrometers [µm] in 1 mil). This method can be used to measure the surface profile depth that is created by abrasive blast cleaning or impact-type power tools. Models from a few gage manufacturers are available that conform to this standard.
According to ASTM D4417, a minimum of 10 readings is obtained per area; the maximum surface profile is reported (discarding obvious outliers). SSPC-PA 17 states that a minimum of three 6” x 6” areas are measured per work shift or 12 hours of preparation time, whichever is shorter. The measurements must conform to the minimum and maximum surface profile requirements listed in the project specification.
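The Method B reporting rule is easy to express in code. A minimal Python sketch follows, assuming obvious outliers have already been screened from the data set (the function name is illustrative):

```python
def method_b_profile(readings_mils, spec_min, spec_max):
    """ASTM D4417 Method B sketch: take at least 10 depth-micrometer readings
    per area and report the maximum (obvious outliers assumed already removed)."""
    if len(readings_mils) < 10:
        raise ValueError("ASTM D4417 Method B calls for a minimum of 10 readings")
    reported = max(readings_mils)
    return reported, spec_min <= reported <= spec_max

# Ten readings in one area, checked against a specified 2.0-3.0 mil range
readings = [2.1, 2.4, 1.9, 2.6, 2.2, 2.5, 2.3, 2.0, 2.7, 2.4]
print(method_b_profile(readings, 2.0, 3.0))  # (2.7, True)
```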
ASTM D4417, Method C
Method C in the ASTM standard describes the use of replica tape. A mirror image of the peak-valley pattern generated by abrasive blast cleaning is created in an emulsion foam applied to the underside of a 2-mil polyester film (Mylar®) by rubbing the Mylar with a burnishing tool, using medium pressure. Once the burnishing process is complete, the replica tape is removed from the surface and the image is measured using a spring-loaded micrometer.
The Mylar thickness (2 mils) is deducted from the measurement, revealing the depth of the surface profile within the measured area (approximately 3/8” diameter). Alternatively, a Replica Tape Reader (RTR) can be used to read the replica tape.
According to ASTM D4417, a minimum of 2 readings is obtained per area; the average of the two readings is reported. SSPC-PA 17 states that a minimum of three 6” x 6” areas are measured per work shift or 12 hours of preparation time, whichever is shorter. The measurements must conform to the minimum and maximum surface profile requirements listed in the project specification.
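Method C reporting can be sketched the same way: average the two micrometer readings and deduct the 2-mil Mylar backing, as described above. A minimal Python example (names are illustrative):

```python
def method_c_profile(tape_readings_mils, mylar_mils=2.0):
    """ASTM D4417 Method C sketch: average a minimum of two replica tape
    micrometer readings, then deduct the 2-mil Mylar backing."""
    if len(tape_readings_mils) < 2:
        raise ValueError("a minimum of 2 readings per area is required")
    return sum(tape_readings_mils) / len(tape_readings_mils) - mylar_mils

# Two spring-micrometer readings of 4.6 and 4.8 mils -> ~2.7-mil profile
print(method_c_profile([4.6, 4.8]))  # ~2.7
```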
Same Surface, Different Results?
When a project specification simply invokes ASTM D4417 and not a specific method, the results of the surface profile measurements may differ when two different methods are used on the same project, even on the same surface and within the same area (i.e., the contractor’s quality control inspector is using replica tape and the facility owner’s quality assurance inspector is using the depth micrometer). While both the depth micrometer and replica tape methods conform to ASTM D4417, the measurement acquisition principles are quite different. The depth micrometer is measuring a single valley depth in relationship to potentially hundreds of “peaks” beneath the base of the instrument. Conversely, the replica tape image represents many peaks/valleys, and the micrometer is measuring a portion of those obtained (the test area on the replica tape is approximately 3/8” diameter and the anvils of the micrometer are approximately 1/8” in diameter). So, in effect the reading on the micrometer or the RTR from the replica tape represents several peaks and valleys, while the depth micrometer does not. Therefore, differences are inevitable, particularly with deeper surface profiles, and the results may or may not fall within the specified range for one of the two methods. To avoid these discrepancies, it is recommended that a single method be employed on a project. This can be discussed and agreed upon at the pre-construction conference.
“Do I have to maintain inspection gage calibration and certification? Why don’t they retain their calibration over time? Do I need to verify gage accuracy on a regular basis? How often? What is the difference between calibration and verification of accuracy?”
These are common questions in the coatings industry. A simple answer is that without routinely calibrating/certifying coating inspection gages using standards traceable to a national metrology institution and verifying the accuracy of your equipment prior to use, the gages only reveal values, and there is no way to determine whether those values are representative. Quality Assurance and Quality Control inspectors have an obligation to make certain that the values being displayed by the gages are accurate and represent the quality of the work performed, as decisions regarding acceptability of work performed, or the need for rework are made based on gage readings. So, calibration and verification of accuracy are both important, but are distinctly different.
Differentiating Calibration from Verification of Accuracy and Adjustment
Calibration is defined as a controlled and documented process, and is performed by the gage manufacturer, their authorized agent, or by an accredited calibration laboratory. Calibration must be performed in a controlled environment that is not typically found in a shop or in the field.
Verification of accuracy is performed by the gage operator and does not need to be performed in a controlled environment. Based on the accuracy verification process, adjustments may be necessary to compensate for shop or field conditions during the measurement process. An example, based on a dry film thickness gage is provided below.
ASTM D7091, Standard Practice for Nondestructive Measurement of Dry Film Thickness of Nonmagnetic Coatings Applied to Ferrous Metals and Nonmagnetic, Nonconductive Coatings Applied to Non-Ferrous Metals and SSPC-PA 2, Procedure for Determining Conformance to Dry Coating Thickness Requirements both contain information on calibration, verification of accuracy and adjustment (incidentally, all three are required prior to obtaining coating thickness measurements when one or both of these standards are invoked by the project specification). For electronic gages (known as Type 2 gages), verification of accuracy is performed using traceable, certified coated standards or using certified shims placed on smooth metallic substrate. This is typically accomplished by using coated standards or certified shims that are slightly below and slightly above the anticipated dry film thickness range (known as two-point verification).
SSPC-PA 2 states that verification of accuracy should be performed (at a minimum) at the beginning and end of each work shift, and recommends verifying accuracy during measurement acquisition, especially if a large data set is being obtained or the gage is dropped or suspected of being out of tolerance. This step makes certain that the gage is working properly, but another step, adjustment, is necessary before using the gage to measure coating thickness.
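To make the two-point verification concrete, here is a minimal Python sketch. The +/-5% tolerance is an assumption for illustration only; the gage manufacturer's stated accuracy governs, and the function name is hypothetical:

```python
def two_point_verification(low_shim, low_reading, high_shim, high_reading,
                           tol_pct=5.0):
    """Sketch of two-point accuracy verification: compare gage readings on
    certified shims (or coated standards) bracketing the expected thickness.
    tol_pct is illustrative; use the manufacturer's stated tolerance."""
    def within(nominal, reading):
        return abs(reading - nominal) <= nominal * tol_pct / 100.0
    return within(low_shim, low_reading) and within(high_shim, high_reading)

# e.g., 2.0 and 8.0 mil certified shims read as 2.05 and 7.9 mils
print(two_point_verification(2.0, 2.05, 8.0, 7.9))  # True -> gage verified
```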
Adjustment is the act of aligning a gage to correct for substrate metallurgy, curvature, roughness (including surface profile) and other characteristics that may affect the measurements. This is accomplished by placing certified or measured shims onto the prepared, uncoated substrate and adjusting the gage to align with the shim value. One-point or two-point adjustments using shims may be performed. After this step, the instrument is ready for use in measuring coating thickness.
Alternatively, after verification of accuracy is performed a series of 10 or more Base Metal Readings (BMRs) can be obtained from the prepared, uncoated surface. The average BMR is deducted from the coating thickness. BMR is not the same as surface profile. Surface profile is defined as a measurement of the maximum, peak-to-valley depth created by abrasive blast cleaning and some rotary impact power tools. BMR is the effect of this roughness on a coating thickness gage. For example, a 3-mil surface profile may yield an average 0.7 mil BMR. Never deduct surface profile from coating thickness measurements.
By adjusting the gage to a known thickness over the prepared surface (i.e., using a measured shim) or by measuring and deducting a BMR, the thickness of the coating above the peaks of the surface profile is revealed.
While the focus of this column has been on dry film thickness gages, any gage that takes a measurement should be calibrated (typically annually). This includes temperature gages, micrometers, pressure gages, conductivity and pH meters and any other instrumentation used to verify the quality of workmanship.
Matt Fajt is a Vice President and Business Unit Manager for the Instrument Sales and Service Group for KTA-Tator, Inc. He is a NACE Level 2 certified coatings inspector, an SSPC PCI Level 1, and a frequent workshop facilitator on inspection instrument use. He can be reached at firstname.lastname@example.org.
For many in the health care/fitness industry, BMR is an acronym for basal metabolic rate. Sorry to disappoint if you thought this would be a health science article about expending energy. Rather, this article is about a different BMR: Base Metal Reading. We’ll describe what it is, its significance, how to obtain it, and how it impacts coating thickness.
Introduction to Coating Thickness Standards
There are two common industry standards that govern measurement of coating dry film thickness on metal substrates, including ASTM D7091, Standard Practice for Nondestructive Measurement of Dry Film Thickness of Nonmagnetic Coatings Applied to Ferrous Metals and Nonmagnetic, Nonconductive Coatings Applied to Non-Ferrous Metals and SSPC-PA 2, Procedure for Determining Conformance to Dry Coating Thickness Requirements. Both address the use of Type 1 (magnetic pull-off) and Type 2 (electronic) gages as well BMR acquisition. SSPC-PA 2 also addresses measurement frequency and the acceptability of the measurements.
What is BMR?
BMR is the effect of substrate roughness on a coating thickness gage. The roughness is created by preparation of the substrate (e.g., abrasive blast cleaning or power tool cleaning), which generates a surface texture or “profile,” or by a manufacturing process that imparts roughness into the substrate. Instruments that measure the dry film thickness of the applied coating reach part way down into the roughened metal surface in order to operate properly. However, specifications list the required coating thickness as measured from the tops of the peaks of the surface profile. This inherent delta is known as the base metal effect. It is deducted from the coating thickness measurements to eliminate any effect of surface roughness. If the BMR is ignored, the thickness of the coating from the tops of the peaks of the surface profile may be overstated.
Acquisition of a BMR is not predicated on the gage type (Type 1 magnetic pull-off versus Type 2 electronic), but rather on the way the gage is set up by the operator to compensate for surface roughness. For both Type 1 and Type 2 gages, a BMR may be acquired and deducted from the coating thickness.
As an alternative, for Type 2 gages one or more measured shims (one shim is considered a one-point adjustment while the use of two shims spanning the range of intended use is considered a two-point adjustment) may be placed onto the prepared (roughened) metal surface and the gage adjusted to correspond to the shim thickness, effectively removing any need to measure and deduct a BMR. According to SSPC-PA 2, these measured shims are not permitted to be used with Type 1 gages unless explicitly allowed by the gage manufacturer, so in most cases a BMR will be required when using a Type 1 gage.
Obtaining Base Metal Readings
Section 6.2 in SSPC-PA 2 states, “To compensate for any effect of the substrate itself and surface roughness, obtain measurements from the bare, prepared substrate at a minimum of ten locations (arbitrarily spaced) and calculate the average value. This average value is the base metal reading.” Here are the steps:
- Verify the accuracy of the coating thickness gage before use. Traceable coated standards are required for both Type 1 and Type 2 coating thickness gages.
- Obtain a minimum of ten readings on the prepared, uncoated substrate in random locations. To avoid forgetting to acquire a BMR, it is best to take the measurements at the same time surface profile measurements are obtained.
- Measure the coating thickness.
- Deduct the average BMR.
The BMR is deducted not only from the primer thickness, but also from the cumulative layer thickness measurements as they are obtained. This is illustrated below:
Measured primer thickness: 4.9 mils
BMR: (0.6 mil)
Actual primer thickness from the top of the peaks of the surface profile: 4.3 mils

Cumulative primer & topcoat thickness: 9.2 mils
BMR: (0.6 mil)
Actual cumulative thickness from the top of the peaks of the surface profile: 8.6 mils
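The deduction itself is simple subtraction, as this short Python sketch of the worked example shows (the function name is illustrative):

```python
def thickness_above_peaks(measured_mils: float, bmr_mils: float) -> float:
    """Deduct the average BMR (not the surface profile!) from a gage reading
    to obtain the thickness above the peaks of the surface profile."""
    return measured_mils - bmr_mils

print(thickness_above_peaks(4.9, 0.6))  # ~4.3 mils -> primer
print(thickness_above_peaks(9.2, 0.6))  # ~8.6 mils -> primer + topcoat
```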
It is important to recognize that BMR and surface profile are related, but they are not the same. Surface profile is a measurement of the maximum peak-to-valley depth created by abrasive blast cleaning or some types of impact power tools. It is measured using one of three methods described in ASTM D4417, Standard Test Methods for Field Measurement of Surface Profile of Blast Cleaned Steel, and SSPC-PA 17, Procedure for Determining Conformance to Steel Profile/Surface Roughness/Peak Count Requirements. BMR is the effect of this surface profile on a coating thickness gage. A 3-mil surface profile may have an associated BMR of only 0.7 mil. Deducting the surface profile from the coating thickness instead of the BMR will result in a significant understatement of the actual coating thickness.
Adhesion is an important physical characteristic of applied coating films and systems, and adhesion testing is frequently used as an indicator of whether an adequate bond exists between the substrate and primer (first coat) and/or between coats in multiple-coat applications. Adhesion testing may be a requirement of a coating specification, or may be used for coating system performance qualification in a laboratory. It is also a valuable indicator of the integrity of coating systems that have been in service for extended periods of time and may require maintenance, particularly when overcoating is a strategy being considered, and it is frequently used during coating failure investigations. Irrespective of the application of the test, there are standard test methods (procedures) for conducting adhesion testing that should be followed to ensure consistency, especially when performing comparative analyses. This article discusses tape and knife adhesion test methods performed according to standardized ASTM International test methods. Tensile (pull-off) adhesion is the subject of an article by Melissa Swogger that is also available on the KTA University site.
ASTM D3359, Standard Test Methods for Measuring Adhesion by Tape Test (Tape Test) and ASTM D6677, Standard Test Method for Evaluating Adhesion by Knife (Knife Test) are perhaps the most widely used tests to evaluate a coating’s adhesion to the substrate and to other coats in a multi-coat system. While tape and knife adhesion tests are generally regarded as more subjective than their tensile (pull-off) adhesion test counterparts, the tape and knife adhesion tests can be much more revealing of the true adhesion properties of a coating system. Experience has shown that high pull-off adhesion values can be achieved on a coating system that is easily lifted with a knife, tape or, in some cases, one’s fingers. This is primarily due to the directional forces applied to the coating system during the tests.
Adhesion testing performed according to ASTM D3359 or ASTM D6677 apply shear forces to the coating, while the pull-off adhesion tests (performed according to ASTM D4541 or ASTM D7234) use tensile [perpendicular] forces. The shear tests are oftentimes more definitive because they better replicate the way in which coatings fail. That is, coatings generally do not disbond from a substrate or other coating as a result of forces that are exerted perpendicular to the surface. Rather, the coating peels off of the substrate or another coat because of shear (non-perpendicular) forces exerted on the coating system. Undercutting and peeling can occur as a result of shear forces.
Tape Adhesion Tests
Two test methods are described in ASTM D3359: Method A (X-cut) and Method B (cross-cut). Test Method A is primarily intended for use on coatings/coating systems over 5 mils (125 µm) thick, while Method B is generally used on coatings/coating systems less than 5 mils thick. Either method can be performed in the shop, field or laboratory.
The test was developed for assessing the adhesion of coating to steel, but can be used on other hard substrates. The test has also been used successfully on softer substrates (e.g., wood and plaster).
Both tests are performed by scribing the coating to the substrate with a sharp knife blade in a specific pattern, applying a pressure-sensitive tape and then rapidly pulling the tape from the surface. When the coating is greater than 5 mils thick, an X-cut (with each leg approximately 1.5 inches long) is made in the film. When the coating is less than 5 mils thick, a cross-cut lattice pattern is created with either six or eleven cuts in each direction. For coatings up to 2.0 mils thick, eleven incisions are made, spaced 1 mm apart. For coatings between 2.0 mils and 5.0 mils thick, six incisions are spaced 2 mm apart. For both methods, a steel or other hard metal straightedge or template is recommended to ensure straight cuts and, in the case of the X-cut, the correct angle at the intersection (30-45°). A sketch of this pattern-selection logic follows.
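The thickness ranges above drive the choice of incision pattern. Here is a minimal Python sketch of that selection logic (illustrative only; ASTM D3359 remains the governing text):

```python
def tape_test_pattern(thickness_mils: float) -> str:
    """Select the ASTM D3359 incision pattern from coating thickness,
    following the thickness ranges described above."""
    if thickness_mils > 5.0:
        return "Method A: X-cut, legs ~1.5 in. long, 30-45 degree intersection"
    if thickness_mils > 2.0:
        return "Method B: cross-cut, 6 cuts each direction, 2 mm apart"
    return "Method B: cross-cut, 11 cuts each direction, 1 mm apart"

print(tape_test_pattern(1.5))  # 11 cuts, 1 mm spacing
print(tape_test_pattern(3.0))  # 6 cuts, 2 mm spacing
print(tape_test_pattern(8.0))  # X-cut
```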
Once the incisions are made, a pressure-sensitive tape (with adhesive properties conforming to the requirements of the standard; Figure 1) is applied over the incisions and pressed in place using a pencil eraser. Following a brief “recovery” period of about 60 seconds, the tape is removed by grasping the free end of the tape and pulling it off rapidly (not jerked) back upon itself at as close to an angle of 180° as possible. After removal of the tape, the amount of coating removed from the substrate or underlying coating is rated. It is important to evaluate the coated surface and not the back of the tape, since coating debris from the incisions is often removed by the tape.
Adhesion is rated based on the scale provided in the ASTM standard. The scale ranges from 0 “Removal beyond the area of the incisions” to 5 “No peeling or removal.” When Method A is used an “A” is included after the numerical adhesion value (e.g., 3A). Similarly, a “B” is added after the numerical value when Method B is used (e.g., 3B). Table 1 provides the evaluation criteria for Method A; Table 2 provides the evaluation criteria for Method B. The standard also contains a pictorial guide to aid in the rating of the cross-cut (Method B).
 ASTM D4541, Standard Test Method for Pull-Off Strength of Coatings Using Portable Adhesion Testers and ASTM D7234, Standard Test Method for Pull-Off Strength of Coatings on Concrete Using Portable Adhesion Testers
 The exception to this is osmotic blistering, where the coating is pushed off the surface as a result of vapor pressure from below the coating. However, the subsequent delamination is a result of shear forces.
When appropriate, the nature and location of the separation is documented. A cohesive separation is one that occurs within a coating layer; an adhesive separation is one that occurs between coating layers or between the coating and the substrate. Generally, adhesion ratings of 4 and 5 are considered good, adhesion values of 2 and 3 are considered marginal and adhesion values of 0 and 1 are considered poor.
Knife Adhesion Tests
Similar to the tape adhesion tests, the Standard Test Method for Evaluating Adhesion by Knife (ASTM D6677) can be used to evaluate coating adhesion to steel and other hard substrates. Precautions are included regarding the use of the test on coatings with a high cohesive strength that may appear to have worse adhesion than one that is brittle and fractures easily. In addition, the method is not to be used on overly thick coatings that cannot be cut to the substrate with a utility knife blade in one stroke.
The knife adhesion test is conducted similarly to Method A of the tape adhesion tests in that incisions are made in the shape of an “X” (each leg 1.5 inches in length, with an angle of 30-45° at the intersection) through the coating film down to the substrate. The tip of a knife blade is then inserted into the intersection of the two incisions and used to attempt to lift the coating from the substrate or an underlying coating.
Adhesion is rated on an even number scale between 0 and 10, with 10 having the best adhesion and 0 the worst. A description of the adhesion criteria is included in Table 3.
Generally, adhesion ratings of 8 and 10 are considered good, adhesion values of 4 and 6 are considered marginal, and adhesion values of 2 and 0 are considered poor.
Interpreting Adhesion Test Results
The Tape and Knife adhesion test procedures described herein include specific language to address “gray areas,” which require agreement between parties that are either requiring or performing the tests. There are circumstances and situations that do not allow standard procedures and methods to provide an accurate representation of coating adhesion. For example, laboratory testing is typically conducted under “standard” laboratory conditions of temperature and humidity; however, field testing conditions vary with the prevailing weather and are largely uncontrolled. Variations in temperature and humidity can affect the efficacy of the method employed.
Heavily chalked paints typically show very good tape adhesion properties, since only the friable chalk layer is removed by the tape (the weakest plane), leaving the coating system intact. The adhesion by knife test may provide a more accurate picture of the actual adhesion characteristics. If the tape adhesion test is required, the chalking should be removed from the area prior to performing the test.
Adhesion testing conducted on acrylic elastomeric coatings applied to cement or stucco cannot be evaluated using the tape adhesion test. Further, the results of any knife adhesion tests performed on these coatings must be carefully considered. Acrylic elastomeric coatings have high cohesive strength and, once cut, can often be removed by pulling on the leading edge with one’s fingers. Nevertheless, the adhesion is oftentimes considered acceptable under these conditions.
Adhesion tests that consistently reveal an adhesive break between coats or a cohesive break within a coat do not provide any information relative to the adhesion of the coating (or coating system) to the substrate. Knife adhesion tests may be used to assess the bond to the substrate when the tape adhesion test results revealed a break somewhere higher up in the coating system.
Simply stated, the ASTM standard test procedures have limitations that need to be considered when making judgements or decisions based on the test results. Make sure that any “gray areas” are considered and addressed by the stakeholders before testing is performed.
Surface profile is defined as a measurement of the maximum peak-to-valley depth generated by abrasive blast cleaning and impact-type power tools. These operations effectively increase the surface area and provide an “anchor” for the applied coating system. The surface profile depth must be compatible with the total coating system thickness; typically, the thicker the coating system, the deeper the surface profile. For example, a 3-coat 15 mil system may require a 2-3 mil surface profile, while a 40-mil coating system may require a 4-5 mil surface profile. The maximum achievable surface profile is generally 6-7 mils (in steel) using a G10 or G12 abrasive.
Abrasive blast cleaned and power tool cleaned steel surfaces are routinely checked to verify the specified surface profile has been achieved. Industry standards such as ASTM D4417, Standard Test Methods for Field Measurement of Surface Profile of Blast Cleaned Steel, NACE International SP0287, Standard Practice for Field Measurement of Surface Profile of Abrasive Blast-Cleaned Steel Surfaces Using Replica Tape, and SSPC: The Society for Protective Coatings PA 17, Procedure for Determining Conformance to Steel Profile/Surface Roughness/Peak Count Requirements describe the procedures for performing these measurements, as well as the recommended frequency of measurements and acceptability of the values. However, the standards assume smooth steel was prepared; little is written about measurement of surface profile on rough or irregular surfaces such as pitted steel, weathering steel, or cast iron surfaces. This brief article describes a few methods that may be considered for measuring surface profile on these types of irregular surfaces.
Many steel structures that have been in service for relatively long periods of time may have irregular rough surfaces due to corrosion. Often this results in steel thickness loss (section loss), and may even require modification or replacement. But when it is determined that not enough metal loss has occurred to warrant repairs to the steel substrate, the applicator is faced with complying with contract requirements for cleanliness and profile generation on surfaces that have a roughness that often exceeds the surface profile requirements of the contract. Likewise, other steel surfaces such as cast iron and weathering steel (ASTM A588, A242, A606-4, A847, and A709-50W) typically have a rougher surface than abrasive blast cleaned ASTM A36 steel after they have weathered from atmospheric conditions, and may result in a higher surface profile yield than the specification allows and an ensuing nonconformance.
Measuring surface profile on rough or pitted surfaces can often produce falsely high readings, since the measurements reflect the depth of the pits or the inherent roughness of the steel rather than the surface profile generated by the abrasive or impact-type power tool itself. This raises the question, “How do you verify the surface profile on these types of surfaces with any degree of accuracy?” There are a few alternatives that can be considered; however, they should be discussed, and an approach negotiated, during the preconstruction meeting rather than during in-process measurement, when possible.
The first alternative is to obtain measurements in an adjacent area (that is not rough) using whichever method has been selected/specified (depth micrometer or replica tape). However, this may not be feasible when the pitting or rough steel is uniform. Of the three methods listed in the referenced standards, the depth micrometer (Method B in ASTM D4417) is generally considered optimum in these situations because a measurement of a single valley can be obtained, and the upper range of the instrument is higher (20 mils) than the maximum value that can be reasonably measured using replica tape (5 mils). Multiple measurements (a minimum of ten) are made in an area and the average surface profile is calculated.
Another option is to rely on a visual comparator and a reference disc. The comparator is a lighted magnifier (typically 5-10x power) that enables the user to closely examine the surface roughness and compare it to replica discs containing varying degrees of roughness (5 segments per disc). The appropriate reference disc that represents the abrasive employed (grit/slag or shot) is placed on the prepared steel and the user selects the segment that most closely matches the surface profile of the steel.
A third option is to measure the surface profile on a companion piece of steel such as a test plate that is abrasive blast cleaned with the same abrasive and pressure being used on the rough steel. This procedure has been accepted in the nuclear power industry for many years when painting cast iron motor housings.
Lastly, the abrasive manufacturer can be consulted regarding the typical surface profile values produced by the type and size abrasive being used. Some abrasive manufacturers can provide a Certificate of Conformance that states the measured range for a given lot under laboratory conditions. Note that surface hardness greatly influences surface profile depth, so the abrasive manufacturer’s data may be misleading.
The important point to remember is that when the surface is rough or irregular, one or more of these methods can be used to more accurately determine the surface profile depth. Further, rough surfaces may require the application of a thicker coat, or additional coating layers to help ensure corrosion protection. The coating manufacturer should be engaged when making these decisions.
 According to SSPC-SP 11 and SP 15, verification of minimum 1-mil surface profile created by power tools can only be measured using Method B (depth micrometer) described in the ASTM D4417 standard.
 SSPC-PA 17-2012 addresses measurement of surface profile on pitted steel in Appendix C (Section C2.5.3).
 ASTM D4417 instructs the operator to report the maximum of the ten measurements; however, this is not recommended on rough/pitted surfaces. The standard does allow averaging of the readings.
Coating thickness measurement is one of the most common quality assessments made during industrial coating applications. SSPC-PA 2, Procedure for Determining Conformance to Dry Coating Thickness Requirements is frequently referenced in coating specifications. As SSPC-PA 2 has evolved over the past four decades, a number of procedures and measurement frequencies are referenced in both the mandatory portions of the standard and in the non-mandatory appendices. While the measurement frequencies were never intended to be a statistical process, it is helpful to understand the statistical implications of the measurement process. And it is helpful to know what coating thickness variability is reasonable. This brief article explores how scanning probe technology can help to acquire a larger number of measurements (in a relatively short period of time) to better assess the consistency of the applied coating thickness, particularly on larger, more complex structures.
Scanning Illustration, courtesy of Elcometer Ltd.
There are two industry standards that are widely specified for measurement of coating thickness. These include ASTM D7091, Standard Practice for Nondestructive Measurement of Dry Film Thickness of Nonmagnetic Coatings Applied to Ferrous Metals and Nonmagnetic, Nonconductive Coatings Applied to Non-Ferrous Metals and SSPC-PA 2, Procedure for Determining Conformance to Dry Coating Thickness Requirements. The ASTM standard focuses on gage use, while the SSPC standard focuses on the frequency and acceptability of coating thickness measurements. The standards are designed to be used in conjunction with one another. In 2012, all references to measurement frequency were removed from the ASTM standard so that it did not conflict with SSPC-PA 2.
FXS Probe designed to withstand rough surfaces, courtesy of DeFelsko Corporation

The frequency of coating thickness measurements is defined by gage readings, spot measurements and area measurements. A minimum of three (3) gage readings is obtained in a 1.5” diameter circle and averaged to create a spot measurement. Five spot measurements are obtained in a 100-square foot area. The number of areas to be measured is determined by the size of the coated area. If less than 300 square feet are coated (i.e., during a work shift), then each 100-square foot area is measured (maximum of three areas, each composed of five spot measurements with a minimum of three gage readings in each spot). If the size of the coated area is between 300 and 1000 square feet, three 100-square foot areas are selected and measured. If the size of the coated area exceeds 1000 square feet, three areas are measured in the first 1000 square feet, with one additional area measured in each additional 1000 square feet, or portion thereof. For example, if the size of the coated area is 4,500 square feet, seven 100-square foot areas are measured (a total of 35 spot measurements and a minimum of 105 gage readings).
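For estimating inspection effort, the area-count rule above can be expressed compactly. A minimal Python sketch follows, reflecting the mandatory frequency as described (the function name is illustrative, and SSPC-PA 2 itself governs):

```python
import math

def pa2_area_count(coated_sqft: float) -> int:
    """Sketch of the SSPC-PA 2 measurement frequency described above:
    the number of 100-square foot areas to measure for a given coated area."""
    if coated_sqft <= 300:
        return min(3, math.ceil(coated_sqft / 100))  # each 100 sq ft, max 3
    if coated_sqft <= 1000:
        return 3
    return 3 + math.ceil((coated_sqft - 1000) / 1000)  # +1 per added 1000 sq ft

areas = pa2_area_count(4500)
print(areas, areas * 5, areas * 5 * 3)  # 7 areas, 35 spots, 105 gage readings
```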
Other measurement frequencies are included in non-mandatory appendices to SSPC-PA 2, including Appendix 2 & 3 for steel beams, Appendix 4 & 5 for test panels, Appendix 6 for measurement of coating thickness along edges and Appendix 7 for pipe exteriors.
Gauge display containing scanned data, courtesy of Elcometer Ltd.
The number of gage readings, spot measurements and area measurements prescribed by SSPC-PA 2 was never intended to be based on a statistical process. Rather, the frequency of measurement was based on what was reasonable in the shop or field to adequately characterize the thickness of the coating without unduly impeding production. Consider the impact of checking the thickness of a previous day’s application to 4,000 square feet of steel if every 100 square feet needed to be measured. That’s 40 areas, 200 spot measurements and a minimum of 600 gage readings. And that frequency may not be considered a statistically significant sampling either. Further, obtaining additional measurements above the number prescribed by SSPC-PA 2 (when invoked by contract) may be considered “over inspection.”
Using Scanning Technology to Acquire Higher Volumes of Data
Several manufacturers of electronic coating thickness gages have incorporated “scanning probe” technology and the associated support software into the data acquisition process. This newer technology enables the gage operator to obtain large sets of coating thickness data in a relatively short time frame. For example, on an actual bridge recoating project, a certified coatings inspector obtained 12 batches of readings (nearly 600 readings) in just under 8 minutes (measurement time only) on bridge girders across four panel points. So it may be possible to obtain a more representative sampling of the coated area without impeding production. However, there are concerns with acquiring such large data sets, such as management of the data, handling outliers, determining the statistical significance of the data (i.e., what is an acceptable standard deviation or coefficient of variation), applicability of the Coating Thickness Restriction Levels 1-5 in SSPC-PA 2, etc. The scanning probe set-up on the gage itself is relatively easy to perform, and the software is capable of handling the large volume of data coming into the gages.
The SSPC Committee on Dry Film Thickness Measurement may consider adding a 10th non-mandatory appendix to SSPC-PA 2 to give the specifier the option of acquiring a much larger data set of coating thickness measurements without impeding production. In this manner, an owner may gain greater confidence regarding the uniformity and consistency of the applied coating film.