[Please note that the following article — while it has been updated from our newsletter archives — may not reflect the latest software interface and plot graphics, but the original methodology and analysis steps remain applicable.]

Guest Submission - Carl S. Carlson, Carlson Reliability

Today's organizations face unprecedented worldwide competition driven by three continuing challenges: the mandate to reduce costs, shorter development times and high customer expectations for the reliability of products and processes. The need for reliability assurance will not abate; on the contrary, there is increasing emphasis on Design for Reliability as an organizational strategy.

One of the tools that shows up on almost every "short list" of Design for Reliability tools is Failure Mode and Effects Analysis (FMEA). Most corporate and military applications require some form of FMEA or FMECA (Failure Mode, Effects and Criticality Analysis). Yet questions remain about the overall effectiveness of FMEA as applied in many companies and organizations today. Frankly, the results of FMEA applications are mixed.

The prerequisite for effective FMEAs is a sound knowledge of the basics of FMEA, and there is no substitute for learning these fundamentals. Interested readers are encouraged to take ReliaSoft's three-day training course, Foundations for Effective FMEAs, which covers the FMEA basics and the supporting software. Once these basics are well understood, it is possible to capture and apply certain lessons learned that make FMEAs highly effective.

A number of success factors are critical to consistently successful application of FMEA in any company. The previous issue of Reliability Edge focused on an effective FMEA process.

This article will outline the lessons learned and quality objectives that make for effective FMEAs.

The FMEA lessons learned presented here are the result of personally supervising or participating in over a thousand FMEA projects, and of collaborating with many corporations and organizations on the FMEA process and its shortcomings.

There is a maxim that says, "Good judgment comes from experience, and experience comes from poor judgment." By that measure, the lessons learned that follow rest on considerable experience. Each one comes from direct experience of how FMEAs have been done wrong and how to improve their overall effectiveness.

FMEA Lessons Learned

So here we go. What are the primary ways that FMEAs can be done wrong (Mistakes) and the key factors that make for effective FMEAs (Quality Objectives)?

Mistake # 1

Empirical review of many FMEAs shows that some do not drive any action at all, some drive mostly testing, and others drive ineffective action. The mistake is:

Failure of the FMEA to drive design or process improvements

Quality Objective # 1

The FMEA drives product design or process improvements as the primary objective

Note: Reliability Engineering has a multitude of tools to choose from in driving design or process improvements. The key is to use the FMEA "Recommended Actions" field to identify and execute best-practice tools that can optimize designs. This is one of the reasons that Reliability Engineers need to participate in FMEAs.

Mistake # 2

There are various methods that the FMEA team can use to identify which failure modes and their causes require follow-up action. Some companies set predetermined risk thresholds; others review Risk Priority Numbers (RPNs) or Criticality using Pareto or other techniques. Whatever method is used, failure to address all high-risk failure modes (including those with high severity) can result in potentially catastrophic problems or lower customer satisfaction. The mistake is:

Failure of the FMEA to address all high-risk failure modes

Quality Objective # 2

The FMEA addresses all high-risk failure modes, as identified by the FMEA Team, with effective and executable action plans

Note: The emphasis of this Quality Objective is to ensure that all of the high-risk failure mode/cause combinations are adequately addressed with effective actions. The key is effective action that reduces or eliminates the risk.
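To make the risk-ranking step concrete, here is a minimal sketch in Python (with hypothetical failure modes and illustrative thresholds, not values from any actual FMEA) of how a team might flag failure mode/cause combinations for follow-up using the classic RPN model, RPN = Severity x Occurrence x Detection, while still catching high-severity items that a low RPN would otherwise screen out. Actual thresholds and ranking criteria are set by company policy and the FMEA team.

    from dataclasses import dataclass

    @dataclass
    class FmeaLine:
        failure_mode: str
        cause: str
        severity: int    # 1-10 rating assigned by the FMEA team
        occurrence: int  # 1-10
        detection: int   # 1-10

        @property
        def rpn(self) -> int:
            # Classic Risk Priority Number: Severity x Occurrence x Detection
            return self.severity * self.occurrence * self.detection

    def high_risk(lines, rpn_threshold=100, severity_threshold=9):
        # Flag items above the RPN threshold OR with very high severity,
        # so that severe failure modes are never screened out by a low RPN.
        return [ln for ln in lines
                if ln.rpn >= rpn_threshold or ln.severity >= severity_threshold]

    items = [
        FmeaLine("Seal leaks", "Material degrades at temperature", 7, 5, 4),  # RPN 140
        FmeaLine("Fastener loosens", "Under-torqued in assembly", 9, 2, 3),   # RPN 54, severity 9
        FmeaLine("Label fades", "UV exposure", 3, 4, 4),                      # RPN 48
    ]

    for ln in high_risk(items):
        print(f"Follow up: {ln.failure_mode} / {ln.cause} (S={ln.severity}, RPN={ln.rpn})")

The same pass could rank by Criticality or apply a Pareto cut instead; whatever method the team chooses, the point is that no high-risk item escapes an effective, executable action plan.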

Mistake # 3

Some companies miss the opportunity to improve Design Verification Plans and Reports (DVP&Rs) or Process Control Plans based on the failure modes/causes from the FMEA. Some FMEA teams do not include knowledgeable representatives from the test or analysis department. The result is inadequate product testing or process control plans. The mistake is:

Failure of the FMEA to improve test/control plans

Quality Objective # 3

The Design Verification Plan & Report (DVP&R) or the Process Control Plan (PCP) considers the failure modes from the FMEA

Note: The FMEA team will often discover failure modes/causes that were not part of the design controls or test procedures. The key is to ensure that the test plan (DVP&R) or Control Plan is impacted by the results of the FMEA. This can be done by including test/control representatives on the FMEA team or through well-written recommended actions.
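As a rough illustration of that linkage (hypothetical data; real records would come from the team's FMEA and DVP&R tools), the sketch below cross-checks the failure mode/cause pairs in the FMEA against the items each DVP&R test is intended to verify and reports any that have no test coverage.

    # Failure mode/cause pairs identified in the FMEA (hypothetical examples).
    fmea_causes = {
        ("Seal leaks", "Material degrades at temperature"),
        ("Fastener loosens", "Under-torqued in assembly"),
        ("Connector corrodes", "Moisture ingress"),
    }

    # Each DVP&R test lists the failure mode/cause pairs it is intended to verify.
    dvpr_tests = {
        "Thermal cycling test": {("Seal leaks", "Material degrades at temperature")},
        "Torque audit": {("Fastener loosens", "Under-torqued in assembly")},
    }

    covered = set().union(*dvpr_tests.values())
    for mode, cause in sorted(fmea_causes - covered):
        print(f"No DVP&R coverage: {mode} / {cause}")

A Process Control Plan can be checked against the Process FMEA in exactly the same way.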

Mistake # 4

Empirical data shows that at least 50% of field problems occur at interfaces or during integration with the system. Some companies focus on part or subsystem failures and miss the interfaces. The mistake is:

Not including interfaces or integration in FMEA

Quality Objective # 4

The FMEA scope includes integration and interface failure modes in both block diagram and analysis

Note: Interfaces can be included as part of the item-by-item analysis or as a separate analysis. It is recommended that the FMEA Block Diagram clearly show the interfaces that are part of the FMEA scope.

Mistake # 5

Some companies provide no linkage between FMEAs and field data. It takes concerted effort to integrate problem resolution databases with FMEA. Otherwise, serious problems can repeat. The mistake is:

Disconnect between FMEA and information from the field

Quality Objective # 5

The FMEA considers all major "lessons learned" (such as high warranty, campaigns, etc.) as input to failure mode identification

Note: Field failure data can be brought into generic FMEAs on a regular basis. Then, when new program-specific FMEAs are started, they benefit from field lessons learned. If generic FMEAs are not used, new FMEAs should be seeded with potential field problems and required to show how they will not repeat in the new design/process. The key is to hold the FMEA team responsible to ensure that major field problems do not repeat.
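A minimal sketch of that seeding step (again with hypothetical data) follows: major field problems, such as warranty items and campaigns, are compared against the failure modes already captured in the FMEA, and anything missing is queued for the team to address and to show how it will not repeat.

    # Major field problems from warranty and campaign records (hypothetical data).
    field_problems = [
        {"issue": "Connector corrodes", "source": "warranty", "claims": 412},
        {"issue": "Seal leaks", "source": "campaign", "claims": 95},
        {"issue": "Display flickers", "source": "warranty", "claims": 38},
    ]

    # Failure modes already captured in the new (or generic) FMEA.
    fmea_failure_modes = {"Seal leaks", "Fastener loosens", "Connector corrodes"}

    for problem in field_problems:
        if problem["issue"] not in fmea_failure_modes:
            print(f"Seed into FMEA and show it cannot repeat: {problem['issue']} "
                  f"({problem['source']}, {problem['claims']} claims)")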

Mistake # 6

Many companies have a Key Characteristics policy. The Design FMEA can identify Key Product Characteristics and the Process FMEA can identify Key Process Characteristics for special controls in manufacturing. Some companies miss this opportunity. The mistake is:

FMEA omits Key Characteristics

Quality Objective # 6

The FMEA identifies appropriate Key Characteristics candidates, if applicable according to company policy

Note: This is an underutilized element of FMEAs. Both the SAE J1739 and AIAG FMEA (Third Edition) guidelines provide a "Classification" column for this purpose.

Mistake # 7

Many companies do FMEAs late. FMEAs should be completed by design or process freeze dates, concurrent with the design process. Starting or finishing them late is a very common problem and greatly reduces their effectiveness. The mistake is:

Doing FMEAs late

Quality Objective # 7

The FMEA is completed during the "window of opportunity" where it can most effectively impact the product or process design

Note: The key to getting FMEAs done on time is to start the FMEAs on time. FMEAs should be started as soon as the design or process concept has been determined. The exception is FMEAs done during trade-off studies, which should, of course, be started earlier.

Mistake # 8

Some FMEA teams do not have the right experts on the core team. Some FMEA teams do not have good attendance. Some FMEA team members just sit in their chairs and don't contribute to team synergy. The mistake is:

FMEAs with inadequate team composition

Quality Objective # 8

The right people participate on the FMEA team throughout the analysis and are adequately trained in the procedure

Note: Based on an actual survey of Reliability Engineering internal customers, the common sentiment about FMEAs is that they are too important not to do, but too time-consuming to participate in. The FMEA facilitator must value the time of team members and not waste it. People have blind spots (scotomas); the key is to get the people who are knowledgeable and experienced about potential failures and their resolutions to actually show up at the meetings. Attendance often takes management support. Team size is best kept between four and eight people; if the team gets too large, consider breaking the work into additional limited-scope FMEAs.

Mistake # 9

There are hundreds of ways to do FMEAs wrong. Some companies do not encourage or control proper FMEA methodology. Training, coaching and reviews are all necessary for success. The mistake is:

FMEAs with improper procedure

Quality Objective # 9

The FMEA document is completely filled out "by the book," including "Action Taken" and final risk assessment

Note: One common problem is the failure to get to root cause. Expert input is necessary. Follow-up actions based on poorly defined causes will not work and the FMEA will not be successful. Another common problem is lack of follow-up to ensure that the FMEA Recommended Actions are executed and the resulting risk is reduced to an acceptable level.

Mistake # 10

Some companies mandate FMEAs, and then do not ensure that the time is well spent. Pre-work must be completed, meetings must be well run and there must be efficient follow-up of high-risk issues. Ask the FMEA team if their time has been well spent and take action to address shortcomings. The mistake is:

Lack of efficient use of time

Quality Objective # 10

The time spent by the FMEA team, beginning as early as possible, is an effective and efficient use of time with a value-added result.

Note: If this Quality Objective is met, then future FMEAs will be well attended and supported by subject matter experts and management.

FMEA Quality Surveys/Audits

Each FMEA team (and each internal customer of FMEA) can be surveyed for FMEA effectiveness. Surveys are based on the FMEA Quality Objectives, are conducted in writing (one or two pages), and individual responses can be kept confidential. They provide valuable feedback for improving future FMEAs.

In-person audits of completed (or nearly completed) FMEAs should also be performed. They are conducted by the supervisors and managers responsible for the FMEA process, together with the FMEA facilitator and core team, using an interview format on a pre-scheduled or random basis. Each audit typically takes one hour at most, about five minutes for each of the ten FMEA Quality Objectives.

FMEA audits provide valuable feedback to improve future FMEAs, in the form of action items identified for follow up. Focus needs to be on improving the FMEA process, not on the person/team doing the FMEA. Don't expect to instantly achieve all ten objectives; work to maintain steady improvement. Management audits demonstrate commitment. In the words of W. Edwards Deming, "Quality cannot be delegated."

Summary

FMEA/FMECA is a powerful reliability tool for improving product or process designs early in the development process. This not only increases initial reliability, but also saves considerable cost in future testing and field warranty. It is worth the effort to implement the tool effectively.

Achieve the FMEA Quality Objectives and your result will be more effective FMEAs for your company or organization.

References

  • Automotive Industry Action Group (AIAG), Potential Failure Mode and Effects Analysis (FMEA Third Edition), July, 2001.
  • ReliaSoft Training Course: Foundations for Effective FMEAs. On the web at https://www.reliasoft.com/services/training-courses/foundations-for-effective-fmeas.
  • Society of Automotive Engineers, SAE J1739: Potential Failure Mode and Effects Analysis in Design (Design FMEA), Potential Failure Modes and Effects Analysis in Manufacturing and Assembly Processes (Process FMEA), and Potential Failure Mode and Effects Analysis for Machinery (Machinery FMEA). SAE International, Warrendale, PA, June, 2000.

NOTE: The author participated in the development of the SAE J1739 guidelines for Design, Process and Machinery FMEAs. The Quality Objectives presented in this article are also reflected in Appendix A and Appendix B of that document, which is available for purchase from the Society of Automotive Engineers (http://www.sae.org).

About the author

Carl S. Carlson is a consultant and instructor in the areas of FMEA, reliability program planning and other reliability engineering and management disciplines. He has 20 years of experience in reliability engineering and management positions at General Motors, most recently Senior Manager for the Advanced Reliability Group.

Mr. Carlson co-chaired the cross-industry team to develop the Society of Automotive Engineers (SAE) J1739 for Design/Process/Machinery FMEA and participated in the development of the SAE JA 1000/1 Reliability Program Standard Implementation Guide. He has also chaired technical sessions for the Annual SAE RMSL Symposium, was a four-year member of the RAMS Advisory Board and served for five years as Vice Chair for the SAE's G-11 Reliability Division. He is an ASQ Certified Reliability Engineer.