Difficult discussions arise whenever people try to reconcile theory with practice. In many maintenance situations, the reality of a given maintenance event is at odds with the current structure of the RCM hierarchy.
A failure mode may be identified at one level of causality in the RCM knowledge base and at an entirely different level, from a different viewpoint, on the work order. The reliability engineer, supervisors, downtime administrators, and technicians repeatedly ask and discuss the same questions: “At what level do we need to analyze the failure mode?”, “Do the Consequences, as surmised from the RCM Effects, justify the level of detail proposed?”, “Is a failure analysis justified?”, “Where do we draw the line between a Suspension and a Potential Failure so that a given work order is documented correctly according to its ‘as-found’ observations?” During the LRCM pilot, the consultant, engineers, and supervisors try to keep these basic questions from getting buried under excessive detail. This is hard work. People hold opinions and time is at a premium. The more immediate pressures of production will invariably conflict with the desire to improve the RCM knowledge.
Nevertheless, these “RCM” types of discussions must take place. We call the process “LRCM” only because it happens outside the formal “initial” RCM review group meeting room. It takes place in the shops, corridors, and maintenance offices, formally and informally. The RCM principles of clarity, consensus, and friendly debate pervade these discussions, which sometimes culminate in agreement to adjust or augment the RCM record. The reliability engineer justifies and documents such changes, including any dissenting opinion, in the RCM Effects. Our MESH LRCM software system accelerates the entire work order process while enriching the RCM knowledge base and upgrading data quality to the standards needed for practical Reliability Analysis and optimal Decision Modeling.
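To make the Suspension-versus-Potential-Failure question concrete, the sketch below shows one way a work order’s “as-found” observation might be mapped to the ending type used in reliability analysis. This is a hypothetical decision rule for illustration only; the class names, fields, and logic are assumptions, not the actual rules or data model of the MESH LRCM software.

```python
from dataclasses import dataclass
from enum import Enum


class EventType(Enum):
    """How a work order closes out, for life-data (reliability) analysis."""
    SUSPENSION = "S"           # renewal or inspection with no failure evidence found
    POTENTIAL_FAILURE = "PF"   # degradation found and corrected before loss of function
    FUNCTIONAL_FAILURE = "FF"  # the item no longer performed its intended function


@dataclass
class WorkOrder:
    equipment: str
    failure_mode: str     # as recorded on the work order
    as_found: str         # technician's "as-found" observation
    function_lost: bool   # did the item fail to perform its function?
    defect_found: bool    # was measurable degradation observed?


def classify_event(wo: WorkOrder) -> EventType:
    """Hypothetical rule: a lost function is a functional failure; a defect
    corrected before loss of function is a potential failure; anything else
    (e.g. a time-based renewal with nothing found) is a suspension."""
    if wo.function_lost:
        return EventType.FUNCTIONAL_FAILURE
    if wo.defect_found:
        return EventType.POTENTIAL_FAILURE
    return EventType.SUSPENSION


# Example: a scheduled replacement where wear was found but the pump still met its duty
wo = WorkOrder(
    equipment="Pump P-101",
    failure_mode="Impeller wear",
    as_found="Impeller worn, pump still meeting duty",
    function_lost=False,
    defect_found=True,
)
print(classify_event(wo))  # EventType.POTENTIAL_FAILURE
```

In practice the borderline cases are exactly what the shop-floor discussions settle; the value of recording the agreed classification is that the resulting life data can be used for Reliability Analysis rather than remaining an unstructured work order comment.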