First, let’s settle on what “RCM” really is. Many say it is a process, and that’s true. But at its core it is a data model: a template for classifying the answers to seven clear questions.
Those answers contain all the essential information about an asset’s failure behavior, covering everything the maintenance department needs to know about any future failure of the asset:

1. What asset functions are in danger of being compromised?
2. In precisely what way can each function cease to perform?
3. What event (internal or external) would cause the loss of function?
4. What happens (organizationally, operationally, and within the asset’s components) when the failure-causing event (called a failure “mode”) occurs?
5. Why does the failure matter? Can it impact operation? Can it harm someone?
6. What proactive action(s) will satisfactorily mitigate the consequences?
7. If no mitigating maintenance actions are practical, what alternative strategy (such as redesign, or simply allowing the failure to occur) will satisfy the users’ requirements for safe, reliable operation?
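Viewed as a data model, the seven answers map naturally onto a nested record structure. The sketch below is a minimal illustration in Python; the class and field names are my own assumptions for the example, not any published RCM schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Consequence(Enum):
    """Why the failure matters (question 5)."""
    SAFETY = "safety"
    ENVIRONMENTAL = "environmental"
    OPERATIONAL = "operational"
    NON_OPERATIONAL = "non-operational"

@dataclass
class FailureMode:
    """Questions 3 through 7 for one failure-causing event."""
    cause: str                       # Q3: the event that causes loss of function
    effects: str                     # Q4: what happens when the mode occurs
    consequence: Consequence         # Q5: why the failure matters
    proactive_tasks: List[str] = field(default_factory=list)  # Q6
    default_action: Optional[str] = None  # Q7: redesign, run-to-failure, ...

@dataclass
class FunctionalFailure:
    """Question 2: one way the function can cease to perform."""
    description: str
    modes: List[FailureMode] = field(default_factory=list)

@dataclass
class AssetFunction:
    """Question 1: one function the asset must perform."""
    description: str
    failures: List[FunctionalFailure] = field(default_factory=list)
```

The hierarchy (function, then functional failure, then failure mode) mirrors the order of the questions: each function can fail in several ways, and each way of failing can be caused by several distinct events.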
Who can, and should, answer these seven RCM questions? According to RCM, representatives of everyone who is involved in, or impacted by, the asset’s malfunction, not just the obvious “experts”. John Moubray introduced the “review group” method for eliciting that knowledge. RCM shops found that the seven-question-and-answer process generated an adequate maintenance plan, and information technology sprang up to assist the RCM process.
How does RCM succeed in delivering an adequate maintenance plan? Partly through the magic of consensus within the framework of a systematic reasoning process. The CIA exploits this same idea in the field of geopolitical intelligence, having discovered its uncanny ability to arrive at good judgment. The CIA’s system grew out of an experiment called the “Good Judgment Project”, designed by three well-known psychologists together with people inside the intelligence community. Predictions made by ordinary people in the Good Judgment Project were, overall, better even than those made by intelligence analysts with access to classified information. People involved in the project were astonished by its success at making accurate predictions.[1]
With a little training from the people running the program, participants were given access to a website listing dozens of carefully worded questions on events of interest to the intelligence community. The project’s user interface provided a place to enter a numerical estimate of each event’s likelihood. The participants’ main tool for gathering the information needed to answer was Google.
How is it possible that a group of average citizens doing Google searches in their suburban town homes can outpredict members of the United States intelligence community with access to classified information?
For most of his professional career, Philip Tetlock, one of the psychologists behind the project, studied the problems associated with expert decision making. His book Expert Political Judgment is considered a classic, and almost everyone in the business of thinking about judgment speaks of it with unqualified awe. His studies brought Tetlock to at least two important conclusions.
First, if you want people to get better at making predictions, you need to keep score of how accurate their predictions turn out to be, so they have concrete feedback.
And second, if you take a large crowd of different people with access to different information and pool their predictions, you will be in much better shape than if you rely on a single very smart person, or even a small group of very smart people.
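Both conclusions are easy to make concrete. The sketch below scores probability forecasts with the Brier score, the accuracy measure used in forecasting tournaments such as the Good Judgment Project, and pools a small crowd by simple averaging; the forecasters and probabilities are invented for illustration:

```python
# A minimal sketch of Tetlock's two conclusions: keep score of
# forecasts against outcomes, and pool a crowd's predictions.
# The probabilities below are invented for illustration.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.
    Lower is better: 0.0 is a perfect forecast, 1.0 the worst possible."""
    return (forecast - outcome) ** 2

def pooled_forecast(forecasts: list[float]) -> float:
    """Pool a crowd's probability estimates by simple averaging."""
    return sum(forecasts) / len(forecasts)

# Three forecasters estimate the probability of the same event...
crowd = [0.90, 0.60, 0.75]
outcome = 1  # ...and the event occurs.

# Conclusion 1: scoring each forecast gives concrete feedback.
for p in crowd:
    print(f"forecast {p:.2f} -> Brier score {brier_score(p, outcome):.4f}")

# Conclusion 2: the pooled forecast scores at least as well as the
# crowd's average member (the Brier score is convex in the forecast).
pool = pooled_forecast(crowd)
print(f"pooled  {pool:.2f} -> Brier score {brier_score(pool, outcome):.4f}")
```

Averaging is the crudest possible pooling rule, yet because the Brier score is convex, the pooled forecast can never score worse than the average of its members’ individual scores, which is part of why a diverse crowd is so hard to beat.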
Living RCM (LRCM) exploits Tetlock’s discoveries by “crowdsourcing”[2] maintenance knowledge. The MESH™ LRCM tool set integrates unobtrusively into the everyday work order procedure, where it structures technicians’ observations and thoughts into the maintenance knowledge base, continuously refining the failure consequence mitigating strategy. The LRCM application then feeds the reliability impact of each contribution back to its originators, energizing a continuous improvement loop.
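To make that loop concrete, here is a hypothetical sketch of how a work order observation might be linked to the RCM knowledge base outlined earlier and how the reliability impact might be reported back; the class names, field names, and numbers are illustrative assumptions, not MESH’s actual schema or data:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class WorkOrderObservation:
    """A technician's observation captured during a routine work order
    and linked to a failure mode in the RCM knowledge base.
    (Hypothetical structure for illustration only.)"""
    work_order_id: str
    observed_on: date
    failure_mode_ref: str  # key of the FailureMode record it refines
    technician: str
    note: str              # free-text observation, structured at entry time

def feedback(observations: list[WorkOrderObservation],
             failures_before: int, failures_after: int) -> str:
    """Report the reliability impact back to the contributors,
    the 'keeping score' half of the continuous improvement loop."""
    who = sorted({o.technician for o in observations})
    return (f"{len(observations)} observations from {', '.join(who)} "
            f"preceded a drop from {failures_before} to {failures_after} "
            f"failures per year for this mode.")

# Invented example data: two technicians refine the same failure mode.
obs = [
    WorkOrderObservation("WO-1041", date(2014, 3, 2), "pump-7/bearing-seizure",
                         "J. Ramos", "slight vibration at coupling end"),
    WorkOrderObservation("WO-1177", date(2014, 5, 9), "pump-7/bearing-seizure",
                         "A. Chen", "grease discolored; relubricated"),
]
print(feedback(obs, failures_before=4, failures_after=1))
```

The design point is the closed loop: observations enter through the work order the technician already files, and the measured reliability impact flows back to the same people, giving them the concrete feedback Tetlock found essential.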
© 2014, Murray Wiseman. All rights reserved.
- [1] http://www.npr.org/blogs/parallels/2014/04/02/297839429/-so-you-think-youre-smarter-than-a-cia-agent
- [2] MESH™ LRCM is like the modern crowdsourcing phenomenon from which incredible discoveries in molecular science, genetics, and other fields have emerged. We use MESH in somewhat the same way to capture the knowledge and judgment of the people closest to the failure, its causes, and effects.