Jean-Pierre Signoret explains how GRIF was developed

Jean-Pierre Signoret has a passion for mathematical logic and a job well done. A leading figure in reliability engineering, Jean-Pierre Signoret continues to pass on his knowledge with the same level of enthusiasm through his books, conferences, training courses, as well as through his professional experience. Having played a fundamental role in the design of the GRIF software suite, he gives us the inside story on its remarkable development.

Interviewed by Emmanuelle LOURIA  

What were the reasons behind your move to TotalEnergies?

I joined Elf (now part of TotalEnergies, formerly Total) in 1981, after having spent 10 years working for the French Atomic Energy Commission (CEA), in the Probabilistic Safety Studies office (which became the IPSN and is now the IRSN, the Institute for Radiological Protection and Nuclear Safety).

At that time, the CEA was conducting predictive and preventive studies on nuclear facilities (power plants, submarines, etc.). Our main role was to evaluate the probability of accidents in power plants using techniques such as fault trees and event trees. To ensure our calculation assumptions were sufficiently pessimistic, we added constraint upon constraint, to the point of simulating situations that could not physically exist in the real world. I was getting a little fed up with it all. And then a request landed on my desk, subcontracted by Elf: a reliability analysis of drilling rigs handling acid gas (H2S) near Pau (Pyrénées-Atlantiques).

At that point, reliability analyses were not yet standard practice in the oil industry. I thought it would be a really interesting subject to explore. When the first study was complete, I asked if I could join the company. After a few interviews and tests, I was hired to work in the Reliability department of Elf DGEP.

Jean-Pierre SIGNORET was born on July 6th, 1947 in Clermont-Ferrand (France). After obtaining a double Master's degree in Nuclear Physics and in Electronics, Electrical Engineering and Automation at the University of Clermont-Ferrand, he joined the Military Affairs Department of the French Atomic Energy Commission (CEA) in 1972. There he discovered the discipline of "reliability".

For three years, he carried out reliability studies on the safety systems of nuclear submarines. In 1975, he joined the CEA's Probabilistic Safety Assessment Office (now part of IRSN) to work on the safety of nuclear power plants. In 1981, he joined the Reliability Department of the DGEP staff of Elf (now a subsidiary of TotalEnergies, formerly Total) as a Reliability Engineer, where he carried out operational safety studies for oil production systems and designed the calculation engines and modules of the GRIF software suite. Alongside these activities, he served as Vice-President of the Institute for Dependability (ISdF, now IMdR) from 1988 to 2002, as President of the European Safety and Reliability Association (ESRA) from 1999 to 2001, and as President of the AFNOR UF56 (Dependability) committee from 2009 to 2015. For more than 20 years he chaired the "Methodological Research" working group of the Institute for Dependability (ISdF) and then of the Institute for Risk Management (IMdR), and he chaired the program committee of the λμ12 national conference in 2000 (λ and μ, the failure and repair rates, are two of the emblematic parameters of dependability).
Since his retirement in 2009, he has been involved in standardization activities and remains active in this field (IEC, ISO, AFNOR). Within the framework of TotalEnergies Associate Professors (TPA), he continues to teach operational safety at various national and international universities. He is the author of numerous publications and well-known books in the field of reliability science.

Bibliography:
- J.-P. Signoret and A. Leroy, Le Risque technologique, PUF, 1991.
- J.-P. Signoret and A. Leroy, Reliability Assessment of Safety and Production Systems: Analysis, Modelling, Calculations and Case Studies, Springer, 2021.

Were reliability studies a new discipline for Elf? 

Not exactly. Studies had already been carried out, for example, for the northeast sector of the Grondin field (Gabon) in 1975. And when I started at Elf, we had two teams with six or seven reliability specialists, whose work was divided between processes and research. The Safety and Production Departments conducted their own independent reliability analyses, sometimes on the same systems. This type of organization had its limits: two studies of the same system could yield different results because they did not address the same aspects (safety versus availability, for example).

Over the years, a series of industrial accidents left their mark on the public consciousness and on the industry (in particular, the Piper Alpha explosion off Scotland, the Macondo blowout in the Gulf of Mexico and the sinking of the Prestige off the coast of Spain). Reliability analyses gradually became a fundamental part of risk management at production facilities.
As time went on, following several restructurings in which teams were merged and colleagues retired, I found myself managing most of the day-to-day business, before hiring some young recruits who are still working there today, after my own retirement in 2009.

How did you conduct reliability studies in the 1980s?

Mostly by writing out all our formulas and calculations on paper! Everything was done by hand. I developed the fault trees in the form of a binary structure (outlined by René-Louis Vallée in his treatise on binary analysis), to geometrically extract the possible failure scenarios. It was hard going but it worked!
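
To give a concrete flavour of that binary-structure idea, here is a minimal Python sketch (a brute-force illustration, not Vallée's actual technique or the original hand calculations): it enumerates every binary state vector of a toy fault tree's basic events and keeps the minimal combinations that trigger the top event. The event names and tree structure are invented for the example.

```python
from itertools import product

# Basic events of a toy fault tree (names and structure are invented
# for the example; this is not the system from the article).
EVENTS = ["pump_fails", "valve_fails", "power_loss"]

def top_event(state):
    # TOP = power_loss OR (pump_fails AND valve_fails)
    return state["power_loss"] or (state["pump_fails"] and state["valve_fails"])

# Enumerate every binary state vector and keep the combinations that
# trigger the top event (the "cut sets").
cut_sets = []
for bits in product([False, True], repeat=len(EVENTS)):
    state = dict(zip(EVENTS, bits))
    if top_event(state):
        cut_sets.append({e for e, failed in state.items() if failed})

# A cut set is minimal if no strict subset of it is itself a cut set.
minimal = [c for c in cut_sets if not any(other < c for other in cut_sets)]
print(minimal)  # [{'power_loss'}, {'pump_fails', 'valve_fails'}]
```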

Qualitative methods relied on the "engineer's judgment": using the intuition, skills and experience of individuals, as well as feedback from the field, to establish the different possible scenarios (breakdowns, accidents, maintenance, etc.). As a result, we began using inductive methods such as HAZOP (Hazard and Operability studies), taken from the chemical industry, to determine the effect of physical parameter drifts on the likelihood of breakdowns, and deductive methods, such as fault trees, to identify the accident/incident scenarios most likely to occur.

"In the 1980s, qualitative methods relied heavily on the “engineer’s judgment”: using the intuition, skills and experience of individuals, as well as feedback from the field, to establish the different possible scenarios (breakdowns, accidents, maintenance, etc.)."

"Stochastic Petri nets are ideal for modeling industrial system behavior, and Monte Carlo is the perfect computational method for simulating random variables."

Did Elf not have computers to help you perform more complex operations? 

In the 1980s, Elf had a big IBM 370 mainframe to perform calculations, as well as small HP-9830 machines, forerunners of the personal computer. Although these small machines were designed for data acquisition in the Group's laboratories, everyone was using them to perform calculations without the support of central IT services, and we were no different! Although it was quickly replaced by the more powerful HP-9845, the HP-9830 served as the basis for developing our first calculation algorithms.

The lengthy calculation times were extremely costly: a simple mistake could blow the budget for the whole year! So we had to get our calculations right before entering the data. While these machines helped us simulate qualitative and quantitative models, Elf still did not have adequate computing tools to process this data quickly, easily and comprehensively. That's when I began to invest more heavily in programming, to provide my department (and therefore the company) with tools and calculation engines specifically designed for reliability analysis.

Is Termites a distant relative of the ALBIZIA calculation engine, which has now been integrated into the GRIF software suite?

Yes, it is. But it was the discovery of Binary Decision Diagrams (BDD) that resulted in a qualitative leap in the modeling and processing of very large fault trees and achieved the goal for which Termites was designed.
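
For the curious, the principle behind that leap can be sketched in a few lines of Python. A BDD engine rests on the Shannon (pivotal) decomposition, Pr(TOP) = p_x · Pr(TOP | x failed) + (1 - p_x) · Pr(TOP | x working), applied variable by variable with shared subresults. In the sketch below, memoization over cut-set representations stands in for BDD node sharing; the fault tree and probability values are invented for the illustration, and this is not the Termites or ALBIZIA code.

```python
from functools import lru_cache

# Illustrative failure probabilities (hypothetical values, not from the article).
P = {"A": 0.01, "B": 0.02, "C": 0.005}
ORDER = ["A", "B", "C"]

def top_probability(cut_sets, order=tuple(ORDER)):
    """Exact top-event probability via Shannon (pivotal) decomposition,
    the same principle a BDD engine applies with subgraph sharing."""
    cut_sets = frozenset(frozenset(c) for c in cut_sets)

    @lru_cache(maxsize=None)  # memoization plays the role of BDD node sharing
    def rec(cs, depth):
        if not cs:                 # no cut set left: the top event cannot occur
            return 0.0
        if frozenset() in cs:      # an empty cut set: the top event is certain
            return 1.0
        x = order[depth]
        # Cofactor x = 1 (failed): drop x from the cut sets that contain it.
        hi = frozenset(c - {x} for c in cs)
        # Cofactor x = 0 (working): discard the cut sets that needed x.
        lo = frozenset(c for c in cs if x not in c)
        return P[x] * rec(hi, depth + 1) + (1 - P[x]) * rec(lo, depth + 1)

    return rec(cut_sets, 0)

# TOP = C OR (A AND B): cut sets {C} and {A, B}.
print(top_probability([{"C"}, {"A", "B"}]))
# ≈ 0.005199, i.e. exactly 1 - (1 - 0.005) * (1 - 0.01 * 0.02)
```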

I then developed Mark-EXD for performing probabilistic calculations on Markov models, based on the exponential of matrices. Considered at the time by the CEA as one of the worst possible approaches, this calculation method actually turned out to be the most efficient and effective solution for performing probabilistic calculations on production and safety systems.
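
In equation form, if Q is the transition-rate matrix of the Markov model and P(0) the initial state distribution, then P(t) = P(0)·exp(Qt). Here is a minimal sketch of that calculation for a two-state repairable component, with illustrative λ and μ values and SciPy's expm standing in for Mark-EXD's own exponential routine:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# Two-state Markov model of a repairable component (illustrative rates):
# state 0 = working, state 1 = failed.
lam, mu = 1e-4, 1e-2   # failure rate λ and repair rate μ, per hour

# Transition-rate (generator) matrix Q; each row sums to zero.
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

p0 = np.array([1.0, 0.0])   # the component starts in the working state
t = 1000.0                  # mission time, hours
pt = p0 @ expm(Q * t)       # state probabilities P(t) = P(0) · exp(Qt)

print(f"availability A({t:.0f} h) = {pt[0]:.6f}")
# Steady-state check: A(inf) = mu / (lam + mu) ≈ 0.990099
```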

Today, Termites and Mark-EXD represent the DNA of the ALBIZIA calculation engine, which is undoubtedly one of the most powerful engines available for static modeling needs (Boolean calculations). It is used in many GRIF modules.

How did you approach the development of the MOCA-RP engine for dynamic calculations?

For MOCA-RP, I wanted to combine stochastic Petri nets (SPNs) and Monte Carlo simulation. The idea came to me because SPNs are ideal for modeling industrial system behavior, and Monte Carlo is the perfect computational method for simulating random variables (failures, repairs, etc.) and animating these SPNs. This solution provides unrivaled flexibility and modeling power, opening up vast processing possibilities for large, complex (and not necessarily Markovian) systems. We have progressed from a few dozen FLOPS (Floating Point Operations Per Second) to giga- (10⁹), tera- (10¹²) and even petaFLOPS (10¹⁵). The days when random numbers had to be drawn by hand are long gone. The Monte Carlo calculation method is definitely proving its worth!
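
As an illustration of the combination (a toy sketch, not MOCA-RP itself), the Python below simulates the simplest possible stochastic-Petri-net-style model: one token alternating between an "up" place and a "down" place through exponentially timed transitions, replayed over many Monte Carlo histories to estimate mean availability. The rates, mission time and history count are assumptions chosen for the example.

```python
import random

# Minimal stochastic-Petri-net-style model: a single token alternating
# between an "up" place and a "down" place (illustrative parameters only).
LAM, MU = 1e-4, 1e-2    # failure and repair rates, per hour
MISSION = 10_000.0      # mission time, hours
N_HISTORIES = 20_000    # number of Monte Carlo histories

def one_history(rng):
    """Simulate one life history; return the fraction of time spent up."""
    t, up, uptime = 0.0, True, 0.0
    while t < MISSION:
        delay = rng.expovariate(LAM if up else MU)  # next transition firing
        delay = min(delay, MISSION - t)             # truncate at mission end
        if up:
            uptime += delay
        t += delay
        up = not up                                 # fire: the token moves place
    return uptime / MISSION

rng = random.Random(42)
estimate = sum(one_history(rng) for _ in range(N_HISTORIES)) / N_HISTORIES
print(f"mean availability ≈ {estimate:.4f}")  # analytic value ≈ 0.9902
```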

Why did you need to change these engines?

In the 1990s, the digital sector was completely disrupted when the first DOS and Unix operating systems arrived on the scene, leading to the widespread use of personal computers (PCs). Then, from the year 2000 onwards, came Windows (on PCs) and HP VUE, Sun Solaris and Linux (on workstations). To keep up with the latest trends, we converted all our equipment to these operating systems and switched from HP Basic and FORTRAN to C and C++. With help from Damien Ehret, an IT engineer still working for TotalEnergies today, we developed an interactive graphical interface to support the use of the various calculation engines. Dubbed GRIF, it would ultimately give its name to the entire software package. First developed in Le-Lisp, it is now programmed in Java for the current version of the GRIF software.

Licenses for calculation engines were granted at the end of the 1980s, but it was when Elf's activities were being integrated into the Total group that the first GRIF modules were marketed: BFiab, Tree, Reseda, Markov and Petri in 2005, and BStoK in 2006.

Did you design and program all these modules yourself?

All of them, from start to finish, for the algorithms of the original calculation engines! Throughout my career at Total, I found myself wearing two hats, combining my reliability skills with my computer programming skills. This was not always easy for the IT engineers, but I took advantage of the availability of the HP-9845s at first, and then of the arrival of PCs with DOS onto the mass market. Having tools that met real needs proved incredibly useful.

Over time, and before retiring, I worked closely with an internal team and with subcontractors such as Cyrille Folleau and Philippe Thomas from SATODEV to improve the suite's functions.

Which of the modules that you developed are you proudest of?

I think they're all great, because each meets a specific need for which I had to find a solution on my own in order to carry out my work as a reliability engineer. And I know that, in doing so, almost everything I learned during my studies (math and nuclear physics) helped me solve a difficult problem at one time or another. Mathematics is like a living language, with its (rich!) syntax and calculation methods that can constantly be adapted to help us understand and model the environment around us. It seems to me that this field is too often overlooked these days.

What is your view on the additional modules developed after your retirement (ETree, Bool, Petro, Risk, Flex and SIL)?  

Before leaving Total, I chose not to develop a SIL module to assess the integrity level of Safety Instrumented Systems (SIS). I thought that it was better to see a system's problems on paper, using the classic fault tree module, rather than through a model that had already been distorted. But demand was such that Stéphane Collas (who had taken over GRIF) developed an automated computerized solution for this purpose, which was incredibly successful as soon as it was launched.

All these modules have been designed to meet needs regularly identified by professionals in the field, who are increasingly required to conduct rapid reliability analyses integrating new parameters (standards, cost control, reduced carbon footprint, etc.). Meeting users’ needs is an integral part of the philosophy we adopted when carrying out the first work in 1982.

"Like any effective tool that stands the test of time, GRIF will continue to evolve and be adapted to meet the requirements of reliability analysis professionals."

What are your predictions for future key developments and challenges in reliability engineering?

Personally, I think that the standardization of definitions (in particular at ISO and IEC) is still a key issue, so that all professionals, whether reliability, safety, production and/or maintenance engineers, can model their systems on the same basis. All too often, there is a tendency to adapt the criteria determining a definition's scope of application, which results in semantic drift. However, in my view, the right definition should never change, especially when it comes to qualifying the fundamentals of our business: what is "availability", "risk" or "reliability"? Standardizing methods is also a good idea, provided that it doesn't run counter to our practices! And achieving this requires constant involvement in standardization committees.

I’ve also noticed the emergence of new reliability disciplines, such as predictive maintenance – where the aim is to intervene before a potential breakdown, but not so early as to impact the production chain – MBSA (Model Based Safety Assessment), the invasion of “intelligent” components, the rise in cybersecurity issues, and so on. This requires the introduction of new concepts, a whole new lexical field – to be defined with its own language – and new modeling and calculation analysis techniques. Nevertheless, after all this time, we still don’t have any global models that combine all the technological, human and computer aspects in a single tool. This makes it harder to anticipate opportunities for development.

Can you summarize GRIF in a few words?

I sincerely believe that GRIF is still one of the best industrial system modeling tools available on the market today. No other software features all the calculation methods that we have developed and integrated over the years. ALBIZIA is the only calculation engine to combine Markov models with fault trees, and MOCA-RP is one of the most powerful algorithms on the market. What's more, its intuitive interface is easy to use for all reliability engineers. I'm very proud to have created, through my contribution and with the support of the teams working with me, a tool now regarded as the benchmark by reliability engineers. Like any effective tool that stands the test of time, GRIF will continue to evolve and be adapted to meet the requirements of reliability analysis professionals.

Need more information? 

Damien EHRET, IT engineer at TotalEnergies, explains how he developed the interface for the GRIF software suite in collaboration with Jean-Pierre Signoret.

How did your work with Jean-Pierre Signoret on the GRIF software suite come about?

After training as a generalist engineer, I was hired by Elf in 1987. I was 25 years old at the time, working in the Artificial Intelligence department of the Jean Féger Scientific and Technical Center (CSTJF) in Pau.
I spent a lot of time designing rule-based engines. It was while involved in this activity that I had my first contact with Jean-Pierre Signoret's team. He was looking for someone to help him create a system that could generate Markov graphs from his descriptions, and the idea really appealed to me. After a few conversations, he gave me free rein to work on the project.

Asking a fellow employee working in an industrial setting for help designing software with a graphic interface is an unconventional approach. Did you have any external expertise to support you on this project, or did you have existing software to use as inspiration for the design of your interface?  

You mustn't forget what the working environment was like in the 1990s. PCs were in their infancy, and computers and workstations were not as widely used or as ergonomic as they are today. To my knowledge, no equivalent software had been developed in FORTRAN before, which meant we had to do everything ourselves. At Elf, many innovations were developed internally, providing IT and technical solutions to facilitate our activities, much more so than at Total, which already had a well-established culture of outsourcing when it took us over in 1999. Our engineers were more focused on the technical side. My work with Jean-Pierre Signoret was unique precisely because we designed a range of turnkey tools that could effectively be used to conduct high-quality reliability analyses.

[Photo: Damien Ehret]

“We weren’t aiming to create something revolutionary but to digitize the existing work environment.”

What was the situation when you arrived?

At first, all the calculations were done by hand. Then, with a text file detailing the nodes and links, Jean-Pierre entered all the calculation method descriptions into the HP-9845 machines and IBM servers, enabling the MOCA-RP and Mark-EXD engines to display the results.
The graphic part of the modeling and the calculation generation were kept as independent processes. We needed to design a graphical interface that would combine all the required functionalities and run both on UNIX workstations and on PCs with DOS.

How did you move the project forward?

We weren't aiming to create something revolutionary but to digitize the existing work environment. Jean-Pierre had designed a few models, which we discussed, and from which we aimed to develop a graphic interface that was as simple as possible, with a large area for viewing the system models and configuration options at the sides. The most important thing was to maintain the symbolism of reliability engineering jargon: for example, there was one symbol to define a node, another to represent a link between nodes, and so on. After some development and testing to link the calculation programs to the interface, we had created an operational tool.

[Illustration: evolution of the GRIF interface]

Who came up with the name of the “GRIF” software suite?

I did. At the time, collective brainstorming was not really standard practice. I produced some documentation for the graphic interface and then, as the project progressed, I named it GRIF (GRaphical Interface for reliability Forecasting). I suggested the name to Jean-Pierre, who immediately approved it. I never thought for a moment that GRIF would last so long or that the name would stick. It's funny, because we ended up keeping the name to refer to the GRIF graphical interface, when for me the calculation engines for Markov graphs and Petri nets integrated into the packages are the most important element. But people liked it, probably because the image speaks for itself. Today, I'm pleasantly surprised to see that GRIF has become the benchmark tool in industrial system modeling. So much so that companies cite the name of our software in job descriptions in order to recruit reliability engineers who already know how to use it.

 
