The necessity of quantitative estimation of the non-failure operation of complex technical structures at the beginning of the 1960s stimulated the development of the so-called logic and probabilistic calculus (LPC), a branch of mathematics that treats the rules of calculus operating with statements of two-valued logic. LPC is based on the algebra of logic and on rules for replacing logical arguments in functions of the algebra of logic (FAL) by the probabilities of their being true, and for replacing the logic operations by arithmetic ones.
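For independent logical arguments, these replacement rules take the following standard form (given here only as an illustration, with the notation \(p_i = P\{x_i = 1\}\) assumed rather than taken from the text):

\[
P\{x_1 \wedge x_2\} = p_1 p_2, \qquad
P\{x_1 \vee x_2\} = p_1 + p_2 - p_1 p_2, \qquad
P\{\bar{x}_1\} = 1 - p_1 .
\]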
In other words, with the help of LPC it became possible to connect Boolean algebra with probability theory not only for elementary structures but also for structures whose formalization results in FAL of iterated type (bridge, network, and monotone structures). This original "bridge of knowledge" includes a number of proven theorems, properties, and algorithms, which constitute the mathematical basis of LPC.
Investigation of the safety problem has resulted in the development of the original logic and probabilistic theory of safety (LPTS), which allows one to estimate quantitatively the risk of a system (as a measure of its danger) and to rank the contributions of separate arguments to the system danger (in the case of absence of truth probabilities of initiating events). The ranking of arguments by their contribution to the system reliability was proposed by me in 1976 in the monograph [Reliability of Engineering Systems. Principles and Analysis. Mir Publishers, Moscow, 1976, 532 p.] with the help of the concepts of the "Boolean difference," "weight," and "importance" of an argument. The aim of the author, from my point of view, is to connect the LPC, used in the field of technical systems, with questions of risk in economics and organizational systems.
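A minimal sketch of how two of these concepts can be computed for a small structure function (the function y = x1 AND (x2 OR x3) and all names below are our own illustration, not the monograph's notation):

```python
from itertools import product

def boolean_difference(f, i, args):
    """Boolean difference of f w.r.t. argument i:
    f(..., x_i = 1, ...) XOR f(..., x_i = 0, ...)."""
    hi, lo = list(args), list(args)
    hi[i], lo[i] = 1, 0
    return f(*hi) ^ f(*lo)

def weight(f, i, n):
    """Weight of argument i: the fraction of the 2**(n - 1) settings of
    the remaining arguments on which the Boolean difference equals 1."""
    count = sum(boolean_difference(f, i, bits)
                for bits in product((0, 1), repeat=n))
    return count / 2 ** n  # each setting of the others is counted twice

# Structure function y = x1 AND (x2 OR x3): x1 carries the largest weight.
y = lambda x1, x2, x3: x1 & (x2 | x3)
print([weight(y, i, 3) for i in range(3)])  # [0.75, 0.25, 0.25]
```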
Studying the works by the author, I realized that these economic and organizational systems differ essentially from technical ones, and the direct transfer of the knowledge and results of LPC from the area of engineering into the area of economics is not effective, and sometimes it is not even possible. It is likely that much time and effort will be needed before the new approaches in the LPC can make the same revolutionary breakthrough in the financial market as was made by George Boole in the development of inductive logic in the middle of the 19th century and by H. Markowitz in the choice of the optimal security portfolio with the help of the analytical theory of probabilities in the middle of the 20th century.
To the author's knowledge, the risk phenomenon in complex technical, economic, and organizational systems is neither completely understood scientifically nor resolved satisfactorily for the needs of applications, even though failures in complex systems occur rather often, with human victims and large economic losses. The risk management problem is topical and challenging; it forces us to carry out new investigations and to seek new solutions for the quantitative estimation and analysis of risk.
Risk is a quantitative measure of such fundamental properties of systems and objects as safety, reliability, effectiveness, quality, and accuracy. Risk is also a quantitative measure of the non-success of such processes and actions as classification, investment, designing, testing, operation, training, development, management, etc.
In the listed subject fields, we shall consider three different statements of mathematical problems of optimization in risk management; of interest will be the risk in problems of classification, investment, and effectiveness. Generally, the risk is characterized by the following quantitative parameters (gathered into a simple record in the sketch after this list):
The probability of non-success.
The admitted probability of non-success (admitted risk).
Maximum admitted losses or minimal admitted effectiveness.
Value of losses or the effectiveness parameter.
The number of different objects or conditions of the object in the system.
The number of dangerous objects or conditions of an object.
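Purely for illustration (the field names below are ours, not the author's notation), these parameters can be collected into a simple record:

```python
from dataclasses import dataclass

@dataclass
class RiskParameters:
    """Quantitative parameters characterizing risk (illustrative names)."""
    p_nonsuccess: float        # probability of non-success
    p_admitted: float          # admitted probability of non-success (admitted risk)
    max_admitted_loss: float   # maximum admitted losses (or minimal admitted effectiveness)
    loss: float                # value of losses or the effectiveness parameter
    n_conditions: int          # number of different objects or conditions of the object
    n_dangerous: int           # number of dangerous objects or conditions

    def is_acceptable(self) -> bool:
        """True if the risk stays within the admitted probability and losses."""
        return (self.p_nonsuccess <= self.p_admitted
                and self.loss <= self.max_admitted_loss)
```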
It was noted by the founders of many fields of modern science, John von Neumann and Norbert Wiener, that the behavior of complex technical, economic, and social systems cannot be described with the help of differential equations. However, the description can be made by logic and the set theory, instead of the theories of chaos, catastrophes, bifurcations, etc. (See the book by Morgenstern and von Neumann, "The Theory of Games and Economic Behavior," Moscow, Nauka, 1970, secs. 1.2.5 and 4.8.3.)
Analysis of the development of the theories of management and risk, and of the interaction between man and risk in complex systems, proves the correctness of this point of view. In complex human-machine systems, the logic and probabilistic theory (LP-theory) reveals considerable achievements in the estimation, analysis, and forecasting of risk (Ryabinin, 1976; Guding et al., 2001).
The attractiveness of the LP-theory lies in its exceptional clearness and unambiguity in quantitative estimations of risk; in a uniform approach to risk problems in economics and engineering; and in the great opportunities it offers for analyzing the influence of any element, including personnel, on the reliability and safety of the whole system. The risk LP-model may include the logic connections OR, AND, NOT between elements of the system, as well as cycles. Elements of the system under consideration may have several levels of conditions. The system risk dynamics can be taken into account by considering the variation in time of the probabilities of conditions.
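As a minimal sketch (the model, event names, and probabilities below are invented for illustration), the probability of non-success for a small LP-model with OR, AND, and NOT connections can be obtained by direct enumeration:

```python
from itertools import product

def lp_model_risk(logic_fn, probs):
    """P{logic risk function = 1} for independent binary initiating
    events, by enumeration over all 2**n states (fine for small n)."""
    risk = 0.0
    for state in product((0, 1), repeat=len(probs)):
        p_state = 1.0
        for x, p in zip(state, probs):
            p_state *= p if x else 1.0 - p
        if logic_fn(*state):
            risk += p_state
    return risk

# Invented model: non-success if event 1 occurs, OR if event 2 occurs
# AND protection 3 does NOT work (NOT is expressed as 1 - x3).
y = lambda x1, x2, x3: x1 | (x2 & (1 - x3))
print(lp_model_risk(y, [0.01, 0.05, 0.90]))  # -> 0.01495
```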
The basis for the construction of the scenario risk LP-management in complex systems is formed by the risk LP-theory; the methodology for the construction of scenarios and models of risk; the technology of risk management; and examples of risk modeling and analysis from various fields of economics and engineering.
In complex systems, the technology of the scenario risk LP-management is based on the risk estimation by the LP-model, the techniques of risk analysis, the schemes and algorithms of risk management, and the corresponding software. Generally, it is impossible to control risk without a quantitative analysis of risk, which allows us to trace the contributions of initiating events to the risk of the system. Estimation and analysis of risk, as well as finding optimal management, are carried out algorithmically, with calculations that are very time-consuming even for modern computers.
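One common way to trace such contributions, continuing the sketch above (the criterion of setting an event's probability to zero is our own illustration, not the book's algorithm), is to compare the system risk with each event made impossible:

```python
def contributions(logic_fn, probs):
    """Change in system risk when each initiating event is made
    impossible (p_i = 0); the sign shows whether the event drives
    risk up or, as with protections, keeps it down."""
    base = lp_model_risk(logic_fn, probs)
    deltas = []
    for i in range(len(probs)):
        reduced = list(probs)
        reduced[i] = 0.0
        deltas.append(base - lp_model_risk(logic_fn, reduced))
    return deltas

print(contributions(y, [0.01, 0.05, 0.90]))
```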
The risk LP-theory considered in the book unifies Ryabinin's LP-calculus and LP-method, Mojaev's methodology of automatized structural and logical modeling, and Solojentsev's risk LP-theory with groups of incompatible events (GIE). The LP-calculus is a special part of discrete mathematics, which should not be confused with probabilistic logic and other sections of mathematical logic. Therefore, it is useful to outline briefly the history of the publications on this subject. To the author's knowledge, the idea and development of the subject should be attributed to Russian authors. The contents and formation of LP-calculus originate from the work by I. A. Ryabinin "Leningrad scientific school of the logic and probabilistic methods of investigations of reliability and safety" (in the book: "Science of St. Petersburg and sea power of Russia," v. 2, 2002, pp. 798-812).
The LP-calculus was created at the beginning of the 1960s in connection with the necessity of quantitative estimation of the reliability of complex structures (annular, network, bridge-like, and monotone ones). The scientific literature of that time could suggest nothing suitable for dealing with the problem. The experts in reliability could perform calculations only for series, parallel, or tree-like structures.
In 1987, Kyoto University published the book by I. A. Ryabinin and G. N. Cherkesov "Logic and probabilistic methods of research of reliability of structural-complex systems" (M.: Radio and Communication, 1981, 264 p.) translated into Japanese. In that book, the set-theoretic and logic part of LP-calculus was advanced. In the new book "Reliability and safety of structural-complex systems" (SPb.: Polytechnika, 2000, 248 p.), Prof. I. A. Ryabinin generalized the 40-year experience of research on reliability and safety by the LP-calculus. There is a review of this book in English (Andrew Adamatzky, "Book reviews: Reliability and Safety of Structure-complex Systems," Kybernetes, Vol. 31, No. 1, 2002, pp. 143-155).
The present publications on the risk LP-theory and risk management do not represent the state-of-the-art in this field of science; they have a small circulation, and the knowledge is confined within a small group of experts. The risk LP-theory and such scientific disciplines as the LP-calculus, discrete mathematics, and combinatorial theory are not included, as a rule, in the educational programs of higher school. This places difficulties in the way of active mastering of the scenario risk LP-management in business, economics, and engineering. The publication of the present monograph, devoted to the scenario risk LP-management, seems to be well-timed.
The present book is of applied importance. Its purpose is to acquaint economists, engineers, and managers with the bases of the scenario risk LP-management, which includes: the risk LP-theory, the methodology of construction of risk scenarios, the technology of risk management, and examples of scenarios and models of risk in different fields of economics and engineering.
An important feature of the suggested presentation is the attempt to unify knowledge from different fields: discrete mathematics, combinatorial theory, and Weil's theorem; nonlinear optimization and algorithmic calculations; Monte Carlo modeling on modern computers; the LP-calculus (Ryabinin, 1976; Guding et al., 2001) and the LP-methods (Mojaev and Gromov, 2000; Ryabinin, 2000); the Markowitz and VaR theories for the risk of a security portfolio (Markowitz, 1952; Sharp, 2001); and the risk LP-theory with GIE (Solojentsev et al., 1999; Solojentsev and Alekseev, 2003).
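For instance, the Markowitz side of this unification rests on the familiar mean-variance formulas. A standard textbook sketch (the weights, returns, and covariances below are invented, and this is not code from the book):

```python
import numpy as np

# Illustrative two-asset portfolio (all numbers invented).
w = np.array([0.6, 0.4])                 # portfolio weights
mu = np.array([0.08, 0.12])              # expected returns
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])           # covariance matrix of returns

ret = w @ mu                             # expected portfolio return
var = w @ cov @ w                        # portfolio variance (Markowitz risk)
print(f"return={ret:.4f}, std={var ** 0.5:.4f}")
```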
The novelty and utility of the book consist in the following: for the first time, the basic principles of the modern risk LP-theory (the LP-calculus, the LP-methods, and the risk LP-theory with GIE) are stated in one work, using uniform methodology and terminology, with practical orientation toward use both in engineering and economics. With the permission of Prof. I. A. Ryabinin, some mathematical results and examples from his book (Ryabinin, 2000) are reproduced. The technology of the automated construction and analysis of LP-models of any complexity is presented following the works by Mojaev and Gromov (2000).
Since a correct and timely decision can have a significant impact on the personal and social life of humans, the need for a strong technique that can help a person in this area is quite tangible. One of the most effective of these techniques is the analytic hierarchy process (AHP), which was first introduced by Thomas L. Saaty in the 1970s. This technique is based on paired comparisons and allows managers to examine different scenarios. The process has been welcomed by various managers and users in light of its simple yet comprehensive nature, since, by comparing the criteria and sub-criteria two at a time, the results of the method come closer to actual reality. Based on this, and considering that any criterion or sub-criterion in this process has different utility at different levels, it is best to compare them two at a time according to the desirability of the criteria at each level. To test the results of this work, the technique is used to solve a problem that is available in this book.
The world around us is fraught with multi-criteria issues, and people are constantly forced to make decisions about them. For example, when choosing a job, there are various criteria, such as social status, creativity and innovation, and so on, and the decision maker must weigh the various options against these criteria. In large-scale decisions such as annual budget planning, experts pursue various goals, such as security, industrial development, training, etc., and would like to optimize these goals. In everyday life, there are many examples of decision making with multiple criteria.
In some cases, the result of decision making is so critical that an error may impose irreparable losses on us. Therefore, it is necessary to design appropriate techniques for optimal selection and decision making, so that the decision maker can come as close as possible to the best selection. The AHP method, based on the way the human brain analyzes complex and fuzzy problems, was suggested by a researcher named Thomas L. Saaty in the 1970s. The analytic hierarchy process is one of the most comprehensive systems designed for decision making with multiple criteria: this technique allows the problem to be formulated hierarchically; it can take different quantitative and qualitative criteria into account; it incorporates different options into the decision making; it allows sensitivity analysis over the criteria and sub-criteria; it rests on paired comparisons, which facilitate judgment and computation; and its measure of the consistency and inconsistency of the judgments is a distinctive advantage of this technique in multi-criteria decision making. The usual paired comparison between the criteria and the sub-criteria is linear: if the preference of element A over element B is always equal to n, then the preference of element B over element A will always be equal to 1/n, even though at various levels of element A the desirability of element B changes. In this research, we have tried to make a more accurate comparison of the criteria and sub-criteria according to utility theory, which is one of the most applicable theories in microeconomics, so that the relative weight of each criterion in a pairwise comparison is obtained with the use of the utility function.
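As a brief sketch of the classical AHP weighting step (the comparison values are invented, and this shows the standard reciprocal-matrix computation, not the utility-based variant proposed here):

```python
import numpy as np

# Reciprocal pairwise-comparison matrix for three criteria (invented values):
# entry [i, j] says how strongly criterion i is preferred over criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency index CI = (lambda_max - n) / (n - 1), compared with Saaty's
# random index (RI = 0.58 for n = 3) to give the consistency ratio.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print("weights:", w.round(3), "consistency ratio:", round(cr, 3))
```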
The concept of âoptimizationâ is commonly used both in rational decision theory and natural selection theory. While researchers broadly recognize the differences between the two theories, the differences are not widely emphasized. This chapter aims to stress the differences between the two theories, which warrant calling each concept by a different name: rationality optimization and selection optimization.
The term "rationality" connotes the discipline of economics, while the term "natural selection" connotes the discipline of evolutionary biology. The disciplinary boundary between economics and evolutionary biology, though, is irrelevant to the fault line that separates rationality optimization from selection optimization (Khalil, 2000). Biologists use, without explicitly stating so, the concept of rationality when they discuss the fitness of behavior. On the other hand, economists use, also without explicitly stating so, natural selection when they discuss market equilibrium. So, we need not make a comparison between economics and biology as disciplines, which would usually imply that they use separate conceptual frameworks. In any case, such comparisons have been undertaken (Hirshleifer, 1977; Hodgson, 2004, 2007). The focus of this chapter is, rather, on the concept of optimization and, in particular, on how far apart rationality optimization is from selection optimization.
It is imperative to emphasize the difference between rationality optimization and selection optimization. Once the difference is clarified, it is no longer easy for natural selection theory to explain the origin of rationality optimization (Khalil, 2007b). The basic difference is that natural selection operates at the level of the population as an unintended outcome, while rational decision operates at the level of the individual as an intended action.
In stressing the difference between rationality optimization and selection optimization, this chapter emphasizes the role of behavior and, hence, the development of the organism (ontogeny), which is not the same as the operation of natural selection. The neo-Darwinian theory of evolution, i.e., natural selection theory, has long ignored the role of ontogeny, which is greatly influenced by decision making. However, a new emphasis on the role of ontogeny in evolution has been spearheaded by the research program known as Evo-Devo (Müller and Newman, 2003). As indicated below, evolutionary economics associated with the work of Joseph Schumpeter, which stresses the learning and development of the firm, parallels this Evo-Devo approach in biology.
One payoff of highlighting the difference between the two kinds of optimization is showing that biologists have been employing the tools of rationality optimization in their analysis of animal behavior and ontogeny without being aware of doing so. Rationality optimization, which amounts to responsiveness to incentives or constraints, is nothing other than what biologists call "phenotypic plasticity." Biologists widely recognize phenotypic plasticity across all taxa and kingdoms. That is, rational decision making typifies the behavior of all organisms, viz., from plants to fungi and animals (Khalil, 2007a).