

MATHEMATICAL METHODS OF MANAGEMENT DECISION MAKING UNDER UNCERTAINTY

Kravchuk Alina Sergeevna

4th year student, Department of Economic Cybernetics, VNAU, Vinnytsia

Chernyak Natalia Ivanovna

scientific advisor, Ph.D., associate professor of VNAU, Vinnytsia

Introduction. At the present stage of development of market relations, with complex economic and informational connections between business entities, problems arise in enterprise management that depend on a significant number of external and internal factors, which change rapidly over time and affect the efficiency of the enterprise in different directions. In such conditions, when developing and making management decisions, it is necessary to take uncertainty into account, analyze it, and use appropriate decision-making models and methods.

Analysis of recent research and publications. The problems of developing and making management decisions under uncertainty are considered in the works of such domestic and foreign scientists as R. Ackoff, I. O. Blank, V. V. Vitlinsky, V. G. Vovk, A. K. Kamalyan, Yu. G. Lysenko, M. Mescon, D. O. Novikov, V. S. Ponomarenko, O. I. Pushkar, T. Saaty, H. Simon, E. A. Trakhtengerts, R. A. Fatkhutdinov, J. Forrester and others.

The purpose of the study is to examine a decision-making model under uncertainty based on the game-theoretic concept, using classical criteria for evaluating alternatives from a set of possible options.

The main results of the study. Uncertainty is a fundamental characteristic of the insufficient provision of the economic decision-making process with knowledge about a given problem situation. Uncertainty can be interpreted and detailed as unreliability and ambiguity.

To substantiate decisions under conditions of uncertainty, when the probabilities of possible scenarios are unknown, special mathematical methods have been developed within game theory. Game theory examines the interaction of individual decisions under certain assumptions about decision making under risk, the general state of the environment, and cooperative or non-cooperative behavior of other individuals. The goal of game theory is to predict the results of strategic and operational games when the participants do not have complete information about each other's intentions.

Let the information situation be characterized by the triple {X, Θ, F},

where X = {x1, x2, ..., xn} is the set of decisions (alternatives) of the control object,

Θ = {θ1, θ2, ..., θm} is the set of states of the uncertain economic environment,

F = {f(xj, θk)} is the evaluation functional (evaluation matrix) defined on X × Θ.

The quality of the decision made, as well as the methodology for making it, depends on the degree of awareness of the subject of management. From the point of view of the subject of management, the information situation means a certain degree of uncertainty about which of its states the environment will take at the time the decision is made.

Consider a classifier of information situations associated with environmental uncertainty:

I1: the first information situation is characterized by a given prior probability distribution on the elements of the set Θ of environment states;

I2: the second information situation is characterized by a given probability distribution with unknown parameters or environmental factors (the information is sufficient in volume; a hypothesis has been put forward about the class of functions to which the probability density belongs, and the available information is used to estimate the parameters that characterize this class of functions);

I3: the third information situation is characterized by a given system of linear or nonlinear relations on the elements of the prior distribution of environment states.

Within the first to third information situations, under environmental uncertainty and risk, the Bayes, modular, minimum-variance, Germeier and maximax criteria are used in making effective decisions.

I4: the fourth information situation is characterized by an unknown probability distribution on the elements (parameters, factors, etc.) of the set of environment states; in such a situation it is advisable to use the Jaynes and Laplace criteria;

I5: the fifth information situation is characterized by antagonistic interests of the environment; in the decision-making process, alternatives are assessed according to the Wald and Savage criteria;

I6: the sixth information situation is intermediate between I1 and I5; in the decision-making process, alternatives are assessed according to the Hurwicz and Hodge-Lehmann criteria.

The given informational situations are global characteristics of the degree of uncertainty of states from the point of view of the subject of management.

Let the functional F be given in its positive ingredient, F = F+ (optimization of utility, gain, profitability, or the probability of achieving a goal), i.e. the best decision delivers

max_j f+(xj, θk), (1)

and in its negative ingredient, F = F- (optimization of cost, damage, or risk), i.e. the best decision delivers

min_j f-(xj, θk). (2)

The risk function for the implementation of a certain strategy is defined as a linear transformation of the positively or negatively given ingredient of the functional F into relative units of measurement of its components.

So, for a certain information situation and a fixed state of the environment θk, the value of risk equals

rjk = max_i f+(xi, θk) − f+(xj, θk) for F = F+, and rjk = f-(xj, θk) − min_i f-(xi, θk) for F = F-.

Thus, risk is defined as the difference between the result attainable with accurate data on the state of the environment and the result that can be achieved when the data on the state of the environment are not defined.

The choice among the alternatives is carried out, for example, under the conditions of information situations I1 - I6, according to the following criteria:

x* = arg max_j min_k f+(xj, θk) (Wald criterion); (3)

Wald's criterion expresses a position of extreme caution. This property allows us to consider this criterion as one of the fundamental ones.

x* = arg min_j max_k rjk (Savage criterion); (4)

The Savage criterion is quite often used in practice when making managerial decisions for a long period: for example, when allocating capital investments.

x* = arg max_j (1/m) Σk f+(xj, θk) (Laplace criterion); (5)

The Laplace criterion is used under the condition when the probabilities of possible states of systems are unknown, i.e. in conditions of complete uncertainty.

x* = arg max_j max_k f+(xj, θk) (maximax criterion); (6)

Using the maximax criterion, a strategy is determined that maximizes the maximum gains for each information situation.

x* = arg max_j min_k pk f(xj, θk), f(xj, θk) < 0 (Germeier criterion); (7)

Germeier's criterion is a criterion of extreme pessimism, taking into account the likelihood of environmental conditions.

The variables determine the amount of resources in terms of profit or expenses; therefore, knowing the price per unit of the resources offered for expenditure, it is possible to calculate the amount of profit or loss from implementing one or another strategy relative to the optimal alternatives.

If experts cannot determine (or doubt) the state of the internal resource environment in a certain period of its use relative to the behavior of the external environment for information situations I1 - I6, then the alternatives are evaluated according to all the criteria. The optimal alternative in this case is determined by the so-called voting method, the essence of which is to choose the alternative for which the largest number of experts voted.
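As a sketch of how these criteria and the voting method combine, the following fragment evaluates a hypothetical payoff matrix (all numbers, and the Hurwicz coefficient alpha = 0.5, are illustrative and not taken from the text) under the Wald, maximax, Laplace, Savage and Hurwicz criteria and picks the alternative chosen most often:

```python
# Sketch: evaluating alternatives under classical uncertainty criteria on a
# hypothetical payoff matrix F (rows = alternatives x_j, columns = states
# theta_k). All numbers, and the Hurwicz alpha, are illustrative.

F = [
    [2, 9, 1],   # alternative x1
    [4, 4, 4],   # alternative x2
    [7, 1, 6],   # alternative x3
]

def wald(F):      # maximin: best worst-case payoff
    return max(range(len(F)), key=lambda j: min(F[j]))

def maximax(F):   # best best-case payoff
    return max(range(len(F)), key=lambda j: max(F[j]))

def laplace(F):   # states assumed equally likely
    return max(range(len(F)), key=lambda j: sum(F[j]) / len(F[j]))

def savage(F):    # minimax regret
    cols = range(len(F[0]))
    col_max = [max(row[k] for row in F) for k in cols]
    return min(range(len(F)),
               key=lambda j: max(col_max[k] - F[j][k] for k in cols))

def hurwicz(F, alpha=0.5):  # blend of pessimism and optimism
    return max(range(len(F)),
               key=lambda j: alpha * min(F[j]) + (1 - alpha) * max(F[j]))

# "Voting method": the alternative chosen by the most criteria wins.
votes = [wald(F), maximax(F), laplace(F), savage(F), hurwicz(F)]
best = max(set(votes), key=votes.count)
print(votes, best)   # [1, 0, 2, 0, 0] 0
```

Note how the criteria disagree (Wald prefers the safe alternative x2, Laplace the highest average x3), which is exactly why the text recommends combining them rather than relying on any single one.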

Conclusions. Uncertainty is an irremovable quality of the market environment, caused by the influence of a large number of factors of different nature and direction that cannot be jointly estimated or measured. When forming a management decision under uncertainty, using just one of the above criteria is not enough for a rational choice, since it can lead to significant economic, social and other losses. It is necessary to take the time factor into account, combine the criteria with each other, and test the criteria on already known situations to check the reliability of the results obtained. It is also advisable to combine the application of these criteria with the method of expert assessments.

List of references:

1. Arefieva O. V., Mikhailenko V. M., Goryacha O. L. Models of making economic and organizational decisions to improve the efficiency of using production potential and criteria for the feasibility of its use // Problems of Information Technologies. 2007. No. 1. P. 14-23.

2. Vitlinsky V. V., Verchenko P. I., Sigal A. V., Nakonechny Ya. S. Economic Risk: Game Models: Textbook / Ed. by Dr. of Econ. Sciences, Prof. V. V. Vitlinsky. Kyiv: KNEU, 2002. 446 p.

3. Klimenko S. M., Dubrova O. S. Justification of Economic Decisions and Risk Assessment: Teaching Manual for Self-Study. Kyiv: KNEU, 2006. 188 p.

4. Levikin V. M., Kapustin M. G. Influence of information technologies on the reengineering of business processes of an enterprise // New Technologies. 2005. No. 3 (9). P. 73.

5. Petrov E. G., Sokolova N. A., Filipskaya D. I. Management of the functioning and development of socio-economic systems in conditions of uncertainty // Bulletin of the Kherson National Technical University. 2007. Issue 27. P. 156-159.

Among the various methods of making economic decisions, the most common are: mathematical programming; game theory; the theory of statistical decisions; queuing theory; the method of cause-and-effect analysis; and the use of models such as decision trees.

Mathematical programming comprises theoretical principles and analytical methods for solving problems in which the extremum (minimum or maximum) of a certain function is sought in the presence of restrictions imposed on the unknowns. A special place in mathematical programming is occupied by linear programming, which is the most developed and most widely used in practice. Linear programming includes analytical methods for solving problems in which the objective function and constraints are expressed in linear form, that is, the unknowns entering the objective function and constraints must be of the first degree. Problems in which the maximum or minimum value of a linear function is sought under linear constraints are called linear programming problems.

Depending on the type of the objective function and the system of constraints, mathematical programming methods are divided into:

linear programming - the objective function and the constraint functions included in the constraint system are linear (first-degree equations);

nonlinear programming - the objective function or at least one of the constraint functions is nonlinear (a higher-degree equation);

integer (discrete) programming - at least one variable is subject to an integrality condition;

dynamic programming - the parameters of the objective function and/or the system of constraints change over time, the objective function has an additive or multiplicative form, or the decision-making process is multi-step in nature.

Depending on how much information about the process is known in advance, the methods of mathematical programming are divided into:

stochastic programming - not all information about the process is known in advance: the parameters included in the objective function or in the constraint functions are random, or decisions have to be made under risk;

deterministic programming - all information about the process is known in advance.

Depending on the number of objective functions, problems are divided into:

single-criterion;

multi-criteria.

Linear programming combines the theory and methods for solving a class of problems in which a set of values of variables is determined that satisfies given linear constraints and maximizes (or minimizes) some linear function. That is, linear programming problems are optimization problems in which the objective function and the functional constraints are linear functions of variables that take any values from a certain admissible set.

Numerous solution methods and corresponding software have been developed for linear programming problems in various situations. Several methods are used to solve them, among which the simplex method and the graphical method are the most common.

The most convenient method for solving such problems is the simplex method, which, starting from an initial feasible solution, obtains the optimal one in a certain number of steps. Each of these steps (iterations) consists in finding a new solution that corresponds to a larger (when maximizing) or smaller (when minimizing) value of the linear objective function than the previous one. The process is repeated until the optimal solution, which has the extreme value, is obtained.
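The idea behind the graphical method for two variables can be sketched in code: the optimum of a linear objective over linear constraints is attained at a vertex of the feasible polygon, so it suffices to enumerate the intersections of constraint boundaries. The problem data below are illustrative, not taken from the text:

```python
# Sketch of the two-variable graphical method: enumerate intersections of
# constraint boundaries, keep the feasible ones, evaluate the objective.
# Illustrative problem: maximize 3x + 5y
# subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x >= 0, y >= 0.
from itertools import combinations

# Constraints in the form a*x + b*y <= c (non-negativity included).
cons = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Solve the 2x2 system given by two constraint boundaries (Cramer's rule)."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None                      # parallel boundaries
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)

vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])   # (2.0, 6.0) 36.0
```

This brute-force enumeration is only practical for two variables; for realistic problem sizes the simplex method walks between adjacent vertices instead of listing them all.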

Thus, we can assume that the optimal plan is the one that provides the maximum production effect for a given amount of material, raw materials, labor resources. The maximum production effect is determined by the optimization criterion, which determines the target function.

The most typical problems solved with the simplex method are: optimal planning at enterprises (planning the assortment of products), optimal raw-material mixes, efficient use of raw materials and material, labor, financial and energy resources, and problems of optimizing the organization of production (the transportation problem).

Optimization of the production program (assortment tasks) at enterprises is a group of tasks in which the production program is determined taking into account the influence on enterprises of internal factors (equipment capabilities, raw material limits, labor factors) and some external requirements (demand for marketable products as a whole or individual assortment groups and types, the average price of the assortment that is produced, etc.).

The main stages of formulating and solving the problem of optimizing the production program:

1) building an economic and mathematical model: collecting information, preparing it for building a model; choice of optimization criterion; selection of restrictions and their construction in general form; analytical and tabular view of the model with real coefficients;

2) finding the optimal solution to the problem;

3) analysis of solution results and practical recommendations.

In the optimal production plan, the selection of optimization criteria is carried out in accordance with the goal of solving the problem. The optimization criterion can be different cost and physical indicators. In addition to the goal function, the model uses constraints, since the resources available to the enterprise are in most cases limited, and the assortment output must be calculated taking into account the demand for products. The constraints are selected depending on the resources that are used to release the production program of the enterprise.

The efficiency of the problem and the optimality of the resulting assortment are assessed using systems of economic indicators (change in production volumes in physical and value terms, reduction in production costs, increase in profit and profitability, reduction of costs per 1 ruble of output, use of raw materials, etc.).

Game theory studies quantitative patterns in conflict situations. The main goal of game theory is to develop or quantitatively substantiate recommendations for choosing the most rational solution in conflict situations. In economic research, conflict situations are situations when it becomes necessary to choose a rational solution from two or more mutually exclusive options.

The theory of statistical decisions uses methods for studying processes and phenomena that are strongly exposed to random, uncertain factors; at the heart of this theory lies probability theory.

Queuing theory studies the patterns of queuing processes and, on their basis, develops effective methods for managing service systems. The methods of queuing theory make it possible to organize the service process rationally and to ensure the most efficient functioning of the queuing system (reducing the waiting time for service and reducing the cost of service). Queuing theory is based on probability theory and mathematical statistics.

A decision tree (also called a classification tree or a regression tree) is used in statistics and data analysis for predictive models. The tree structure contains the following elements: "leaves" and "branches". On the edges ("branches") of the decision tree, the attributes on which the objective function depends are written; a "leaf" contains a value of the objective function, and the other nodes contain the attributes by which the cases are distinguished. To classify a new case, one must go down the tree to a leaf and return the corresponding value. Such decision trees are widely used in data mining. The goal is to create a model that predicts the value of the target variable based on multiple input variables.

Each leaf represents a value of the target variable reached by moving from the root to that leaf. Each internal node corresponds to one of the input variables. The tree can also be "learned" by dividing the source variable sets into subsets based on testing the attribute values, a process that is repeated on each of the resulting subsets. The recursion terminates when a subset in a node has the same target variable values, so it adds no value for predictions. Top-down induction of decision trees (TDIDT) is an example of a greedy algorithm and is by far the most common decision tree strategy in data mining, but it is not the only possible one. In data mining, decision trees can be used as mathematical and computational methods to help describe, classify, and summarize a set of data, which can be written as follows:
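The root-to-leaf descent described above can be sketched directly; the tree, its attributes and labels below are hypothetical:

```python
# Sketch: classifying a case by walking a hand-built decision tree.
# Internal nodes test an attribute and branch on its value; leaves hold the
# target value. All attributes and labels are made up for illustration.

tree = {
    "attr": "outlook",
    "branches": {
        "sunny": {"attr": "humidity",
                  "branches": {"high": {"label": "no"},
                               "normal": {"label": "yes"}}},
        "overcast": {"label": "yes"},
        "rain": {"attr": "windy",
                 "branches": {True: {"label": "no"},
                              False: {"label": "yes"}}},
    },
}

def classify(node, case):
    """Descend from the root to a leaf and return the leaf's label."""
    while "label" not in node:
        node = node["branches"][case[node["attr"]]]
    return node["label"]

print(classify(tree, {"outlook": "sunny", "humidity": "normal"}))  # yes
print(classify(tree, {"outlook": "rain", "windy": True}))          # no
```

Each branch taken corresponds to one attribute test, so the cost of classifying a case is the depth of the path, not the size of the tree.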

The dependent variable Y is the target variable to be analyzed, classified and summarized. The vector x consists of the input variables x1, x2, x3, etc., which are used for this task.

In decision analysis, decision trees are used as a visual and analytical decision support tool, where the expected values \u200b\u200b(or expected utility) of competing alternatives are calculated.

The decision tree consists of three types of nodes.

1. Solution nodes - usually represented by squares.

2. Probability nodes are represented as a circle.

3. Closing nodes - are represented as a triangle.

In Fig. 4.1 below, the decision tree is read from left to right. A decision tree cannot contain cyclic elements; that is, each new leaf can only split further, and no paths converge. Thus, when constructing a tree by hand, we may face the problem of its dimension; therefore, as a rule, a decision tree is obtained using specialized programs. Usually a decision tree is represented as a symbolic diagram, which makes it easier to perceive and analyze.

Figure 4.1. Decision tree

Decision trees used in data mining are of two main types:

classification tree analysis, when the predicted result is the class to which the data belong;

regression tree analysis, when the predicted outcome can be considered a real number (such as the price of a house, or the length of a patient's stay in a hospital).

The above terms were first used by Breiman et al. The listed types have some similarities as well as some differences, such as the procedure used to determine where to split. Several methods make it possible to build more than one decision tree:

bagged decision trees, the earliest ensemble method, build multiple decision trees by repeatedly resampling the data with replacement and having the trees vote for a consensus prediction; the random forest classifier uses a number of decision trees to improve the classification rate;

boosted trees can be used for regression-type and classification-type problems;

rotation forest - an ensemble in which each decision tree is trained by first applying principal component analysis (PCA) to a random subset of the input features.

The general scheme for constructing a decision tree based on test examples is as follows (according to the algorithm in Fig. 4.2):

Figure 4.2. Decision tree construction algorithm

The main question is: how to choose the next attribute? There are different ways to select the next attribute:

The ID3 algorithm, where the choice of an attribute is based on information gain or on the Gini coefficient.

The C4.5 algorithm (an improved version of ID3), where the choice of an attribute is based on the normalized information gain (gain ratio).

The CART algorithm and its modifications: IndCART, DB-CART.

CHAID (Chi-squared Automatic Interaction Detector). Performs multi-level splits when computing classification trees.

MARS: extends decision trees to improve the processing of numerical data.
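The attribute-selection step of ID3 can be sketched as follows: for each attribute, compute the information gain (entropy of the labels minus the weighted entropy after the split) and pick the largest. The tiny dataset is made up for illustration:

```python
import math
from collections import Counter

# Sketch of ID3's attribute choice: pick the attribute with the highest
# information gain. The dataset below is illustrative.

data = [  # ({attributes}, label)
    ({"outlook": "sunny",    "windy": False}, "no"),
    ({"outlook": "sunny",    "windy": True},  "no"),
    ({"outlook": "overcast", "windy": False}, "yes"),
    ({"outlook": "rain",     "windy": False}, "yes"),
    ({"outlook": "rain",     "windy": True},  "no"),
]

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(data, attr):
    """Entropy reduction achieved by splitting the sample on `attr`."""
    base = entropy([y for _, y in data])
    counts = Counter(x[attr] for x, _ in data)
    remainder = sum(
        (count / len(data)) * entropy([y for x, y in data if x[attr] == value])
        for value, count in counts.items()
    )
    return base - remainder

gains = {a: information_gain(data, a) for a in ("outlook", "windy")}
best_attr = max(gains, key=gains.get)
print(best_attr)   # outlook
```

Here "outlook" wins because two of its branches become pure immediately; a full ID3 implementation would recurse on the remaining mixed subset.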

In practice, these algorithms often produce overly detailed trees which, when applied to new data, make many errors. This is due to the phenomenon of overfitting. Pruning is used to trim such trees.

Adjusting the depth of the tree is a technique that reduces the size of a decision tree by removing parts of the tree that contribute little to classification accuracy.

One of the questions that arises in the decision tree algorithm is the optimal size of the final tree. Thus, a small tree may not capture some important information about the sample space. However, it is difficult to tell when the algorithm should stop, because it is impossible to predict which node will be added to significantly reduce the error. This problem is known as the "horizon effect". Nevertheless, the general strategy of limiting the tree is preserved, that is, the removal of nodes is implemented in the event that they do not provide additional information.

It should be noted that adjusting the tree depth should reduce the size of the training tree model without reducing its prediction accuracy or through cross-validation. There are many methods for adjusting the depth of the tree, which differ in terms of performance optimization.

A tree can be pruned from top to bottom or from bottom to top: top-down pruning starts at the root, while bottom-up pruning reduces the number of leaves. One of the simplest methods is reduced-error pruning: starting with the leaves, each node is replaced with its most popular class, and if the prediction accuracy is not affected, the change is kept.

When making decisions, the manager can use one of the above methods. The best decisions are made by the group. The effectiveness of group decisions largely depends on the leader. Taking into account the skills, character and mood of the leader, his pedagogical abilities, attention to people and other qualities, psychologists distinguish five types of leaders: dictator, democrat, pessimist, organizer and manipulator.

A method based on a scientific and practical approach requires the use of modern technical means and, first of all, computers.

In general, the problem of the manager's choice of a solution is one of the most important in modern science and management practice.

Information systems specialists believe that the state of any control object can be characterized by some uncertainty, or entropy, H0 = −log P0, which acts as an information potential causing the transition of the system to another state, that is, the onset of an event whose probability equals P0.

In practice, the goal of any manager is to change the state of the system, that is, to exert an impact that brings it to a new stable state (event) with probability P_st, corresponding to a different value of the information potential, H_st = −log P_st, where P_st is the probability of the event resulting from the control action applied to the system.
Then we can assert that the essence of control carried out by the source of information (the manager) can be characterized by some information tension

ΔH_control = H0 − H_st = log(P_st / P0) = ΔJ_control, (4.11)

i.e. ΔH_control ≈ ΔJ_control.
Thus, managers involved in production activities are a source of control information. This should be understood as follows: the head of a human-machine complex or OTS must have a potential (a source of information tension) equal to the logarithm of the ratio of the probability P_st of the system passing into a stable state, whose functioning requires no additional impact on the control object, to the probability P0 of a correctly made decision. As another example, let the vice-rector for information be the source of control information for all computing departments, having an information tension determined by the probability of fulfilling the informatization plan of UlSTU without additional funds.

From the above it follows that the information voltage ΔH of the source can be both positive and negative. If P_st = P0, then the voltage of the source equals zero (ΔH = 0), and the role of the leader in management is insignificant; that is, he does not control the process.
It is now important that we can move from a meaningful description of the control process to a mathematical one; for this it is necessary to choose a unit of measurement of the information potential, identifying the formal description of entropy with information entropy. Depending on the choice of the base of the logarithm in (4.11), we arrive at the concept of "information entropy", which is measured in bits.

Many authors identify informational entropy with thermodynamic entropy, which actually corresponds to physical reality. In our case, bits can be used to measure the information voltage only if binary logarithms are used, as suggested in the cited work. However, information voltage should not be confused with information itself, which is also measured in bits; this distinction is essential.
To be convincing, consider an example. Let us calculate the information tension possessed by the computer security system in the laboratories of the IC MF. Let the most important object be the information server of the MF, which stores all information; if it is destroyed, the entire educational process of the faculty is disrupted. Suppose that the operation to destroy the server is carried out by two people, one of whom manages to escape when the alarm is triggered. In this case, unable to arrest both intruders, guards who have no operational communication with each other will capture one of the intruders with a probability equal to 0.5 (P0 = 0.5). If the actions of the guards are coordinated with each other, they neutralize this subject with a probability equal to 1 (P_st = 1). Then we have ΔH = log2(P_st / P0) = log2 2 = 1 bit, so the voltage of the information source (the guard system) is 1 bit.
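The guard example can be checked with a short calculation of the information tension ΔH = log2(P_st / P0); the function name below is ours, not the source's:

```python
import math

# Sketch: information tension dH = log2(P_st / P0), per formula (4.11)
# with base-2 logarithms, so the result is in bits.

def information_tension(p0, p_st):
    """Tension of a control source raising the success probability p0 to p_st."""
    return math.log2(p_st / p0)

# Guard example: uncoordinated capture probability 0.5, coordinated 1.0.
print(information_tension(0.5, 1.0))   # 1.0 bit
# If control does not change the probability, the tension is zero.
print(information_tension(0.5, 0.5))   # 0.0
```

The second call illustrates the text's point that a manager who leaves P_st = P0 has zero information tension and effectively does not control the process.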
It should be noted that, according to the considered example, a source with a voltage of 1 bit is capable of transmitting an arbitrarily large amount of information to a control object, depending on the time at its disposal. It is also important to note that the information voltage of the source can change its value (that is, its sign) over time if the importance of achieving the goal differs at different points in time. Using the mathematical expressions describing the operation of automatic control systems, the alternating information voltage can be determined by the formula

ΔH_d = sqrt( (1/T) ∫0^T [log(P_st / P0)]² dt ) = σ(ΔH), (4.12)

which expresses the root-mean-square voltage σ(ΔH). For random changes in the essence of the signal x, one can use the expressions

ΔH0 = ∫−∞^∞ f(x) ΔH dx;  ΔH_d² = ∫−∞^∞ f(x) ΔH² dx,

where ΔH0 and ΔH_d are the average and effective values of the signal essence, and f(x) is the probability density of the distribution P of the event.
If ΔH = A sin(2πt / T), then according to (4.12) the effective value of the alternating information voltage is ΔH_d = A / √2, which is about 1.4 times less than the maximum instantaneous voltage value.
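The A/√2 effective value can be verified numerically by averaging the squared sinusoid over one period, as in (4.12):

```python
import math

# Numerical check of (4.12): the RMS (effective) value of the alternating
# information voltage dH = A*sin(2*pi*t/T) over one period equals A/sqrt(2).
A, T, N = 2.0, 1.0, 100_000

def dH(t):
    return A * math.sin(2 * math.pi * t / T)

# Midpoint-rule average of dH^2 over one period.
mean_sq = sum(dH((k + 0.5) * T / N) ** 2 for k in range(N)) / N
rms = math.sqrt(mean_sq)
print(rms, A / math.sqrt(2))   # both approximately 1.41421
```

For equally spaced samples over a full period the midpoint sum of sin² is exact, so the numerical RMS matches A/√2 to machine precision.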
This information, issued by the control source, that is, the control action, is sent to the executive bodies ("active elements") through the information load of the source, and then returns to the source through the feedback loop. Feedback is provided by the same elements as the direct path.

If the executive bodies are passive and have no memory, they are characterized only by information resistance (IR). It should be noted that IR is the time t of executing the control instruction.
More precisely, the IR of the system equals the time t_R of task execution from the moment the instruction is received to the receipt of the report on its implementation. At the same time, the time spent on making the decision itself, that is, on comprehending the wording, is the internal information resistance R_int of the information source (the manager), which is the inverse of the bandwidth I_max of the information source. Therefore, for systems without memory there is an information law similar to Ohm's law for an electrical circuit:
I = ΔH / R_L, (4.13)

where R_L = R_Σ − R_int is the information load resistance; R_Σ and R_int are the information resistance of the entire circuit and the internal resistance of the source, respectively; I is the information flow (current) in the load circuit.

With a single achievement of the goal, information I_c passes through the control system, numerically equal to the voltage of the information source:

I_c = I · R_L = ΔH = ΔJ_control. (4.14)
During long-term operation over time t, the information flowing through this circuit is

I_control = ∫0^t I dt = ∫0^t (ΔH / R_L) dt. (4.15)
It is important to understand that the effectiveness of management depends not on the amount of information, and not even on its quality, but on how much it contributes to the achievement of the goal, that is, on its value. Thus, the value of information must first of all be associated with the goal and with the accuracy of the task formulation. By the quality of information we mean the degree of its distortion, which depends on the elements of the information chain.
Thus, we can have a large flow of information, but if it does not contribute to the achievement of the goal and is not accurate, for example, due to distortion, it will therefore have no value.
Based on this methodology for calculating the amount of information circulating in the information chain, it is also possible to perform assessments of the quality of decisions made, which makes it possible to use classical mathematical estimation procedures to solve optimization problems.
Similar tasks are considered in the work.
It is known that any task becomes more specific when it is expressed in mathematical form. To set a mathematical problem that reflects the essence of the production of information work, one should add sufficient conditions to the necessary conditions set out above, namely:
be able to use the information assessment methodology in the current situation;
have a manager who is able to neutralize the destabilizing factors that affect a given probabilistic system.
The paper shows how probabilistic dynamic problems are represented in the form of deterministic ones, within which the objects under study are described by functions of many variables, and the varied parameters are their arguments. Thus, taking the IC as a probabilistic dynamical system, its model can be represented in the form of functions of several variables x \u003d x (x1, ..., xt), where x \u003d f (I); I - information.
In problems that do not require an exact solution, an approximate estimate of the object's state can be used, taking into account only the most important output indicator, for example the throughput f(x), i.e., efficiency. Then, denoting the remaining parameters by the functions φs(x), s = 1, 2, ..., m, we arrive at the problem of the optimal choice of the parameter vector x. This problem is a computational algorithm written in the form of an estimation and optimization procedure:
max f(x), x ∈ S,   (4.16)

S = {x : x ∈ X ⊂ Rⁿ, φs(x) ≤ bs, s = 1, 2, ..., m}.

We need to maximize the performance index f(x) on the set S given by the system of constraints formulated above. Here x belongs to the set S if x ∈ X, where X is some subset of the n-dimensional space Rⁿ, and the inequalities φs(x) ≤ bs are satisfied. Usually the set X defines restrictions on the admissible values of the varied parameters x, such as non-negativity conditions xj ≥ 0 or interval conditions on the xj. It is essential that, from the mathematical point of view, the formulated problem can also be interpreted as planning under uncertainty for a dynamical system. It then reduces to a probabilistic linear programming problem which, taking (4.16) into account, is written in the more convenient form:
max M_w [ Σ_{j=1}^{n} c_j(w) x_j ],   (4.17)

S = {x : x ∈ X, P( Σ_{j=1}^{n} a_sj(w) x_j ≤ b_s(w) ) ≥ L_s, s = 1, 2, ..., m},
where M_w denotes averaging over the random variable w, and Y is a function of f(x_j) that characterizes the most important indicator of the analyzed system, for example the capacity of the complex or its efficiency. In general, the averaging operator is written as

M_w(y(x, w)) = Y(x),

which defines the function Y(x) as the mathematical expectation of the random variable y(x, w). The function Y(x), given by the random variables φs(x, w), is probabilistic.
In formulas (4.16) and (4.17) the functions f(x) and φs(x) were specified algorithmically rather than analytically; we therefore operate with random variables, denoted f(x, w) and φs(x, w), so that in a more rigorous form we have

f(x) = M_w(f(x, w)),

φs(x) = M_w(φs(x, w)).   (4.18)

It should be noted that Y is a deterministic quantity, and c_j(w) is a coefficient of the objective function.
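A minimal numerical sketch of the deterministic estimation-and-optimization problem (4.16): maximize a performance index f(x) over a feasible set given by constraints. The particular objective, the single constraint, and the brute-force grid search are toy assumptions; a real problem of this kind would use a proper optimization solver.

```python
# Toy instance of (4.16): maximize f(x1, x2) = 3*x1 + 2*x2
# over S = {x >= 0 : x1 + x2 <= 4}, by brute-force grid search.
steps = 200
best_val, best_x = float("-inf"), None
for i in range(steps + 1):
    for j in range(steps + 1):
        x1, x2 = 4.0 * i / steps, 4.0 * j / steps   # non-negativity built in
        if x1 + x2 <= 4.0:                          # constraint phi(x) <= b
            val = 3.0 * x1 + 2.0 * x2               # performance index f(x)
            if val > best_val:
                best_val, best_x = val, (x1, x2)
print(best_x, best_val)                             # optimum at (4.0, 0.0)
```

Grid search scales poorly with dimension, but it makes the structure of (4.16) concrete: enumerate candidate parameter vectors, discard infeasible ones, keep the best feasible value of f.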
All random parameters included in (4.17) make it possible to take into account fluctuations (deviations) in the costs (z) of production (y), allowing for the late delivery of components, spare parts, software and hardware, and other random factors under which the system (computing complex) operates.
To satisfy the conditions of problems (4.16) and (4.17), the vector x must be chosen so that the random inequality Σ_{j=1}^{n} a_sj(w) x_j ≤ b_s(w) holds with probability equal to L_s; problem (4.17) can then be represented in the simpler form

f(x, w) = Σ_{j=1}^{n} c_j(w) x_j,

φs(x, w) = Σ_{j=1}^{n} a_sj(w) x_j − b_s(w),   (4.19)

where b_s(w) characterizes a set of random factors, for example, those depending on suppliers and consumers.
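One common way to handle a probabilistic program of the form (4.17) is sample-average approximation: estimate the expected objective and the chance constraint by Monte Carlo, then pick the best plan that meets the reliability level L. The distributions, the resource limit, and the candidate grid below are invented purely for illustration.

```python
import random

random.seed(0)

def evaluate(x, n=20000):
    """Monte Carlo estimates of E[c(w)*x] and P(a(w)*x <= b) for a plan x.
    Assumed toy data: c ~ N(5, 1), a ~ N(1, 0.1), resource limit b = 10."""
    obj_sum, feasible = 0.0, 0
    for _ in range(n):
        c = random.gauss(5.0, 1.0)   # random objective coefficient c(w)
        a = random.gauss(1.0, 0.1)   # random constraint coefficient a(w)
        obj_sum += c * x
        if a * x <= 10.0:
            feasible += 1
    return obj_sum / n, feasible / n

# Choose the best plan on a grid among those meeting reliability L = 0.95.
candidates = [0.5 * i for i in range(1, 25)]
feasible_plans = [x for x in candidates if evaluate(x)[1] >= 0.95]
best = max(feasible_plans, key=lambda x: evaluate(x)[0])
print(best)
```

Because the expected objective grows with x while the chance constraint tightens, the procedure selects the largest plan whose estimated reliability still clears the 0.95 threshold.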
Thus, the problem under consideration is probabilistic, because the conditions in which the complex exists and functions are uncertain and depend on many unforeseen circumstances not known to immediate management.
The problem as formulated makes it possible to link all the most important parameters into a system and to take into account the random factors that always exist in real practice.
This formulation of the problem allows one to abstract from the meaningful formulation and move on to building a mathematical control model using the theory of automatic control.
In order to solve this control problem in practice with a given quality of manufactured products, procedures for operational decision-making must be introduced that are easily adapted to the objective function. In this case the parameters x_i = f(I), that is, the execution of the plan x_i, can be replaced by the amount of processed information (I) using information chains.
Since solving the general mathematical control problem is not possible within the framework of this work because of its complexity, we represent it as a set of separate, simplest subproblems.
In practice, such a simplification of a complex task is achieved through preliminary coordination of the individual subtasks with those top-level managers whose competence covers their solution. Thus, we reduce the multifactorial problem to a one-step, deterministic one. On the other hand, in one-step decision-making problems it is not the magnitude and nature of the control action (H) that is determined, but the direct value of the state variable θ of the object that ensures achievement of the goal set for the IC; the higher-level manager is therefore not interested in how this problem will be solved, only in the end result. Consequently, for a specific lower-level manager the decision-making problem is considered given if it includes all the parameters needed to assess the state of the object at a given time (t). In this particular case the decision-making problem is considered deterministic provided that the state space of nature Θ, with a probability distribution defined for all u ∈ Θ, the space of solutions X, and the criterion of decision quality are determined. The relationship between these parameters will be called the objective function (F_q).
The objective function F_q, which expresses the goal explicitly, can be considered one of the most important output values of the control object; we denote it by (g). The objective function is then a scalar quantity that depends on the state of nature u and on the state of the control object. In mathematical form, the formulated problem can be represented as

g = q(x, u).

This is a mathematical model of a one-step deterministic decision-making problem. It is a triple of interrelated parameters, which can be written as the following dependency:

G = (X, Θ, q),   (4.20)

where q is a scalar function defined on the direct product of the sets (X × Θ), so that G = f(g).
The solution of this problem consists in finding an x* ∈ X that maximizes the function g, i.e., satisfies the condition

X* = {x ∈ X : q(x, u) = max}.   (4.21)

Here X = {x1, x2, ..., xm} is the list of planned activities of the IC, with m ≤ N, where N is the number of planned activities (tasks). There are several methods for solving a one-step problem.
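In the simplest discrete case, the one-step problem (4.21) is just a search for the activity with the largest value of the quality criterion for a fixed, known state of nature. The activity names and payoffs here are hypothetical.

```python
# One-step deterministic choice per (4.21): pick x* maximizing Q(x, u)
# for a fixed state of nature u. Payoffs are invented toy numbers.
Q = {"plan_A": 12.0, "plan_B": 17.5, "plan_C": 9.3}
x_star = max(Q, key=Q.get)   # the activity with the largest criterion value
print(x_star, Q[x_star])     # plan_B 17.5
```

For a finite list of m planned activities, this exhaustive comparison is itself one of the "several methods" for the one-step problem; the others become relevant when X is too large to enumerate.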
Representing the variable X as the amount of information I processed in the course of the computational work, we can write x = f(I) and use the informational method for evaluating decision-making. Therefore, if necessary, we may evaluate the activities of the information center in bits.
Based on systemic principles, we have tried to formalize the routine work of the head of the information department and place it on a scientific basis, presenting it as a management task, in order to increase the efficiency of decision-making under uncertainty.

Abstract

Mathematical Methods in Decision Making


From its inception, mathematics as a science has been a tool in the search for truth, so it can be argued that any mathematical operation, even the simplest, is a mathematical method of decision-making. Today, decision-making is understood as a special process of human activity aimed at choosing the best option (alternative) of action. Decision-making processes underlie any purposeful human activity: creating new equipment (machines, instruments, devices), designing new buildings in construction, organizing the functioning and development of social processes. Hence the need for guidance on decision-making that would simplify the process and make decisions more reliable. Besides an empirical perception of the situation and intuition, in today's complicated economic conditions managers need some foundation and a "proven guarantee" for the decision to be made, so formalization of the decision-making process inevitably becomes necessary. As a rule, important decisions are made by experienced people who are rather far from mathematics, and especially from its newer methods, and who fear losing more from formalization than they would gain.

Consequently, science is required to advise on optimal decision making. The time has passed when the right decisions were made "by touch", by the "trial and error" method. Today, to work out such a solution requires a scientific approach - the losses associated with errors are too great. Optimal solutions make it possible to provide the enterprise with the most favorable conditions for the production of products (maximum profit with minimum labor costs, material and labor resources).

Currently, the search for optimal solutions can be approached with sections of classical mathematics. For example, in mathematical statistics the section on "decision making" studies ways of accepting or rejecting a basic hypothesis in the presence of a competing hypothesis, taking the loss function into account. Decision theory develops the methods of mathematical statistics, namely methods for testing hypotheses. Different values of the losses attached to different hypotheses lead to results that differ from those of statistical hypothesis testing alone: the choice of a less probable hypothesis may turn out to be preferable if the losses from choosing it erroneously are smaller than the losses caused by an erroneous choice of the more probable competing hypothesis. Such problems are called problems of statistical decisions. To solve them, one must find the minimum of the risk function on the set of possible outcomes, i.e., solve a conditional extremum problem. As a rule, for such problems one can single out a goal and specify conditions, i.e., the restrictions under which they must be solved. Problems of this kind are dealt with in the branch of mathematics called "mathematical programming", which in turn is part of "operations research".
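The point that a less probable hypothesis can be the better choice when rejecting it erroneously is costly comes down to a two-line expected-loss comparison. All the priors and loss values below are assumed for illustration.

```python
# Choosing between hypotheses by expected loss rather than by probability alone.
# Hypothetical numbers: H0 is more probable, but wrongly rejecting H1 is costly.
p_h0, p_h1 = 0.7, 0.3
loss_if_pick_h0_but_h1_true = 10.0
loss_if_pick_h1_but_h0_true = 1.0

risk_pick_h0 = p_h1 * loss_if_pick_h0_but_h1_true   # expected loss of choosing H0: 3.0
risk_pick_h1 = p_h0 * loss_if_pick_h1_but_h0_true   # expected loss of choosing H1: 0.7
decision = "H1" if risk_pick_h1 < risk_pick_h0 else "H0"
print(decision)   # H1: the less probable hypothesis wins on expected loss
```

Even though H0 is more than twice as probable, the decision rule that minimizes expected loss picks H1, exactly as the text describes.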

The input data constitute a real problem: an arbitrarily formulated set of data about a problem situation. The first step in solving the problem is its formulation, bringing the data to a form convenient for building a model. A model is an approximate (descriptive) representation of reality. Then, using the constructed model, optimal solutions are sought and recommendations are issued.

The models can be divided into 2 large groups:

Deterministic models:

Linear programming;

Integer programming and combinatorics;

Graph theory;

Network flows;

Geometric programming;

Non-linear programming;

Mathematical programming;

Optimal control.

Stochastic models:

Queuing theory;

Utility theory;

Decision making theory;

Game theory and game modeling;

Search theory;

Simulation modeling;

Dynamic simulation.

When making decisions, it is necessary to find the optimum of a certain functional in deterministic or stochastic form. Two things should be noted. First, mathematical methods of decision-making for problems in different areas of human activity are beginning to interpenetrate: for example, optimization control problems become problems of mathematical (linear) programming when continuous variables are replaced by discrete ones, and the evaluation of the separating function in statistical decision-making methods can be carried out using linear or quadratic programming procedures. Second, the original numerical data obtained from measurements or observations in real decision-making problems are not deterministic but are more often random variables with known or unknown distribution laws; subsequent data processing therefore requires the methods of mathematical statistics, the theory of fuzzy sets, or possibility theory.

Mathematical methods in economics and decision making can be divided into several groups:

Optimization methods.

Methods that take uncertainty into account, primarily probabilistic-statistical methods.

Methods for constructing and analyzing simulation models.

Methods for analyzing conflict situations (game theory).

Optimization methods

Optimization in mathematics is the operation of finding an extremum (minimum or maximum) of an objective function in a certain region of a vector space bounded by a set of linear or nonlinear equalities (inequalities).

The theory and methods of solving the optimization problem is studied by mathematical programming.

Mathematical programming is a field of mathematics that develops theory, numerical methods for solving multidimensional problems with constraints. Unlike classical mathematics, mathematical programming deals with mathematical methods of solving problems of finding the best possible options.

Optimization problem statement

In the design process, the task usually is to determine the best, in some sense, structure or parameter values of objects. Such a task is called an optimization task. If the optimization consists in calculating optimal parameter values for a given structure of the object, it is called parametric optimization. The problem of choosing an optimal structure is structural optimization.

The standard mathematical optimization problem is formulated as follows: among the elements χ forming a set X, find an element χ* that delivers the minimum value f(χ*) of a given function f(χ). In order to formulate the optimization problem correctly, it is necessary to specify:

1. The admissible set — the set X;

2. The objective function — a mapping f: X → R;

3. The search criterion (max or min).


Then solving the problem means one of the following:

1. Show that the admissible set is empty (there are no feasible solutions).

2. Show that the objective function is not bounded from below on the admissible set.

3. Find an element χ* that delivers the minimum of f.

If the function to be minimized is not convex, the search is often limited to local minima and maxima: points χ* such that f(χ*) ≤ f(χ) everywhere in some neighborhood of χ* for a minimum, and f(χ*) ≥ f(χ) for a maximum.

If the admissible set coincides with the whole space, the problem is called an unconstrained optimization problem; otherwise it is a constrained (conditional) optimization problem.

Optimization methods classification

The general formulation of optimization problems covers a wide variety of their classes. The choice of method (and the efficiency of the solution) depends on the class of the problem. The classification of problems is determined by the objective function and by the admissible domain (given by a system of inequalities and equalities or by a more complex algorithm).

Optimization methods are classified according to optimization tasks:

1. Local methods:

converge to some local extremum of the objective function. In the case of a unimodal objective function, this extremum is unique and will be the global maximum / minimum.

2. Global methods:

deal with multi-extreme target functions. In a global search, the main task is to identify trends in the global behavior of the target function.

Currently existing search methods can be divided into three large groups:

1. Deterministic;

2. Random (stochastic);

3. Combined.

According to the criterion of the dimension of the admissible set, optimization methods are divided into one-dimensional optimization methods and multidimensional optimization methods.
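For the one-dimensional case with a unimodal objective, a classic derivative-free method is golden-section search, which shrinks the bracketing interval by a constant factor per step. This is a generic textbook sketch, not tied to any particular problem in the text.

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search: minimum of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):            # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                      # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0

x_min = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
print(x_min)   # ~2.0
```

Each iteration reuses one of the two interior evaluation points, which is what makes the golden ratio the optimal interval-splitting choice among fixed-reduction bracketing schemes.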

By the type of the objective function and the feasible set, optimization problems and methods for their solution can be divided into the following classes:

Optimization problems in which the objective function and the constraints are linear functions are solved by so-called linear programming methods.

Otherwise one deals with a nonlinear programming problem and applies the appropriate methods. In turn, two particular classes are distinguished:

if the objective function and the constraint functions are convex, the problem is called a convex programming problem;

if the variables are restricted to integer values, we are dealing with an integer (discrete) programming problem.

According to the derivative information they require, methods are divided into:

· Direct methods, requiring only evaluations of the objective function at the approximation points;

· Methods of the first order: require the calculation of the first partial derivatives of the function;

· Methods of the second order: they require the calculation of the second partial derivatives, that is, the Hessian of the objective function.

In addition, optimization methods are divided into the following groups:

Analytical methods (e.g., the Lagrange multiplier method and the Karush-Kuhn-Tucker conditions);

Numerical methods;

Graphic methods.

Depending on the nature of the set X, mathematical programming problems are classified as:

· Discrete programming problems (or combinatorial optimization) - if X is finite or countable;

· Problems of integer programming - if X is a subset of the set of integers;

· Nonlinear programming problems if the constraints or the objective function contain nonlinear functions and X is a subset of a finite-dimensional vector space.

If all the constraints and the objective function contain only linear functions, then this is a linear programming problem.
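Because a linear program attains its optimum at a vertex of the feasible polyhedron, a tiny two-variable instance can be solved by enumerating vertices directly. The particular constraints and objective form a standard textbook-style example chosen for illustration; real problems use a simplex or interior-point solver.

```python
from itertools import combinations

# Maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0.
# Each row (a, b, e) encodes a*x + b*y <= e; the last two rows are x, y >= 0.
A = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def intersect(r1, r2):
    """Intersection point of the two boundary lines a*x + b*y = e."""
    a, b, e = r1
    c, d, f = r2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None   # parallel boundary lines
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= e + 1e-9 for a, b, e in A)

vertices = [p for r1, r2 in combinations(A, 2)
            if (p := intersect(r1, r2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 5 * p[1])
print(best)   # (2.0, 6.0), objective value 36
```

Vertex enumeration grows combinatorially with the number of constraints, which is precisely why linear programming methods such as the simplex algorithm exist; here it just makes the geometry visible.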

In addition, sections of mathematical programming are parametric programming, dynamic programming, and stochastic programming.

Mathematical programming is used to solve optimization problems of operations research.

The way to find the extremum is completely determined by the class of the problem. But before obtaining a mathematical model, four stages of modeling must be carried out:

Determining the boundaries of the optimization system

We discard those connections of the optimization object with the outside world that cannot significantly affect the optimization result or, more precisely, those whose omission simplifies the solution.

Selecting controlled variables

We "freeze" the values of some variables (uncontrolled variables); the rest may take any values from the range of feasible decisions (controlled variables).

Defining constraints on controlled variables (equality and / or inequality).

Choosing a numerical optimization criterion (for example, a performance indicator)

We create an objective function.

Probabilistic statistical methods

The essence of probabilistic and statistical decision-making methods

How are the approaches, ideas and results of probability theory and mathematical statistics used in decision making?

The base is a probabilistic model of a real phenomenon or process, i.e. a mathematical model in which objective relationships are expressed in terms of probability theory. Probabilities are used primarily to describe uncertainties that need to be taken into account when making decisions. This refers to both unwanted opportunities (risks) and attractive ones ("lucky chance"). Sometimes randomness is deliberately introduced into a situation, for example, when drawing lots, randomly selecting units for control, holding lotteries or consumer surveys.

Probability theory allows the researcher to calculate other probabilities of interest. For example, from the probability of heads one can calculate the probability of getting at least 3 heads in 10 coin tosses. Such a calculation is based on a probabilistic model in which the tosses are described by a scheme of independent trials; in addition, heads and tails are equally likely, so the probability of each of these events is ½. A more complex model replaces the coin toss by checking the quality of a unit of production. The corresponding probabilistic model assumes that quality control of various items is described by a scheme of independent trials. Unlike the coin-tossing model, a new parameter must be introduced: the probability P that a unit of production is defective. The model is fully described if all items are assumed to have the same probability of being defective. If this assumption is incorrect, the number of model parameters increases; for example, each item may have its own probability of being defective.
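The "at least 3 heads in 10 tosses" probability mentioned above follows directly from the binomial model with success probability ½:

```python
from math import comb

# P(at least 3 heads in 10 tosses of a fair coin) under the binomial model
p_at_least_3 = sum(comb(10, k) for k in range(3, 11)) / 2 ** 10
print(p_at_least_3)   # 968/1024 = 0.9453125
```

Equivalently, 1 minus the probabilities of 0, 1, or 2 heads: 1 − (1 + 10 + 45)/1024.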

Let us discuss a quality control model with a common defectiveness probability P for all units of production. To obtain numerical results from the model, P must be replaced by a specific value. To do this, one must go beyond the probabilistic model and turn to data obtained during quality control. Mathematical statistics solves the problem inverse to that of probability theory: its purpose is to draw conclusions about the probabilities underlying the probabilistic model from the results of observations (measurements, analyses, tests, experiments). For example, from the frequency of occurrence of defective items during inspection, conclusions can be drawn about the probability of defectiveness (see Bernoulli's theorem). Based on Chebyshev's inequality, conclusions can be drawn about whether the observed frequency of defective items is consistent with the hypothesis that the probability of defectiveness takes a certain value.

Thus, the application of mathematical statistics rests on a probabilistic model of a phenomenon or process. Two parallel series of concepts are used: those related to theory (the probabilistic model) and those related to practice (the sample of observation results). For example, the theoretical probability corresponds to the frequency found from the sample, and the mathematical expectation (theoretical series) corresponds to the sample arithmetic mean (practical series). Typically, sample characteristics are estimates of theoretical ones. The values in the theoretical series "are in the heads of researchers", belong to the world of ideas (in the sense of the ancient Greek philosopher Plato), and are not available for direct measurement. Researchers have only sample data, with which they try to establish the properties of the theoretical probabilistic model that interest them.

Why is a probabilistic model needed? Only with its help can the properties established from the analysis of a particular sample be transferred to other samples, as well as to the whole so-called general population. The term "general population" refers to a large but finite population of units of interest, for example, all residents of Russia or all consumers of instant coffee in Moscow. The purpose of marketing or opinion polls is to transfer statements from a sample of hundreds or thousands of people to populations of several million people. In quality control, a batch of products acts as the general population.

To transfer conclusions from a sample to a wider population, one or another assumption is required about the relationship of the sample characteristics with the characteristics of this wider population. These assumptions are based on an appropriate probabilistic model.

Of course, sample data can be processed without using any probabilistic model: one can calculate the sample arithmetic mean, the frequency of fulfillment of certain conditions, and so on. However, the results of such calculations relate only to the specific sample; transferring the conclusions obtained with their help to any other population is incorrect. This activity is sometimes referred to as "data analysis". Compared to probabilistic-statistical methods, data analysis has limited cognitive value.

So, the use of probabilistic models based on the estimation and testing of hypotheses using sample characteristics is the essence of probabilistic-statistical decision-making methods.

We emphasize that the logic of using sample characteristics for making decisions based on theoretical models involves the simultaneous use of two parallel series of concepts, one corresponding to the probabilistic models and the other to the sample data. Unfortunately, in a number of published sources, usually outdated or written in a recipe spirit, no distinction is made between sample and theoretical characteristics, which leads readers to bewilderment and to errors in the practical use of statistical methods.

Application of a specific probabilistic-statistical method consists of three stages:

The transition from economic, managerial, technological reality to an abstract mathematical and statistical scheme, that is, the construction of a probabilistic model of a control system, a technological process, decision-making procedures, in particular based on the results of statistical control, and the like.

Carrying out calculations and obtaining conclusions by purely mathematical means within the framework of a probabilistic model.

Interpretation of the mathematical and statistical conclusions in relation to the real situation, and making the appropriate decision (for example, on the conformity or non-conformity of product quality with the established requirements, or on the need to adjust the technological process); in particular, conclusions on the proportion of defective units in a batch or on the specific form of the distribution laws of the controlled process parameters.

Mathematical statistics applies concepts, methods and results of probability theory. Next, we consider the main issues of constructing probabilistic models in various cases. We emphasize that for the active and correct use of normative-technical and instructive-methodological documents on probabilistic-statistical methods, preliminary knowledge is needed. So, you need to know under what conditions a particular document should be applied, what initial data are needed for its selection and application, what decisions should be made based on the results of data processing, and so on.

Let's consider a few examples when probabilistic-statistical models are a good tool for solving problems.

In Alexei Nikolaevich Tolstoy's novel "The Road to Calvary" (volume 1) we read: "The workshop gives twenty-three percent rejects," Strukov said to Ivan Ilyich, "and you hold on to that figure." How should these words in a conversation between plant managers be understood? A single unit of production cannot be 23% defective: it is either good or defective. Strukov probably meant that a large batch contains about 23% defective items. Then the question arises: what does "about" mean? Suppose 30 out of 100 tested units turn out to be defective, or 300 out of 1,000, or 30,000 out of 100,000... Should Strukov be accused of lying?

A coin used for drawing lots must be "symmetrical": on average, half of the tosses should come up heads and half tails. But what does "on average" mean? If you carry out many series of 10 tosses each, there will often be series in which the coin comes up heads exactly 4 times; for a symmetrical coin, this happens in 20.5% of the series. And if there are 40,000 heads in 100,000 tosses, can the coin be considered symmetrical? The decision procedure is based on probability theory and mathematical statistics.
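The 20.5% figure quoted above is just the binomial probability of exactly 4 heads in a series of 10 tosses of a symmetric coin:

```python
from math import comb

# P(exactly 4 heads in 10 tosses of a symmetric coin)
p_exactly_4 = comb(10, 4) / 2 ** 10   # 210/1024
print(round(p_exactly_4 * 100, 1))    # 20.5 (percent of series)
```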

The example may seem frivolous, but it is not. Drawing lots is widely used in organizing industrial technical and economic experiments, for example, when processing the results of measuring a quality indicator (the friction moment) of bearings as a function of various technological factors (the influence of the conservation environment, the methods of preparing bearings before measurement, the effect of the bearing load during measurement, and so on). Suppose we want to compare the quality of bearings depending on the results of their storage in different conservation oils. When planning such an experiment, the question arises which bearings to place in oil of one composition and which in another, in such a way as to avoid subjectivity and ensure the objectivity of the decision. The answer can be obtained by drawing lots.

A similar example can be given for quality control of any product. To decide whether a controlled batch of products meets the established requirements, a representative part is selected from it, and the entire batch is judged from this sample. It is therefore desirable that each unit in the controlled lot have the same probability of being selected. In a production environment, units are usually selected not by drawing lots but by special tables of random numbers or with the help of computer random number generators.

Similar problems of ensuring the objectivity of comparison arise when comparing various schemes for organizing production, remuneration, tenders and competitions, and the selection of candidates for vacant positions. A draw or similar measures are needed everywhere.

Suppose we need to identify the strongest and the second-strongest team in a tournament organized according to the Olympic system (the loser is eliminated), and suppose the stronger team always beats the weaker one. Clearly, the strongest team will certainly become champion. The second-strongest team will reach the final only if it plays no game against the future champion before the final; if such a game is scheduled, the second-strongest team will not make the final. Whoever plans the tournament can either "knock out" the second-strongest team early, pairing it with the leader in the first round, or secure it second place by arranging its meetings with weaker teams right up to the final. To avoid such subjectivity, a draw is carried out. For a tournament of 8 teams, the probability that the two strongest teams meet in the final is 4 out of 7; accordingly, with probability 3 out of 7 the second-strongest team leaves the tournament early.
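The 4-out-of-7 figure can be checked by enumerating the possible bracket positions of the two strongest teams: they meet in the final exactly when the draw puts them in different halves of an 8-team bracket.

```python
from itertools import permutations

# Slots 0-3 form one half of the bracket, slots 4-7 the other half.
meet_in_final = total = 0
for a, b in permutations(range(8), 2):   # slots of the two strongest teams
    total += 1
    if (a < 4) != (b < 4):               # different halves of the draw
        meet_in_final += 1
print(meet_in_final, total)              # 32 56 -> probability 4/7
```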

Any measurement of units of production (with a caliper, micrometer, ammeter, and so on) involves errors. To find out whether there are systematic errors, repeated measurements must be made of a unit whose characteristics are known (for example, a reference standard). It should be remembered that in addition to the systematic error there is also a random error.

The question arises of how to identify the systematic error by measurements. If we only note whether the error obtained during the next measurement is positive or negative, then this problem can be reduced to the one already considered. Indeed, let us compare the measurement with tossing a coin: a positive error - with heads, negative - tails (zero error with a sufficient number of scale divisions almost never occurs). Then checking the absence of a systematic error is equivalent to checking the symmetry of the coin.

So, the problem of checking for systematic error is reduced to the problem of checking the symmetry of a coin. The above reasoning leads to the so-called "sign criterion" in mathematical statistics.
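A small sketch of the sign criterion just described: count the positive measurement errors and ask how surprising that count would be for a "symmetric coin". The measurement counts in the example are invented for illustration.

```python
from math import comb

def sign_test_p_value(n_pos, n):
    """Two-sided sign test: probability, under H0: P(positive error) = 1/2,
    of a positive/negative split at least as unbalanced as the one observed."""
    k = max(n_pos, n - n_pos)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

# Hypothetical data: 9 of 10 repeated measurements had a positive error.
p = sign_test_p_value(9, 10)
print(round(p, 4))   # 0.0215 -> symmetry is doubtful, a systematic error is likely
```

A small p-value means such an unbalanced split of signs would rarely occur by chance for a symmetric "coin", so a systematic error is suspected.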

With the statistical regulation of technological processes, methods of mathematical statistics are used to develop rules and plans for statistical process control, aimed at the timely detection of disruptions in technological processes, at taking measures to adjust them, and at preventing the release of products that do not meet the established requirements. These measures reduce production costs and losses from the supply of substandard units. With statistical acceptance control, methods of mathematical statistics are used to develop quality control plans based on the analysis of samples from batches of products. The difficulty lies in correctly building the probabilistic-statistical decision-making models. For this purpose, mathematical statistics has developed probabilistic models and methods for testing hypotheses, in particular the hypothesis that the proportion of defective units of production is equal to a certain given number.

Game theory

Game theory is a mathematical method for studying optimal strategies in games. A game is understood as a process in which each of the participating parties (two or more) fights for its interests. Each side pursues its own goals and uses some strategy, which can lead to a win or a loss (the result also depends on the other players). Game theory makes it possible to choose the best strategy, taking into account ideas about the other players, their capabilities, and their possible actions.

Game theory is a branch of applied mathematics, more precisely, operations research. Most often, the methods of game theory are used in economics, a little less often in other social sciences - sociology, political science, psychology, ethics, jurisprudence and others. Since the 1970s, it has been adopted by biologists to study animal behavior and the theory of evolution. It is very important for artificial intelligence and cybernetics, especially with the manifestation of interest in intelligent agents.

Optimal solutions, or strategies, in mathematical modeling were proposed as early as the 18th century. The problems of production and pricing under oligopoly, which later became textbook examples of game theory, were considered in the 19th century by A. Cournot and J. Bertrand. At the beginning of the 20th century, E. Lasker, E. Zermelo, and E. Borel put forward the idea of a mathematical theory of conflicts of interest.

Mathematical game theory has its origins in neoclassical economics. The mathematical aspects and applications of the theory were first presented in the classic 1944 book by John von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior.

This area of mathematics has found some reflection in popular culture. In 1998, the American writer and journalist Sylvia Nasar published a book about the fate of John Nash, a Nobel laureate in economics and a scientist in the field of game theory; in 2001, the film A Beautiful Mind was made on the basis of the book. Some American television shows, such as Friend or Foe, Alias, and NUMB3RS, periodically reference the theory in their episodes.

J. Nash wrote his dissertation on game theory in 1949; 45 years later he received the Nobel Prize in economics. After graduating from the Carnegie Polytechnic Institute with two degrees, a bachelor's and a master's, Nash entered Princeton University, where he attended lectures by John von Neumann. In his writings, Nash developed the principles of "management dynamics". The first concepts of game theory analyzed antagonistic games, in which there are losers and winners who win at their expense. Nash developed methods of analysis in which all participants either win or lose. These situations are called a "Nash equilibrium", or "non-cooperative equilibrium": the parties use the optimal strategy, which leads to a stable equilibrium. It is beneficial for the players to maintain this equilibrium, since any change would worsen their situation. These works made a significant contribution to the development of game theory, and the mathematical tools of economic modeling were revised. Nash showed that A. Smith's classical approach to competition, where everyone is for himself, is not optimal: strategies are more optimal when everyone tries to do better for himself while doing better for others.

Although game theory initially considered economic models, it remained a formal theory within mathematics until the 1950s. From the 1950s on, attempts began to apply the methods of game theory not only in economics but also in biology, cybernetics, technology, and anthropology. During the Second World War and immediately after it, the military became seriously interested in game theory, seeing in it a powerful apparatus for the study of strategic decisions.

In the 1960s and 1970s, interest in game theory waned, despite the significant mathematical results obtained by that time. Since the mid-1980s, active practical use of game theory began, especially in economics and management. Over the past 20-30 years, the importance of game theory and interest in it have grown significantly; some areas of modern economic theory cannot be expounded without its application.

A major contribution to the application of game theory was the work of Thomas Schelling, the 2005 Nobel laureate in economics, The Strategy of Conflict. Schelling examines various "strategies" of behavior of the parties to a conflict. These strategies coincide with the tactics of conflict management and the principles of conflict analysis in conflictology (a psychological discipline) and in the management of conflicts in an organization (management theory).

In psychology and other sciences, the word "game" is used in senses other than the mathematical one. Some psychologists and mathematicians are skeptical about the use of the term in these other, earlier meanings. A cultural concept of the game was given by Johan Huizinga in Homo Ludens (articles on the history of culture): the author discusses the use of games in justice, culture, and ethics, and argues that the game is older than man himself, since animals also play. The concept of a game also appears in Eric Berne's Games People Play: these are purely psychological games based on transactional analysis. Huizinga's concept of play differs from the interpretation of play in conflict theory and in mathematical game theory.

Games are also used for training, in business cases and in the seminars of G.P. Shchedrovitsky, the founder of the organizational-activity approach. During Perestroika in the USSR, Shchedrovitsky conducted many games with Soviet managers. In terms of psychological intensity, these organizational-activity games (ODIs) were so strong that they served as a powerful catalyst for change in the USSR. There is now a whole ODI movement in Russia; critics point to the artificial uniqueness of ODIs. The Moscow Methodological Circle (MMK) became the basis of the ODI movement.

Mathematical game theory is now developing rapidly; dynamic games, in particular, are being studied. However, the mathematical apparatus of game theory is expensive, so it is applied to tasks that justify it: politics, the economics of monopolies and the distribution of market power, and so on. A number of well-known scientists have become Nobel laureates in economics for their contributions to the development of game theory as a description of socio-economic processes. Thanks to his research in game theory, J. Nash became one of the leading experts in the field of the Cold War, which confirms the scale of the tasks that game theory deals with.

Nobel laureates in economics for achievements in game theory and economic theory include: Robert Aumann, Reinhard Selten, John Nash, John Harsanyi, William Vickrey, James Mirrlees, Thomas Schelling, George Akerlof, Michael Spence, Joseph Stiglitz, Leonid Hurwicz, Eric Maskin, Roger Myerson, Lloyd Shapley, Alvin Roth, Jean Tirole.

Game presentation

Games are strictly defined mathematical objects. A game is formed by players, a set of strategies for each player, and an indication of the players' winnings, or payments, for each combination of strategies. Most cooperative games are described by a characteristic function, while the other kinds use the normal or the extensive form. The characteristic features of a game as a mathematical model of a situation are:

1. The presence of several participants;

2. Uncertainty in the behavior of the participants, associated with each of them having several options for action;

3. The difference (mismatch) of the participants' interests;

4. The interconnectedness of the participants' behavior, since the result obtained by each of them depends on the behavior of all participants;

5. The presence of rules of conduct known to all participants.

Extensive form

Figure: the game "Ultimatum" in extensive form.

Games in extensive, or extended, form are represented as a directed tree, where each vertex corresponds to a situation in which a player chooses his strategy. Each player is assigned a whole level of vertices. Payments are recorded at the bottom of the tree, under each leaf vertex.

The figure shows a game for two players. Player 1 moves first and chooses strategy F or U. Player 2 analyzes his position and decides whether to choose strategy A or R. Most likely, the first player will choose U and the second A (for each of them these are the optimal strategies); they will then receive 8 and 2 points respectively.

The extensive form is very descriptive: it is especially convenient for representing games with more than two players and games with sequential moves. If the participants make simultaneous moves, the corresponding vertices are either connected by a dotted line or enclosed in a solid outline.
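The reasoning about the game in the figure can be reproduced by backward induction over the tree. Only the (U, A) payoff (8, 2) is given in the text; the remaining payoffs below are hypothetical, chosen so that the players' optimal choices match the description:

```python
# Backward induction on a small extensive-form game tree.
# Leaves carry (player1_payoff, player2_payoff); internal nodes say who moves.
# The (U, A) payoff (8, 2) comes from the text; the other payoffs are hypothetical.

def solve(node):
    """Return (payoffs, path) that rational players reach from this node."""
    if "payoffs" in node:                      # leaf node
        return node["payoffs"], []
    player = node["player"]                    # 0 = player 1, 1 = player 2
    best = None
    for move, child in node["moves"].items():
        payoffs, path = solve(child)           # solve the subgame first
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [move] + path)
    return best

game = {
    "player": 0,
    "moves": {
        "F": {"player": 1, "moves": {"A": {"payoffs": (5, 5)},
                                     "R": {"payoffs": (0, 0)}}},
        "U": {"player": 1, "moves": {"A": {"payoffs": (8, 2)},
                                     "R": {"payoffs": (0, 0)}}},
    },
}

payoffs, path = solve(game)
print(path, payoffs)   # ['U', 'A'] (8, 2): matches the description above
```

With these numbers player 2 accepts (A) in both subgames, so player 1 compares 5 against 8 and chooses U, exactly as the text describes.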

Normal form of play

In the normal, or strategic, form, the game is described by a payoff matrix. Each side (more precisely, dimension) of the matrix corresponds to a player; the rows define the strategies of the first player, and the columns the strategies of the second. At the intersection of two strategies one can see the payoffs that the players will receive. In the example, if player 1 chooses the first strategy and player 2 chooses the second strategy, then at the intersection we see (−1, −1), which means that as a result of the move both players lost one point.

The players chose the strategies with the maximum result for themselves, but lost because they did not know the other player's move. Games in which the moves are made simultaneously, or in which, at least, the players are assumed not to know what the other participants are doing, are usually presented in normal form. Such games, in which the players lack information about the others' moves, are discussed below.
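Pure-strategy Nash equilibria of a game in normal form can be found by brute force over the payoff matrices. The matrices below are hypothetical, arranged so that the profile (first strategy, second strategy) yields (−1, −1) as in the example:

```python
# Finding pure-strategy Nash equilibria of a normal-form game by brute force.
# A[i][j] is player 1's payoff, B[i][j] is player 2's payoff (hypothetical
# numbers; the profile (0, 1) gives (-1, -1) as in the text's example).

def pure_nash(A, B):
    eqs = []
    rows, cols = len(A), len(A[0])
    for i in range(rows):
        for j in range(cols):
            best_row = all(A[i][j] >= A[k][j] for k in range(rows))
            best_col = all(B[i][j] >= B[i][l] for l in range(cols))
            if best_row and best_col:          # neither player wants to deviate
                eqs.append((i, j))
    return eqs

A = [[ 2, -1],
     [-1,  1]]
B = [[ 2, -1],
     [-1,  1]]
print(pure_nash(A, B))   # [(0, 0), (1, 1)]: two pure equilibria
```

A profile is an equilibrium exactly when each player's strategy is a best response to the other's, which is what the two `all(...)` checks express.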

Characteristic function

In cooperative games with transferable utility, that is, the ability to transfer funds from one player to another, it is impossible to apply the concept of individual payments. Instead, a so-called characteristic function is used, which determines the payoff of each coalition of players. In this case, it is assumed that the payoff of the empty coalition is zero.

The basis for this approach can be found in the book by von Neumann and Morgenstern. Studying the normal form of coalition games, they reasoned that if a coalition C forms in a two-sided game, then the coalition N \ C opposes it. This resembles a game for two players. But since there are many possible coalitions (namely 2^N, where N is the number of players), the payoff for C will be a certain characteristic value depending on the composition of the coalition. Formally, a game in this form (also called a TU-game) is represented by a pair (N, v), where N is the set of all players and v: 2^N → R is the characteristic function.

A similar form of representation can be applied to all games, including those without transferable utility. There are ways to convert any game from normal form to characteristic form, but conversion in the opposite direction is not possible in all cases.
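A TU-game (N, v) can be stored directly as a map from coalitions to their worth. As an illustration, the sketch below also computes the Shapley value, one standard solution concept for such games; the 3-player characteristic function is hypothetical:

```python
from itertools import permutations

# A TU-game (N, v): v maps each coalition (a frozenset of players) to its
# worth, with v(empty) = 0. The 3-player numbers below are hypothetical.
N = ("a", "b", "c")
v = {frozenset(): 0,
     frozenset("a"): 1, frozenset("b"): 1, frozenset("c"): 2,
     frozenset("ab"): 4, frozenset("ac"): 5, frozenset("bc"): 5,
     frozenset("abc"): 9}

def shapley(N, v):
    """Shapley value: each player's average marginal contribution over all
    orders in which the grand coalition can be assembled."""
    phi = dict.fromkeys(N, 0.0)
    orders = list(permutations(N))
    for order in orders:
        coalition = frozenset()
        for player in order:
            phi[player] += v[coalition | {player}] - v[coalition]
            coalition = coalition | {player}
    return {p: x / len(orders) for p, x in phi.items()}

print(shapley(N, v))   # the values sum to v(N) = 9 (efficiency)
```

The efficiency property, that the players' values sum to the worth of the grand coalition, is a quick sanity check on any such implementation.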

Application of game theory

Game theory, as one of the approaches of applied mathematics, is used to study the behavior of humans and animals in various situations. Initially, game theory developed within economics, making it possible to understand and explain the behavior of economic agents in various situations. Later, its field of application was extended to the other social sciences; game theory is now used to explain human behavior in political science, sociology, and psychology. Game-theoretic analysis was first used to describe animal behavior by Ronald Fisher in the 1930s (although even Charles Darwin used game-theoretic ideas without formal justification). The term "game theory" does not appear in Fisher's work, but the work is essentially done in the mainstream of game-theoretic analysis. The developments made in economics were applied to biology by John Maynard Smith in his book Evolution and the Theory of Games. Game theory is used not only to predict and explain behavior; attempts have been made to use it to develop theories of ethical or normative behavior. Economists and philosophers have applied game theory to better understand good (decent) behavior. Generally speaking, the first game-theoretic arguments explaining correct behavior were expressed by Plato.

Description and modeling

Originally, game theory was used to describe and model the behavior of human populations. Some researchers believe that by determining the equilibria of appropriate games they can predict the behavior of human populations in situations of real confrontation. This approach has recently been criticized for several reasons. First, the assumptions used in modeling are often violated in real life. Researchers may assume that players choose the behavior that maximizes their net benefit (the model of economic man), but in practice human behavior often does not meet this premise. There are many explanations for this phenomenon: irrationality, the modeling of discussion, and even different motives of the players (including altruism). Game theorists reply that their assumptions are analogous to similar assumptions in physics; even if they are not always fulfilled, game theory can serve as a reasonable idealized model, by analogy with the models of physics.

However, a new wave of criticism fell on game theory when experiments revealed that people do not follow equilibrium strategies in practice. For example, in the games "Centipede" and "Dictator", the participants often do not use the strategy profile that constitutes a Nash equilibrium. The debate about the significance of such experiments continues. On another view, a Nash equilibrium is not a prediction of expected behavior; it only explains why populations already in a Nash equilibrium remain in that state. How populations arrive at a Nash equilibrium remains an open question.

In search of an answer, some researchers have turned to evolutionary game theory, whose models assume limited rationality or irrationality of the players. Despite the name, evolutionary game theory deals with more than the natural selection of species: this branch examines models of biological and cultural evolution as well as models of learning.

Normative Analysis (Identifying Best Behavior)

On the other hand, many researchers view game theory not as a tool for predicting behavior, but as a tool for analyzing situations in order to identify the best behavior for a rational player. Since a Nash equilibrium consists of strategies that are best responses to the behavior of the other players, using the Nash equilibrium concept to select behavior seems quite reasonable. However, this use of game-theoretic models has also been criticized. First, in some cases it is advantageous for a player to choose a strategy that is not part of the equilibrium, if he expects that the other players will not follow equilibrium strategies either. Second, the famous Prisoner's Dilemma provides another counterexample: there, the pursuit of self-interest puts both players in a worse situation than if each had sacrificed his self-interest.
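The Prisoner's Dilemma argument can be checked mechanically. The payoff numbers below are the conventional illustrative ones, not taken from the text:

```python
# Prisoner's Dilemma in normal form (conventional illustrative payoffs).
# Strategies: 0 = cooperate (stay silent), 1 = defect (betray).
payoff = {
    (0, 0): (-1, -1),    # both stay silent: light sentence each
    (0, 1): (-10, 0),    # the lone cooperator is exploited
    (1, 0): (0, -10),
    (1, 1): (-8, -8),    # both defect
}

def is_nash(s1, s2):
    """True if neither player can do better by deviating unilaterally."""
    best1 = all(payoff[(s1, s2)][0] >= payoff[(a, s2)][0] for a in (0, 1))
    best2 = all(payoff[(s1, s2)][1] >= payoff[(s1, b)][1] for b in (0, 1))
    return best1 and best2

print(is_nash(1, 1))   # True: mutual defection is the Nash equilibrium
print(is_nash(0, 0))   # False, yet (-1, -1) beats the equilibrium's (-8, -8)
```

The check confirms the counterexample: the equilibrium outcome (−8, −8) is worse for both players than the non-equilibrium outcome (−1, −1).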

Cooperative and non-cooperative

A game is called cooperative, or a coalition game, if the players can unite in groups, taking on obligations to other players and coordinating their actions. In this it differs from non-cooperative games, in which everyone must play for himself. Recreational games are rarely cooperative, but such mechanisms are not uncommon in everyday life.

It is often assumed that cooperative games differ precisely in the ability of players to communicate with each other. In general, this is not true. There are games where communication is allowed, but the players pursue personal goals, and vice versa.

Of the two types, non-cooperative games describe situations in greater detail and produce more precise results, while cooperative games consider the process of the game as a whole. Attempts to combine the two approaches have yielded considerable results: the so-called Nash program has already found solutions to some cooperative games as equilibrium situations of non-cooperative games.

Hybrid games include elements of co-op and non-co-op games. For example, players can form groups, but the game will be played in a non-cooperative style. This means that each player will pursue the interests of his group, while at the same time trying to achieve personal gain.

Symmetric and asymmetric


Figure: an asymmetric game.


A game is symmetric when the players' corresponding strategies are equal, that is, yield the same payments. In other words, if the players exchange places, their winnings for the same moves will not change. Many of the two-player games under study are symmetric, in particular the "Prisoner's Dilemma", "Stag Hunt", and "Hawk-Dove". Examples of asymmetric games are "Ultimatum" and "Dictator".

In the example in the figure, the game may at first glance seem symmetric because of the similar strategies, but it is not: the second player's payoff under the strategy profiles (A, A) and (B, B) is greater than the first player's.
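The symmetry condition can be stated concretely: a game (A, B) is symmetric exactly when B is the transpose of A, so that swapping the players' roles leaves the payoffs unchanged. A minimal check, with hypothetical matrices for both cases:

```python
# A two-player game (A, B) is symmetric when B equals the transpose of A.

def is_symmetric(A, B):
    """True if B[j][i] == A[i][j] for all i, j."""
    if len(A) != len(B[0]) or len(A[0]) != len(B):
        return False
    return all(B[j][i] == A[i][j]
               for i in range(len(A)) for j in range(len(A[0])))

# Prisoner's Dilemma payoffs (illustrative numbers) form a symmetric game:
A = [[-1, -10], [0, -8]]
B = [[-1, 0], [-10, -8]]
print(is_symmetric(A, B))    # True

# Similar strategies, but player 2 earns more on the diagonal, as in the
# text's counterexample (hypothetical numbers): the game is asymmetric.
A2 = [[1, 0], [0, 1]]
B2 = [[2, 0], [0, 2]]
print(is_symmetric(A2, B2))  # False
```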

Zero-sum and non-zero-sum

Zero-sum games are a special kind of constant-sum games, that is, games in which the players cannot increase or decrease the available resources or the fund of the game. In this case, the sum of all winnings equals the sum of all losses on every move: in the payoff matrix of such a game, the numbers in each cell sum to zero. Examples of such games are poker, where one player wins the bets of all the others; reversi, where the opponent's pieces are captured; and plain theft.

Many games studied by mathematicians, including the already mentioned "Prisoner's Dilemma", are of a different kind: in non-zero-sum games, one player's gain does not necessarily mean another's loss, and vice versa. The outcome of such a game can be less than or greater than zero. Such games can be converted to zero-sum by introducing a fictitious player who "pockets" the surplus or makes up the shortfall of funds.
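The fictitious-player construction is mechanical: in every outcome, give the extra player the negative of the sum of the existing payoffs. A sketch with hypothetical Prisoner's Dilemma numbers:

```python
# Converting a non-zero-sum game into a zero-sum one by adding a fictitious
# player whose payoff offsets the surplus or shortfall in every outcome.

def add_fictitious_player(payoffs):
    """payoffs: dict mapping strategy profiles to payoff tuples."""
    return {profile: p + (-sum(p),) for profile, p in payoffs.items()}

# Prisoner's Dilemma (illustrative numbers): the cell sums are non-zero...
pd = {("C", "C"): (-1, -1), ("C", "D"): (-10, 0),
      ("D", "C"): (0, -10), ("D", "D"): (-8, -8)}

zero_sum = add_fictitious_player(pd)
# ...but after the conversion every cell sums to zero.
print(zero_sum[("C", "C")])   # (-1, -1, 2): the extra player "pockets" +2
```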

Another non-zero-sum game is trade, where every participant benefits. A well-known example where the total declines is war.

Parallel and sequential

In parallel games, the players move simultaneously, or at least are unaware of the others' choices until everyone has made his move. In sequential, or dynamic, games, the participants can make moves in a predetermined or random order, but as they do so they receive some information about the preceding actions of the others. This information may even be incomplete; for example, a player may learn that his opponent definitely did not choose the fifth of his ten strategies, while learning nothing about the others.

The differences in the presentation of parallel and sequential games were discussed above. The former are usually presented in normal form, and the latter in extensive.

With perfect or imperfect information

An important subset of sequential games are games with perfect information. In such a game, the participants know all the moves made up to the current moment, as well as the possible strategies of their opponents, which allows them to predict to some extent the subsequent development of the game. Perfect information is not available in parallel games, since in them the current moves of the opponents are unknown. Most of the games studied in mathematics have imperfect information; for example, the whole point of the "Prisoner's Dilemma" and of "Matching Pennies" lies in it.

At the same time, there are interesting examples of games with perfect information: "Ultimatum" and "Centipede". This class also includes chess, checkers, go, mancala, and others.

The concept of perfect information is often confused with the similar concept of complete information. For the latter, knowledge of the strategies and payoffs available to the opponents is enough; knowledge of all their moves is not necessary.

Infinitely long games

Games in the real world, and the games studied in economics, tend to last a finite number of moves. Mathematics is not so limited; in particular, set theory deals with games that can continue indefinitely, in which the winner and his winnings are not determined until all the moves are finished.

The problem usually posed in this case is not to find an optimal solution, but to find at least a winning strategy. Using the axiom of choice, one can prove that sometimes, even for games with perfect information and two outcomes ("win" or "lose"), no player has such a strategy. The existence of winning strategies for certain specially constructed games plays an important role in descriptive set theory.
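For finite games, by contrast, a winning strategy (when one exists) can be computed by backward induction over the positions. A sketch using single-pile Nim, a hypothetical example not discussed in the text:

```python
from functools import lru_cache

# Single-pile Nim: a move takes 1-3 stones, and taking the last stone wins.
# Backward induction: a position is winning if some move leads to a
# position that is losing for the opponent.

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move has a winning strategy."""
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Positions with stones % 4 == 0 are losing for the player to move.
print([n for n in range(1, 13) if not can_win(n)])   # [4, 8, 12]
```

Memoization (`lru_cache`) makes this a plain dynamic program over positions, which is exactly the backward-induction idea in miniature.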

Discrete and continuous games

Most of the games under study are discrete: they have a finite number of players, moves, events, outcomes, and so on. However, these components can be extended over the set of real numbers. Games that include such elements are often called differential games. They are tied to some material scale (usually the time scale), although the events occurring in them may be discrete in nature. Differential games are also considered in optimization theory and find application in engineering, technology, and physics.

Metagames

Methods for constructing and analyzing simulation models (simulation modeling)

Simulation modeling (situational modeling) is a method that allows one to build models describing processes as they would occur in reality. Such a model can be "played out" in time, both for a single test and for a given set of them; the results are then determined by the random nature of the processes, and from these data one can obtain fairly stable statistics.

Simulation modeling is a research method in which the system under study is replaced by a model that describes the real system with sufficient accuracy, and experiments are carried out with this model in order to obtain information about the system. Experimenting with a model is called imitation (imitation is the comprehension of the essence of a phenomenon without resorting to experiments on the real object).
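Such repeated "playing" of a model can be sketched as follows; all numbers (stock level, prices, the demand distribution) are hypothetical:

```python
import random
import statistics

# "Playing" a model repeatedly: each run simulates one day of a trading
# process with random demand and reports the profit; many runs give
# stable statistics. All numbers are hypothetical.

def one_run(rng, stock=40, cost=3.0, price=5.0):
    demand = rng.randint(20, 60)           # random daily demand
    sold = min(stock, demand)
    return sold * price - stock * cost     # unsold units are lost

rng = random.Random(42)                    # fixed seed: reproducible experiment
profits = [one_run(rng) for _ in range(10_000)]
print(f"mean profit ~ {statistics.mean(profits):.2f}, "
      f"spread ~ {statistics.stdev(profits):.2f}")
```

A single run is almost meaningless because of the randomness; the stable quantity is the statistic over many runs, which is the point made in the text.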

Simulation modeling is a special case of mathematical modeling. There is a class of objects for which, for various reasons, analytical models have not been developed, or methods for solving the resulting model have not been developed. In this case, the analytical model is replaced by a simulator or simulation model.

Simulation modeling is also sometimes understood as obtaining partial numerical solutions of a formulated problem on the basis of analytical solutions or by numerical methods.

A simulation model is a logical and mathematical description of an object that can be used for experimenting on a computer for the purpose of designing, analyzing and evaluating the functioning of the object.

Application of simulation modeling

Simulation modeling is used when:

· It is expensive or impossible to experiment on a real object;

· It is impossible to build an analytical model: the system contains time dependencies, causal relationships, aftereffects, nonlinearities, and stochastic (random) variables;

· It is necessary to simulate the behavior of the system in time.

The purpose of simulation is to reproduce the behavior of the system under study based on the analysis of the most significant interrelationships between its elements or, in other words, to develop a simulator of the subject area under study on which various experiments can be carried out.

Types of simulation

Three Simulation Approaches


Figure: simulation approaches on a scale of abstraction.

· Agent-based modeling is a relatively new (1990s-2000s) direction in simulation modeling, used to study decentralized systems whose dynamics are determined not by global rules and laws (as in the other modeling paradigms) but, on the contrary, where those global rules and laws are the result of the individual activity of the group members. The goal of agent-based models is to gain an understanding of these global rules and of the overall behavior of the system, based on assumptions about the individual, private behavior of its active objects and about the interaction of those objects in the system. An agent is an entity that possesses activity and autonomous behavior, can make decisions in accordance with a certain set of rules, can interact with its environment, and can also change on its own.

· Discrete-event modeling is an approach to modeling that proposes abstracting from the continuous nature of events and considering only the main events of the modeled system, such as "waiting", "order processing", "movement with a load", "unloading", and others. Discrete-event modeling is the most highly developed approach and has a huge field of application, from logistics and queuing systems to transport and production systems. This type of simulation is the best suited to modeling production processes. It was founded by Geoffrey Gordon in the 1960s.

· System dynamics is a modeling paradigm in which graphical diagrams of causal relationships and of the global influence of some parameters on others over time are built for the system under study, and the model created from these diagrams is then simulated on a computer. This type of modeling, more than any other paradigm, helps one understand the essence of the cause-and-effect relationships between objects and phenomena. System dynamics is used to build models of business processes, city development, production, population dynamics, ecology, and the development of epidemics. The method was founded by Jay Forrester in the 1950s.
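As an illustration of the system-dynamics paradigm, the sketch below integrates a small stock-and-flow epidemic model step by step (the model and its parameters are hypothetical, not taken from the text):

```python
# System-dynamics style model: stocks (S, I, R) change through flows defined
# by causal relationships and are integrated step by step.
# A hypothetical epidemic model in the spirit of the paradigm.

def simulate_epidemic(days=160, dt=1.0, beta=0.3, gamma=0.1):
    S, I, R = 990.0, 10.0, 0.0           # susceptible, infected, recovered
    N = S + I + R
    peak = 0.0
    for _ in range(int(days / dt)):
        infections = beta * S * I / N     # flow: S -> I
        recoveries = gamma * I            # flow: I -> R
        S -= infections * dt
        I += (infections - recoveries) * dt
        R += recoveries * dt
        peak = max(peak, I)
    return S, I, R, peak

S, I, R, peak = simulate_epidemic()
print(f"still susceptible: {S:.0f}, epidemic peak: {peak:.0f} infected")
```

The stocks change only through the declared flows, so the total population stays constant — the kind of structural property a causal-loop diagram makes visible before any simulation is run.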

Areas of use

· Business processes

· Business simulation

· Combat operations

· Population dynamics

· Traffic

· IT infrastructure

· Mathematical modeling of historical processes

· Logistics

· Pedestrian dynamics

· Production

· Market and competition

· Service centers

· Supply chains

· Project management

· Health economics

· Ecosystems

· Information security

· Relay protection

Conclusion

Of the methods described above, probabilistic-statistical methods and game theory are the most widely used. Probabilistic-statistical methods are, in comparison with the others, the most accessible and the cheapest in terms of software: for forecasts (for example, regression analysis) and for optimization, the standard Microsoft Office Excel package can be used.
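The regression forecast mentioned above can equally be sketched by hand with the Python standard library; the quarterly sales figures below are hypothetical:

```python
# Least-squares trend fit (ordinary least squares in one variable),
# of the kind usually done in Excel. The sales figures are hypothetical.
quarters = [1, 2, 3, 4, 5, 6, 7, 8]
sales    = [102, 108, 115, 117, 125, 131, 134, 142]

n = len(quarters)
mx = sum(quarters) / n
my = sum(sales) / n
# slope = Sxy / Sxx, intercept from the means
slope = sum((x - mx) * (y - my) for x, y in zip(quarters, sales)) \
        / sum((x - mx) ** 2 for x in quarters)
intercept = my - slope * mx

forecast = slope * 9 + intercept          # extrapolate one quarter ahead
print(f"trend: {slope:.2f} per quarter, quarter-9 forecast: {forecast:.1f}")
```

The same numbers dropped into Excel's LINEST or a trendline chart give the identical slope and intercept, which is the accessibility point made above.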

Game theory is a much more expensive method, less developed as regards its scientific basis, and it requires highly qualified personnel. However, in areas of special importance (politics, global financial companies, transnational corporations) its use is justified and necessary.
