
Theory of optimization. Methods for optimizing managerial decisions

3.2.1. Linear programming

Among the optimization problems in decision theory, the best known are linear programming problems, in which the objective function F(X) to be maximized is linear and the constraint set A is given by linear inequalities. Let us start with an example.

The production problem. A workshop can produce chairs and tables. A chair takes 5 units of material (feet of mahogany), a table 20 units. A chair requires 10 man-hours, a table 15. Available are 400 units of material and 450 man-hours. The profit is 45 USD per chair and 80 USD per table. How many chairs and tables should be made to obtain the maximum profit?

Denote by X 1 the number of chairs made and by X 2 the number of tables made. The optimization problem has the form:

45 X 1 + 80 X 2 → max ,

5 X 1 + 20 X 2 ≤ 400 ,

10 X 1 + 15 X 2 ≤ 450 ,

X 1 ≥ 0 ,

X 2 ≥ 0 .

The first line contains the objective function: the profit from producing X 1 chairs and X 2 tables. It must be maximized by choosing optimal values of the variables X 1 and X 2. In doing so, the material constraint (second line) must be satisfied: no more than 400 feet of mahogany may be used. Likewise the labor constraint (third line): no more than 450 man-hours may be spent. In addition, the numbers of tables and chairs are non-negative. If X 1 = 0, no chairs are produced; if at least one chair is made, X 1 is positive. Negative output is impossible to imagine: X 1 cannot be negative from an economic point of view, although mathematically nothing would forbid it. The fourth and fifth lines of the problem state that the variables are non-negative.
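As an illustration, this problem can be handed to a standard LP solver. The sketch below uses scipy.optimize.linprog (assuming SciPy is available); since linprog minimizes, the profit coefficients are negated.

```python
from scipy.optimize import linprog

# Profit is 45 per chair and 80 per table; linprog minimizes, so negate.
c = [-45, -80]
A_ub = [[5, 20],    # material: 5*X1 + 20*X2 <= 400
        [10, 15]]   # labor:   10*X1 + 15*X2 <= 450
b_ub = [400, 450]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x1, x2 = res.x
max_profit = -res.fun
print(round(x1), round(x2), round(max_profit))  # 24 14 2200
```

The solver returns the same vertex (24, 14) with profit 2200 that the graphical analysis below produces.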

The conditions of the production problem can be depicted on the coordinate plane. We plot the values of X 1 along the horizontal (abscissa) axis and the values of X 2 along the vertical (ordinate) axis. Then the material constraint, together with the last two lines of the optimization problem, delimits the possible output volumes (X 1, X 2) in the form of a triangle (Fig. 1).

Thus, the material constraint is depicted as a convex polygon, specifically a triangle. This triangle is obtained by cutting off from the first quadrant the zone adjacent to the origin. The cut is made by the straight line corresponding to the second line of the original problem, with the inequality replaced by equality. The line crosses the X 1 axis, corresponding to the chairs, at the point (80, 0): if all the material is used to make chairs, 80 chairs will be made. The same line intersects the X 2 axis, corresponding to the tables, at the point (0, 20): if all the material goes into making tables, 20 tables will be made. For all points inside the triangle the inequality is strict, not an equality - some material will remain.

Labor restrictions can be depicted in a similar way (Fig. 2).

Thus, the labor constraint, like the material constraint, is depicted as a triangle, also obtained by cutting off from the first quadrant the zone adjacent to the origin. The cut is made by the straight line corresponding to the third line of the original problem, with the inequality replaced by equality. The line crosses the X 1 axis, corresponding to the chairs, at the point (45, 0): if all labor resources are used to make chairs, 45 chairs will be made. The same line intersects the X 2 axis, corresponding to the tables, at the point (0, 30): if all workers are assigned to make tables, 30 tables will be made. For all points inside the triangle the inequality is strict, not an equality - some of the workers will be idle.

We see that there is no obvious solution: there is enough material for 80 chairs but not enough labor, and there is enough labor for 30 tables but not enough material. So both must be made. But in what ratio?

To answer this question, it is necessary to "combine" Fig.1 and Fig.2, obtaining the area of ​​possible solutions, and then trace what values ​​the objective function takes on this set (Fig.3).

Thus, the set of possible production volumes of chairs and tables (X 1, X 2), or, in other terms, the set A that specifies the constraints on the control parameter in the general optimization problem, is the intersection of the two triangles, i.e. the convex quadrilateral shown in Fig. 3. Three of its vertices are obvious: (0, 0), (45, 0) and (0, 20). The fourth is the intersection of the two straight lines bounding the triangles in Fig. 1 and Fig. 2, i.e. the solution of the system of equations

5 X 1 + 20 X 2 = 400 ,

10 X 1 + 15 X 2 = 450 .

From the first equation: 5 X 1 = 400 - 20 X 2 , X 1 = 80 - 4 X 2. Substitute into the second equation:

10 (80 - 4 X 2) + 15 X 2 = 800 - 40 X 2 + 15 X 2 = 800 - 25 X 2 = 450,

therefore 25 X 2 = 350, X 2 = 14, whence X 1 = 80 - 4 × 14 = 80 - 56 = 24.

So, the fourth vertex of the quadrilateral is (24, 14).

We need to find the maximum of a linear function on a convex polygon. (In the general linear programming problem, the maximum of a linear function on a convex polyhedron in a finite-dimensional linear space.) The basic idea of linear programming is that the maximum is attained at vertices of the polygon: in the general case at one vertex, which is then the unique maximum point; in special cases at two, and then the segment connecting them also consists of maximum points.

The objective function 45 X 1 + 80 X 2 takes its minimum value 0 at the vertex (0, 0) and increases as the arguments increase. At the vertex (24, 14) it takes the value 45 × 24 + 80 × 14 = 2200. The straight line 45 X 1 + 80 X 2 = 2200 passes between the straight lines 5 X 1 + 20 X 2 = 400 and 10 X 1 + 15 X 2 = 450, which intersect at that same point. From this, as well as from a direct check of the two remaining vertices, it follows that the maximum of the objective function, equal to 2200, is attained at the vertex (24, 14).

Thus, the optimal output is as follows: 24 chairs and 14 tables. This uses all the material and all labor resources, and the profit is 2200 US dollars.

The dual problem. Every linear programming problem has a corresponding dual problem. In it, compared with the original problem, rows become columns, the inequalities reverse sign, and instead of a maximum a minimum is sought (or vice versa). The dual of the dual problem is the original problem itself. Let us compare the original problem (on the left) and its dual (on the right):

45 X 1 + 80 X 2 → max , 400 W 1 + 450 W 2 → min ,

5 X 1 + 20 X 2 ≤ 400 , 5 W 1 + 10 W 2 ≥ 45,

10 X 1 + 15 X 2 ≤ 450 , 20 W 1 + 15 W 2 ≥ 80,

X 1 ≥ 0 , W 1 ≥ 0,

X 2 ≥ 0 . W 2 ≥ 0.

Why is the dual problem so important? It can be proved that the optimal values of the objective functions in the original and dual problems coincide (i.e., the maximum in the original problem equals the minimum in the dual one). The optimal values of W 1 and W 2 give the cost of material and labor, respectively, when these are valued by their contribution to the objective function. To distinguish them from the market prices of these factors of production, W 1 and W 2 are called "objectively determined valuations" of raw materials and labor.
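The duality can be checked numerically by solving the dual problem with the same solver (a sketch, again assuming SciPy; each "≥" constraint is multiplied by -1 to fit linprog's "≤" form):

```python
from scipy.optimize import linprog

# Dual problem: min 400*W1 + 450*W2
# s.t. 5*W1 + 10*W2 >= 45,  20*W1 + 15*W2 >= 80,  W1, W2 >= 0
res = linprog([400, 450],
              A_ub=[[-5, -10],
                    [-20, -15]],
              b_ub=[-45, -80],
              bounds=[(0, None), (0, None)])
w1, w2 = res.x
dual_min = res.fun
print(w1, w2, dual_min)  # valuations (1, 4); dual minimum 2200
```

The dual minimum coincides with the primal maximum of 2200, and (W 1, W 2) = (1, 4) are the objectively determined valuations of material and labor.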

Linear programming as a scientific and practical discipline. Among all optimization problems, linear programming problems are distinguished by the fact that their constraints are systems of linear inequalities or equalities. The constraints define convex polyhedra in a finite-dimensional linear space. The objective functions are also linear.

Such problems were first solved in the 1930s by the Soviet mathematician L.V. Kantorovich (1912-1986) as production-management tasks: optimizing the organization of production and of production processes, for example machine loading and the cutting of sheet materials. After the Second World War similar problems were taken up in the United States. In 1975 T. Koopmans (1910-1985; born in the Netherlands, worked mainly in the USA) and L.V. Kantorovich, Academician of the USSR Academy of Sciences, were awarded the Nobel Prize in Economics.

Let us consider several typical linear programming problems.

The diet problem (simplified version). Assume for definiteness that we must formulate the cheapest chicken feed containing the required amounts of certain nutrients (for simplicity, thiamine T and niacin H).

Table 1.

Initial data in the mixture optimization problem.

                          Product K   Product C   Required per serving (at least)
Substance T (thiamine)      0.10        0.25         1.00
Substance H (niacin)        1.00        0.25         5.00
Calories                    110         120          400
Cost per unit               3.8         4.2

The nutritional value of the diet (in calories) must also be at least the specified amount. Let, for simplicity, the mixture for the chickens be made from two products, K and C. The content of thiamine and niacin in these products is known, as is the nutritional value of K and C (in calories). How much K and how much C should go into one serving of chicken feed so that the chickens receive the required doses (or more) of the substances H and T and of calories, while the cost of the serving is minimal? The initial data for the calculations are given in Table 1. The optimization problem has the form:

3.8 K + 4.2 C → min ,

0.10 K + 0.25 C ≥ 1.00 ,

1.00 K + 0.25 C ≥ 5.00 ,

110.00 K + 120.00 C ≥ 400.00 ,

K ≥ 0 ,

C ≥ 0 .

Its graphical solution is shown in Fig.4.

Fig.4. Graphical solution of the mixture optimization problem.

In Fig. 4, for ease of perception, the four straight lines are marked with the numbers (1) - (4). Line (1) is the line 1.00 K + 0.25 C = 5.00 (the constraint on substance H). It passes, as shown in the figure, through the point (5, 0) on the abscissa axis and (0, 20) on the ordinate axis. Note that the admissible values of the parameters (K, C) lie above line (1) or on it, in contrast to the situation in the production problem considered earlier.

Line (2) is the line 110.00 K + 120.00 C = 400.00 (the calorie constraint). Note that in the region of non-negative C it lies everywhere below line (1). Indeed, for K = 0 line (1) passes through the point (0, 20), while line (2) passes through the point (0, 400/120), which lies below it. The point of intersection of the two lines is found by solving the system of equations

1.00 K + 0.25 C = 5.00 ,

110.00 K + 120.00 C = 400.00 .

From the first equation, K = 5 - 0.25 C. Substituting into the second: 110 (5 - 0.25 C) + 120 C = 400, whence 550 - 27.5 C + 120 C = 400. Therefore 92.5 C = -150, i.e. the solution is attained at a negative C. This means that for all positive C line (2) lies below line (1). So if the constraint on H is satisfied, the calorie constraint is automatically satisfied as well. We encounter a new phenomenon: some constraints may be superfluous from the mathematical point of view. From the economic point of view they are necessary and reflect essential features of the problem statement, but here the internal structure of the problem turned out to be such that the calorie constraint takes no part in forming the admissible region of parameters or in finding the solution.

Line (4) is the line 0.10 K + 0.25 C = 1 (the constraint on substance T). It passes, as shown in the figure, through the point (10, 0) on the abscissa axis and (0, 4) on the ordinate axis. Note that the admissible parameter values (K, C) lie above line (4) or on it, as for line (1).

Therefore, the region of admissible parameter values (K, C) is unbounded from above. Within the plane it is cut out by the coordinate axes (it lies in the first quadrant) and by the lines (1) and (4) (it lies above these lines, including the boundary segments). The region of admissible values, i.e. of points (K, C), may be called an "unbounded polygon". The minimum of the objective function 3.8 K + 4.2 C can be attained only at the vertices of this "polygon". There are only three vertices: the intersections of lines (1) and (4) with the abscissa axis at (10, 0) and with the ordinate axis at (0, 20) (in each case, of the two axis intersections the one satisfying both constraints is taken), and the point A where lines (1) and (4) intersect, whose coordinates are found by solving the system of equations

0.10 K + 0.25 C = 1.00 ,

1.00 K + 0.25 C = 5.00 .

From the second equation, K = 5 - 0.25 C; substituting into the first, 0.10 (5 - 0.25 C) + 0.25 C = 0.5 - 0.025 C + 0.25 C = 0.5 + 0.225 C = 1, whence C = 0.5/0.225 = 20/9 and K = 5 - 5/9 = 40/9. So, A = (40/9, 20/9).

Line (3) in Fig. 4 is the line corresponding to the objective function 3.8 K + 4.2 C. It passes between the lines (1) and (4) that define the constraints, and the minimum is attained at the point A, through which line (3) passes. Therefore the minimum is 3.8 × 40/9 + 4.2 × 20/9 = 236/9 ≈ 26.2. The mixture optimization problem is completely solved.
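The same answer is produced by a solver (a sketch assuming SciPy; the "≥" constraints are negated to fit linprog's "≤" form):

```python
from scipy.optimize import linprog

# Diet problem: min 3.8*K + 4.2*C subject to three ">=" constraints.
res = linprog([3.8, 4.2],
              A_ub=[[-0.10, -0.25],      # substance T
                    [-1.00, -0.25],      # substance H
                    [-110.0, -120.0]],   # calories (redundant here)
              b_ub=[-1.0, -5.0, -400.0],
              bounds=[(0, None), (0, None)])
k, c = res.x
cost = res.fun   # 236/9, about 26.22
```

The solver lands on K = 40/9, C = 20/9 with cost 236/9, confirming that the calorie constraint is inactive at the optimum.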

The dual problem, constructed according to the rules described above, has the following form (the original mixture optimization problem is repeated here to clearly demonstrate the technique of constructing the dual):

3.8 K + 4.2 C → min , W 1 + 5 W 2 + 400 W 3 → max ,

0.10 K + 0.25 C ≥ 1.00 , 0.10 W 1 + 1.00 W 2 + 110 W 3 ≤ 3.8 ,

1.00 K + 0.25 C ≥ 5.00 , 0.25 W 1 + 0.25 W 2 + 120 W 3 ≤ 4.2 ,

110.00 K + 120.00 C ≥ 400.00 , W 1 ≥ 0 ,

K ≥ 0 , W 2 ≥ 0 ,

C ≥ 0 . W 3 ≥ 0 .

The minimum value in the direct problem is, as it must be, equal to the maximum value in the dual problem: both numbers equal 236/9. Interpretation of the dual variables: W 1 is the "cost" of a unit of substance T, and W 2 the "cost" of a unit of substance H, measured by their contribution to the objective function. Here W 3 = 0, since the calorie constraint takes no part in forming the optimal solution. Thus W 1, W 2, W 3 are the so-called objectively determined valuations (in L.V. Kantorovich's terminology) of the resources (substances T and H, and calories).

Planning the product mix and output volumes. Let us return to the organization of production. An enterprise can produce automatic kitchens (a type of pressure cooker), coffee makers and samovars. Table 2 gives the production capacities available at the enterprise (in units of product).

Table 2.

Production capacity (in pcs.)

                                 Kitchens   Coffee makers   Samovars
Stamping                          20,000        30,000        12,000
Finishing                         30,000        10,000        10,000
Assembly                          20,000        12,000         8,000
Specific profit (per product)         15            12            14

Here stamping and finishing are carried out on the same equipment: in the given time it can stamp either 20,000 kitchens, or 30,000 coffee makers, or any proportional combination of products. Assembly, however, is carried out on separate lines.

The linear programming problem has the form:

X 1 ≥ 0 , X 2 ≥ 0 , X 3 ≥ 0 , (0)

X 1 / 200 + X 2 / 300 + X 3 / 120 ≤ 100 , (1)

X 1 / 300 + X 2 / 100 + X 3 / 100 ≤ 100 , (2)

X 1 / 200 ≤ 100 , (3)

X 2 / 120 ≤ 100 , (4)

X 3 / 80 ≤ 100 , (5)

F = 15 X 1 + 12 X 2 + 14 X 3 → max .

(0) - the usual economic condition of non-negativity of the variables,

(1) - restriction on the possibilities of stamping (expressed for ease of perception as a percentage),

(2) - restriction on the possibilities of finishing,

(3) - assembly limit for kitchens,

(4) - the same for coffee makers,

(5) - the same for samovars (as already mentioned, all three types of products are assembled on separate lines).

Finally, the objective function F is the total profit of the enterprise.

Note that inequality (3) follows from inequality (1), and inequality (4) follows from (2). Therefore, inequalities (3) and (4) can be eliminated from the formulation of the linear programming problem.

Let us note an interesting fact at once. As will be shown, at the optimum X 3 = 0, i.e. it is unprofitable to produce samovars.

Methods for solving linear programming problems. Methods for solving linear programming problems belong to computational mathematics, not to economics. However, it is useful for an economist to be aware of the properties of the smart tool he uses.

As computing power grows, the need for sophisticated mathematical methods decreases, since in many cases computation time ceases to be a limiting factor: it is very small (fractions of a second). Therefore we shall analyze only three methods.

Simple enumeration. Take a multidimensional parallelepiped containing the polyhedron defined by the constraints. How is it constructed? If, for example, there is a constraint of the type 2 X 1 + 5 X 2 ≤ 10, then obviously 0 ≤ X 1 ≤ 10/2 = 5 and 0 ≤ X 2 ≤ 10/5 = 2. In the same way one can pass from general linear constraints to constraints on the individual variables; it remains to take the widest bounds for each variable. If the constraint polyhedron is unbounded, as in the diet problem, one can, in a similar but somewhat more involved way, single out the part "facing" the origin that contains the solution and enclose it in a multidimensional parallelepiped.

Let us enumerate the points of the parallelepiped with step 1/10^n successively for n = 2, 3, ..., calculating the values of the objective function and checking that the constraints hold. Of all the points satisfying the constraints, we take the one at which the objective function is maximal. The solution is found! (More precisely, found to within 1/10^n.)
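For the chairs-and-tables problem this brute-force scheme is easy to sketch in pure Python (integer step 1; at this step the grid happens to contain the exact optimum):

```python
# Bounding box from the constraints: 5*x1 <= 400 gives x1 <= 80;
# 15*x2 <= 450 gives x2 <= 30 (the wider of the two bounds on x2).
best_profit, best_point = 0, (0, 0)
for x1 in range(81):
    for x2 in range(31):
        # Keep only points satisfying the material and labor constraints.
        if 5*x1 + 20*x2 <= 400 and 10*x1 + 15*x2 <= 450:
            profit = 45*x1 + 80*x2
            if profit > best_profit:
                best_profit, best_point = profit, (x1, x2)
print(best_point, best_profit)  # (24, 14) 2200
```

With a finer step the same loop approximates non-integer optima to any desired accuracy, at the cost of more iterations.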

Directed enumeration. Start from some point satisfying the constraints (it can be found by simple enumeration). Change its coordinates sequentially (or randomly, by the so-called random search method) by some step ∆, each time moving to a point with a larger value of the objective function. If a constraint plane is reached, move along it (computing one of the coordinates from the constraint equation); then along an edge (where two inequality constraints become equalities), and so on, stopping at a vertex of the polyhedron. The solution is found! (More precisely, found to within ∆; if necessary, directed enumeration with steps ∆/2, ∆/4, etc. is carried out in the neighborhood of the solution found.)

Simplex method. This is one of the first specialized optimization methods aimed at solving linear programming problems, while simple and directed enumeration methods can be applied to solve almost any optimization problem. The simplex method was proposed by the American G. Dantzig in 1951. Its main idea is to move along a convex constraint polyhedron from vertex to vertex, in which at each step the value of the objective function improves until the optimum is reached. Let's analyze an example based on the data in Table 2.

Consider the linear programming problem formulated above when considering the optimization of the range and output volumes:

F = 15 X 1 + 12 X 2 + 14 X 3 → max .

X 1 / 200 + X 2 / 300 + X 3 / 120 ≤ 100 ,

X 1 / 300 + X 2 / 100 + X 3 / 100 ≤ 100 ,

X 3 / 80 ≤ 100 .

The non-negativity of variables will not be specifically indicated, since this assumption is always accepted in linear programming problems.

In accordance with the simplex method, we introduce so-called "free" (slack) variables X 4, X 5, X 6, corresponding to underutilized capacities, i.e. we pass from the system of inequalities to the system of equations:

X 1 / 200 + X 2 / 300 + X 3 / 120 + X 4 = 100 ,

X 1 / 300 + X 2 / 100 + X 3 / 100 + X 5 = 100 ,

X 3 / 80 + X 6 = 100 ,

15 X 1 + 12 X 2 + 14 X 3 = F.

This system has an obvious solution corresponding to one of the vertices of the polyhedron of admissible variable values:

X 1 = X 2 = X 3 = 0, X 4 = X 5 = X 6 = 100, F = 0.

In terms of the original problem, this means that nothing needs to be released. This solution is acceptable only for the period of summer holidays.

In accordance with the simplex method, we select a variable that is included in the objective function F with the highest positive coefficient. This is X 1 .

We form the quotients of the free terms of the first three equations by the coefficients of the variable X 1 just chosen:

100 / (1/200) = 20000, 100 / (1/300) =30000, 100/0 = + ∞ .

We select a line from the system of equations, which corresponds to the minimum of all positive ratios. In this example, this is the first line, which corresponds to the ratio 20000.

Multiply the first row by 200 to get X 1 with unity coefficient:

X 1 + 2/3 X 2 + 5/3 X 3 + 200 X 4 = 20000 .

Then we multiply the newly obtained row by (-1/300) and add it to the second row to eliminate the term with X 1 , we get

7/900 X 2 + 4/900 X 3 - 2/3 X 4 + X 5 = 100/3.

We multiply the same transformed first row by (-15) and add it to the row whose right-hand side contains F, obtaining:

2 X 2 - 11 X 3 - 3000 X 4 = F - 300000.

As a result, the system of equations is transformed to the form in which the variable X 1 is included only in the first equation:

X 1 + 2/3 X 2 + 5/3 X 3 + 200 X 4 = 20000 ,

7/900 X 2 + 4/900 X 3 - 2/3 X 4 + X 5 = 100/3,

X 3 / 80 + X 6 = 100 ,

2 X 2 - 11 X 3 - 3000 X 4 = F - 300000.

Obviously, the new system has an improved solution compared to the original one, corresponding to another vertex of the convex polyhedron in six-dimensional space:

X 1 = 20000, X 2 = X 3 = X 4 = 0, X 5 = 100/3, X 6 = 100, F = 300000.

In terms of the original problem, this solution means that only kitchens should be produced. Such a solution is acceptable if it is permissible to produce only one type of product.

Let us repeat the operation. The row with F contains one more positive coefficient, that of X 2 (if there were several positive coefficients, we would take the largest of them). Using the coefficients of X 2 (not of X 1, as the first time), we form the quotients of the corresponding free terms by these coefficients:

20000 / (2/3) = 30000, (100/3) / (7/900) = 30000/7, 100/0 = + ∞.

Thus, we must choose the second row, which gives the smallest positive ratio, 30000/7. Multiply the second row by 900/7 (so that the coefficient of X 2 becomes equal to 1). Then add the updated row, first multiplied by suitable numbers, to all rows containing X 2, so that after the addition all coefficients of X 2 become 0, except the coefficient in the second row, which has already become 1. We obtain the system of equations:

X 1 + 9/7 X 3 + 1800/7 X 4 - 600/7 X 5 = 120000/7 ,

X 2 + 4/7 X 3 - 600/7 X 4 + 900/7 X 5 = 30000/7,

X 3 / 80 + X 6 = 100 ,

- 85/7 X 3 - 19800/7 X 4 - 1800/7 X 5 = F - 2160000/7 .

Since all the variables are non-negative and every coefficient on the left-hand side of the last equation is non-positive, profit F reaches its maximum value 2160000/7 ≈ 308571 at X 3 = X 4 = X 5 = 0. From the remaining equations it then follows that X 1 = 120000/7 ≈ 17143, X 2 = 30000/7 ≈ 4286, X 6 = 100. Since the row with F no longer contains a single positive coefficient of the variables, the simplex algorithm has finished its work: the optimal solution has been found.

The practical recommendations are as follows: produce 17143 kitchens and a quarter as many, i.e. 4286, coffee makers, and no samovars at all. The profit will then be maximal, about 308571. All production equipment will be fully loaded, with the exception of the samovar assembly line.
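The result of the simplex calculation can be cross-checked with a library solver (a sketch assuming SciPy; linprog minimizes, so the profits are negated):

```python
from scipy.optimize import linprog

res = linprog([-15, -12, -14],
              A_ub=[[1/200, 1/300, 1/120],   # stamping capacity, %
                    [1/300, 1/100, 1/100],   # finishing capacity, %
                    [0.0,   0.0,   1/80]],   # samovar assembly, %
              b_ub=[100, 100, 100],
              bounds=[(0, None)] * 3)
x1, x2, x3 = res.x
profit = -res.fun   # 2160000/7, about 308571
```

The solver reproduces the simplex answer: X 1 ≈ 17143, X 2 ≈ 4286, X 3 = 0.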

The transport problem. A wide variety of technical and economic problems of production management, from optimal machine loading and the cutting of steel sheet or fabric to input-output analysis and estimating the growth rate of a country's economy as a whole, lead to linear programming problems. The literature contains an extensive list of publications on applications of linear programming in metallurgy, in the coal, chemical, oil and paper industries, in transport and communications, in production planning, product design and storage, in agriculture, in scientific research (including economic research), and even in traffic regulation.

As another example, consider the so-called transport problem. There are warehouses whose stocks are known, and consumers whose needs are known. Goods must be delivered from the warehouses to the consumers. The "attachment" of consumers to warehouses can be organized in different ways, i.e. one can decide from which warehouse to which consumer how much to carry. In addition, the cost of delivering a unit of goods from each warehouse to each consumer is known. Total transportation costs are to be minimized.

For example, we may speak of transporting sand, a raw material for brick production. Sand is usually delivered to Moscow by the cheapest mode of transport, by water. The ports can therefore be treated as warehouses, with their daily throughput as stocks. The consumers are brick factories, whose needs are determined by their daily output (in accordance with existing orders). Delivery requires loading the goods onto vehicles, driving a certain route and unloading them. The cost of these operations is calculated by well-known rules, on which there is no need to dwell; hence the cost of delivery from a particular warehouse to a particular consumer can be considered known.

Consider an example of the transport problem with the initial data presented in Table 3.

Table 3 shows, in addition to the volumes of the needs and the stock values, the cost of delivering a unit of goods from warehouse i, i = 1, 2, 3, to consumer j, j = 1, 2, 3, 4. For example, the cheapest deliveries are from warehouse 2 to consumers 1 and 3, and from warehouse 3 to consumer 2. However, warehouse 2 holds 80 units of goods, while consumers 1 and 3 require 50 + 70 = 120 units, so goods will have to be brought to them from other warehouses as well. Note that in Table 3 the stocks in the warehouses equal the total needs. In the case of sand deliveries to brick factories this is an entirely natural restriction: if it is violated, either the ports will be buried under mountains of sand, or the brick factories will fail to fill their orders.

Table 3

Initial data for the transport problem (cell (i, j) is the cost of delivering a unit of goods from warehouse i to consumer j).

              Consumer 1   Consumer 2   Consumer 3   Consumer 4   Stocks in warehouses
Warehouse 1        2            5            4            5               60
Warehouse 2        1            2            1            4               80
Warehouse 3        3            1            5            2               60
Needs             50           40           70           40

It is necessary to plan the shipments, i.e. to choose the volumes X ij of deliveries from warehouse i to consumer j, where i = 1, 2, 3; j = 1, 2, 3, 4. Thus there are 12 variables in the problem in all. They satisfy two groups of constraints. First, the stocks in the warehouses are given:

X 11 + X 12 + X 13 + X 14 = 60 ,

X 21 + X 22 + X 23 + X 24 = 80 ,

X 31 + X 32 + X 33 + X 34 = 60 .

Second, the needs of the consumers are known:

X 11 + X 21 + X 31 = 50 ,

X 12 + X 22 + X 32 = 40 ,

X 13 + X 23 + X 33 = 70 ,

X 14 + X 24 + X 34 = 40 .

So, there are 7 equality-type constraints in total. In addition, all variables are non-negative - 12 more constraints.

The objective function is the transportation costs that need to be minimized:

F = 2 X 11 + 5 X 12 + 4 X 13 + 5 X 14 + X 21 + 2 X 22 + X 23 + 4 X 24 +

3 X 31 + X 32 + 5 X 33 + 2 X 34 → min.

In addition to the one discussed, various other variants of the transport problem are also considered. For example, if delivery is made by wagons, then the volume of deliveries must be a multiple of the capacity of the wagon.

The number of variables and constraints in the transport problem is such that it cannot realistically be solved without a computer and a corresponding software product.
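The structure, however, is easy to pass to a solver. A sketch with scipy.optimize.linprog (assumed available), building the seven equality constraints from the data of Table 3:

```python
import numpy as np
from scipy.optimize import linprog

# Unit delivery costs c[i][j], stocks and needs as in Table 3.
cost = np.array([[2, 5, 4, 5],
                 [1, 2, 1, 4],
                 [3, 1, 5, 2]], dtype=float)
supply = [60, 80, 60]
demand = [50, 40, 70, 40]

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                 # warehouse i ships exactly supply[i]
    row = np.zeros(m * n)
    row[i*n:(i+1)*n] = 1
    A_eq.append(row)
    b_eq.append(supply[i])
for j in range(n):                 # consumer j receives exactly demand[j]
    row = np.zeros(m * n)
    row[j::n] = 1
    A_eq.append(row)
    b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (m * n))
plan = res.x.reshape(m, n)         # optimal shipment volumes X_ij
total_cost = res.fun
```

For these data the minimal total cost comes out to 330 (which can be verified by hand with the method of potentials).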

3.2.2. Integer programming

Optimization problems in which the variables take integer values belong to integer programming. Let us consider a few such problems.

The equipment selection problem. 20,000 US dollars have been allocated for the purchase of equipment for a new section of the workshop, which may occupy an area of no more than 38 m². Machines of type A and type B are available. A type A machine costs 5,000 USD, occupies 8 m² (including the necessary technological passages) and produces 7,000 units per shift. A type B machine costs 2,000 USD, occupies 4 m² and produces 3,000 units per shift. It is necessary to find the purchase plan that, under the given restrictions, ensures the maximum total productivity of the section.

Let X be the number of type A machines and Y the number of type B machines in the set of equipment. We must choose the set so as to maximize the productivity C of the section (in thousands of units per shift):

C = 7 X + 3 Y → max .

In this case, the following restrictions must be met:

by cost (in thousands of US dollars)

5 X + 2 Y ≤ 20 ,

by area occupied (in m 2)

8 X + 4 Y ≤ 38 ,

as well as the new, specific integrality restrictions, namely,

X ≥ 0 , Y ≥ 0 , X and Y are integers.

The formulated mathematical problem differs from a linear programming problem only in the last, integrality condition. However, the presence of this condition makes it possible (in this particular case) to solve the problem easily by enumeration. Indeed, both the cost constraint and the area constraint imply that X ≤ 4. Hence X can take only one of the 5 values 0, 1, 2, 3, 4.

If X = 4, then the cost constraint gives Y = 0, and therefore C = 7 X = 28.

If X = 3, then the first constraint gives Y ≤ 2, the second Y ≤ 3. Hence the maximum of C under the constraints is attained at Y = 2, namely C = 21 + 6 = 27.

If X = 2, then the first constraint gives Y ≤ 5, and the second also Y ≤ 5. Hence the maximum of C under the constraints is attained at Y = 5, namely C = 14 + 15 = 29.

If X = 1, then the first constraint gives Y ≤ 7, and the second also Y ≤ 7. Hence the maximum of C under the constraints is attained at Y = 7, namely C = 7 + 21 = 28.

If X = 0, then the first constraint gives Y ≤ 10, the second Y ≤ 9. Hence the maximum of C under the constraints is attained at Y = 9, namely C = 27.

All possible cases have been considered. The maximum productivity C = 29 (thousand units per shift) is attained at X = 2, Y = 5. Therefore 2 machines of type A and 5 machines of type B should be purchased.

The knapsack problem. The total weight of the knapsack is limited in advance. Which items should be put into the knapsack so that the total utility of the selected items is maximal? The weight of each item is known.

There are many equivalent formulations. Instead of a knapsack one may, for example, consider an Earth-orbiting spacecraft, with scientific instruments as the items; the problem is then interpreted as selecting the instruments to launch into orbit. True, this presupposes that a preliminary problem has been solved: assessing the comparative value of the studies for which the various instruments are needed.

From the standpoint of enterprise economics and the organization of production, another interpretation of the knapsack problem is more relevant, in which the "items" are orders (or options for producing batches of certain goods), the utility is the profit from fulfilling an order, and the weight is the cost of the order.

Let us pass to the mathematical formulation. It is assumed that there are n items, and for each of them one must decide whether or not to put it into the knapsack. The decision is described by Boolean variables X k, k = 1, 2, ..., n (i.e. variables taking the two values 0 and 1): X k = 1 if the item is put into the knapsack and X k = 0 if it is not. For each item two constants are known: A k, the weight of the k-th item, and C k, the utility of the k-th item, k = 1, 2, ..., n. The maximum carrying capacity of the knapsack is denoted by B. The optimization problem has the form

C 1 X 1 + C 2 X 2 + C 3 X 3 + .... + C n X n→ max ,

A 1 X 1 + A 2 X 2 + A 3 X 3 + .... + A n X n ≤ B.

Unlike the previous problems, the control parameters X k , k = 1,2,…, n take values from a set containing two elements, 0 and 1.

Integer programming also covers problems of facility location, scheduling theory, calendar and operational planning, staff assignment, etc. (see, for example, the monograph).

We indicate two common methods for solving integer programming problems.

Approximation by continuous problems. First the linear programming problem is solved without the integrality requirement, and then integer points are sought in the vicinity of the optimal solution.

Directed enumeration methods. Of these, the best known is the branch and bound method. The essence of the method is as follows. Each subset X of the set of possible solutions X 0 is assigned a number, its "bound" A(X). When solving a minimization problem, it is required that A(X 1) ≥ A(X 2) whenever X 1 is contained in X 2 or coincides with X 2.

Each step of the branch and bound method consists in dividing the set X C selected at the previous step into two sets, X 1C and X 2C. The intersection of X 1C and X 2C is empty, and their union coincides with X C. Then the bounds A(X 1C) and A(X 2C) are calculated, and the "branch" X C+1 is chosen as whichever of X 1C and X 2C has the smaller bound. The algorithm stops when the diameter of the newly selected branch is less than a predetermined small number.

For each specific problem of integer programming (in other words, discrete optimization), the branch and bound method is implemented in its own way. There are many modifications to this method.
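As an illustration of the branch and bound scheme, here is a minimal sketch for the 0/1 knapsack problem. The weights, utilities and capacity below are hypothetical illustration data; since the knapsack problem is a maximization problem, the "bound" here is an upper bound obtained from the greedy fractional (LP) relaxation.

```python
# Branch-and-bound sketch for the 0/1 knapsack problem (maximization,
# so branches are pruned by an UPPER bound from the fractional relaxation).
# All data below are hypothetical illustration values.

def knapsack_branch_and_bound(weights, utilities, capacity):
    n = len(weights)
    # Sort items by utility density; the greedy fractional bound needs this.
    order = sorted(range(n), key=lambda i: utilities[i] / weights[i], reverse=True)
    w = [weights[i] for i in order]
    u = [utilities[i] for i in order]

    def upper_bound(k, room):
        # LP relaxation: take remaining items greedily, the last one fractionally.
        bound = 0.0
        for i in range(k, n):
            if w[i] <= room:
                room -= w[i]
                bound += u[i]
            else:
                bound += u[i] * room / w[i]
                break
        return bound

    best = 0

    def branch(k, room, value):
        nonlocal best
        if k == n:
            best = max(best, value)
            return
        # Prune: this subset of solutions cannot beat the incumbent.
        if value + upper_bound(k, room) <= best:
            return
        if w[k] <= room:                      # branch "take item k"
            branch(k + 1, room - w[k], value + u[k])
        branch(k + 1, room, value)            # branch "skip item k"

    branch(0, capacity, 0)
    return best
```

For example, with weights (2, 3, 4, 5), utilities (3, 4, 5, 6) and capacity 5, the method returns 7 (items 1 and 2), pruning most of the 2^4 candidate subsets.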

3.2.3. Graph Theory and Optimization

One of the branches of discrete mathematics often used in decision making is graph theory (see, for example, tutorials). A graph is a collection of points, called graph vertices, some of which are connected by arcs (arcs are also called edges). Examples of graphs are shown in Fig.5.

Fig.5. Graph examples.


The basic notion of a graph is then enriched with additional properties; new qualities are attributed to the original object. For example, the notion of a directed graph is introduced and used: in such a graph, each arc carries an arrow directed from one vertex to another. Examples of directed graphs are given in Fig.6.

Fig.6. Examples of directed graphs.

A directed graph would be useful, for example, to illustrate the organization of transportation in a transportation problem. In economics, numbers are often assigned to the arcs of a directed or ordinary graph, such as the cost of travel or transportation of goods from point A (the initial vertex of the arc) to point B (the final vertex of the arc).

Let's consider some typical decision-making problems related to optimization on graphs.

The traveling salesman problem. It is required to visit all vertices of the graph and return to the original vertex, minimizing travel costs (or minimizing time).

The initial data here is a graph whose arcs are assigned positive numbers: the travel costs or the time required to move from one vertex to another. In general, the graph is directed, and every pair of vertices is connected by two arcs, one in each direction. Indeed, if point A is located on a mountain and point B in a lowland, the time to travel from A to B is obviously less than the time for the return trip from B to A.

Many statements of economic content are reduced to the traveling salesman problem. For example:

Compile the most profitable round for an adjuster in a shop (or a controller, security guard, policeman) responsible for the proper functioning of a given set of objects (each of these objects is modeled by a graph vertex);

Draw up the most profitable route for delivering parts to workers, or bread from a bread factory to a given set of bakeries and other outlets (the route starts and ends at the bread factory).
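To make the formulation concrete, here is a minimal exhaustive-search sketch for a four-city traveling salesman problem. The cost matrix is hypothetical illustration data, deliberately asymmetric in line with the remark that travel times back and forth may differ.

```python
from itertools import permutations

# Exhaustive search for the traveling salesman problem on a small,
# hypothetical 4-city cost matrix (cost[i][j] = cost of travel from i to j;
# the matrix need not be symmetric).
cost = [
    [0, 4, 6, 3],
    [4, 0, 5, 7],
    [6, 5, 0, 2],
    [3, 7, 2, 0],
]

def best_tour(cost):
    n = len(cost)
    best_route, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):       # fix city 0 as the start
        route = (0,) + perm + (0,)               # closed tour
        total = sum(cost[a][b] for a, b in zip(route, route[1:]))
        if total < best_cost:
            best_route, best_cost = route, total
    return best_route, best_cost
```

Exhaustive search examines (n-1)! tours and is only feasible for small n; for larger instances branch and bound or heuristics are used.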

Shortest Path Problem. What is the shortest way to get from one graph vertex to another? In terms of production management: what is the shortest way (and, therefore, with the least fuel and time consumption, the cheapest way) to get from point A to point B? To solve this problem, each arc of a directed graph must be associated with a number - the time it takes to move along this arc from the initial vertex to the final one. Consider an example (Fig. 7).

Fig.7. Initial data for the shortest path problem.

The situation can be described not only by a directed graph with weights assigned to the arcs, but also by a table (Table 4). In this table, each pair of vertices - the beginning and the end of a path segment - is associated with a travel time. Table 4 lists paths without intermediate stops. More complex routes are made up of the elementary segments listed in Table 4.

Table 4

Initial data for the shortest path problem.

Arc start   End of arc   Travel time
1           2            7
1           3            1
2           4            4
3           2            5
3           5            2
5           2            2
5           4            5
6           5            3

The question is: what is the shortest way to get from vertex 1 to vertex 4?

Solution. Let's introduce the notation: C(T) is the length of the shortest path from vertex 1 to vertex T. (Since any path under consideration consists of arcs, there are finitely many arcs, and each arc enters a path at most once, there are finitely many contenders for the shortest path, and the minimum of a finite set of numbers is always attained.) The problem under consideration is to calculate C(4) and to indicate a path on which this minimum is attained.

For the initial data presented in Fig. 7 and Table 4, only one arc enters vertex 3, namely the arc from vertex 1, and its length is 1; therefore C(3) = 1. Moreover, it is obvious that C(1) = 0.

You can get to vertex 4 either from vertex 2, traversing an arc of length 4, or from vertex 5, traversing an arc of length 5. Therefore the relation holds:

C(4) = min{C(2) + 4; C(5) + 5}.

Thus, a restructuring (simplification) of the problem has been carried out: finding C(4) has been reduced to finding C(2) and C(5).

You can get to vertex 5 either from vertex 3, traversing an arc of length 2, or from vertex 6, traversing an arc of length 3. Therefore the relation holds:

C(5) = min{C(3) + 2; C(6) + 3}.

We know that C(3) = 1. Therefore

C(5) = min{3; C(6) + 3}.

Since C(6) is obviously a positive number, it follows from the last relation that C(5) = 3.

You can get to vertex 2 either from vertex 1, traversing an arc of length 7, or from vertex 3, traversing an arc of length 5, or from vertex 5, traversing an arc of length 2. Therefore the relation holds:

C(2) = min{C(1) + 7; C(3) + 5; C(5) + 2}.

We know that C(1) = 0, C(3) = 1, C(5) = 3. Therefore

C(2) = min{0 + 7; 1 + 5; 3 + 2} = 5.

Now we can find C(4):

C(4) = min{C(2) + 4; C(5) + 5} = min{5 + 4; 3 + 5} = 8.

Thus, the length of the shortest path is 8. The last relation shows that one must go to vertex 4 through vertex 5. Returning to the calculation of C(5), we see that one must go to vertex 5 through vertex 3. And vertex 3 can be reached only from vertex 1. So the shortest path is as follows:

1 → 3 → 5 → 4 .

The problem of the shortest path for specific initial data (Fig. 7 and Table 4) is completely solved.
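The backward recursion carried out above is, in essence, a shortest-path algorithm. As a cross-check, here is a small Dijkstra implementation over the arcs that appear in the worked example; the incoming arcs of vertex 6 are not listed in the text, which does not affect the answer.

```python
import heapq

# Dijkstra's algorithm on the arcs named in the worked example
# (1->3 length 1, 3->5 length 2, 5->4 length 5, and so on).
arcs = {
    (1, 2): 7, (1, 3): 1,
    (3, 2): 5, (3, 5): 2,
    (5, 2): 2, (5, 4): 5,
    (2, 4): 4, (6, 5): 3,
}

def shortest_path(arcs, start, goal):
    graph = {}
    for (a, b), t in arcs.items():
        graph.setdefault(a, []).append((b, t))
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, v = heapq.heappop(queue)
        if d > dist.get(v, float("inf")):
            continue                      # stale queue entry
        for w, t in graph.get(v, []):
            if d + t < dist.get(w, float("inf")):
                dist[w] = d + t
                prev[w] = v
                heapq.heappush(queue, (d + t, w))
    # Reconstruct the path by walking the predecessor links backwards.
    path, v = [goal], goal
    while v != start:
        v = prev[v]
        path.append(v)
    return dist[goal], path[::-1]
```

Running `shortest_path(arcs, 1, 4)` reproduces the answer obtained by hand: length 8 along the path 1 → 3 → 5 → 4.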

Optimization problems on graphs that arise in the preparation of managerial decisions in production management are very diverse. As an example, consider another problem related to transportation.

The maximum flow problem. How (i.e. on what routes) to send the maximum possible amount of goods from the starting point to the final point, if the capacity of the paths between the points is limited?

To solve this problem, each arc of the directed graph corresponding to the transport system must be associated with a number - the capacity of this arc. Consider an example (Fig. 8).

Fig.8. Initial data for the maximum flow problem

The initial data on the transport system, for example, intra-factory, shown in Fig. 8, can also be set in a table (Table 5).

The solution of the maximum flow problem can be obtained from the following considerations.

Obviously, the maximum capacity of the transport system does not exceed 6, since no more than 6 units of cargo can be sent from the starting point 0, namely, 2 units to point 1, 3 units to point 2 and 1 unit to point 3.

Table 5

Initial data for the maximum flow problem

Point of departure   Destination   Bandwidth
0                    1             2
0                    2             3
0                    3             1
1                    2             4
1                    3             1
1                    4             3
2                    3             1
2                    4             2
3                    4             2

Next, it is necessary to ensure that all 6 units of cargo leaving point 0 reach the final point 4. The 2 units of cargo arriving at point 1 can be sent directly to point 4. The goods arriving at point 2 have to be divided: 2 units are sent straight to point 4, and 1 unit to the intermediate point 3 (because of the limited capacity of the section between points 2 and 4). Point 3 then receives 1 unit from point 0 and 1 unit from point 2; we send both to point 4.

So, the maximum capacity of the transport system under consideration is 6 units of cargo. At the same time, the internal sections (branches) between points 1 and 2 and between points 1 and 3 are not used at all, and the branch between points 1 and 4 is not fully loaded: 2 units of cargo are sent along it, while its capacity is 3 units.

The solution can be presented in the form of a table (Table 6).

Table 6

Solving the maximum flow problem

Point of departure   Destination   Transportation plan   Bandwidth
0                    1             2                     2
0                    2             3                     3
0                    3             1                     1
1                    2             0                     4
1                    3             0                     1
1                    4             2                     3
2                    3             1                     1
2                    4             2                     2
3                    4             2                     2

Linear programming problem for flow maximization. Let us formulate the maximum flow problem in terms of linear programming. Let X KM be the volume of traffic from point K to point M. According to Fig.8, K = 0,1,2,3, M = 1,2,3,4, and transportation is possible only to a point with a higher number. So there are 9 variables X KM in total, namely X 01 , X 02 , X 03 , X 12 , X 13 , X 14 , X 23 , X 24 , X 34 . The linear programming problem for maximizing the flow has the form:

F→ max ,

X 01 + X 02 + X 03 = F (0)

- X 01 + X 12 + X 13 + X 14 = 0 (1)

- X 02 - X 12 + X 23 + X 24 = 0 (2)

- X 03 - X 13 - X 23 + X 34 = 0 (3)

- X 14 - X 24 - X 34 = -F (4)

X 01 ≤ 2

X 02 ≤ 3

X 03 ≤ 1

X 12 ≤ 4

X 13 ≤ 1

X 14 ≤ 3

X 23 ≤ 1

X 24 ≤ 2

X 34 ≤ 2

X KM ≥ 0 , K, M = 0, 1, 2, 3, 4

F ≥ 0 .

Here F is the objective function; condition (0) describes the entry of goods into the transport system. Conditions (1) - (3) are balance relations for the internal nodes 1-3 of the system: for each internal node, the incoming flow of goods equals the outgoing flow; goods neither accumulate inside the system nor are "born" in it. Condition (4) is the condition for the exit of goods from the system. Together with condition (0), it forms a balance relation for the system as a whole ("input" equals "output"). The next nine inequalities set capacity limits on the individual "branches" of the transport system. Then the non-negativity of the traffic volumes and of the objective function is stated. The last inequality actually follows from the form of the objective function (relation (0) or (4)) and the non-negativity of the traffic volumes. Still, it carries useful general information: the system can pass a positive volume of goods, or zero (for example, if goods merely move in a circle inside the system), but not a negative one (that would make no economic sense, although the formal mathematical model "knows" nothing about economic sense).
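A standard combinatorial alternative to solving this linear program is the augmenting-path (Ford-Fulkerson, here in its Edmonds-Karp form) method. Below is a minimal sketch on the network of Fig. 8, with the capacities taken from the constraints above.

```python
from collections import deque

# Capacities of the network of Fig. 8, taken from the LP constraints.
capacity = {
    (0, 1): 2, (0, 2): 3, (0, 3): 1,
    (1, 2): 4, (1, 3): 1, (1, 4): 3,
    (2, 3): 1, (2, 4): 2, (3, 4): 2,
}

def max_flow(capacity, source, sink):
    # Residual capacities, including reverse arcs with capacity 0.
    residual = {}
    nodes = set()
    for (a, b), c in capacity.items():
        residual[(a, b)] = c
        residual.setdefault((b, a), 0)
        nodes.update((a, b))
    flow = 0
    while True:
        # BFS for an augmenting path in the residual network.
        prev = {source: None}
        queue = deque([source])
        while queue and sink not in prev:
            v = queue.popleft()
            for w in nodes:
                if w not in prev and residual.get((v, w), 0) > 0:
                    prev[w] = v
                    queue.append(w)
        if sink not in prev:
            return flow                     # no augmenting path left
        # Find the bottleneck along the path and augment.
        path, v = [], sink
        while prev[v] is not None:
            path.append((prev[v], v))
            v = prev[v]
        bottleneck = min(residual[e] for e in path)
        for a, b in path:
            residual[(a, b)] -= bottleneck
            residual[(b, a)] += bottleneck
        flow += bottleneck
```

Calling `max_flow(capacity, 0, 4)` returns 6, in agreement with the reasoning above (and with the max-flow min-cut theorem: the cut around the source has capacity 2 + 3 + 1 = 6).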

On the variety of optimization problems. A wide variety of optimization problems arise in decision-making. To solve them, one or another method, exact or approximate, is used. Optimization problems are also widely used in theoretical economic studies. It suffices to recall the optimization of a country's economic growth using the Wassily Leontief input-output matrix, or the microeconomic problems of determining the optimal output given the cost function and a fixed price (or under monopoly), or of minimizing costs for a given output by choosing the optimal combination of production factors (taking into account their prices).

In addition to the methods above, recall that smooth functions are optimized by setting the derivative equal to 0 (for functions of several variables, the partial derivatives). If there are constraints, Lagrange multipliers are used. These methods are usually taught in advanced mathematics courses and are therefore omitted here.

Of interest are optimization problems with fuzzy variables, as well as optimization problems arising in econometrics. They are discussed in the relevant literature.

Literature

1. Gass S. Journey to the Land of Linear Programming. Translated from English. Moscow: Mir, 1973. 176 p.

2. Kaufmann A., Faure R. Invitation to Operations Research. Translated from French. Moscow: Mir, 1966. 280 p.

3. Belov V.V., Vorobyov E.M., Shatalov V.E. Graph Theory. Moscow: Vysshaya Shkola, 1976. 392 p.

4. Burkov V.N., Zalozhnev A.Yu., Novikov D.A. Graph Theory in the Management of Organizational Systems. Moscow: Sinteg, 2001. 124 p.

5. Orlov A.I. Optimization Problems and Fuzzy Variables. Moscow: Znanie, 1980. 64 p.

6. Orlov A.I. Econometrics. Moscow: Ekzamen, 2002. 576 p.

Problems on decision-making methods

1. Draw the constraints of the following linear programming problem on the plane and solve the problem graphically:

400 W 1 + 450 W 2 → min ,

5 W 1 + 10 W 2 ≥ 45,

20 W 1 + 15 W 2 ≥ 80,

W 1 ≥ 0, W 2 ≥ 0.
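Since the optimum of a linear program is attained at a vertex of the feasible region, a graphical solution can be cross-checked by enumerating the intersection points of the constraint boundary lines (including the axes) and evaluating the objective at every feasible vertex. A minimal sketch for this problem:

```python
from itertools import combinations

# Each constraint of problem 1 is written as a*W1 + b*W2 >= c
# (the axes W1 >= 0 and W2 >= 0 are included as constraints).
constraints = [
    (5, 10, 45),    # 5 W1 + 10 W2 >= 45
    (20, 15, 80),   # 20 W1 + 15 W2 >= 80
    (1, 0, 0),      # W1 >= 0
    (0, 1, 0),      # W2 >= 0
]

def solve_vertices(constraints, objective):
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue                          # parallel boundary lines
        # Intersection of the two boundary lines (Cramer's rule).
        w1 = (c1 * b2 - c2 * b1) / det
        w2 = (a1 * c2 - a2 * c1) / det
        # Keep the point only if it satisfies ALL constraints.
        if all(a * w1 + b * w2 >= c - 1e-9 for a, b, c in constraints):
            value = objective(w1, w2)
            if best is None or value < best[0]:
                best = (value, w1, w2)
    return best

best = solve_vertices(constraints, lambda w1, w2: 400 * w1 + 450 * w2)
```

For these data the minimum 2200 is attained at W1 = 1, W2 = 4, where both resource constraints hold with equality.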

2. Solve the linear programming problem:

W 1 + 5 W 2 → max,

0.1 W 1 + W 2 ≤ 3.8 ,

0.25 W 1 + 0.25 W 2 ≤ 4.2 ,

W 1 ≥ 0 , W 2 ≥ 0 .

3. Solve the problem of integer programming:

10 X + 5 Y → max ,

8 X + 3 Y ≤ 40 ,

3 X + 10 Y ≤ 30 ,

X ≥ 0 , Y ≥ 0 , X and Y are integers.
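For a feasible region this small, the integer program of problem 3 can be checked by exhaustive search over the integer grid. A minimal sketch:

```python
# Exhaustive search over the (small) feasible integer grid of problem 3:
# maximize 10X + 5Y subject to 8X + 3Y <= 40, 3X + 10Y <= 30, X, Y >= 0 integer.
best_value, best_point = None, None
for x in range(0, 6):            # 8X <= 40 gives X <= 5
    for y in range(0, 4):        # 10Y <= 30 gives Y <= 3
        if 8 * x + 3 * y <= 40 and 3 * x + 10 * y <= 30:
            value = 10 * x + 5 * y
            if best_value is None or value > best_value:
                best_value, best_point = value, (x, y)
```

Here the search finds the optimum 50 at X = 5, Y = 0 (both constraints are satisfied: 8·5 = 40 ≤ 40 and 3·5 = 15 ≤ 30).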

4. Solve the knapsack problem:

X 1 + X 2 + 2 X 3 + 2 X 4 + X 5 + X 6 → max,

0.5 X 1 + X 2 + 1.5 X 3 + 2 X 4 + 2.5 X 5 + 3 X 6 ≤ 3.

The control parameters X k , k = 1,2,…, 6 take values from a set containing two elements, 0 and 1.
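With six Boolean variables there are only 2^6 = 64 candidate solutions, so problem 4 can be checked by direct enumeration. A minimal sketch:

```python
from itertools import product

# Exhaustive search over all 2^6 Boolean vectors for problem 4:
# maximize X1 + X2 + 2 X3 + 2 X4 + X5 + X6
# subject to 0.5 X1 + X2 + 1.5 X3 + 2 X4 + 2.5 X5 + 3 X6 <= 3.
utilities = [1, 1, 2, 2, 1, 1]
weights = [0.5, 1, 1.5, 2, 2.5, 3]

best_value, best_choice = None, None
for choice in product((0, 1), repeat=6):
    weight = sum(w * x for w, x in zip(weights, choice))
    if weight <= 3:
        value = sum(u * x for u, x in zip(utilities, choice))
        if best_value is None or value > best_value:
            best_value, best_choice = value, choice
```

The search finds the maximum utility 4, achieved by taking items 1, 2 and 3 (total weight exactly 3).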

5. The transport network (with indication of distances) is shown in Figure 9. Find the shortest path from point 1 to point 4.

Fig.9. Initial data for the shortest path problem.

7. Solve the traveling salesman problem for four cities (the route must be closed and contain no repeated visits). Travel costs are given in Table 7.

Table 7

Initial data for the traveling salesman problem

Departure city

Destination city

Travel costs

8. How to send the maximum amount of cargo from the starting point 1 to the final point 8 if the capacity of the paths between the points of the transport network (Fig. 10) is limited (Table 8)?

Fig.10. Transport network for the maximum flow problem.

Table 8

Initial data for the maximum flow problem

Point of departure

Destination

Bandwidth

Topics of reports and abstracts

1. Classification of optimization problems of decision making.

2. Pareto optimal solutions.

3. Multi-criteria decision-making problems: different methods of criteria convolution.

4. Optimization problems and fuzzy variables (based on work).

5. Modeling and expert assessments in decision making.

6. Interactive decision-making systems.

7. Methods for taking into account the uncertainties of decision making: probabilistic models, fuzzy theory, interval mathematics.

8. Econometric methods of decision making (based on the monograph).

9. Simulation modeling and statistical testing method (Monte Carlo) in decision making.

11. Methods of game theory (conflict theory), the role of information and Nash equilibrium in decision theory.

12. Problems of the combined application of various methods in specific applied work.

13. Information technology decision support.



Rejection of the currently dominant definition

Economic theory is the science of how people and society choose, over time, with or without the use of money, which scarce productive resources to employ in producing various goods, and how to distribute those goods for consumption, now and in the future, among different people and groups in society.

In favor of a brief one:

ET is the science of optimizing the economy (management) at all levels up to the global level.

This is connected with the possibilities of the concept of optimization.

OPTIMIZATION (one formulation): determining the values of economic indicators at which the optimum, that is, the best state of the system, is achieved. Most often, the optimum corresponds to achieving the highest result for given resource costs, or to achieving a given result with minimal resource costs. http://slovari.yandex.ru/dict/economic

Or: Optimization (from Latin optimum, "the best") is the process of finding an extremum (the global maximum or minimum) of a certain function, or of choosing the best (optimal) option from a set of possible ones. The most reliable way to find the best option is a comparative assessment of all possible options (alternatives).
If the number of alternatives is large, mathematical programming methods are usually used to find the best one. These methods can be applied when there is a rigorous statement of the problem: a set of variables is specified, the region of their possible variation is specified (constraints are imposed), and the form of the objective function (the function whose extremum is to be found) of these variables is defined. The objective function is a quantitative measure (criterion) of the degree to which the goal is achieved. In dynamic problems, when the constraints imposed on the variables depend on time, methods of optimal control and dynamic programming are used to find the best course of action.

To find the optimal option among a large number of rational ones, information is needed about the preference of different combinations of values of the indicators characterizing the options. In the absence of such information, the best of the rational options is chosen by the leader responsible for making the decision ...

Introducing the concept of optimization into the definition of economic theory reduces the chances of empty verbiage in this science.

Economic theory as a science of economic optimization requires:

- optimization of the conceptual apparatus of this theory;
- optimization of economic research methods;
- optimization of the consideration and definition of each concept;
- optimization of economic decisions at all levels of economic life;
- the use of optimality criteria in the evaluation of any economic phenomena.

Goals of economic education:
formation of the foundations of economic optimization thinking;
development of functional economic literacy and the ability to optimize self-development;
formation of practical skills for making optimal decisions in various economic situations.

Tasks of economic education:
to form the knowledge, skills and abilities necessary for optimization in economic life;
to develop a culture of economic optimization thinking and to teach the use of economic optimization tools.

The classics of political economy recognize personal benefit as the criterion of optimality.
Neoclassical economics and trends close to it are likewise not against economic egoism.

An optimization-focused economic theory allows for self-interest as a particular (though widespread) case of economic decision-making at all levels.

At the same time, such an ET allows at all levels for the optimality of collective benefit, the preferential benefit of the majority (or even all) of the participants at any level of economic life: family (where there are 2 or more family members), local, regional, state, interstate, global ...

Diverse benefits (private and common) as a criterion of optimality are also characteristic of wildlife (http://ddarwin.narod.ru/); this also includes the benefit of the very survival of any system.

The hitherto dominant economic theory (sharply competitive, "market"-oriented) justifies only private benefits, often bashfully turning a blind eye to the efforts of countries and peoples to achieve common benefits (sometimes inevitably to the detriment of private ones) in the name of the existence of economic systems at different levels, from small settlements down to individual families (for example, farms).

ET as a science of optimizing the economy (management) at all levels, up to the global one, allows more research into the harmonization of personal and common interests for the survival of all economic entities.

Social groups have been engaged in various aspects of economic optimization since primitive times. Optimization processes intensified in recent millennia with the formation of states and the emergence of large polyethnic communities in China and India, Egypt and Sumer, in the expanses of Scythia and in other regions. Without various forms of optimization (one or another coordination of interests, often by force), economic life is impossible.

Optimality is linked to efficiency, and efficiency to optimality. This connection runs through all the basic concepts, even of the hitherto dominant ET:

Needs and economic goods, utility.
Economic resources, their types, the scarcity of resources (and their optimal use).
Economic choice. Opportunity costs. The principle of increasing opportunity costs. The production possibility curve.
The concept of efficiency. Pareto efficiency and the optimality criterion. Resource efficiency and allocation efficiency.
Positive and normative theory. Economic policy. Economic systems.
The market system. The market. Competition.
Demand and price. The demand function and demand curve. Demand factors. The law of demand. Consumer surplus. Individual and market demand.
Supply and price. The supply function and supply curve. Supply factors. The law of supply. Producer surplus.
Market equilibrium of supply and demand. The equilibrium price. Shortage and surplus.
The impact of commodity taxes and subsidies; the distribution of the tax burden.
Price elasticity of demand and its properties. Arc elasticity.
Cross elasticity. Income elasticity of demand. Price elasticity of supply.
Premises of consumer choice analysis. Utility. Marginal utility.
Consumer equilibrium in the cardinalist theory.
Consumer preferences. Indifference curves.
The budget constraint. The consumer's equilibrium position.
Changes in consumer income and in the prices of goods. The substitution effect. The income effect.
Inferior goods. Substitutability and complementarity of goods.
Production. Factors of production. Factor incomes.
The concept of a production function.
Total, average and marginal product.
The law of diminishing marginal productivity.
The isoquant and its properties. The isocost line. Producer equilibrium.
The firm: concept and types.
The firm's costs. Fixed and variable costs.
Total costs. Average costs.
Marginal cost.
Accounting and economic profit.
The firm's total, average and marginal revenue.
Various types of market structures.
Perfect competition.
Equilibrium of a competitive firm in the short run.
Equilibrium of a competitive firm in the long run.
Pure monopoly. Determining price and output under monopoly. Indicators of market power. Economic consequences of monopoly.
Monopolistic competition. Setting price and output under monopolistic competition. Non-price competition. Product differentiation.
Oligopoly. Determining price and output under oligopoly.
Markets for factors of production: labor, capital, land. Formation of the demand for factors of production and its derived nature.
The labor market. Demand and supply in the labor market.
Monopsony and bilateral monopoly in the labor market. The role of trade unions. Efficiency wages. The theory of human capital. Investment in education.
The capital market. Physical and money capital. Capital and loan interest. The demand and supply of loans.
The interest rate under perfect competition. Real and nominal interest rates. The equilibrium interest rate.
Firms' investment decisions. The principle of discounting. Evaluating investment efficiency.
Partial and general equilibrium. General equilibrium and distribution efficiency.
Efficiency criteria in a market economy.
The efficiency criterion and the Pareto optimum.
Efficiency and social justice; the social and economic optimum. The compensation principle (the Kaldor-Hicks principle).
"Market failures". The social security system.
Inequality, poverty and discrimination. Income distribution. The Lorenz curve. The Gini coefficient.
Public goods. The demand for and supply of public goods. A comparative analysis of public and private goods.
Private and social costs. Private (internal) and social (external) benefits. The problem of the public goods market and the regulatory role of the state.
The provision of public goods through political institutions. Public choice in direct and representative democracy. Decisions made by agreement. Majority rules. Lobbying. Political rent-seeking.
Externalities: positive and negative.
The problem of internalizing externalities. Government policy: corrective taxes and subsidies.
The theory of property rights. The Coase theorem. Transaction costs. The market for property rights.

It seems that there is no need to prove to modern economists the promise of optimality as the main problem of modern economic theory. Almost any specialist thinks about the optimization of the economy at one level or another.

Modern ET should simply justify these efforts of specialists.

If the optimization is performed over the parameters for a given structure of the object, it is called parametric optimization; the problem of choosing the optimal structure is structural optimization.

The standard mathematical optimization problem is formulated as follows: among the elements χ forming a set X, find an element χ* that delivers the minimum value f(χ*) of a given function f(χ). In order to pose the optimization problem correctly, it is necessary to specify:

Then to solve the problem means one of the following:

If the function being minimized is not convex, the search is often limited to local minima and maxima: points x* such that f(x*) ≤ f(x) everywhere in some neighborhood of x* for a minimum, and f(x*) ≥ f(x) for a maximum.

If the admissible set coincides with the whole space, the problem is called an unconstrained optimization problem; otherwise, it is a constrained (conditional) optimization problem.

Classification of optimization methods

The general statement of an optimization problem admits a wide variety of problem classes. The class of the problem determines the choice of method (the efficiency of its solution). The classification of problems is determined by the objective function and by the admissible region (given by a system of inequalities and equalities, or by a more complex algorithm).

Optimization methods are classified according to optimization tasks:

  • Local methods: converge to some local extremum of the objective function. In the case of a unimodal objective function, this extremum is unique and will be the global maximum/minimum.
  • Global methods: deal with multi-extremal objective functions. In the global search, the main task is to identify trends in the global behavior of the objective function.

Currently existing search methods can be divided into three large groups:

  1. deterministic;
  2. random (stochastic);
  3. combined.

According to the criterion of the dimension of the admissible set, optimization methods are divided into methods of one-dimensional optimization and methods of multidimensional optimization.

By the form of the objective function and the admissible set, optimization problems and the methods for their solution can also be divided into classes.

According to the requirements for smoothness and the availability of partial derivatives of the objective function, methods can also be divided into:

  • direct methods that require only the calculation of the objective function at approximation points;
  • first order methods: require the calculation of the first partial derivatives of a function;
  • second-order methods: require the calculation of second partial derivatives, that is, the Hessian of the objective function.
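As a minimal illustration of a first-order method, here is gradient descent on a smooth convex function of two variables; the function, starting point and step size are chosen purely for illustration.

```python
# Gradient descent (a first-order method: it needs only first partial
# derivatives) on the convex function f(x, y) = (x - 3)^2 + 2*(y + 1)^2,
# whose unique minimum is at the point (3, -1).
def grad_f(x, y):
    return (2 * (x - 3), 4 * (y + 1))

x, y = 0.0, 0.0          # starting point (arbitrary)
step = 0.1               # fixed step size, small enough for convergence here
for _ in range(200):
    gx, gy = grad_f(x, y)
    x, y = x - step * gx, y - step * gy
# After 200 iterations (x, y) is very close to the minimizer (3, -1).
```

Because the objective is unimodal, this local method reaches the global minimum; for a multi-extremal function it would converge only to some local extremum, as noted above.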

In addition, optimization methods are divided into the following groups:

  • analytical methods (for example, the Lagrange multiplier method and Karush-Kuhn-Tucker conditions);
  • graphic methods.

Depending on the nature of the set X, mathematical programming problems are classified as:

  • discrete programming (or combinatorial optimization) problems, if X is finite or countable;
  • integer programming problems, if X is a subset of the set of integers;
  • nonlinear programming problems, if the constraints or the objective function contain nonlinear functions and X is a subset of a finite-dimensional vector space;
  • linear programming problems, if all constraints and the objective function contain only linear functions.

In addition, branches of mathematical programming are parametric programming, dynamic programming, and stochastic programming.

Mathematical programming is used in solving optimization problems in operations research.

The method of finding the extremum is completely determined by the class of the problem. But before a mathematical model is obtained, four stages of modeling must be performed:

  • Determining the boundaries of the optimization system
    • We discard those links between the optimization object and the outside world that cannot strongly affect the optimization result or, more precisely, those without which the solution becomes simpler
  • Choice of controlled variables
    • We “freeze” the values of some variables (the uncontrolled variables); the others may take any values from the region of admissible decisions (the controlled variables)
  • Defining Constraints on Controlled Variables
    • … (equalities and/or inequalities)
  • Selecting a numerical optimization criterion (for example, a performance indicator)
    • Create an objective function

History

In 1949 Kantorovich, together with M. K. Gavurin, developed the method of potentials, which is used in solving transportation problems. In subsequent works by Kantorovich, Nemchinov, V. V. Novozhilov, A. L. Lur'e, A. Brudno, Aganbegyan, D. B. Yudin and others, the mathematical theory of linear and nonlinear programming was developed further, as was the application of its methods to the study of various economic problems.

Many works of foreign scientists are devoted to the methods of linear programming. In 1941, F. L. Hitchcock formulated the transportation problem. The basic method for solving linear programming problems, the simplex method, was published by Dantzig in 1949. The methods of linear and nonlinear programming were further developed in the works of H. Kuhn, A. Tucker, S. I. Gass, A. Charnes, E. M. Beale and others.

Simultaneously with the development of linear programming, much attention was paid to nonlinear programming problems, in which the objective function, or the constraints, or both are nonlinear. In 1951, Kuhn and Tucker published optimality conditions for nonlinear programming problems (necessary conditions, which are also sufficient under convexity assumptions). This work formed the basis for subsequent research in this area.

Since 1955, many works on quadratic programming have been published (works by Beale, Barankin and R. Dorfman, M. Frank and P. Wolfe, Markowitz and others). J. B. Dennis, J. B. Rosen and G. Zoutendijk developed gradient methods for solving nonlinear programming problems.

At present, algebraic modeling languages such as AMPL and LINGO have been developed for the effective application of mathematical programming methods and for solving problems on computers.


The version of a decision made at the managerial level that best resolves an issue is considered optimal, and the process of finding it is called optimization.

The organizational, socio-economic, technical, and other aspects of production management are now so interdependent and complex that a management decision must take into account a large number of closely intertwined factors, which makes it impossible to analyze each of them separately by traditional analytical methods.

Many of the factors that are decisive in decision-making cannot, by their nature, be quantified; others are practically constant. This created the need for special methods that support the choice of important management decisions within complex organizational, economic, and technical tasks (expert assessments, operations research, optimization methods, etc.).

Operations research methods are used to find optimal solutions in such areas of management as the organization of production and transportation processes, the planning of large-scale production, and material and technical supply.

Decision optimization methods study, by comparison, numerical estimates of factors that cannot be analyzed by traditional methods. The optimal solution is the best of the possible options for the economic system as a whole; a solution that is merely the most acceptable for an individual element of the system is called suboptimal.

Essence of operations research methods

As mentioned earlier, these methods form the core of management decision optimization. They are based on deterministic and probabilistic mathematical models representing the process, activity, or system under study. Models of this kind give a quantitative characterization of the corresponding problem and serve as the basis for making an important management decision in the search for the optimal option.

The questions that matter to the immediate supervisors of production and that are resolved in the course of using the methods under consideration:

  • how well-founded the chosen solutions are;
  • how much better they are than the alternatives;
  • how fully the determining factors have been taken into account;
  • what the optimality criterion for the chosen solutions is.

These (managerial) decision optimization methods are aimed at finding optimal solutions for as many firms, companies, or their divisions as possible. They rest on established results in statistics, mathematics, and economics (game theory, queueing theory, graph theory, optimal programming, mathematical statistics).

Methods of expert assessments

These methods of optimizing management decisions are used when the task does not lend itself, in whole or in part, to formalization and its solution cannot be found by mathematical methods.

Expertise is the study of complex special issues at the stage of developing a specific management decision by persons with the relevant body of knowledge and substantial experience, in order to obtain conclusions, recommendations, opinions, and assessments. In expert research, the latest achievements of science and technology within the expert's specialization are applied.

The considered methods of optimizing management decisions (expert assessments) are effective in solving the following management tasks in the production sector:

  1. Studying complex processes, phenomena, situations, and systems that are characterized by non-formalizable, qualitative characteristics.
  2. Ranking and determining, according to a given criterion, the essential factors that are decisive for the functioning and development of the production system.
  3. Forecasting trends in the development of the production system and of its interaction with the external environment, where these methods are especially effective.
  4. Increasing the reliability of expert evaluation of predominantly target functions of a quantitative and qualitative nature by averaging the opinions of qualified specialists.

And these are just some of the uses of expert-assessment methods for optimizing management decisions.

Classification of the considered methods

Methods for solving optimization problems, based on the number of parameters, can be divided into:

  • One-dimensional optimization methods.
  • Multidimensional optimization methods.

These are also called "numerical optimization methods"; to be precise, they are algorithms for searching for the extremum.

By the order of the derivatives they use, methods are divided into:

  • direct optimization methods (zero order);
  • gradient methods (1st order);
  • 2nd order methods, etc.

Most multidimensional optimization methods reduce the problem to a sequence of one-dimensional optimizations.

One-dimensional optimization methods

All numerical optimization methods rest on exact or approximate evaluation of characteristics such as the values of the objective function, the functions defining the admissible set, and their derivatives. For each individual task, which characteristics to compute is decided based on the properties of the function under consideration and on the available capabilities and limitations in storing and processing information.

There are the following methods for solving one-dimensional optimization problems:

  • the Fibonacci method;
  • the dichotomy method;
  • the golden-section method;
  • the step-doubling method.

Fibonacci method

First, define the coordinate of a point x on the interval [a; b] as the ratio (x − a) : (b − a). Then a has coordinate 0 relative to the interval, b has coordinate 1, and the midpoint has coordinate ½.

If we set F0 = F1 = 1, then F2 = 2, F3 = 3, …, and in general Fn = Fn−1 + Fn−2; the Fn are the Fibonacci numbers. Fibonacci search, the optimal strategy of so-called sequential search for an extremum, is named after them because of this close connection.

In the optimal strategy, one takes xn−1 = Fn−2 : Fn and xn = Fn−1 : Fn. Either of the two intervals [0; Fn−1 : Fn] or [Fn−2 : Fn; 1] can become the narrowed uncertainty interval, and the inherited point then has coordinate Fn−2 : Fn−1 or Fn−3 : Fn−1 relative to the new interval. As xn−2, a point with the other of these two coordinates relative to the new interval is taken. By reusing F(xn−2), the function value inherited from the previous interval, it becomes possible to narrow the uncertainty interval while carrying one function value over to the next step.

At the final step, an uncertainty interval is reached whose midpoint is inherited from the previous step. As x1, a point with relative coordinate ½ + ε is taken, and the final uncertainty interval is [0; ½ + ε] or [½; 1] relative to it.

At the first step, the length of this interval was reduced from 1 to Fn−1 : Fn. At the following steps, the reductions in the lengths of the corresponding intervals are Fn−2 : Fn−1, Fn−3 : Fn−2, …, F2 : F3, and finally F1 : F2 (1 + 2ε). The final interval therefore has length (1 + 2ε) : Fn.

If we neglect ε, then 1 : Fn decreases asymptotically like rⁿ as n → ∞, where r = (√5 − 1) : 2 ≈ 0.6180.

Thus, for large n, each subsequent step of the Fibonacci search narrows the current interval by approximately this factor. This result should be compared with 0.5, the coefficient by which the bisection method narrows the uncertainty interval when searching for a zero of a function.
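The procedure above can be sketched in code. This is an illustrative implementation (the function and variable names are our own), written for minimizing a unimodal function on [a, b], with F0 = F1 = 1 as in the text:

```python
def fibonacci_search(f, a, b, n=20):
    """Fibonacci search for the minimum of a unimodal f on [a, b].
    Interior points sit at relative coordinates F(n-2)/F(n) and
    F(n-1)/F(n); one function value is inherited at every step."""
    F = [1, 1]                        # F0 = F1 = 1, Fn = Fn-1 + Fn-2
    for _ in range(2, n + 1):
        F.append(F[-1] + F[-2])
    x1 = a + (b - a) * F[n - 2] / F[n]
    x2 = a + (b - a) * F[n - 1] / F[n]
    f1, f2 = f(x1), f(x2)
    for k in range(1, n - 1):
        if f1 <= f2:                  # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (b - a) * F[n - k - 2] / F[n - k]
            f1 = f(x1)
        else:                         # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (b - a) * F[n - k - 1] / F[n - k]
            f2 = f(x2)
    return (a + b) / 2
```

With n = 20 the final interval has length of roughly (b − a)/F20, so the returned midpoint locates the minimum of a well-behaved unimodal function to well under 0.01 on an interval of length 5.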

Dichotomy method

Suppose we need to find the extremum of an objective function on the interval (a; b). The interval is divided into four equal parts, the function is evaluated at the 5 resulting points, and the minimum among them is selected. The extremum must lie within the interval (a′; b′) formed by the two segments adjacent to the minimum point, so the search boundaries narrow by a factor of 2 (or by a factor of 4 if the minimum falls at a or b). The new interval is again divided into four equal segments; since the function values at three of the points were found at the previous stage, the objective function needs to be computed at only two new points.
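A direct sketch of this scheme (the naming is our own; for simplicity it re-evaluates all five points each round instead of reusing the three values the text says can be inherited):

```python
def quarter_search(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by repeated quartering:
    evaluate at 5 equally spaced points, keep the two quarters
    adjacent to the smallest value, and repeat."""
    xs = [a + i * (b - a) / 4 for i in range(5)]
    fs = [f(x) for x in xs]
    while b - a > tol:
        i = fs.index(min(fs))          # index of the current minimum point
        lo, hi = max(i - 1, 0), min(i + 1, 4)
        a, b = xs[lo], xs[hi]          # interval halves (quarters at an endpoint)
        xs = [a + j * (b - a) / 4 for j in range(5)]
        fs = [f(x) for x in xs]
    return (a + b) / 2
```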

Golden section method

For large n, the coordinates of the points xn and xn−1 approach 1 − r ≈ 0.3820 and r ≈ 0.6180. Starting directly from these values gives a strategy very close to the optimal one.

If F(0.3820) > F(0.6180), the interval [0.3820; 1] is selected. Since 0.6180 · 0.6180 ≈ 0.3820, the inherited point already lies at relative coordinate 0.3820 of the new interval, where F is known. Therefore, at every stage starting from the second, only one new calculation of the objective function is needed, and each step reduces the length of the interval under consideration by a factor of 0.6180.

Unlike the Fibonacci search, this method does not require fixing the number n before the start of the search.

The "golden section" of the interval (a; b) is the point c at which the ratio of the length of the whole interval (a; b) to its larger part (a; c) is identical to the ratio of the larger part (a; c) to the smaller part (c; b). It is easy to see that this ratio is given by the number r defined above. For large n, the Fibonacci method therefore turns into the golden-section method.
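In code, the golden-section step looks like this (an illustrative sketch, names ours): at every iteration exactly one new point is evaluated, and the interval shrinks by the factor r ≈ 0.6180.

```python
def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    r = (5 ** 0.5 - 1) / 2            # ~0.6180, the reduction factor
    x1 = b - r * (b - a)              # relative coordinate 1 - r ~ 0.3820
    x2 = a + r * (b - a)              # relative coordinate r ~ 0.6180
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 <= f2:                  # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)      # one new evaluation per step
            f1 = f(x1)
        else:                         # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return (a + b) / 2
```

Unlike the Fibonacci version, no step count n has to be fixed in advance; the loop simply runs until the interval is shorter than the tolerance.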

Step doubling method

The essence is to search for a direction in which the objective function decreases and, while the search is successful, to move in that direction with a gradually increasing step.

First, we specify the starting point M0, the minimum step h0, and the search direction, and evaluate the function F(M) at M0. Then we take a step and find the value of the function at the new point.

If the new value is smaller than the value at the previous step, the next step is taken in the same direction, after first doubling its size. If the value is greater than the previous one, the direction of the search is reversed, and movement begins in the new direction with step h0. The presented algorithm admits various modifications.
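A sketch of one such modified algorithm (the names and the exact restart rule are our own choices: on failure we reverse direction and shrink the step by a factor of 4 instead of restarting exactly at h0, which gives the loop a natural stopping criterion):

```python
def step_doubling_min(f, x0, h0=1e-3, tol=1e-8, max_iter=100_000):
    """Step-doubling search along one axis: double the step while f
    keeps decreasing; on failure, reverse and reduce the step."""
    x, fx = x0, f(x0)
    step = h0
    for _ in range(max_iter):
        if abs(step) < tol:
            break                     # step has shrunk to tolerance: stop
        x_new = x + step
        f_new = f(x_new)
        if f_new < fx:                # success: accept the point, double the step
            x, fx = x_new, f_new
            step *= 2
        else:                         # failure: reverse and reduce the step
            step = -step / 4
    return x
```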

Multivariate optimization methods

Zero-order methods do not use derivatives of the minimized function, so they are effective when the derivatives are difficult to compute.

First-order methods are also called gradient methods, because the search direction is established from the gradient of the function: the vector whose components are the partial derivatives of the minimized function with respect to the corresponding optimized parameters.
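A minimal fixed-step gradient descent sketch (illustrative; the names are ours, and a practical implementation would choose the step length by a line search rather than fixing it):

```python
def gradient_descent(grad, x0, lr=0.1, tol=1e-12, max_iter=100_000):
    """First-order minimization: step against the gradient.
    `grad` returns the vector of partial derivatives at a point."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) < tol:
            break                                  # gradient ~ 0: done
        x = [xi - lr * gi for xi, gi in zip(x, g)]  # move downhill
    return x
```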

Second-order methods use second derivatives; their use is rather limited by the difficulty of computing them.

List of unconstrained optimization methods

For multivariate search without the use of derivatives, the unconstrained optimization methods are as follows:

  • Hooke and Jeeves (a combination of two kinds of search: pattern moves and exploratory moves);
  • minimization by the regular simplex (searching for the minimum point of the function by comparing its values at the vertices of the simplex at each iteration);
  • cyclic coordinate descent (using the coordinate vectors as reference directions for the search);
  • Rosenbrock (based on the use of one-dimensional minimization);
  • minimization by the deformed simplex (a modification of the regular-simplex method that adds compression and stretching procedures).
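Of these, cyclic coordinate descent is the simplest to sketch (illustrative code, names ours): minimize along each coordinate axis in turn, halving the probe step once a full sweep brings no improvement.

```python
def coordinate_descent(f, x0, step=0.5, tol=1e-6, max_iter=10_000):
    """Cyclic coordinate descent: probe +/-step along each axis in turn;
    halve the step when no axis yields an improvement."""
    x = list(x0)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x[:]
                trial[i] = x[i] + delta
                if f(trial) < f(x):    # accept any improving move
                    x = trial
                    improved = True
        if not improved:
            step /= 2                  # refine the search scale
    return x
```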

When derivatives are used in multidimensional search, the method of steepest descent stands out (the most fundamental procedure for minimizing a differentiable function of several variables).

There are also methods that use conjugate directions, for example the Davidon-Fletcher-Powell method. Its essence is to form the search directions as Dj·grad f(y), where Dj is a matrix updated at each iteration (approximating the inverse of the Hessian matrix).

Classification of mathematical optimization methods

Conventionally, based on the dimension of the objective function, they are:

  • with 1 variable;
  • multidimensional.

Depending on whether the function is linear or nonlinear, a large number of different mathematical methods exist for finding the extremum that solves the problem.

According to the criterion of using derivatives, mathematical optimization methods are divided into:

  • one-dimensional methods, which use the first derivative of the objective function (a scalar);
  • multidimensional methods, in which the first derivative is a vector quantity, the gradient.

Based on the efficiency of the calculation, there are:

  • methods of fast extremum calculation;
  • simplified calculation.

This is a conditional classification of the considered methods.

Business process optimization

Different methods can be used here, depending on the problems being solved. It is customary to single out the following methods for optimizing business processes:

  • elimination (reducing the levels of the existing process, eliminating the causes of interference, removing input control, shortening transport routes);
  • simplification (facilitated order processing, reduced complexity of the product structure, distribution of work);
  • standardization (use of special programs, methods, technologies, etc.);
  • acceleration (parallel engineering, simulation, rapid prototyping, automation);
  • change (changes in raw materials, technologies, working methods, staffing, work systems, order volume, processing procedures);
  • ensuring interaction (in relation to organizational units, personnel, work system);
  • selection and inclusion (relative to the necessary processes, components).

Tax optimization: methods

Russian legislation provides the taxpayer with ample opportunities to reduce taxes, which is why it is customary to divide the methods aimed at minimizing them into general (classical) and special.

The general methods of tax optimization are as follows:

  • elaboration of the company's accounting policy to make maximum use of the opportunities provided by Russian legislation (the procedure for writing off low-value and short-lived items, the choice of method for calculating the proceeds from the sale of goods, etc.);
  • optimization through a contract (conclusion of preferential transactions, clear and competent use of wording, etc.);
  • the use of various kinds of benefits, tax exemptions.

The second group of methods can also be used by all firms, but they still have a rather narrow scope. Special tax optimization methods are as follows:

  • replacement of relations (an operation subject to onerous taxation is replaced by another that achieves a similar goal while enjoying a preferential taxation procedure);
  • separation of relations (replacement of only part of a business transaction);
  • deferral of tax payment (postponing the moment of appearance of the object of taxation to another calendar period);
  • direct reduction of the object of taxation (getting rid of many taxable transactions or property without having a negative impact on the main economic activity of the company).

Federal Agency for Education
GOU VPO "Ural State Technical University - UPI"

PARAMETRIC OPTIMIZATION OF RADIO ELECTRONIC CIRCUITS

Methodical instructions for laboratory work in the course "Computer Analysis of Electronic Circuits" for students of all forms of education of specialty 200700 - Radio Engineering. UDC 681.3.06:621.396.6. Compiled by V. V. Kiyko, V. F. Kochkina, K. A. Vdovkin; scientific editor Assoc. Prof., Cand. Tech. Sci. V. I. Gadzikovsky. Yekaterinburg: GOU VPO USTU-UPI, 2005. 21 p.

The guidelines contain information about the formulation of optimization problems, optimality criteria, and the theory of finding the minimum of the objective function. An overview of parametric optimization methods is given, the Hooke-Jeeves method is described in detail, and questions for self-checking are provided. Bibliography: 7 titles. 6 figures. Prepared by the Department of Radioelectronics of Information Systems. © Ural State Technical University - UPI, 2005.

CONTENTS

Purpose of the work
1. General guidelines
2. Optimization theory
  2.1. Formal (mathematical) formulation of the optimization problem
  2.2. Statement of the problem of parametric optimization of RES
  2.3. Optimality criteria
  2.4. Strategy for solving the problems of optimal design of RES
  2.5. Global search algorithms
    2.5.1. Random search algorithm
    2.5.2. Monotonic global search algorithm
    2.5.3. Gray-code grid scanning algorithm
  2.6. Methods and algorithms of local search
    2.6.1. Direct methods
    2.6.2. First-order gradient optimization methods
    2.6.3. Second-order gradient optimization methods
3. Description of the computer analysis program
  3.1. Starting the program
  3.2. Setting up an optimization task
  3.3. Optimization results
4. Content of the laboratory work
  4.1. Order of execution
  4.2. Assignment for laboratory work
5. Guidelines for the preparation of initial data
6. Content of the report
7. Questions for self-checking
References

PURPOSE OF THE WORK

To obtain an understanding of, and practical skills in, the parametric optimization of RES in the automated circuit design of radio-electronic equipment (REA).

1. GENERAL INSTRUCTIONS

This work is the third in the complex of laboratory works on methods of calculation, analysis, and optimization of electronic circuits. The complex includes the following works:
1. Calculation of radio electronic circuits by the method of nodal potentials.
2. Analysis of electronic circuits by the modified method of nodal potentials.
3. Parametric optimization of radio electronic circuits.
4. Analysis of radio electronic circuits using circuit functions.

In the first and second laboratory works, a frequency analysis was performed, the sensitivity of the voltage gain to variations of the internal parameters was determined, and the transient and impulse responses were calculated for the nominal values of the parameters of the RES elements, which had initially been selected (set or calculated) in a suboptimal way. In this work, parametric optimization of the designed RES is performed so that the output parameters meet the requirements of the technical specification.

2. OPTIMIZATION THEORY

2.1. Formal (mathematical) formulation of the optimization problem

Parameter optimization (parametric optimization) is the usual name for the problem of calculating the optimal nominal values of the internal parameters of the design object. In the CAD of radio-electronic equipment, the problems of optimizing parameters reduce to the mathematical programming problem

  extr F(X), X ∈ XD,   (1)

where XD = {X ∈ X0 | φk(X) ≥ 0, ψr(X) = 0, k = 1, …, N, r = 1, …, M}. The vector X = (x1, x2, …, xn) is called the vector of controlled (variable) parameters; F(X) is the objective function (quality function); XD is the admissible domain; X0 is the space on which the objective function is defined; the functions φk(X) and ψr(X) are the constraints.

Verbal formulation of problem (1): find the extremum of the objective function F(X) within the domain XD, bounded in the space X0 by the N inequalities φk(X) ≥ 0 and the M equalities ψr(X) = 0. The objective function must be formulated from the existing notions of the quality of the designed object: if its value decreases as quality improves, minimization is required in (1) (extr is min); if it increases, maximization is required (extr is max). Constraints in the form of inequalities xi > xi min or xi < xi max, where xi min and xi max are given constants, are called direct constraints; the remaining constraints are called functional. A maximization problem is, as a rule, reduced to a minimization problem by replacing F(X) with −F(X). The function F(X) has a local minimum at a point X0 if F(X) ≥ F(X0) in a small neighborhood of this point, and a global minimum at a point X* if the inequality F(X) ≥ F(X*) holds for all X. The classical theory of optimization is set out in detail in the corresponding literature. Below, the main attention is paid to applying optimization theory to the search for optimal solutions in the design of radio-electronic equipment.

2.2. Statement of the problem of parametric optimization of RES

Solving a design problem is usually connected with choosing, from some admissible set of solutions, an optimal variant of the device that best satisfies the requirements of the technical specification. The effective solution of such problems is based on formal search methods of optimization and on informal ways of making optimal design decisions.
Therefore, the solution of optimal design problems must be considered not only in its computational aspect but, rather, in its creative one, taking into account the experience and knowledge of the circuit engineer at all stages of automated design. One of the most difficult operations in solving optimal design problems is the mathematical formulation of the problem, which includes choosing the optimality criterion, determining the variable parameters, and specifying the constraints imposed on the variable parameters.

Among the circuit design problems that it makes sense to solve with optimization methods, the following problems of parametric synthesis and optimization stand out:
- determining the parameters of circuit components that provide extremal characteristics under given constraints;
- determining the parameters of functional units of circuits from the requirements of the technical specification on the characteristics of the device as a whole;
- adapting existing circuit solutions in order to select parameters that satisfy new requirements on the circuit;
- refining the values of circuit component parameters obtained by manual engineering calculation.

For receiving and amplifying circuits, optimization is carried out with respect to such output parameters as:
- gain and bandwidth;
- shape of the frequency response;
- stability of the amplifier or active filter;
- delay time, pulse rise time.

Note. The class of problems connected with determining the values of component parameters for which the designed circuit satisfies the set of conditions of the technical specification is usually called parametric synthesis (with respect to the parameters being determined) or parametric optimization (with respect to the characteristics being realized).

In any of the problems listed, the realized characteristics of the designed device are functions of the vector of variable (tunable) parameters, which form a subset of the full set of component parameters of the circuit. The goal of parametric synthesis or optimization is to determine the vector of parameters X that provides the best correspondence of the device characteristics Y = Y(X) to the requirements of the technical specification. To solve this problem it is necessary, first of all, to choose a formal criterion for assessing the quality of each variant of the designed device, one that makes it possible to distinguish the variants and to establish preference relations between them. Such an assessment can be represented by a functional dependence of the form F(X) = F(Y(X)), usually called the optimality criterion, quality function, or objective function. The problem of searching for the circuit component parameters thus reduces to the classical optimization problem: finding the extremum of a quality function F(X) in the presence of constraints (equalities, inequalities, or two-sided bounds) imposed on the variable parameters and the characteristics of the designed circuit.

The various optimization problems for analog radio-electronic circuits have common features, chiefly:
- the multicriteria nature of the optimization problems;
- the absence of explicit analytical dependences of the output parameters on the internal parameters: the connection between internal and external parameters is expressed by systems of equations and can be assessed quantitatively only through the numerical solution of these systems.

These features account for the difficulty of formulating and solving optimization problems for analog radio-electronic circuits.

2.3. Optimality criteria

In the search for the optimal solution, a particular form of the optimality criterion may turn out to be preferable for each specific problem. A basic set of optimality criteria, allowing the diverse requirements of the circuit engineer on the optimized characteristics of designed devices to be satisfied, is set out in the literature.
Thus, to find the extremum (minimum or maximum) of a quality indicator, such as the power consumed by the circuit or the cutoff frequency, the value of the optimality criterion is used without transformation:

  F1(X) = Y(X).   (2)

In problems requiring the closest possible correspondence between the optimized characteristic and some desired one, for example when optimizing frequency responses, it is most expedient to use the mean-square-deviation criterion

  F2(X) = M[(Y(X) − Y*)²],   (3)

where Y* is the desired value of the characteristic, or the value required by the technical specification, and M[·] denotes averaging. For a characteristic given by a discrete set of points, the objective function is

  F2(X) = (1/N) · Σ(i = 1..N) ωi · (Y(X, pi) − Yi*)²,   (4)

where N is the number of discretization points of the independent variable p; Y(X, pi) is the value of the optimized characteristic at the i-th point of the discretization interval; and ωi is the weight of the i-th value of the optimized characteristic, reflecting the importance of the i-th point compared with the others (as a rule, 0 < ωi ≤ 1). Minimizing functions (3) and (4) ensures similarity of the characteristics in the sense of the standard deviation. Function (4) is used in numerical methods of calculating Y(X).

In some optimization problems it must be ensured that the optimized characteristic exceeds, or does not exceed, a given level. These optimality criteria are implemented by the following functions:

to ensure that the given level is exceeded,

  F3(X) = 0 when Y(X) ≥ YH*;  (YH* − Y(X))² when Y(X) < YH*;   (5)

to ensure that the given level is not exceeded,

  F4(X) = 0 when Y(X) ≤ YB*;  (Y(X) − YB*)² when Y(X) > YB*,   (6)

where YH* and YB* are the lower and upper bounds of the admissible region for the characteristic Y(X). If the optimized characteristic must pass within a certain admissible zone (corridor), a combination of the two previous optimality criteria is used:

  F5(X) = 0 when YH* ≤ Y(X) ≤ YB*;  (Y(X) − YB*)² when Y(X) > YB*;  (YH* − Y(X))² when Y(X) < YH*.   (7)

In those cases when only the shape of the curve must be reproduced, ignoring a constant vertical displacement, the shift criterion is used:

  F6(X) = Σ(i = 1..N) ωi · (Yi* − Y(X, pi) − Yav)²,   (8)

where Yav = (1/N) · Σ(i = 1..N) (Yi* − Y(X, pi)).

Important characteristics of the computational process, above all the convergence of the optimization process, depend on the form of the objective function. The signs of the derivatives of the objective function with respect to the controlled parameters do not remain constant over the whole admissible region; for objective functions of the form (4) and (8), this circumstance gives them a ravine-like character. Thus, a feature of objective functions in circuit design problems is their ravine-like nature, which leads to high computational costs and requires special attention to the choice of optimization method. Another feature of the objective functions is that they are usually multiextremal: along with the global minimum there are local minima.

A further feature of optimization problems for electronic circuits is that the internal parameters cannot take arbitrary values: the values of resistors and capacitors are bounded by certain maximum and minimum values. In addition, among several external parameters it is usually possible to single out one main parameter for optimization and to indicate admissible limits of variation for the others.

A constrained optimization problem is reduced to an unconstrained one by introducing penalty functions. The objective function then takes the form

  Φ(X) = Fi(X) + Σ(r = 1..M) λr · (ψr(X))² + Σ(k = 1..N) μk · (φk(X))²,   (9)

where λr and μk are numerical coefficients that take into account the importance of one or another constraint relative to the others.
The penalty terms vanish when the corresponding inequality from (1) is satisfied and take nonzero values otherwise; F_i(X) is one of the quality functions described by relations (2)-(8). Thus, leaving the admissible region XD increases the minimized function, and the intermediate solutions X_j are held back by a "barrier" at the boundary of XD. The height of the barrier is determined by the values of λ and μ, which in practice span a wide range (from 1 to 10^10). The larger λ and μ are, the smaller the probability of leaving the admissible region; at the same time, however, the steepness of the ravine slope at the boundary also grows, which slows down or completely destroys the convergence of the minimization process. Since optimal values of λ and μ cannot be specified in advance, it is advisable to start the optimization with small values and then increase them whenever a solution outside the admissible region is obtained.

2.4. Strategy for solving problems of optimal design of REM

Problems of optimal design of REM have specific features: multiextremality and a ravine-like quality function, restrictions on the internal and output parameters of the designed device, and a large dimension of the vector of variable parameters. The strategy for solving such problems is to use global optimization procedures at the initial stages of the search and then refine the global solution found with local algorithms that converge rapidly in the vicinity of the optimal point. This strategy makes it possible, first, to determine the value of the global extremum with sufficient reliability and accuracy and, second, to reduce the computational cost of the search considerably.
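A toy illustration of this two-stage strategy: a cheap uniform random probe of the admissible box (global stage) followed by refinement of the best sample with a simple step-halving coordinate search (local stage). The test function, bounds, and method choices are illustrative, not the algorithms of any particular CAD subsystem:

```python
import random

def global_probe(f, bounds, n=200, seed=0):
    """Global stage: sample the box uniformly, keep the best point."""
    rng = random.Random(seed)
    best = min((tuple(rng.uniform(lo, hi) for lo, hi in bounds)
                for _ in range(n)), key=f)
    return list(best)

def local_refine(f, x, h=0.5, tol=1e-6):
    """Local stage: fixed-step coordinate moves, halving the step on failure."""
    while h > tol:
        improved = False
        for i in range(len(x)):
            for step in (h, -h):
                trial = x[:]
                trial[i] += step
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            h *= 0.5              # shrink the step near the extremum
    return x

f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
x_opt = local_refine(f, global_probe(f, [(-5, 5), (-5, 5)]))
```

The global stage only needs to land in the basin of attraction; the local stage then supplies the accuracy, which is exactly the division of labor described above.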
In this case the global-search stages can be performed with low accuracy, while the local-refinement stages are carried out in the basin of attraction of the global extremum, which requires far fewer calculations.

2.5. Global search algorithms

As a rule, global search algorithms give a rather rough estimate of the global extremum at little computational cost and require a significant increase in the number of calculations to locate the extremum more accurately.

2.5.1. Random search algorithm

The simplest algorithm, from the point of view of implementing the computational process, searches for the global extremum by probing the admissible region XD with a sequence of points uniformly distributed in it and selecting the best of the variants obtained. The quality of the algorithm is largely determined by the properties of the generator of uniformly distributed random numbers used to generate the vectors X ∈ XD.

2.5.2. Monotone global search algorithm

Multidimensional optimization by this algorithm is based on the construction of a sweep (a Peano curve) that maps a segment of the real axis onto the hypercube of the admissible region XD. The sweep provides a single-valued and continuous mapping X(λ) which, for any point λ ∈ [0, 1], yields a point X ∈ XD. The problem of minimizing F(X) over XD is then equivalent to finding the minimum λ* of the one-dimensional function F(λ) = F(X(λ)). To perform the global one-dimensional minimization of F(λ) on the interval [0, 1], the optimization subsystem of the DISP circuit-design system uses a monotone modification of the global search algorithm, which accelerates convergence by applying the monotone transformation

Ψ(λ) = (1 − [1 + F(λ)]^(−2))^(0.5), (10)

which preserves the location of the global extremum but makes the function smoother.
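The reduction to one dimension can be illustrated with a crude stand-in for the Peano sweep: a zigzag raster X(t), t ∈ [0, 1], over a 2-D box, scanned uniformly in t. The raster, test function, and resolutions are all illustrative; a true Peano curve preserves locality far better than this sketch:

```python
# A curve X(t) that passes through the whole search box turns minimization
# of F(X) into a one-dimensional scan of F(X(t)). Here a boustrophedon
# (zigzag) raster over a 2-D box replaces the true Peano sweep.

def x_of_t(t, bounds, rows=64):
    (x_lo, x_hi), (y_lo, y_hi) = bounds
    s = t * rows                      # position along the raster
    row = min(int(s), rows - 1)
    frac = s - row
    if row % 2:                       # odd rows run right to left
        frac = 1.0 - frac
    return (x_lo + frac * (x_hi - x_lo),
            y_lo + (row + 0.5) / rows * (y_hi - y_lo))

def scan_min(f, bounds, n=20000):
    """Uniform one-dimensional scan of F(X(t)) on [0, 1]."""
    return min((x_of_t(i / n, bounds) for i in range(n + 1)), key=f)

f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best = scan_min(f, [(-5.0, 5.0), (-5.0, 5.0)])
```

The accuracy of such a scan is limited by the raster resolution, which is why the result is treated only as a rough global estimate to be refined locally.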
The monotone search algorithm gives a fairly good estimate of the global extremum within the first 50-100 iterations. The best results are obtained when the number of variables does not exceed 5-7. In some cases better results can be obtained by transforming the search space according to a logarithmic law. Such a transformation is especially effective when the search boundaries differ by several orders of magnitude, which is typical of CEA optimization problems, and when the extremum lies near the region boundaries.

2.5.3. Gray-code grid scanning algorithm

The main idea of the method is to sequentially narrow the search region with characteristic rays containing test points, while accumulating and processing the information received. Scanning is carried out on a special grid defined by the binary Gray code. The search sphere on the Gray-code grid differs from the traditional one (a circle when the number of variables equals 2): in addition to the circle it has characteristic rays. The rays are directed from the center of the sphere to the boundaries of the region XD and thus, as it were, "shine through" the entire region up to its boundaries. The algorithm has a single adjustable parameter, the sensitivity of the quality function to parameter variations, which is used to determine the discretization step for each of the variables.

2.6. Methods and algorithms of local search

Local search methods and algorithms most often find the nearest local extremum, and their search trajectory depends strongly on the choice of the starting point and on the character of the objective function.

2.6.1. Direct methods

Zero-order (direct) methods generally have no rigorous mathematical justification and are built on reasonable heuristics and empirical data. The simplest zero-order method is the method of coordinate descent (Gauss-Seidel).
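A minimal sketch of Gauss-Seidel coordinate descent, minimizing over one variable at a time with a simple one-dimensional ternary search while the others are held fixed (the test function, bracket, and sweep count are illustrative):

```python
# Coordinate descent: cycle through the variables, minimizing F along each
# coordinate in turn by a one-dimensional ternary search on a bracket.

def ternary_min(g, lo, hi, tol=1e-8):
    """One-dimensional minimization of a unimodal g on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

def coordinate_descent(f, x, bracket=10.0, sweeps=20):
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            g = lambda t, i=i: f(x[:i] + [t] + x[i + 1:])
            x[i] = ternary_min(g, x[i] - bracket, x[i] + bracket)
    return x

x_opt = coordinate_descent(lambda p: (p[0] - 3.0) ** 2 + 2.0 * (p[1] + 1.0) ** 2,
                           [0.0, 0.0])
```

On a separable quadratic like this a single sweep already lands on the minimum; with strong cross terms of the x1·x2 type the same sweeps zigzag slowly along a ravine, which is exactly the inefficiency noted below.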
At each step all variables but one are fixed, and the remaining one is varied to find the minimum of the objective function; optimization proceeds by cycling through the variables in turn. This algorithm turns out to be inefficient if the objective function contains expressions such as x1·x2. Circuit-design problems, in which no analytical expression for the objective function can be obtained, are characterized by a complex dependence on the circuit components, so this method is usually inapplicable to them. Among zero-order methods for ravine objective functions, good results are given by Rosenbrock's method, which combines the idea of coordinate descent with the idea of coordinate transformation. The best way to search for an extremum is to move along the ravine; therefore, after the first cycle of coordinate descent, the coordinate axes are rotated so that one of them coincides with the direction of the ravine X_k − X_{k−n}, k = n, 2n, 3n, …. The Rosenbrock method gives no indication of having reached the minimum point, so the computation is stopped either when the decrease in F(X) becomes less than some small number ε or after a certain number of cycles. The Hooke-Jeeves method was developed in 1961 but is still effective and original. The search for the minimum of the objective function consists of a sequence of exploratory steps around a base point followed, if successful, by a pattern search. The procedure consists of the following steps:

1. Choose an initial base point b1 and a step length hj for each variable xj, j = 1, 2, …, n, of the scalar objective function F(X).

2. Compute F(X) around the base point b1 to obtain information about the local behavior of F(X). This information will be used to find the direction of the pattern search, which can be expected to yield a larger decrease in the value of F(X).
The behavior of F(X) around the base point b1 is explored as follows: (a) the value F(b1) at the base point b1 is calculated; (b) each variable is changed in turn by its step. Thus the value F(b1 + h·e1) is calculated, where e1 is the unit vector along the x1 axis. If this decreases the function, b1 is replaced by b1 + h·e1. Otherwise F(b1 − h·e1) is calculated and, if its value has decreased, b1 is replaced by b1 − h·e1. If neither step decreases the function, the point b1 remains unchanged and changes along the x2 axis are considered, i.e. the value F(b1 + h2·e2) is found, and so on. When all n variables have been considered, a new base point b2 is determined; (c) if b2 = b1, i.e. no decrease of F(X) has been achieved, the exploration continues around the same base point b1 but with a reduced step length (in practice the step is usually reduced by a factor of 10 from the initial length); (d) if b2 ≠ b1, a pattern search is performed.

3. The pattern search uses the information obtained during exploration and continues the minimization of the objective function by moving in the direction set by the pattern. The procedure is as follows: (a) the movement starts from the base point b2 in the direction b2 − b1, since a search in this direction has already decreased the value of F(X); the function is therefore evaluated at the pattern point P1 = b2 + (b2 − b1); in general, Pi = 2·b_{i+1} − b_i; (b) exploration is then performed around the point P1 (Pi); (c) if the smallest value found at step 3(b) is less than the value at the base point b2 (in general b_{i+1}), a new base point b3 (b_{i+2}) is obtained, after which step 3(a) is repeated; otherwise the pattern search from the point b2 (b_{i+1}) is abandoned.

4.
The process of finding the minimum ends when the step lengths have been reduced to the specified small value.

2.6.2. First-order gradient optimization methods

Methods that find an extremum using derivatives have a rigorous mathematical justification. It is known that, locally, there is no better descent direction than the negative gradient. Among gradient methods one of the most effective is the conjugate-gradient (Fletcher-Reeves) method, a modification of the steepest descent method. The steepest descent method consists of the following stages: (1) an initial point is specified (vector X_k, k = 0); (2) F(X_k) and ∇F(X_k) are computed; (3) X is changed in the direction S_k = −∇F(X_k) until F(X) stops decreasing; (4) k is set to k + 1, the new value ∇F(X_k) is computed, and the process repeats from stage 3. The drawback of the method is that for ravine functions the approach to the minimum zigzags and requires a large number of iterations. The essence of the conjugate-gradient method is that at every iteration starting from the second (at the first iteration it coincides with steepest descent), the previous values of F(X) and ∇F(X) are used to determine the new direction vector

S_k = −∇F(X_k) + d_k · S_{k−1}, (11)

where d_k = ([∇F(X_k)]^T ∇F(X_k)) / ([∇F(X_{k−1})]^T ∇F(X_{k−1})).

This eliminates the zigzag character of the descent and accelerates convergence. The algorithm is easy to program and requires a moderate amount of machine memory (only the previous search direction and the previous gradient need to be stored).

2.6.3. Second-order gradient optimization methods

The iterative method based on knowledge of the second derivatives is generally known as Newton's method. Let the function F(X) be expanded in a Taylor series, keeping three terms.
We write the result in the form

F(X_k + ΔX) = F(X_k) + (ΔX)^T ∇F_k + (1/2)(ΔX)^T G_k ΔX. (12)

We want to make the decrease of the function as large as possible. This is achieved by differentiating (12) with respect to ΔX and equating the result to zero: G_k ΔX = −∇F_k. This equation can be solved for ΔX, for example by LU decomposition. Formally one can write ΔX = −(G_k)^(−1) ∇F_k = −H_k ∇F_k, where H = G^(−1). The search direction is now taken to coincide with the vector

S_k = ΔX_k = −H_k ∇F_k. (13)

Near the minimum the Hessian matrix¹ is positive definite and the full step d_k = 1 can be used (i.e. no line search along S_k is needed). Far from the minimum, however, the Hessian matrix may not be positive definite; moreover, computing this matrix is expensive. Therefore a whole class of other methods, called variable-metric or quasi-Newton methods, has been developed that is free of these shortcomings. These methods were developed quite long ago but have been generalized only recently. They are based on gradient estimates and on approximating the Hessian matrix or its inverse. The approximation is obtained by modifying an initial positive-definite matrix in a special way that preserves positive definiteness; only when the minimum is reached does the resulting matrix approximate the Hessian (or its inverse). In all methods of this class the search direction is determined as in Newton's method (13). At each iteration the matrix H_{k+1} is obtained from the matrix H_k by a special formula. As an example, here is the formula derived by Davidon, Fletcher and Powell, sometimes called the DFP formula:

¹ The Hessian matrix is the matrix of second derivatives, G(X) = [∂²F/∂x_i ∂x_j], i, j = 1, …, n.
x x  n 2 n n   n 1 14 H k 1 X (X)T H k  T H k H   T k (X)T   H  k (14) This formula is suitable only if (X)Т   0,  THk  0. Here k=Fk+1-Fk. 3. DESCRIPTION OF THE ANALYSIS COMPUTER PROGRAM The program has a convenient graphical user interface for working under the Windows operating system. The initial description of the optimized electronic circuit is the information in the file created during the second laboratory work. By loading this file and selecting the elements to be optimized, this program calculates new element values. The criterion for the correctness of calculations is the value of the minimum of the objective function, which is calculated as a weighted standard deviation of the required and real characteristics of the RES: amplitude-frequency, transient or impulse responses. The program has a standard set of controls - menu, toolbar ... . A report on the conducted laboratory work in html format is automatically generated. Note. After all the dialog boxes have been filled in with values, the button is pressed.<Далее>. If the result displayed in the next window is not satisfactory, then by pressing the button<Назад> you can go back to the previous steps and change the search conditions. 3.1. Starting the program When you start the program, a window opens in which, in the File menu bar, you need to open the file saved after the second laboratory work (Fig. 1). 3.2. Drawing up an optimization task The file with the description of the circuit contains the parameters of the elements, including the equivalent circuit of the transistor. In the left window it is necessary to select variable parameters for parametric optimization. The desired response, such as frequency response, is given by the frequency values ​​(in Hz) and the corresponding gain values ​​(in dB). At the next stage, the initial step for measuring parameters during optimization is set (Fig. 2). 15 Fig. Fig. 1. Window for opening the input file. 2. 
Window for selecting optimization values.

3.3. Optimization results

At the next stage the program presents the calculation results: the minimum of the objective function; the parameters of the variable elements before and after optimization; the number of objective-function evaluations; the number of step-length reductions and pattern searches. The criterion of correctness of the results is the value of the minimum of the objective function: for a bipolar transistor it should be approximately 10^(-7) to 10^(-8), and for a field-effect transistor 10^(-4) to 10^(-5) (Fig. 3). If the optimization results are satisfactory, the next stage follows: construction of the amplitude-frequency or time characteristics (Fig. 4, 6). To determine the bandwidth of the RES precisely, i.e. the upper and lower boundary frequencies, as well as the duration of transients, calculation tables are provided (Fig. 5).

Fig. 3. Calculation window after optimization.
Fig. 4. Window for plotting the frequency response.
Fig. 5. Frequency-response values in a table.
Fig. 6. Time-characteristics window.

4. CONTENT OF THE LABORATORY WORK

4.1. Order of execution

1. The preparatory stage includes familiarization with the guidelines for the laboratory work and study of optimization theory from the lecture notes, the literature, and Section 2 of these guidelines.
2. The second stage includes the theoretical work: formulating the requirements for the optimized characteristic of the RES and selecting the element or elements of the circuit whose parameters are to be optimized.
3. Loading the optimization program with the description of the circuit to be optimized and the task for parametric optimization.
4. Performing the optimization.
5. Calculating the characteristics of the circuit with the optimized parameters.
6. The final stage, at which the characteristics of the RES before and after optimization are compared.
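The comparison at the final stage can be quantified by the same kind of weighted mean-square deviation between the desired and computed responses that the program minimizes (cf. criterion (4)); all dB values and weights below are hypothetical:

```python
# Weighted mean-square deviation between a desired response Y* and a
# computed response Y at the chosen frequency points. Equal weights by
# default; the numeric responses are made-up illustrations.

def weighted_msd(y_desired, y_actual, weights=None):
    n = len(y_desired)
    w = weights or [1.0] * n
    return sum(wi * (yd - ya) ** 2
               for wi, yd, ya in zip(w, y_desired, y_actual)) / n

gain_target = [20.0, 20.0, 20.0, 17.0]   # ideal frequency response, dB
gain_before = [19.0, 19.5, 18.0, 12.0]   # response before optimization
gain_after  = [20.0, 20.0, 19.9, 16.8]   # response after optimization

improved = weighted_msd(gain_target, gain_after) < weighted_msd(gain_target, gain_before)
```

A successful optimization should drive this criterion down by orders of magnitude, in line with the reference values given for the objective-function minimum above.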
Based on the materials obtained, a report is drawn up on A4 (297 × 210 mm) sheets with printouts of the results attached.

4.2. Assignment for the laboratory work

1. From the results of the analysis of the amplifier frequency response obtained in the second laboratory work, formulate the requirements for the ideal frequency response. Select the method of specifying the ideal frequency response and the coordinates of points on the frequency-response graph.
2. Determine the group of elements whose parameters are to be optimized.

5. GUIDELINES FOR THE PREPARATION OF INITIAL DATA

5.1. From the frequency-response graph calculated in the second laboratory work, the upper and lower cutoff frequencies are determined and the effect of the high-frequency inductive correction is clarified.
5.2. Using knowledge of amplifier circuitry, the components whose parameters determine the upper and lower cutoff frequencies are identified.
5.3. The ideal characteristic (the one required by the terms of reference) is drawn on the frequency-response graph, and the optimization points are selected. To preserve the shape of the frequency response in the passband, points must also be selected in that part of the characteristic.

6. CONTENT OF THE REPORT

1. Purpose of the work.
2. Initial data in the form of the circuit diagram of the amplifier stage and the parameters of its elements before optimization.
3. Listing of the machine-analysis results.
4. Analysis of the results. Conclusions.

7. QUESTIONS FOR SELF-CHECKING

1. State the necessary and sufficient conditions for the existence of a minimum of a function.
2. What matrix is called positive definite?
3. Why is the objective function called the quality function?
4. Name the main property of the objective function.
5. Which problems are called parametric synthesis, and which parametric optimization?
6.
In what cases is the numerical search for the minimum of the objective function a non-linear programming problem?
7. How do gradient methods for finding the extremum of a function differ from direct methods?
8. Explain the concepts of global and local minimum.
9. What restrictions arise in the parametric optimization of radio-electronic devices?
10. Explain the method of coordinate descent.
11. How does the conjugate-gradient method differ from the steepest descent method?
12. What does "pattern search" mean in the Hooke-Jeeves method?
13. What are the termination criteria for the iterative optimization process?

REFERENCES

1. Computer-Aided Design Systems in Radio Electronics: Handbook / E.V. Avdeev, A.T. Eremin, I.P. Norenkov, M.I. Peskov; ed. I.P. Norenkov. Moscow: Radio i Svyaz, 1986. 368 p.
2. Bundy B. Methods of Optimization. An Introductory Course: transl. from English. Moscow: Radio i Svyaz, 1988. 128 p.
3. Vlach J., Singhal K. Computer Methods for Circuit Analysis and Design. Moscow: Radio i Svyaz, 1988. 560 p.
4. Collection of Problems in Microcircuitry: Computer-Aided Design: Textbook for universities / V.I. Anisimov, P.P. Azbelev, A.B. Isakov et al.; ed. V.I. Anisimov. Leningrad: Energoatomizdat, Leningrad branch, 1991. 224 p.
5. Dialogue Systems of Circuit Design / V.N. Anisimov, G.D. Dmitrievich, K.B. Skobeltsyn et al.; ed. V.N. Anisimov. Moscow: Radio i Svyaz, 1988. 288 p.
6. Razevich V.D., Rakov V.K., Kapustyan V.I. Machine Analysis and Optimization of Electronic Circuits: Textbook for the courses "Amplifying Devices" and "Radio Receiving Devices". Moscow: MPEI, 1981. 88 p.
7. Tabueva V.A. Mathematics, Mathematical Analysis: Textbook. Yekaterinburg: USTU-UPI, 2001. 494 p.
8. Kiyko V.V., Kochkina V.F., Vdovkin K.A. Analysis of Electronic Circuits by the Modified Nodal Method. Yekaterinburg: USTU-UPI, 2004. 31 p.