

ANDREY PLATONOV HUMANITIES LIBRARY

Daniel Kahneman, Paul Slovic, Amos Tversky

Decision Making Under Uncertainty

The book offered to the reader contains the reflections and experimental studies of foreign scientists who are still little known to the Russian-speaking reader.

It concerns the peculiarities of people's thinking and behavior when they assess and predict uncertain events and quantities: the chances of winning or of falling ill, electoral preferences, judgments of professional aptitude, the assessment of accidents, and much more.

As the book convincingly shows, when making decisions under uncertainty people usually err, sometimes quite significantly, even if they have studied probability theory and statistics. These errors follow certain psychological patterns that the researchers have identified and thoroughly substantiated experimentally.

It must be said that not only the natural errors people make when deciding under uncertainty, but also the design of the experiments that reveal these errors, is of great interest and practical use.

It is safe to say that the translation of this book will be interesting and useful not only to Russian psychologists, doctors, politicians, and experts of various kinds, but also to many other people who in one way or another deal with assessing and forecasting essentially random social and personal events.

Scientific editor

Doctor of Psychology

Professor at St. Petersburg State University

G.V. Sukhodolsky,

St. Petersburg, 2004

The approach to decision making presented in this book is based on three lines of research that developed in the 1950s and 1960s: the comparison of clinical and statistical prediction initiated by Paul Meehl; the study of subjective probability within the Bayesian paradigm, introduced into psychology by Ward Edwards; and the study of heuristics and reasoning strategies presented by Herbert Simon and Jerome Bruner.

Our collection also includes contemporary theory at the intersection of decision making with another branch of psychological research: the study of causal attribution and everyday psychological interpretation, pioneered by Fritz Heider.

Meehl's classic book, published in 1954, documented the fact that simple linear combinations of cues are superior to the intuitive judgments of experts in predicting significant behavioral criteria. The enduring intellectual legacy of this work, and of the tumultuous controversy that followed it, was probably not the demonstration that clinicians performed poorly at a task that, as Meehl noted, they should not have undertaken.

Rather, it was the demonstration of a significant discrepancy between people's objective success at prediction tasks and their sincere beliefs about their own performance. This conclusion is not limited to clinicians and clinical predictions: people's opinions about how they reach their conclusions, and about how well they do so, cannot be taken at face value.

After all, clinical researchers often used themselves or their friends as subjects, and the interpretation of errors and deviations was cognitive rather than psychodynamic: impressions of errors rather than actual errors were used as a model.

Since the introduction of Bayesian ideas into psychological research by Edwards and his colleagues, psychologists have for the first time been offered a coherent and clearly articulated model of optimal behavior under uncertainty with which to compare human decision making. The conformity of decision making to normative models has become one of the main research paradigms in the field of judgment under conditions of uncertainty. This inevitably raised the issue of the biases that people tend to have when making inductive inferences, and the methods that could be used to correct them. These issues are addressed in most sections of this publication. However, many early works used a normative model to explain human behavior and introduced additional processes to explain deviations from optimal performance. In contrast, the goal of research in decision heuristics is to explain both correct and erroneous judgments in terms of the same psychological processes.

The emergence of cognitive psychology as a new paradigm has had a serious impact on the study of decision making. Cognitive psychology is concerned with internal processes, mental limitations, and the way these limitations shape those processes. Early examples of conceptual and empirical work in this area included Bruner and his colleagues' study of thinking strategies and Simon's treatment of reasoning heuristics and bounded rationality. Both Bruner and Simon were concerned with simplification strategies that reduce the complexity of decision problems to make them tractable for the way people think. We have included most of the work in this book on the basis of similar considerations.

In recent years, a large amount of research has been devoted to judgment heuristics and their effects. This publication takes a comprehensive look at this approach. It contains new works written specifically for this collection, and already published articles on issues of judgment and assumptions. Although the line between judgment and decision-making is not always clear, our focus here is on judgment rather than choice. The topic of decision making is important enough to be the subject of a separate publication.

The book consists of ten parts. The first part contains early research on heuristics and biases in intuitive decision making. Part II looks specifically at the representativeness heuristic, which Part III extends to problems of causal attribution. Part IV describes the availability heuristic and its role in social judgment. Part V examines the understanding and study of covariation and shows the presence of illusory correlations in the decision making of ordinary people and experts. Part VI discusses the testing of probability estimates and argues for the common phenomenon of overconfidence in prediction and explanation. Biases associated with multi-step inference are discussed in Part VII. Part VIII examines formal and informal procedures for correcting and improving intuitive decision making. Part IX summarizes the study of the consequences of biases in risk decision making. The final section contains some contemporary thoughts on several conceptual and methodological issues in the study of heuristics and biases.

For convenience, all references are collected in a single list at the end of the book. Numbers printed in bold type refer to material included in this book, indicating the chapter in which the material appears. We have used ellipses in parentheses (...) to indicate material deleted from previously published articles.

Our work in preparing this book was supported by Office of Naval Research Grant N00014-79-C-0077 to Stanford University and by Office of Naval Research Contract N0014-80-C-0150 to Decision Research.

We would like to thank Peggy Rocker, Nancy Collins, Jerry Henson, and Don McGregor for their assistance in preparing this book.

Daniel Kahneman

Paul Slovic

Amos Tversky

Introduction

1. Decision making under conditions of uncertainty: heuristics and biases*

Amos Tversky and Daniel Kahneman

Many decisions are based on beliefs about the likelihood of uncertain events, such as the outcome of an election, the guilt of a defendant in a court case, or the future value of the dollar. These beliefs are usually expressed in statements such as "I think that...", "the probability is...", "it is unlikely that...", and so on.

Sometimes beliefs about uncertain events are expressed numerically as odds or subjective probabilities. What determines such beliefs? How do people estimate the probability of an uncertain event or the value of an uncertain quantity? This section shows that people rely on a limited number of heuristic principles that reduce the complex tasks of estimating probabilities and predicting the values of quantities to simpler judgmental operations. In general, these heuristics are quite useful, but sometimes they lead to serious and systematic errors.

The subjective assessment of probability is similar to the subjective assessment of physical quantities such as distance or size. All these estimates are based on data of limited reliability, which is processed according to heuristic rules. For example, the estimated distance to an object is partly determined by its clarity. The sharper the object, the closer it appears. This rule has some validity because in any terrain, distant objects appear less clear than closer objects. However, constant adherence to this rule leads to systematic errors in estimating distance. Typically, when visibility is poor, distances are often overestimated because the contours of objects are blurred. On the other hand, distances are often underestimated when visibility is good because objects appear clearer. Thus, using clarity as a measure of distance leads to common biases. Such biases can also be found in intuitive probability assessments. This book describes three types of heuristics that are used to estimate probability and predict the values of quantities. The biases that these heuristics lead to are outlined, and the practical and theoretical implications of these observations are discussed.

* This chapter first appeared in Science, 1974, 185, 1124-1131. Copyright (c) 1974 by the American Association for the Advancement of Science. Republished by permission.

Representativeness

Most questions about probability are of one of the following types: What is the probability that object A belongs to class B? What is the probability that event A is caused by process B? What is the probability that process B will lead to event A? In answering such questions, people typically rely on the representativeness heuristic, in which probability is determined by the degree to which A is representative of B, that is, the degree to which A is similar to B. For example, when A is highly representative of B, the probability that event A originates from B is judged to be high. On the other hand, if A is not similar to B, then the probability is judged to be low.

To illustrate judgment by representativeness, consider a description of a man given by his former neighbor: "Steve is very withdrawn and shy, always ready to help me, but has too little interest in other people and in reality in general. He is very meek and neat, likes order and structure, and has a passion for detail." How do people assess the likelihood of Steve's occupation (for example, farmer, salesman, airplane pilot, librarian, or doctor)? How do people rank these occupations from most to least likely? In the representativeness heuristic, the likelihood that Steve is a librarian, for example, is determined by the extent to which he is representative of, or fits the stereotype of, a librarian. Indeed, research on problems of this type has shown that people rank the occupations in exactly this way (Kahneman and Tversky, 1973, 4). This approach to probability estimation leads to serious errors because similarity, or representativeness, is not influenced by several factors that should influence the probability estimate.

Insensitivity to prior probability of outcomes

One of the factors that has no effect on representativeness, but does significantly affect probability, is the prior probability, or base-rate frequency, of outcomes. In Steve's case, for example, the fact that there are many more farmers than librarians in the population should certainly be taken into account in any reasonable assessment of the probability that Steve is a librarian rather than a farmer. Base-rate frequency, however, has no effect on how well Steve fits the stereotype of librarians or of farmers. If people estimate probability by representativeness, it follows that they will neglect prior probabilities. This hypothesis was tested in an experiment in which prior probabilities were varied (Kahneman and Tversky, 1973, 4). Subjects were shown brief descriptions of several people, selected at random from a group of 100 professionals: engineers and lawyers. The subjects were asked to rate, for each description, the probability that it belonged to an engineer rather than a lawyer. In one experimental condition, subjects were told that the group from which the descriptions were drawn consisted of 70 engineers and 30 lawyers. In the other condition, subjects were told that the group consisted of 30 engineers and 70 lawyers. The odds that any given description belongs to an engineer rather than a lawyer should be higher in the first condition, where the majority are engineers, than in the second, where the majority are lawyers. Specifically, by Bayes' rule, the ratio of these odds should be (0.7/0.3)², or 5.44, for each description. In gross violation of Bayes' rule, subjects in the two conditions gave essentially the same probability estimates. Apparently, participants rated the likelihood that a particular description belonged to an engineer rather than a lawyer by the degree to which the description was representative of the two stereotypes, with little, if any, regard for the prior probabilities of those categories.
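For readers who want to verify the arithmetic, here is a minimal sketch in Python; the likelihood ratio assigned to a description is an arbitrary illustrative value, since the point is only that it cancels out of the comparison between the two conditions:

    # Posterior odds that a description belongs to an engineer, by Bayes' rule:
    #   posterior odds = prior odds * likelihood ratio of the description.
    def posterior_odds(prior_engineers, prior_lawyers, likelihood_ratio):
        return (prior_engineers / prior_lawyers) * likelihood_ratio

    lr = 2.0  # likelihood ratio implied by some description (assumed, illustrative value)

    odds_70_30 = posterior_odds(0.70, 0.30, lr)  # condition with 70 engineers, 30 lawyers
    odds_30_70 = posterior_odds(0.30, 0.70, lr)  # condition with 30 engineers, 70 lawyers

    # Whatever the description says, the ratio between the two conditions
    # is (0.7/0.3)**2, i.e. about 5.44:
    print(odds_70_30 / odds_30_70)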

Subjects used prior probabilities correctly when they had no other information. In the absence of a personality description, they estimated the probability that an unknown individual is an engineer to be 0.7 and 0.3, respectively, under the two base-rate conditions. However, prior probabilities were completely ignored when a description was presented, even if it was completely uninformative. The reactions to the description below illustrate this phenomenon:

Dick is a 30-year-old man. He is married and has no children yet. A very capable and motivated employee, he shows great promise and is well regarded by his colleagues.

This description was designed to convey no information as to whether Dick is an engineer or a lawyer. Therefore, the probability that Dick is an engineer should equal the proportion of engineers in the group, just as if no description had been given at all. Subjects, however, rated the probability that Dick was an engineer as 0.5 regardless of whether the stated proportion of engineers in the group was 0.7 or 0.3. Evidently, people respond differently when no description is given and when a useless description is given. When descriptions are absent, prior probabilities are used appropriately; when a worthless description is given, prior probabilities are ignored (Kahneman and Tversky, 1973, 4).

Insensitivity to sample size

To estimate the likelihood of obtaining a particular result in a sample drawn from a specified population, people typically use the representativeness heuristic. That is, they assess the probability of a sample result, for example, that the average height in a random sample of ten men will be 6 feet (180 centimeters), by the degree to which this result is similar to the corresponding parameter (that is, to the average height of men in the entire population). The similarity of a sample statistic to a population parameter does not depend on the size of the sample. Therefore, if probability is assessed by representativeness, the judged probability of a sample statistic will be essentially independent of sample size.

Indeed, when test subjects estimated the distribution of mean heights for samples of different sizes, they produced identical distributions. For example, the probability of obtaining an average height of more than 6 feet (180 cm) has been estimated to be similar for samples of 1000, 100, and 10 persons (Kahneman and Tversky, 1972b, 3). In addition, subjects failed to appreciate the role of sample size even when it was emphasized in the problem statement. Let's give an example that confirms this.

A certain city is served by two hospitals. In the larger hospital, approximately 45 babies are born every day, and in the smaller hospital, approximately 15 babies are born each day. As you know, approximately 50% of all babies are boys. However, the exact percentage varies from day to day. Sometimes it can be higher than 50%, sometimes lower.
For one year, each hospital kept track of the days on which more than 60% of babies born were boys. Which hospital do you think has recorded more of these days?
Large hospital (21)
Smaller hospital (21)
Approximately equally (that is, within a difference of 5%) (53)

The numbers in parentheses indicate the number of final year students who responded.

Most subjects judged the probability of obtaining more than 60% boys to be the same in the small and in the large hospital, presumably because these events are described by the same statistic and are therefore equally representative of the general population.

In contrast, according to sampling theory, the expected number of days on which more than 60% of babies born are boys is much higher in a small hospital than in a large one because a large sample is less likely to deviate from 50%. This fundamental concept of statistics is obviously not part of people's intuitions.
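The claim can be checked directly from the binomial distribution; a small sketch in Python (standard library only), with the yearly figures rounded:

    from math import comb

    def p_more_than_60_percent_boys(n, p=0.5):
        """Probability that strictly more than 60% of n births are boys."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n + 1) if 10 * k > 6 * n)

    small = p_more_than_60_percent_boys(15)   # roughly 0.15 on any given day
    large = p_more_than_60_percent_boys(45)   # roughly 0.07 on any given day

    # Expected number of such days per year: roughly 55 for the small hospital
    # versus roughly 25 for the large one.
    print(round(365 * small), round(365 * large))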

Similar insensitivity to sample size has been documented in estimates of a posteriori probability, that is, the probability that a sample was selected from one population rather than another. Let's look at the following example:

Imagine a basket filled with balls, 2/3 of which are one color and 1/3 of another. One person takes 5 balls from a basket and discovers that 4 of them are red and 1 is white. Another person takes out 20 balls and discovers that 12 of them are red and 8 are white. Which of these two people should be more confident in saying that the basket contains 2/3 red balls and 1/3 white balls rather than vice versa? What are the chances for each of these people?

In this example, the correct answer is that the posterior odds are 8 to 1 for the 4:1 sample and 16 to 1 for the 12:8 sample, assuming equal prior probabilities. However, most people feel that the first sample provides much stronger support for the hypothesis that the basket is filled predominantly with red balls, because the proportion of red balls is larger in the first sample than in the second. This again shows that intuitive judgments are dominated by the sample proportion rather than by the sample size, which plays a decisive role in determining the actual posterior odds (Kahneman and Tversky, 1972b). In addition, intuitive estimates of posterior odds are far less extreme than the correct values. A systematic underestimation of the impact of evidence has been observed repeatedly in problems of this type (W. Edwards, 1968, 25; Slovic and Lichtenstein, 1971). This phenomenon has been called "conservatism."
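Under equal priors the posterior odds follow from Bayes' rule: each red ball multiplies the odds by 2 and each white ball divides them by 2. A brief check in Python:

    # Two hypotheses: the basket is 2/3 red (H1) or 2/3 white (H2).
    def posterior_odds(red, white, p_red_h1=2/3, p_red_h2=1/3):
        """Posterior odds for H1 over H2 after observing `red` red and `white`
        white draws, assuming equal prior odds and draws with replacement."""
        likelihood_h1 = p_red_h1**red * (1 - p_red_h1)**white
        likelihood_h2 = p_red_h2**red * (1 - p_red_h2)**white
        return likelihood_h1 / likelihood_h2

    print(posterior_odds(4, 1))    # ~8,  i.e. 8:1 for the 4:1 sample
    print(posterior_odds(12, 8))   # ~16, i.e. 16:1 for the 12:8 sample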

Misconceptions of Chance

People believe that a sequence of events generated by a random process represents the essential characteristics of that process even when the sequence is short. For example, regarding a coin being tossed for heads or tails, people believe that the sequence H-T-H-T-T-H is more likely than the sequence H-H-H-T-T-T, which does not appear random, and also more likely than the sequence H-H-H-H-T-H, which does not reflect the fairness of the coin (Kahneman and Tversky, 1972b, 3). Thus, people expect the essential characteristics of the process to be represented not only globally, i.e. in the complete sequence, but also locally, in each of its parts. However, a locally representative sequence deviates systematically from chance expectation: it contains too many alternations and too few runs. Another consequence of the belief in representativeness is the well-known gambler's fallacy. Seeing red come up too many times in a row on a roulette wheel, for example, most people mistakenly believe that black is now due, because an occurrence of black would complete a more representative sequence than another occurrence of red. Chance is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the equilibrium. In fact, deviations are not corrected; they are merely diluted as the random process proceeds.

Misconceptions about chance are not limited to naive subjects. A study of the statistical intuitions of experienced research psychologists (Tversky and Kahneman, 1971, 2) revealed a lingering belief in what may be called the law of small numbers, according to which even small samples are highly representative of the populations from which they are drawn. The responses of these researchers reflected the expectation that a hypothesis that is valid for the entire population will show up as a statistically significant result in a sample, with sample size being irrelevant. As a consequence, the researchers put too much faith in the results of small samples and grossly overestimated the replicability of such results. In actual research, this bias leads to the selection of samples of inadequate size and to overinterpretation of findings.

Insensitivity to predictability

People are sometimes called upon to make numerical predictions, such as the future price of a stock, the demand for a product, or the outcome of a football game. Such predictions are often made by representativeness. For example, suppose someone is given a description of a company and is asked to predict its future profit. If the description of the company is very favorable, a very high profit will appear most representative of that description; if the description is mediocre, a mediocre performance will appear most representative. The degree to which a description is favorable is not affected by the reliability of that description or by the degree to which it permits accurate prediction.

Therefore, if people make a prediction based solely on the favorableness of a description, their predictions will be insensitive to the reliability of the description and to the expected accuracy of the prediction.

This mode of judgment violates normative statistical theory, in which the extremeness and the range of predictions depend on predictability. When predictability is zero, the same prediction should be made in all cases. For example, if descriptions of companies provide no information relevant to profit, then the same value (such as the average profit) should be predicted for all companies. If predictability is perfect, of course, the predicted values will match the actual values, and the range of predictions will equal the range of outcomes. In general, the higher the predictability, the wider the range of predicted values.

Some studies of numerical prediction have shown that intuitive predictions violate this rule and that subjects give little, if any, weight to considerations of predictability (Kahneman and Tversky, 1973, 4). In one of these studies, subjects were given several paragraphs of text, each describing the performance of a teacher during a single practice lesson. Some subjects were asked to evaluate the quality of the lesson described in the text in percentile scores relative to a specified population. Other subjects were asked to predict, also in percentile scores, the standing of each teacher 5 years after the practice lesson. The judgments made under the two conditions were identical. That is, the prediction of a criterion remote in time (the teacher's success after 5 years) was identical to the evaluation of the information on which the prediction was based (the quality of the practice lesson). The subjects who made these predictions were undoubtedly aware of the limited predictability of teaching competence on the basis of a single trial lesson given 5 years earlier; nevertheless, their predictions were as extreme as their evaluations.

The illusion of validity

As we have discussed, people often make predictions by selecting the outcome (for example, an occupation) that is most representative of the input (for example, a description of a person). Their confidence in the prediction depends primarily on the degree of representativeness (that is, on the quality of the match between the selected outcome and the input), with little or no regard for the factors that limit predictive accuracy. Thus, people express great confidence in the prediction that a person is a librarian when given a personality description that fits the stereotype of a librarian, even if the description is scanty, unreliable, or outdated. Unwarranted confidence produced by a good fit between the predicted outcome and the input data may be called the illusion of validity. This illusion persists even when the judge is aware of the factors that limit the accuracy of his predictions. It is a common observation that psychologists who conduct selection interviews often have considerable confidence in their predictions, even though they know the vast literature showing that selection interviews are highly fallible.

The continued reliance on the clinical selection interview, despite repeated demonstrations of its inadequacy, amply attests to the strength of this effect.

The internal consistency of a pattern of inputs is a major determinant of confidence in predictions based on those inputs. For example, people express more confidence in predicting the final grade-point average of a student whose first-year record consists entirely of B's (4 points) than in predicting the average of a student whose first-year record includes many A's (5 points) and C's (3 points). Highly consistent patterns are most often observed when the input variables are highly redundant or correlated. Consequently, people tend to have great confidence in predictions based on redundant input variables. However, an elementary rule of correlation statistics states that, given input variables of a certain validity, a prediction based on several such inputs can achieve higher accuracy when the variables are independent of one another than when they are redundant or correlated. Thus, redundancy among inputs decreases accuracy even as it increases confidence, so people are often confident in predictions that are quite likely to be wrong (Kahneman and Tversky, 1973, 4).

Misconceptions about regression

Suppose a large group of children were tested on two similar versions of an aptitude test. If one selects ten children from among those who did best on one of the two versions, one will usually find their performance on the second version somewhat disappointing. Conversely, if one selects ten children from among those who did worst on the first version, one will find that, on average, they do somewhat better on the other version. To generalize, consider two variables X and Y that have the same distribution. If one selects individuals whose X scores deviate from the mean of X by k units, then their Y scores will, on average, deviate from the mean of Y by less than k units. These observations illustrate a general phenomenon known as regression toward the mean, which was discovered by Galton more than 100 years ago.
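A small simulation makes the effect visible; the correlation of 0.7 between the two test versions and the 1.5 SD selection cutoff are arbitrary illustrative choices:

    import random

    random.seed(0)
    r = 0.7  # assumed correlation between the two test versions
    # Generate standardized score pairs (x, y) with correlation r.
    pairs = []
    for _ in range(100_000):
        x = random.gauss(0, 1)
        y = r * x + (1 - r**2) ** 0.5 * random.gauss(0, 1)
        pairs.append((x, y))

    # Children who scored at least 1.5 SD above the mean on the first test...
    top = [(x, y) for x, y in pairs if x > 1.5]
    mean_x = sum(x for x, _ in top) / len(top)
    mean_y = sum(y for _, y in top) / len(top)

    # ...score closer to the mean on the second test: mean_y is roughly r * mean_x.
    print(mean_x, mean_y)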

In everyday life we all encounter a great many instances of regression toward the mean, in comparing, for example, the heights of fathers and sons, the intelligence of husbands and wives, or the results of consecutive examinations. Nevertheless, people do not develop correct intuitions about this phenomenon. First, they do not expect regression in many contexts where it is bound to occur. Second, when they do recognize the occurrence of regression, they often invent spurious causal explanations for it (Kahneman and Tversky, 1973, 4). We suggest that the regression phenomenon remains elusive because it is incompatible with the belief that the predicted outcome should be maximally representative of the input, and hence that the value of the outcome variable should be as extreme as the value of the input variable.

Failure to recognize the import of regression can have pernicious consequences, as illustrated by the following observation (Kahneman and Tversky, 1973, 4). In a discussion of flight training, experienced instructors noted that praise for an exceptionally smooth landing is typically followed by a poorer landing on the next try, while harsh criticism after a rough landing is usually followed by an improvement on the next try. The instructors concluded that verbal rewards are detrimental to learning while reprimands are beneficial, contrary to accepted psychological doctrine. This conclusion is unwarranted because of the presence of regression toward the mean. As in other cases of repeated examination, an improvement will usually follow a poor performance and a deterioration will usually follow an outstanding performance, even if the instructor does not respond in any way to the trainee's achievement on the first attempt. Because the instructors had praised their trainees after good landings and admonished them after poor ones, they reached the erroneous and potentially harmful conclusion that punishment is more effective than reward.

Thus, failure to understand the effect of regression leads one to overestimate the effectiveness of punishment and to underestimate the effectiveness of reward. In social interaction, as in training, rewards are typically administered when performance is good and punishments when performance is poor. By regression alone, therefore, behavior is most likely to improve after punishment and most likely to deteriorate after reward. Consequently, by mere chance, people are statistically rewarded for punishing others and punished for rewarding them. People are generally not aware of this contingency. In fact, the elusive role of regression in determining the apparent consequences of reward and punishment seems to have escaped the notice of scholars working in this area.

Availability

There are situations in which people assess the frequency of a class or the probability of an event by the ease with which instances or occurrences can be brought to mind. For example, one may assess the risk of heart attack among middle-aged people by recalling such occurrences among one's acquaintances. Similarly, one may evaluate the probability that a business venture will fail by imagining the various difficulties it could encounter. This judgmental heuristic is called availability. Availability is a useful clue for assessing frequency or probability, because instances of large classes are usually recalled better and faster than instances of less frequent classes. However, availability is affected by factors other than frequency and probability. Reliance on availability therefore leads to predictable biases, some of which are illustrated below.

Biases due to the degree of recall of events in memory

When the size of a class is judged by the availability of its instances, a class whose instances are easily recalled will appear more numerous than a class of equal size whose instances are less accessible and less easily recalled. In a simple demonstration of this effect, subjects heard a list of well-known people of both sexes and were subsequently asked to judge whether the list contained more names of men or of women. Different lists were presented to different groups of subjects. In some of the lists the men were more famous than the women, and in others the women were more famous than the men. In each of the lists, subjects erroneously judged that the class (in this case, the sex) that had the more famous personalities was the more numerous (Tversky and Kahneman, 1973, 11).

In addition to familiarity, there are other factors, such as salience, that affect how easily events are recalled. For example, a person who has seen a building on fire with his own eyes will judge such accidents to be subjectively more probable than a person who has only read about the fire in the local newspaper. Furthermore, recent occurrences are likely to be somewhat easier to recall than earlier ones. It is a common experience that the subjective probability of traffic accidents rises temporarily when one sees a car overturned by the side of the road.

Biases due to the effectiveness of a search set

Suppose a word (of three letters or more) is chosen at random from an English text. Is it more likely that the word begins with the letter r or that r is its third letter? People approach this problem by recalling words that begin with r (road) and words that have r in the third position (car) and assess the relative frequency by the ease with which words of the two types come to mind. Because it is much easier to search for words by their first letter than by their third letter, most people judge words that begin with a given consonant to be more numerous than words in which the same consonant appears in the third position. They do so even for consonants, such as r or k, that appear more often in the third position than in the first (Tversky and Kahneman, 1973, 11).

Different tasks require different search directions. For example, suppose you are asked to rate the frequency with which abstract words (thought, love) and concrete words (door, water) appear in written English. A natural way to answer this question is to search for contexts in which the word could appear. It seems easier to think of contexts in which an abstract concept is mentioned (love in romance novels) than to think of contexts in which a concrete word (such as door) is mentioned. If the frequency of words is judged by the availability of the contexts in which they appear, abstract words will be judged as relatively more numerous than concrete words. This bias was observed in a recent study (Galbraith and Underwood, 1973), which showed that the judged frequency of occurrence of abstract words was much higher than that of concrete words, even though their objective frequencies were equal. Abstract words were also judged to appear in a much greater variety of contexts than concrete words.

Biases due to imaginability

Sometimes one has to assess the frequency of a class whose instances are not stored in memory but can be generated according to a given rule. In such situations one typically generates several instances and evaluates frequency or probability by the ease with which the relevant instances can be constructed. However, the ease of constructing instances does not always reflect their actual frequency, and this mode of evaluation is prone to biases. To illustrate, consider a group of 10 people who form committees of k members, 2 ≤ k ≤ 8. How many different committees of k members can be formed? The correct answer is given by the binomial coefficient C(10, k), which reaches a maximum of 252 for k = 5. Clearly, the number of committees of k members equals the number of committees of (10 - k) members, because any committee of k members defines a unique group of (10 - k) people who are not members of the committee.

One way to answer this question without computation is to mentally construct committees of k members and to estimate their number by the ease with which they come to mind. Committees of few members, say 2, are more available than committees of many members, say 8. The simplest scheme for constructing committees is to partition the group into disjoint sets. One readily sees that it is easy to construct five disjoint committees of 2 members each, while it is impossible to generate even two disjoint committees of 8 members each. Consequently, if frequency is assessed by imaginability, or by availability for construction, small committees will appear more numerous than large ones, contrary to the correct symmetric bell-shaped function. Indeed, when naive subjects were asked to estimate the number of distinct committees of various sizes, their estimates were a monotonically decreasing function of committee size (Tversky and Kahneman, 1973, 11). For example, the average estimate of the number of committees of 2 members was 70, while the estimate for committees of 8 members was 20 (the correct answer is 45 in both cases).
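The correct counts quoted in this passage can be reproduced directly from the binomial coefficient:

    from math import comb

    for k in range(2, 9):
        print(k, comb(10, k))
    # Output: 2 45, 3 120, 4 210, 5 252, 6 210, 7 120, 8 45.
    # The true counts form a symmetric, bell-shaped function of k,
    # whereas the intuitive estimates decreased monotonically with committee size.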

The ability to imagine images plays an important role in assessing the probabilities of real life situations. The risk associated with a dangerous expedition, for example, is assessed by mentally reproducing unforeseen circumstances for which the expedition does not have sufficient equipment to overcome. If many of these difficulties are vividly depicted, the expedition may seem extremely dangerous, although the ease with which disasters are imagined does not necessarily reflect their actual likelihood. Conversely, if the possible hazard is difficult to imagine or simply does not come to mind, the risk associated with an event may be grossly underestimated.

Illusory correlation

Chapman and Chapman (1969) described an interesting bias in the judgment of the frequency with which two events co-occur. They presented naive subjects with information about several hypothetical patients with mental disorders. The data for each patient included a clinical diagnosis and a drawing made by the patient. The subjects later estimated the frequency with which each diagnosis (such as paranoia or delusions of persecution) had been accompanied by various features of the drawings (such as a peculiar shape of the eyes). The subjects markedly overestimated the frequency of co-occurrence of naturally associated events, such as delusions of persecution and a peculiar shape of the eyes. This phenomenon is called illusory correlation. In their erroneous judgments of the data to which they had been exposed, the subjects "rediscovered" much of the common but unfounded clinical lore concerning the interpretation of the drawing test. The illusory correlation effect was extremely resistant to contradictory data. It persisted even when the correlation between the feature and the diagnosis was actually negative, and it prevented the subjects from detecting the relationships that were in fact present.

Availability provides a natural explanation of the illusory correlation effect. The judgment of how frequently two events co-occur may be based on the strength of the associative bond between them. When the association is strong, one is likely to conclude that the events have frequently occurred together; consequently, strongly associated events will be judged to co-occur frequently. According to this view, the illusory correlation between a diagnosis of delusions of persecution and a peculiar shape of the eyes in a drawing, for example, arises because delusions of persecution are more readily associated with the eyes than with any other part of the body.

Lifelong experience has taught us that, in general, instances of large classes are recalled better and faster than instances of less frequent classes; that likely occurrences are easier to imagine than unlikely ones; and that associative connections between events are strengthened when the events frequently co-occur. As a result, a person has at his disposal a procedure (the availability heuristic) for estimating the size of a class, the probability of an event, or the frequency with which events co-occur, by the ease with which the relevant mental operations of retrieval, construction, or association can be performed. However, as the preceding examples have shown, these estimation procedures lead to systematic errors.

Adjustment and “anchoring”

In many situations, people make estimates by starting from an initial value that is then adjusted to yield the final answer. The initial value, or starting point, may be suggested by the formulation of the problem, or it may be the result of a partial computation. In either case, the adjustment is typically insufficient (Slovic and Lichtenstein, 1971). That is, different starting points yield different estimates, which are biased toward the starting points. We call this phenomenon "anchoring."

Insufficient "adjustment"

To demonstrate the anchoring effect, subjects were asked to estimate various quantities expressed in percentages (for example, the percentage of African countries in the United Nations). For each quantity, a number between 0 and 100 was determined by a random draw carried out in the subjects' presence. The subjects were first asked to indicate whether this number was higher or lower than the value of the quantity, and then to estimate the value of the quantity by moving upward or downward from the given number. Different groups were given different numbers for each quantity, and these arbitrary numbers had a marked effect on the subjects' estimates. For example, the average estimates of the percentage of African countries in the United Nations were 25 and 45 for groups that received 10 and 65, respectively, as starting points. Monetary payoffs for accuracy did not reduce the anchoring effect.

"Anchoring" occurs not only when subjects are given a starting point, but also when subjects base their estimate on the result of some incomplete calculation. A study of intuitive numerical estimation illustrates this effect. Two groups of high school students spent 5 seconds estimating the value of a number expression that was written on the board. One group assessed the meaning of the expression

8 x 7 x 6 x 5 x 4 x 3 x 2 x 1,

while the other group estimated the value of the expression

1 x 2 x 3 x 4 x 5 x 6 x 7 x 8.

To answer such questions quickly, people may perform a few steps of computation and estimate the value of the expression by extrapolation or "adjustment." Because adjustments are typically insufficient, this procedure should lead to underestimation. Furthermore, because the result of the first few steps of multiplication (performed from left to right) is higher for the descending sequence than for the ascending one, the former expression should be judged larger than the latter. Both predictions were confirmed. The average estimate for the ascending sequence was 512, while the average estimate for the descending sequence was 2,250. The correct answer is 40,320 for both sequences.
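The arithmetic of this example is easy to reproduce; the partial products computed from left to right, which plausibly serve as the anchor in a 5-second judgment, are far larger for the descending sequence:

    from math import prod

    descending = [8, 7, 6, 5, 4, 3, 2, 1]
    ascending = list(reversed(descending))

    print(prod(descending))  # 40320, the correct value for both orders

    # Partial products after the first few steps, the plausible "anchor"
    # for a quick extrapolation:
    print([prod(descending[:i]) for i in range(1, 5)])  # [8, 56, 336, 1680]
    print([prod(ascending[:i]) for i in range(1, 5)])   # [1, 2, 6, 24]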

Biases in the evaluation of conjunctive and disjunctive events

In a recent study by Bar-Hillel (1973), subjects were given the opportunity to bet on one of two events. Three types of events were used: (i) a simple event, such as drawing a red ball from a bag containing 50% red and 50% white balls; (ii) a conjunctive event, such as drawing a red ball seven times in succession, with replacement, from a bag containing 90% red balls and 10% white balls; and (iii) a disjunctive event, such as drawing a red ball at least once in seven successive draws, with replacement, from a bag containing 10% red balls and 90% white balls. In this problem, a significant majority of subjects preferred to bet on the conjunctive event (whose probability is 0.48) rather than on the simple event (whose probability is 0.50). Subjects also preferred to bet on the simple event rather than on the disjunctive event, whose probability is 0.52.

Thus, the majority of subjects bet on the less likely event in both comparisons. This pattern of choices illustrates a general finding: studies of choice among gambles and of judgments of probability indicate that people tend to overestimate the probability of conjunctive events (Cohen, Chesnick, and Haran, 1972, 24) and to underestimate the probability of disjunctive events. These biases are readily explained as effects of anchoring. The stated probability of the elementary event (success at any one stage) provides a natural starting point for estimating the probabilities of both the conjunctive and the disjunctive events. Because adjustment from the starting point is typically insufficient, the final estimates remain too close to the probabilities of the elementary events in both cases. Note that the overall probability of a conjunctive event is lower than the probability of each elementary event, whereas the overall probability of a disjunctive event is higher than the probability of each elementary event. As a consequence of anchoring, the overall probability will be overestimated for conjunctive events and underestimated for disjunctive events.
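The probabilities quoted for the three gambles follow from elementary probability rules; a quick check:

    simple = 0.5                    # one draw from a bag with 50% red balls
    conjunctive = 0.9 ** 7          # red on all 7 draws from a bag with 90% red balls
    disjunctive = 1 - 0.9 ** 7      # red at least once in 7 draws from a bag with 10% red balls

    print(round(conjunctive, 3))    # 0.478 -- less likely than the simple event
    print(round(disjunctive, 3))    # 0.522 -- more likely than the simple event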

Biases in the evaluation of compound events are particularly significant in the context of planning. The successful completion of an undertaking, such as the development of a new product, is typically conjunctive in character: for the undertaking to succeed, each of a series of events must occur. Even when each of these events is very likely, the overall probability of success can be quite low if the number of events is large.

The general tendency to overestimate the probability of conjunctive events leads to unwarranted optimism in evaluating the likelihood that a plan will succeed or that a project will be completed on time. Conversely, disjunctive structures are typically encountered in the evaluation of risks. A complex system, such as a nuclear reactor or the human body, will malfunction if any of its essential components fails. Even when the probability of failure in each component is slight, the probability of an overall failure can be high if many components are involved. Because of the anchoring bias, people tend to underestimate the probability of failure in complex systems. Thus, the direction of the anchoring bias can sometimes depend on the structure of the event: a chain-like structure of conjunctive links leads to overestimation of the event's probability, while a funnel-like structure of disjunctive links leads to underestimation.
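A short sketch illustrates how quickly a chain-like (conjunctive) plan and a funnel-like (disjunctive) system diverge from the probability attached to a single link; the step counts and component reliabilities below are purely illustrative:

    # Conjunctive structure: the venture succeeds only if every step succeeds.
    def p_plan_succeeds(p_step, n_steps):
        return p_step ** n_steps

    # Disjunctive structure: the system fails if any single component fails.
    def p_system_fails(p_component_fails, n_components):
        return 1 - (1 - p_component_fails) ** n_components

    print(p_plan_succeeds(0.95, 10))     # ~0.60: each step is very likely, yet the plan is not
    print(p_system_fails(0.001, 1000))   # ~0.63: each component is very reliable, yet the system is not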

"Anchoring" when estimating the subjective probability distribution

In decision analysis, experts are often required to express their beliefs about a quantity, such as the value of the Dow Jones average on a particular day, in the form of a probability distribution. Such a distribution is usually constructed by asking the person to select values of the quantity that correspond to specified percentiles of his subjective probability distribution. For example, the expert may be asked to select a number, X90, such that his subjective probability that this number will be higher than the value of the Dow Jones average is 0.90. That is, he should select X90 so that he judges the odds to be 9 to 1 that the Dow Jones average will not exceed it. A subjective probability distribution for the value of the Dow Jones average can be constructed from several such judgments corresponding to different percentiles.

By collecting subjective probability distributions for many different quantities, it is possible to test the expert for proper calibration. An expert is properly calibrated (see Chapter 22) in a given set of problems if exactly 2 percent of the true values of the assessed quantities fall below his stated values of X02. For example, the true values should fall below X01 for 1% of the quantities and above X99 for 1% of the quantities. Thus, the true values should fall in the interval between X01 and X99 in 98% of the problems.

Several researchers (Alpert and Raiffa, 1969, 21; Stael von Holstein, 1971b; Winkler, 1967) have obtained probability distributions for many quantities from large numbers of judges. These distributions indicated large and systematic departures from proper calibration. In most studies, the true values of the assessed quantities are either smaller than X01 or greater than X99 for about 30% of the problems. That is, subjects state overly narrow confidence intervals, which reflect more certainty than is justified by their knowledge of the assessed quantities. This bias is common to naive and sophisticated subjects alike, and it is not eliminated by introducing proper scoring rules that provide incentives for external calibration. This effect is attributable, at least in part, to anchoring.

To select X90 for the value of the Dow Jones average, for example, it is natural to begin by thinking about one's best estimate of the Dow Jones and to adjust this value upward. If this adjustment, like most others, is insufficient, then X90 will not be sufficiently extreme. A similar anchoring effect will occur in the selection of X10, which is presumably obtained by adjusting one's best estimate downward. Consequently, the confidence interval between X10 and X90 will be too narrow, and the assessed probability distribution will be too tight. In support of this interpretation, it can be shown that subjective probabilities are systematically altered by a procedure in which one's best estimate does not serve as an anchor.

Subjective probability distributions for a given quantity (the value of the Dow Jones average) can be obtained in two different ways: (i) by asking the subject to select values of the Dow Jones average that correspond to specified percentiles of his probability distribution, and (ii) by asking the subject to assess the probabilities that the true value of the Dow Jones average will exceed some specified values. The two procedures are formally equivalent and should yield identical distributions. However, they suggest different modes of adjustment from different anchors. In procedure (i), the natural starting point is one's best estimate of the quantity. In procedure (ii), on the other hand, the subject may be anchored on the value stated in the question. Alternatively, he may be anchored on even odds, or a 50-50 chance, which is the natural starting point in the estimation of likelihood. In either case, procedure (ii) should yield less extreme odds than procedure (i).

To contrast the two procedures, one group of subjects was given a set of 24 quantities (such as the air distance from New Delhi to Beijing) and asked to provide either X10 or X90 for each. Another group of subjects received the first group's average judgment for each of the 24 quantities. They were asked to assess the odds that each of the given values exceeded the true value of the relevant quantity. In the absence of any bias, the second group should retrieve the odds specified by the first group, that is, 9:1. However, if even odds or the stated value serves as an anchor, the odds given by the second group should be less extreme, that is, closer to 1:1. Indeed, the average odds stated by this group, across all problems, were 3:1. When the judgments of the two groups were tested for external calibration, it was found that subjects in the first group were too extreme, in line with earlier studies. The events that they defined as having a probability of 0.10 actually occurred in 24% of the cases. In contrast, subjects in the second group were too conservative. Events to which they assigned an average probability of 0.34 actually occurred in 26% of the cases. These results illustrate how the degree of calibration depends on the procedure of elicitation.

Discussion

This part of the book has been concerned with cognitive biases that stem from reliance on judgmental heuristics. These biases are not attributable to motivational effects such as wishful thinking or the distortion of judgments by approval and blame. Indeed, as reported earlier, several severe errors of judgment occurred despite the fact that subjects were encouraged to be accurate and were rewarded for correct answers (Kahneman and Tversky, 1972b, 3; Tversky and Kahneman, 1973, 11).

Reliance on heuristics and the prevalence of biases are not restricted to laymen. Experienced researchers are prone to the same biases when they think intuitively. For example, the tendency to predict the outcome that is most representative of the data, with insufficient regard for the prior probability of that outcome, has been observed in the intuitive judgments of people with extensive training in statistics (Kahneman and Tversky, 1973, 4; Tversky and Kahneman, 1971, 2). Although the statistically sophisticated avoid elementary errors, such as the gambler's fallacy, their intuitive judgments are liable to similar errors in more intricate and less transparent problems.

It is not surprising that useful heuristics such as representativeness and availability are retained, even though they occasionally lead to errors in prediction or estimation. What is perhaps surprising is people's failure to infer from lifelong experience such fundamental statistical rules as regression toward the mean or the effect of sample size on sampling variability. Although everyone is exposed, in the normal course of life, to numerous situations to which these rules apply, very few people discover the principles of sampling and regression on their own. Statistical principles are not learned from everyday experience because the relevant instances are not coded appropriately. For example, people do not discover that successive lines in a text differ more in average word length than successive pages, because they simply do not attend to the average word length of individual lines or pages. Thus, people do not learn the relationship between sample size and sampling variability, although the data for such learning are abundant.

The lack of appropriate coding also explains why people usually do not detect the biases in their judgments of probability. A person could conceivably learn whether his judgments are well calibrated by keeping a tally of the proportion of events that actually occur among those to which he assigns the same probability. However, it is not natural for people to group events by their judged probability. In the absence of such grouping, a person cannot discover, for example, that only 50% of the predictions to which he assigned a probability of 0.9 or higher actually came true.
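The tally described here, grouping predictions by their stated probability and checking the corresponding hit rates, is straightforward to keep; a minimal calibration sketch with made-up data:

    from collections import defaultdict

    # Each record: (stated probability, whether the event actually occurred) -- made-up data.
    judgments = [(0.9, True), (0.9, False), (0.9, True),
                 (0.5, True), (0.5, False), (0.1, False)]

    tally = defaultdict(lambda: [0, 0])   # stated probability -> [occurrences, total]
    for p, occurred in judgments:
        tally[p][1] += 1
        tally[p][0] += occurred

    for p, (hits, total) in sorted(tally.items()):
        print(p, hits / total)  # well-calibrated judgments have a hit rate close to p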

The empirical analysis of cognitive biases has implications for the theoretical and applied role of judged probabilities. Modern decision theory (de Finetti, 1968; Savage, 1954) regards subjective probability as the quantified opinion of an idealized person. Specifically, the subjective probability of a given event is defined by the set of bets about that event which such a person is willing to accept. An internally consistent, or coherent, subjective probability measure can be derived for an individual if his choices among bets satisfy certain principles, that is, the axioms of the theory. The derived probability is subjective in the sense that different individuals may have different probabilities for the same event. The major contribution of this approach is that it provides a rigorous subjective interpretation of probability that is applicable to unique events and is embedded in a general theory of rational decision making.

It should perhaps be noted that, while subjective probabilities can sometimes be inferred from preferences among bets, they are normally not formed in this fashion. A person bets on team A rather than on team B because he believes that team A is more likely to win; he does not infer this belief from his betting preferences.

Thus, in reality, subjective probabilities determine preferences among bets; they are not derived from them, as they are in the axiomatic theory of rational decision making (Savage, 1954).

The inherently subjective nature of probability has led many students to believe that coherence, or internal consistency, is the only valid criterion by which judged probabilities should be evaluated. From the standpoint of the formal theory of subjective probability, any set of internally consistent probability judgments is as good as any other. This criterion is not entirely satisfactory, because an internally consistent set of subjective probabilities can be incompatible with other beliefs held by the individual. Consider a person whose subjective probabilities for all possible outcomes of a coin-tossing game reflect the gambler's fallacy. That is, his estimate of the probability of tails on a particular toss increases with the number of consecutive heads that preceded that toss. The judgments of such a person could be internally consistent and therefore acceptable as adequate subjective probabilities according to the criterion of the formal theory. These probabilities, however, are incompatible with the generally held belief that a coin has no memory and is therefore incapable of generating sequential dependencies. For judged probabilities to be considered adequate, or rational, internal consistency is not enough. The judgments must be compatible with the person's other beliefs. Unfortunately, there can be no simple formal procedure for assessing the compatibility of a set of probability judgments with the judge's total system of beliefs. The rational judge will nevertheless strive for compatibility, even though internal consistency is more easily achieved and assessed. In particular, he will attempt to make his probability judgments compatible with his knowledge about the subject matter, the laws of probability, and his own judgmental heuristics and biases.

This article describes three types of heuristics that are used in judgments under uncertainty: (i) representativeness, which is typically used when people are asked to estimate the probability that an object or case A belongs to a class or process B; (ii) availability of events or scenarios, which is often used when people are asked to estimate the frequency of a class or the likelihood of a particular course of events; and (iii) adjustment or "anchoring", which is typically used in quantitative forecasting when an appropriate quantity is available. These heuristics are highly parsimonious and usually effective, but they lead to systematic errors in the forecast. A better understanding of these heuristics and the biases they lead to could contribute to assessment and decision making under conditions of uncertainty.

Let's consider the mathematical foundations of decision making under conditions of uncertainty.

The essence and sources of uncertainty.

Uncertainty is a property of an object, expressed in its vagueness, ambiguity, and lack of grounding, which leaves the decision maker with insufficient ability to perceive, understand, and determine its present and future state.

Risk is a possible danger, an action taken at random that requires, on the one hand, boldness in the hope of a happy outcome and, on the other, account of a mathematically justified degree of risk.

The practice of decision-making is characterized by a set of conditions and circumstances (situation) that create certain relationships, conditions, and positions in the decision-making system. Taking into account the quantitative and qualitative characteristics of the information at the disposal of the decision maker, we can distinguish decisions made under the following conditions:

certainty (reliability);

uncertainty (unreliability);

risk (probabilistic certainty).

Under conditions of certainty, decision makers quite accurately determine possible decision alternatives. However, in practice it is difficult to assess the factors that create the conditions for decision-making, so situations of complete certainty are most often absent.

Sources of uncertainty about the expected conditions of an enterprise's development can be the behavior of competitors, the organization's personnel, technical and technological processes, and market changes. The corresponding conditions can be divided into socio-political, administrative-legislative, production, commercial, and financial. Thus, the conditions that create uncertainty are influences of factors from the external and internal environment of the organization. Decisions are made under conditions of uncertainty when it is impossible to estimate the likelihood of potential outcomes. This is typically the case when the factors to be taken into account are so new and complex that it is not possible to obtain sufficient relevant information about them; as a result, the likelihood of a particular outcome cannot be predicted with sufficient confidence. Uncertainty is characteristic of some decisions that must be made in rapidly changing circumstances. The sociocultural, political, and knowledge-intensive environments have the highest potential for uncertainty. Defense-department decisions to develop extremely complex new weapons are often initially uncertain: no one knows how the weapon will be used, whether it will be used at all, or what weapons the enemy may deploy, so the department often cannot determine whether a new weapon will actually be effective by the time it reaches the military, which may take, for example, five years. In practice, however, very few management decisions have to be made under conditions of complete uncertainty.

When faced with uncertainty, a manager has two main options. First, try to obtain additional relevant information and analyze the problem again. This often reduces the novelty and complexity of the problem. The manager combines this additional information and analysis with accumulated experience, judgment, or intuition to give a range of outcomes a subjective or perceived probability.

The second possibility is to act strictly on past experience, judgment, or intuition and make a guess about the likelihood of events. Time and information constraints are of utmost importance when making management decisions.

In a situation of risk, it is possible, using probability theory, to calculate the probability of a particular change in the environment; in a situation of uncertainty, probability values ​​cannot be obtained.

Uncertainty is manifested in the impossibility of determining the probability of the occurrence of various states of the external environment due to their unlimited number and the lack of assessment methods. Uncertainty is taken into account in various ways.

Rules and criteria for decision-making under conditions of uncertainty.

Let us present several general criteria for the rational choice of solution options from a variety of possible ones. The criteria are based on an analysis of a matrix of possible environmental states and decision alternatives.

The matrix shown in Table 1 contains: Ai - the alternatives, i.e. the options for action, one of which must be chosen (the rows of the matrix); Sj - the possible states of the environment (the columns); Пij - a matrix element denoting the value of the evaluated indicator (for example, a return on capital) obtained by alternative i under environmental state j.

Table 1. Decision matrix

To select the optimal strategy in a situation of uncertainty, various rules and criteria are used.

Maximin rule (Wald criterion).

In accordance with this rule, from the alternatives Ai one chooses the one whose indicator value is highest under the most unfavorable state of the external environment. To do this, the minimum indicator value is found in each row of the matrix, and then the maximum of these minima is selected. Priority is given to the alternative a* with the largest value among all the row minima.

The decision maker in this case is minimally prepared for risk, assuming a maximum of negative developments in the state of the external environment and taking into account the least favorable development for each alternative.

According to the Wald criterion, the decision maker chooses the strategy that guarantees the maximum value of the worst-case payoff (the maximin criterion).
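A minimal sketch of the maximin calculation in Python; the payoff matrix is purely hypothetical (rows are alternatives, columns are environmental states):

    # Hypothetical payoff matrix П[i][j]: alternative i under environmental state j
    payoffs = [
        [50, 20, -10],   # alternative A1
        [40, 35,  10],   # alternative A2
        [70,  5, -30],   # alternative A3
    ]

    # Wald criterion: record the worst-case payoff of each row, then pick the best of these minima
    row_minima = [min(row) for row in payoffs]
    best = max(range(len(payoffs)), key=lambda i: row_minima[i])
    print(f"Worst-case payoffs: {row_minima}")          # [-10, 10, -30]
    print(f"Maximin choice: alternative A{best + 1}")   # A2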

The maximax rule.

In accordance with this rule, the alternative with the highest achievable value of the evaluated indicator is selected. At the same time, the decision maker does not take into account the risk from adverse environmental changes. The alternative is found by the formula:

a* = maxi (maxj Пij)

Using this rule, the maximum value for each row is determined and the largest one is selected.
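For comparison, a minimal sketch of the maximax rule on the same hypothetical matrix:

    # Hypothetical payoff matrix П[i][j]: alternative i under environmental state j
    payoffs = [
        [50, 20, -10],   # alternative A1
        [40, 35,  10],   # alternative A2
        [70,  5, -30],   # alternative A3
    ]

    # Maximax: take the best payoff of each row, then choose the row with the largest of these
    row_maxima = [max(row) for row in payoffs]
    best = max(range(len(payoffs)), key=lambda i: row_maxima[i])
    print(f"Best-case payoffs: {row_maxima}")           # [50, 40, 70]
    print(f"Maximax choice: alternative A{best + 1}")   # A3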

A big drawback of the maximax and maximin rules is the use of only one option for the development of the situation for each alternative when making a decision.

Minimax rule (Savage criterion).

Unlike maximin, minimax is focused on minimizing not so much losses as regrets about lost profits. The rule allows for reasonable risk in order to obtain additional profit. The Savage criterion is calculated by the formula:

mini maxj Rij = mini [ maxj ( maxk Пkj - Пij ) ]

where the inner maxk is taken over the alternatives within a column (i.e. for a fixed environmental state j), maxj runs over the environmental states for a given alternative, and mini runs over the alternatives (the rows of the matrix).

Minimax calculation consists of four stages:

  • 1) Find the best result in each column separately, that is, maxk Пkj (the best possible outcome for that state of the market).
  • 2) Determine the deviation from the best result within each column, that is, maxk Пkj - Пij. The results form the matrix of deviations (regrets), since its elements are the profits lost through unsuccessful decisions made because of an erroneous assessment of the market's possible reaction.
  • 3) For each alternative, find the maximum regret.
  • 4) Choose the alternative whose maximum regret is smaller than that of the others (a sketch of these four steps follows below).
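A minimal Python sketch of the four steps above, on the same hypothetical payoff matrix used for the maximin example:

    # Hypothetical payoff matrix П[i][j]: alternative i under environmental state j
    payoffs = [
        [50, 20, -10],   # alternative A1
        [40, 35,  10],   # alternative A2
        [70,  5, -30],   # alternative A3
    ]

    # Step 1: best result in each column (for each environmental state)
    col_best = [max(col) for col in zip(*payoffs)]              # [70, 35, 10]

    # Step 2: regret matrix - profit lost relative to the best result of the column
    regrets = [[best - x for best, x in zip(col_best, row)] for row in payoffs]

    # Step 3: maximum regret of each alternative
    max_regrets = [max(row) for row in regrets]                 # [20, 30, 40]

    # Step 4: choose the alternative with the smallest maximum regret
    best_alt = min(range(len(payoffs)), key=lambda i: max_regrets[i])
    print(f"Regret matrix: {regrets}")
    print(f"Savage choice: alternative A{best_alt + 1}")        # A1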

The Hurwicz rule.

According to this rule, the maximax and maximin rules are combined by weighting the maximum and minimum indicator values of each alternative. It is also called the optimism-pessimism rule. The optimal alternative can be calculated using the formula:

a* = maxi [ (1 - α) minj Пij + α maxj Пij ]

where α is the optimism coefficient, 0 ≤ α ≤ 1; at α = 1 the alternative is selected according to the maximax rule, at α = 0 according to the maximin rule. Allowing for aversion to risk, it is advisable to set α = 0.3. The alternative with the largest value of the weighted indicator is chosen.

The Hurwicz rule takes into account more of the available information than the maximin and maximax rules.
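A minimal Python sketch of the Hurwicz rule on the same hypothetical matrix, with an optimism coefficient of α = 0.3:

    # Hypothetical payoff matrix П[i][j]: alternative i under environmental state j
    payoffs = [
        [50, 20, -10],   # alternative A1
        [40, 35,  10],   # alternative A2
        [70,  5, -30],   # alternative A3
    ]
    alpha = 0.3  # optimism coefficient: 0 -> pure maximin, 1 -> pure maximax

    # Hurwicz score: weighted combination of the worst and the best payoff of each alternative
    scores = [(1 - alpha) * min(row) + alpha * max(row) for row in payoffs]
    best = max(range(len(payoffs)), key=lambda i: scores[i])
    print(f"Hurwicz scores: {scores}")                  # [8.0, 19.0, 0.0]
    print(f"Hurwicz choice: alternative A{best + 1}")   # A2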

Thus, when making a management decision, in the general case it is necessary:

predict future conditions, such as demand levels;

develop a list of possible alternatives;

evaluate the payback of all alternatives;

determine the probability of each condition;

evaluate alternatives based on the selected decision criterion.

The direct application of criteria when making management decisions under conditions of uncertainty is considered in the practical part of this work.


DECISION MAKING THEORY

Topic 5: Decision making under conditions of uncertainty

Introduction

1. The concept of uncertainty and risk

2. Levels of uncertainty when assessing the effectiveness of management decisions

3. Classification of risks when developing management decisions

4. Technologies for decision-making under conditions of stochastic risk

Conclusion

A management decision is made under conditions of certainty if the manager knows exactly the result of the implementation of each alternative. It should be noted that management decisions are made in conditions of certainty quite rarely.

Uncertainties are the main cause of risks. Reducing their volume is the main task of the manager.


Managers often have to develop and make management decisions in conditions of incomplete and unreliable information, and the results of implementing management decisions do not always coincide with planned indicators. These conditions are classified as circumstances of uncertainty and risk.

A management decision is made under conditions of uncertainty, when the manager does not have the opportunity to assess the likelihood of future results. This happens when the parameters to be taken into account are so new and unstructured that the likelihood of a particular outcome cannot be predicted with sufficient confidence.

Management decisions are made under conditions of risk, when the results of their implementation are not determined, but the probability of each of them occurring is known. The uncertainty of the result in this case is associated with the possibility of adverse situations and consequences for achieving the intended goals.

Uncertainty in decision making is manifested in the parameters of the information used at all stages of its processing. Uncertainty is difficult to measure and is more often assessed in terms of quality (high or low). It is also estimated as a percentage (information uncertainty at 30%).

Uncertainty is associated with the development of a management decision, and risk is associated with the results of implementation.


“Uncertainty is considered both a phenomenon and a process. If we consider it as a phenomenon, then we are dealing with a set of unclear situations, incomplete and mutually exclusive information. Phenomena also include unforeseen events that arise against the will of the manager and can change the course of planned events: for example, a sudden change in weather led to a change in the program for celebrating the city day.”

As a process, uncertainty is the activity of an incompetent manager who makes wrong decisions. For example, when assessing the investment attractiveness of a municipal loan, mistakes were made, and as a result, the city budget did not receive an additional 800 thousand rubles. In practice, it is necessary to consider uncertainty as a whole, since the phenomenon is created by the process, and the process shapes the phenomenon.

Uncertainties can be objective and subjective.

Objective uncertainties do not depend on the decision maker; their source lies outside the system in which the decision is made.

Subjective uncertainties are the result of professional errors, omissions, and uncoordinated actions; their source lies within the system in which the decision is made.

There are four levels of uncertainty:

Low, which does not affect the main stages of the process of developing and implementing a management decision;

Medium, which requires revision of some stages of development and implementation of the solution;

High, which requires the development of new procedures;

Ultra-high, which does not allow one to evaluate and adequately interpret data about the current situation.

2. Levels of uncertainty when assessing the effectiveness of management decisions

Consideration of the levels of uncertainty allows us to analytically present their use depending on the nature of the manager’s management activities.

Fig. 1 presents a matrix of the effectiveness of management decisions as the interaction between the level of uncertainty and the nature of the management activity.

Effective decisions are those that are well-founded, well-developed, feasible, and understandable to the performer. Ineffective decisions are those that are unjustified, unfinished, impracticable, or difficult to implement.

As part of stable management activities, standard, repeating procedures are performed under conditions of weak disturbing influences from the external and internal environment.

The corrective nature of management activities is used when there are moderate disturbances from the external and internal environment, when the manager has to adjust the key processes of the management system.

Innovative management activities are characterized by a constant search and implementation of new processes and technologies to achieve the intended goals.

The combination of a low level of uncertainty with a stable or corrective nature of activity (areas A1 and B1) allows the manager to make informed decisions with minimal implementation risk. With an innovative nature of activity and a low level of uncertainty (area B1), deterministic information will slow down the process of making effective decisions.

The combination of an average level of uncertainty with the corrective and innovative nature of management activities provides areas of effective solutions (B2 and C2).

A high level of uncertainty combined with a stable nature of management activity leads to ineffective decisions (area A3), but is well suited to an innovative nature of management activity (area B3).


Fig.1. Matrix of effectiveness of management decisions

“An ultra-high level of uncertainty leads to ineffective decisions, as poorly structured, difficult-to-perceive, and unreliable information makes effective decision-making difficult.”


Kahneman D., Slovik P., Tversky A. Decision making under uncertainty: Rules and biases

I had been looking forward to this book for a long time... I first learned about the work of Nobel laureate Daniel Kahneman from Nassim Taleb's book Fooled by Randomness. Taleb quotes Kahneman often and with relish, and, as I learned later, not only in that book but also in his others (The Black Swan: Under the Sign of Unpredictability; On the Secrets of Sustainability). I also found numerous references to Kahneman in Evgeniy Ksenchuk's Systems Thinking: Boundaries of Mental Models and the Systemic Vision of the World and in Leonard Mlodinow's (Not) a Perfect Accident: How Chance Rules Our Lives. Unfortunately, I could not find Kahneman's book in paper form, so I "had" to purchase an e-book and download Kahneman from the Internet... And believe me, I did not regret it for a minute...

D. Kahneman, P. Slovik, A. Tversky. Decision making under uncertainty: Rules and biases. – Kharkov: Publishing House Institute of Applied Psychology “Humanitarian Center”, 2005. – 632 p.

The book brought to your attention deals with the peculiarities of people's thinking and behavior when assessing and predicting uncertain events. As the book convincingly shows, when making decisions under uncertain conditions, people usually make mistakes, sometimes quite significantly, even if they have studied probability theory and statistics. These errors are subject to certain psychological patterns that have been identified and well experimentally substantiated by researchers.

Since the introduction of Bayesian ideas into psychological research, psychologists have for the first time been offered a coherent and clearly articulated model of optimal behavior under uncertainty with which to compare human decision making. The conformity of decision making to normative models has become one of the main research paradigms in the field of judgment under conditions of uncertainty.

Part I. Introduction

Chapter 1. Decision making under conditions of uncertainty: rules and biases

How do people estimate the probability of an uncertain event or value of an uncertain quantity? People rely on a limited number of heuristic principles that reduce the complex tasks of estimating probabilities and predicting the values ​​of quantities to simpler operations of judgment. Heuristics are very useful, but sometimes they lead to serious and systematic errors.

The subjective assessment of probability is similar to the subjective assessment of physical quantities such as distance or size.

Representativeness. What is the probability that process B will lead to event A? In answering, people usually rely on the representativeness heuristic, in which probability is determined by the degree to which A is representative of B, that is, the degree to which A resembles B. Consider the description of a man by his former neighbor: “Steve is very reserved and shy, always willing to help me, but takes too little interest in other people and in reality in general. He is very meek and neat, loves order, and has a penchant for detail.” How do people assess the probability of Steve's occupation (for example, farmer, salesman, airplane pilot, librarian, or doctor)?

In the representativeness heuristic, the probability that Steve is a librarian, for example, is judged by the degree to which he is representative of, or fits the stereotype of, a librarian. This approach to probability estimation leads to serious errors, because similarity, or representativeness, is not influenced by a number of factors that should affect probability judgments.

Insensitivity to the prior probability of the outcome. One of the factors that does not affect representativeness, but does strongly affect probability, is the prior probability, or base-rate frequency, of the outcomes. In Steve's case, for example, the fact that there are many more farmers than librarians in the population should enter into any reasonable estimate of the probability that Steve is a librarian rather than a farmer. Taking base-rate frequencies into account, however, does not change Steve's fit to the stereotype of librarians or of farmers. If people estimate probability by representativeness, it follows that they will neglect prior probabilities.

This hypothesis was tested in an experiment in which prior probabilities were varied. Subjects were shown short descriptions of several people, said to have been drawn at random from a group of 100 specialists - engineers and lawyers. The subjects were asked to rate, for each description, the probability that it belonged to an engineer rather than a lawyer. In one experimental condition, subjects were told that the group from which the descriptions were drawn consisted of 70 engineers and 30 lawyers; in the other, that it consisted of 30 engineers and 70 lawyers. The odds that any given description belongs to an engineer rather than a lawyer should be higher in the first condition, where engineers are the majority, than in the second, where lawyers are the majority. By Bayes' rule, the ratio of these odds across the two conditions should be (0.7/0.3)^2 ≈ 5.44 for each description. In gross violation of Bayes' rule, subjects in both conditions gave essentially the same probability estimates. Apparently, participants rated the probability that a particular description belonged to an engineer rather than a lawyer by the degree to which the description was representative of the two stereotypes, with little or no regard for the prior probabilities of the categories.
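The normative calculation is easy to verify. Because the likelihood ratio contributed by the description itself is the same in both conditions, Bayes' rule says the two posterior odds should differ exactly by the ratio of the prior odds:

    # Bayes' rule: posterior odds = likelihood ratio * prior odds.
    # The likelihood ratio of a description is the same under both base-rate conditions,
    # so the posterior odds should differ by the ratio of the prior odds alone.
    prior_odds_70_30 = 0.7 / 0.3     # 70 engineers, 30 lawyers
    prior_odds_30_70 = 0.3 / 0.7     # 30 engineers, 70 lawyers

    ratio = prior_odds_70_30 / prior_odds_30_70   # = (0.7 / 0.3) ** 2
    print(round(ratio, 2))                         # 5.44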

Insensitivity to sample size. People commonly use the representativeness heuristic here as well: they estimate the probability of obtaining a particular result in a sample by the degree to which that result is similar to the corresponding population parameter. The similarity of a sample statistic to the population parameter does not depend on the size of the sample. Therefore, if probability is judged by representativeness, the judged probability of a sample statistic will be essentially independent of sample size. Yet according to sampling theory, the larger the sample, the smaller the expected deviation from the population mean. This fundamental notion of statistics is evidently not part of people's intuitions.

Imagine a basket filled with balls, 2/3 of which are of one color and 1/3 of another. One person draws 5 balls from the basket and finds that 4 of them are red and 1 is white. Another person draws 20 balls and finds that 12 are red and 8 are white. Which of the two should feel more confident that the basket contains 2/3 red balls and 1/3 white balls rather than the reverse? The correct answer is that the posterior odds are 8 to 1 for the sample of 5 balls and 16 to 1 for the sample of 20 balls (Fig. 1). Yet most people feel that the first sample provides much stronger support for the hypothesis that the basket is filled mainly with red balls, because the proportion of red balls is higher in the first sample than in the second. This again shows that intuitive judgments are dominated by the sample proportion rather than by the sample size, which plays the decisive role in determining the actual posterior odds.

Fig. 1. Probabilities in the problem with balls (for the formulas, see the “Balls” sheet of the Excel file)
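These posterior odds follow directly from Bayes' rule: starting from even prior odds, each red ball multiplies the odds in favor of the "mostly red" hypothesis by (2/3)/(1/3) = 2, and each white ball divides them by 2, so only the difference between red and white draws matters. A small check:

    def posterior_odds(red, white):
        """Odds for 'basket is 2/3 red' vs. 'basket is 2/3 white', starting from even prior odds."""
        lr_red = (2 / 3) / (1 / 3)    # each red ball multiplies the odds by 2
        lr_white = (1 / 3) / (2 / 3)  # each white ball divides the odds by 2
        return lr_red ** red * lr_white ** white

    print(posterior_odds(4, 1))    # 8.0  -> odds of 8 to 1
    print(posterior_odds(12, 8))   # 16.0 -> odds of 16 to 1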

Misconceptions of chance. People expect that a sequence of events generated by a random process will represent the essential characteristics of that process even when the sequence is short. For example, for a coin landing on heads or tails, people consider the sequence O-R-O-R-R-O more likely than the sequence O-O-O-R-R-R, which does not look random, and also more likely than the sequence O-O-O-O-R-O, which does not reflect the fairness of the coin. Thus, people expect the essential characteristics of the process to be represented not only globally, i.e. in the full sequence, but also locally, in each of its parts. A locally representative sequence, however, deviates systematically from chance expectation: it contains too many alternations and too few runs.
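In fact, any particular sequence of six tosses of a fair coin has exactly the same probability, (1/2)^6 = 1/64, regardless of how "random" it looks (O and R denote the two sides of the coin, as in the sequences above). A one-line check:

    # Probability of any specific sequence of 6 independent tosses of a fair coin
    for sequence in ["ORORRO", "OOORRR", "OOOORO"]:
        print(sequence, (1 / 2) ** len(sequence))   # each prints 0.015625 = 1/64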

Another consequence of the belief in representativeness is the well-known gambler's fallacy. After observing a long run of red on a roulette wheel, for example, most people mistakenly believe that black is now due, because a black would complete a more representative sequence than yet another red. Chance is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the equilibrium. In fact, deviations are not corrected; they are merely diluted as the random process continues.

Even experienced researchers have shown a strong belief in what may be called the law of small numbers, according to which even small samples are highly representative of the populations from which they are drawn. Their responses reflected the expectation that a hypothesis valid for the entire population will show up as a statistically significant result in a sample, regardless of sample size. As a result, researchers place too much faith in the results of small samples and grossly overestimate the replicability of such results. In the conduct of research, this bias leads to the selection of samples of inadequate size and to the over-interpretation of findings.

Insensitivity to forecast reliability. People are sometimes required to make numerical predictions, such as the future price of a stock, the demand for a product, or the outcome of a football game. Such predictions are often made by representativeness. For example, suppose someone is given a description of a company and is asked to predict its future profit. If the description is very favorable, very high profits will seem most representative of it; if the description is mediocre, an ordinary course of events will seem most representative. How favorable a description is does not depend on its reliability or on the degree to which it permits accurate prediction. Therefore, if people make predictions solely on the basis of how favorable the description is, their predictions will be insensitive to the reliability of the description and to the expected accuracy of the prediction. This way of making judgments violates normative statistical theory, in which the extremeness and range of predictions are controlled by considerations of predictability. When predictability is zero, the same prediction should be made in all cases.

The illusion of validity. People are fairly confident in predicting that a person is a librarian when given a description of that person's personality that fits the stereotype of a librarian, even if it is scant, unreliable, or outdated. Unfounded confidence, which is a consequence of the successful coincidence of the predicted outcome and the input data, can be called the illusion of validity.

Misconceptions about regression. Suppose a large group of children were tested on two similar versions of an aptitude test. If one selects ten children from among those who performed best on one of the two versions, their performance on the second version will usually prove disappointing. These observations illustrate a general phenomenon known as regression to the mean, discovered by Galton more than 100 years ago. In ordinary life we all encounter many instances of regression to the mean - comparing, for example, the heights of fathers and sons. Nevertheless, people do not develop correct intuitions about it. First, they do not expect regression in many of the contexts where it is bound to occur. Second, when they do recognize that regression has occurred, they often invent spurious causal explanations for it.
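The effect is easy to reproduce numerically. Under a simple model in which each test score is a stable "true ability" plus independent noise (all numbers below are hypothetical), the children who score highest on the first version score, on average, noticeably closer to the mean on the second:

    import random

    random.seed(0)
    n = 1000
    # Hypothetical model: observed score = true ability + independent noise on each version
    ability = [random.gauss(100, 10) for _ in range(n)]
    test1 = [a + random.gauss(0, 10) for a in ability]
    test2 = [a + random.gauss(0, 10) for a in ability]

    # Take the 10 children with the best results on the first version of the test
    top = sorted(range(n), key=lambda i: test1[i], reverse=True)[:10]
    avg1 = sum(test1[i] for i in top) / len(top)
    avg2 = sum(test2[i] for i in top) / len(top)
    print(f"Top-10 average on version 1: {avg1:.1f}")  # far above the mean of 100
    print(f"Same children on version 2:  {avg2:.1f}")  # noticeably closer to the mean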

Failure to recognize the meaning of regression can have detrimental consequences. In discussions of training flights, experienced instructors have noted that praise for an exceptionally soft landing is usually followed by a poorer landing on the next attempt, while harsh criticism after a hard landing is usually followed by an improvement in performance on the next attempt. The instructors concluded that verbal rewards were detrimental to learning while reprimands were beneficial, contrary to accepted psychological doctrine. This conclusion is invalid due to the presence of regression to the mean. Thus, failure to understand the regression effect leads to the fact that the effectiveness of punishment is overestimated and the effectiveness of reward is underestimated.

Availability. People estimate class frequency or probability of events based on the ease with which they recall instances or events. When the size of a class is estimated based on the accessibility of its members, a class whose members are easily recalled will appear larger than a class of the same size whose members are less accessible and less readily recalled.

Subjects were read a list of famous people of both sexes and then asked to judge whether the list contained more male names than female ones. Different lists were given to different groups of test takers. In some of the lists, men were more famous than women, and in others, women were more famous than men. In each of the lists, subjects mistakenly believed that the class (in this case gender) that contained more famous people was more numerous.

The ability to imagine images plays an important role in assessing the probabilities of real life situations. The risk associated with a dangerous expedition, for example, is assessed by mentally reproducing unforeseen circumstances for which the expedition is not equipped to cope. If many of these difficulties are vividly depicted, the expedition may seem extremely dangerous, although the ease with which disasters are imagined does not necessarily reflect their actual likelihood. Conversely, if the possible hazard is difficult to imagine or simply does not come to mind, the risk associated with an event may be grossly underestimated.

Illusory correlation. Long life experience has taught us that, in general, instances of large classes are recalled better and faster than instances of less frequent classes; that likely events are easier to imagine than unlikely ones; and that associative connections between events are strengthened when the events frequently occur together. As a result, a person has at his disposal a procedure (the availability heuristic) for estimating the size of a class, the probability of an event, or the frequency with which events co-occur, by the ease with which the corresponding mental operations of retrieval, construction, or association can be performed. However, these estimation procedures systematically lead to errors.

Adjustment and anchoring. In many situations, people make estimates by starting from an initial value and adjusting it. Two groups of high-school students were given 5 seconds to estimate the value of a numerical expression written on the board. One group estimated the value of 8x7x6x5x4x3x2x1, the other the value of 1x2x3x4x5x6x7x8. The average estimate for the ascending sequence was 512, while the average estimate for the descending sequence was 2,250. The correct answer, 40,320, is the same for both sequences.

Biases in the evaluation of compound events are particularly significant in the context of planning. The successful completion of a business venture, such as the development of a new product, is typically conjunctive in character: for the enterprise to succeed, each of a series of events must occur. Even if each of these events is highly likely, the overall probability of success can be quite low if the number of events is large. The general tendency to overestimate the probability of conjunctive events leads to unwarranted optimism in assessing the likelihood that a plan will succeed or that a project will be completed on time. Conversely, disjunctive event structures are typically encountered in risk assessment. A complex system, such as a nuclear reactor or the human body, will malfunction if any of its essential components fails. Even when the probability of failure of each component is small, the probability of failure of the whole system can be high if many components are involved. Because of anchoring, people tend to underestimate the probability of failure in complex systems. Thus, the direction of the anchoring bias can depend on the structure of the event: a structure like a chain of links, consisting of conjunctive events, leads to overestimation of the event's probability, whereas a funnel-like structure, consisting of disjunctive links, leads to underestimation.
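The arithmetic behind both effects is simple; the figures below are merely illustrative:

    # Conjunctive event: a plan succeeds only if every one of its 10 independent steps succeeds
    p_step = 0.9
    print(p_step ** 10)                           # ~0.349: unlikely, although each step is very likely

    # Disjunctive event: a system fails if any one of its 10 independent components fails
    p_component_failure = 0.01
    print(1 - (1 - p_component_failure) ** 10)    # ~0.096: far higher than any single component's risk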

Anchoring in estimating subjective probability distributions. In decision analysis, experts are often asked to express their opinion about a quantity in the form of a probability distribution. For example, an expert might be asked to select a number X90 such that his subjective probability that this number will be higher than the value of the Dow Jones average is 0.90.

An expert is considered properly calibrated in a given set of problems if only about 2% of the true values of the estimated quantities fall outside his stated bounds - that is, if the true values fall within the interval between X01 and X99 in 98% of the problems.

Reliance on heuristics and the prevalence of biases are not limited to laypeople. Experienced researchers are prone to the same biases when they think intuitively. It is striking that people fail to infer from lifelong experience such fundamental statistical rules as regression to the mean or the effect of sample size. Although we all encounter, throughout our lives, countless situations to which these rules could be applied, very few people discover the principles of sampling and regression on their own. Statistical principles are not learned from everyday experience.

Part II. Representativeness

Daniel Kahneman (born March 5, 1934, Tel Aviv) is an Israeli-American psychologist, one of the founders of behavioral economics and behavioral finance, which combine economics and cognitive science to explain the irrationality of people's attitudes toward risk in decision making and in the management of their behavior.

He is known for his work, carried out with Amos Tversky and others, establishing the cognitive basis of common human biases in the use of heuristics, and for the development of prospect theory. He is a laureate of the 2002 Nobel Memorial Prize in Economics “for the application of psychological methods in economic science, in particular in the study of judgment and decision-making under conditions of uncertainty” (shared with Vernon Smith), even though he conducted his research as a psychologist rather than as an economist.

Kahneman was born in Tel Aviv, spent his childhood in Paris, and moved to Palestine in 1946. He received a bachelor's degree in mathematics and psychology from the Hebrew University of Jerusalem in 1954, after which he worked in the Israel Defense Forces, mainly in the psychological department. The unit in which he served was engaged in the selection and testing of conscripts. Kahneman developed the personality assessment interview.

After leaving the army, Kahneman returned to Hebrew University, where he took courses in logic and philosophy of science. He moved to the United States in 1958 and received a PhD in psychology from the University of California, Berkeley in 1961.

Since 1969, he collaborated with Amos Tversky, who, at the invitation of Kahneman, lectured at the Hebrew University on assessing the probability of events.

He currently works at Princeton University, as well as at the Hebrew University. He serves on the editorial board of the journal Economics and Philosophy. Kahneman has never claimed to be the only one working on psychological economics; he has pointed out that everything he achieved in this field was achieved by him and Tversky together with their co-authors Richard Thaler and Jack Knetsch.

Kahneman is married to Anne Treisman, a renowned researcher of attention and memory.

Books (2)

Decision Making in Uncertainty

Decision making under uncertainty: Rules and biases.

“Decision Making in Uncertainty: Rules and Biases” is a fundamental work on the psychology of decision making.

References to individual works of these authors are quite common in academic literature, but a complete collection of these articles has been published in Russian for the first time. The publication of this book is certainly an important event for specialists in the field of management, strategic planning, decision making, consumer behavior, etc.

The book is of interest to specialists in the field of management, economics, psychology, both in theory and in practice, who deal with such a complex and interesting area of ​​human activity as decision making.