In general, we believe that the use of linear
models can help decision makers avoid the pitfalls of many judgment biases, yet this
method has only been tested in a small subset of the potentially relevant domains.
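To make the idea of a linear model concrete, here is a minimal Python sketch (our own illustration with hypothetical candidates and cue names, not an example drawn from the studies cited here): each option is scored as an equal-weight sum of standardized cue values, and the decision maker chooses the highest-scoring option rather than relying on holistic intuition.

    # Minimal sketch of an equal-weight linear model (hypothetical data):
    # options are scored by summing standardized cue values instead of
    # being judged holistically.
    from statistics import mean, pstdev

    applicants = {
        "A": {"test_score": 85, "experience_years": 2, "interview_rating": 4.5},
        "B": {"test_score": 70, "experience_years": 8, "interview_rating": 3.0},
        "C": {"test_score": 90, "experience_years": 1, "interview_rating": 2.5},
    }
    cues = ["test_score", "experience_years", "interview_rating"]

    def standardize(values):
        # Convert raw cue values to z-scores so every cue carries equal weight.
        m, s = mean(values), pstdev(values)
        return [(v - m) / s if s else 0.0 for v in values]

    z = {c: standardize([applicants[a][c] for a in applicants]) for c in cues}
    scores = {a: sum(z[c][i] for c in cues) for i, a in enumerate(applicants)}

    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:+.2f}")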
Another System 2 strategy involves taking an outsider’s perspective: trying to
remove oneself mentally from a specific situation or to consider the class of decisions to
which the current problem belongs (Kahneman and Lovallo, 1993). Taking an outsider’s
perspective has been shown to reduce decision makers’ overconfidence about their
knowledge (Gigerenzer, Hoffrage, & Kleinbölting, 1991), the time it would take them to
complete a task (Kahneman & Lovallo, 1993), and their odds of entrepreneurial success
(Cooper, Woo, and Dunkelberg, 1988). Decision makers may also be able to improve
their judgments by asking a genuine outsider for his or her view regarding a decision.
Other research on the power of shifting people toward System 2 thinking has
shown that simply encouraging people to “consider the opposite” of whatever decision
they are about to make reduces errors in judgment due to several particularly robust
decision biases: overconfidence, the hindsight bias, and anchoring (Larrick, 2004;
Mussweiler, Strack, & Pfeiffer, 2000). Partial debiasing of errors in judgment typically
classified as the result of “biases and heuristics” (see Tversky and Kahneman, 1974) has
also been achieved by having groups rather than individuals make decisions, training
individuals in statistical reasoning, and making people accountable for their decisions
(Larrick, 2004; Lerner & Tetlock, 1999).
One promising debiasing strategy is to undermine the cognitive mechanism that is
hypothesized to be the source of bias with a targeted cue to rely on System 2 processes
(Slovic and Fischhoff, 1977). In a study designed to reduce hindsight bias (the tendency
to exaggerate the extent to which one could have anticipated a particular outcome in
foresight), Slovic and Fischhoff developed a hypothesis about the mechanism producing
the bias. They believed that hindsight bias resulted from subjects’ failure to use their
available knowledge and powers of inference. Armed with this insight, Slovic and
Fischhoff hypothesized and found that subjects were more resistant to the bias if they
were provided with evidence contrary to the actual outcome. This result suggests that the
most fruitful directions for researchers seeking to reduce heuristics and biases may be
those predicated upon “some understanding of and hypotheses about people’s cognitive
processes” (Fischhoff, 1982) and how they might lead to a given bias. Along these lines,
another group of researchers hypothesized that overclaiming credit results from focusing
only on estimates of one’s own contributions and ignoring those of others in a group.
They found that requiring people to estimate not only their own contributions but also
those of others reduces overclaiming (Savitsky, Van Boven, Epley, and Wight, 2005).
Another promising stream of research that examines how System 2 thinking can
be leveraged to reduce System 1 errors has shown that analogical reasoning can be used
to reduce bounds on people’s awareness (see Bazerman and Chugh 2005 for more on
bounded awareness). Building on the work of Thompson, Gentner, and Loewenstein
(2000), both Idson, Chugh, Bereby-Meyer, Moran, Grosskopf, and Bazerman (2004) and
Moran, Ritov, and Bazerman (2008) found that individuals who were encouraged to see
and understand the common principle underlying a set of seemingly unrelated tasks
subsequently demonstrated an improved ability to discover solutions in a different task
that relied on the same underlying principle. This work is consistent with Thompson et
al.’s (2000) observation that surface details of learning opportunities often distract us
from seeing important underlying, generalizable principles. Analogical reasoning
appears to offer hope for overcoming this barrier to decision improvement.
Work on joint-versus-separate decision making also suggests that people can
move from suboptimal System 1 thinking toward improved System 2 thinking when they
consider and choose between multiple options simultaneously rather than accepting or
rejecting options separately. For example, Bazerman, White and Loewenstein (1995)
find evidence that people display more bounded self-interest (Jolls, Sunstein, and Thaler,
1998) – focusing on their outcomes relative to those of others rather than optimizing their
own outcomes – when assessing one option at a time than when considering multiple
options side by side. Bazerman, Loewenstein and White (1992) have also demonstrated
that people exhibit less willpower when they weigh choices separately rather than jointly.
The research discussed above suggests that any change in a decision’s context that
promotes cool-headed System 2 thinking has the potential to reduce common biases
resulting from hotheadedness, such as impulsivity and concern about relative outcomes.
Research on joint-versus-separate decision making highlights the fact that our first
impulses tend to be more emotional than logical (Moore and Loewenstein, 2004). Some
additional suggestive results in this domain include the findings that willpower is
weakened when people are placed under extreme cognitive load (Shiv and Fedorikhin,
1999) and when they are inexperienced in a choice domain (Milkman, Rogers and
Bazerman, 2008). Other research has shown that people make less impulsive, suboptimal
decisions in many domains when they make choices further in advance of their
consequences (see Milkman, Rogers and Bazerman, in press, for a review). A question we
pose in light of this research is: when and how can carefully selected contextual changes
that promote increased cognition be leveraged to reduce the effects of decision-making
biases?
Another Important Question: Can We Leverage System 1 to Improve Decision Making?
Albert Einstein once said, “We can't solve problems by using the same kind of
thinking we used when we created them.” However, it is possible that the unconscious
mental system can, in fact, do just that. In recent years, a new general strategy for
improving biased decision making has been proposed that leverages our automatic
cognitive processes and turns them to our advantage (Sunstein and Thaler, 2003). Rather
than trying to change a decision maker’s thinking from System 1 to System 2, this
strategy tries to change the environment so that System 1 thinking will lead to good
results. This type of improvement strategy, which Thaler and Sunstein discuss at length
in their book Nudge (2008), calls upon those who design situations in which choices are
made (whether they be the decision makers themselves or other “choice architects”) to
maximize the odds that decision makers will make wise choices given known decision
biases. For example, a bias towards inaction creates a preference for default options
(Ritov and Baron, 1992). Choice architects can use this insight to improve decision
making by ensuring that the available default is the option that is likely to be best for
decision makers and/or society. Making 401(k) enrollment the default, for instance, has been
shown to significantly increase employees’ savings rates (Benartzi and Thaler, 2007).
There is also some suggestive evidence that leveraging System 1 thinking to
improve System 1 choices may be particularly effective in the realm of decision-making
biases that people do not like to admit or believe they are susceptible to. For instance,
many of us are susceptible to implicit racial bias but feel uncomfortable acknowledging
this fact, even to ourselves. Conscious efforts to simply “do better” on implicit bias tests
are usually futile (Nosek, Greenwald, & Banaji, 2007). However, individuals whose
mental or physical environment is shaped by the involvement of a black experimenter
rather than a white experimenter show less implicit racial bias (Lowery, Hardin, &
Sinclair, 2001; Blair, 2002). The results of this “change the environment” approach
contrast sharply with the failure of “try harder” solutions, which rely on conscious effort.
In summary, can solutions to biases that people are unwilling to acknowledge be found in
the same automatic systems that generate this class of problems?
Conclusion
People put great trust in their intuition. The past 50 years of decision-making
research challenges that trust. A key task for psychologists is to identify how and in what
situations people should try to move from intuitively compelling System 1 thinking to
more deliberative System 2 thinking, and to design situations that make System 1 thinking
lead to good results.