Organizations expect to see consistency in the decisions of their employees, but humans are unreliable. Judgments can vary a great deal from one individual to the next, even when people are in the same role and supposedly following the same guidelines. And irrelevant factors, such as mood and the weather, can change one person's decisions from occasion to occasion. This chance variability of decisions is called noise, and it is surprisingly costly to companies, which are usually completely unaware of it. Nobel laureate Daniel Kahneman, a professor of psychology at Princeton, and Andrew M. Rosenfield, Linnea Gandhi, and Tom Blaser of TGG Group explain how organizations can perform a noise audit by having members of a professional unit evaluate a common set of cases. The degree to which their assessments vary provides the measure of noise. If the problem is severe, firms can pursue a number of remedies. The most radical is to replace human judgment with algorithms. Unlike people, algorithms always return the same output for any given input, and research shows that their predictions and decisions are often more accurate than those made by experts. Although algorithms may seem daunting to construct, the authors describe how to build them with input data on a small number of cases and some simple commonsense rules. But if applying formulas is politically or operationally infeasible, companies can still set up procedures and practices that will guide employees to make more-consistent decisions. Insets: "Types of Noise and Bias"; "How to Build a Reasoned Rule."
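The Python sketch below is a minimal illustration of the two procedures the abstract mentions: a noise audit, summarized here as the average relative disagreement among evaluators who score the same cases, and an equal-weight "reasoned rule" built from standardized commonsense predictors. The data, the variable names, and the specific disagreement measure are illustrative assumptions for this sketch, not the authors' published formulas.

```python
import numpy as np

def noise_audit(judgments):
    """Hypothetical noise-audit summary.

    `judgments` is a cases x evaluators array: each row holds the
    assessments that every member of the unit gave to one common case.
    For each case we measure how widely the evaluators disagree,
    expressed relative to that case's mean judgment, and then average
    the result across cases (an assumed operationalization of "noise").
    """
    judgments = np.asarray(judgments, dtype=float)
    per_case_spread = judgments.std(axis=1, ddof=1)  # disagreement within each case
    per_case_mean = judgments.mean(axis=1)
    return float(np.mean(per_case_spread / per_case_mean))

def reasoned_rule(X, directions):
    """Equal-weight rule score (a sketch, not the authors' exact recipe).

    X is a cases x predictors array of commonsense variables, and
    `directions` holds +1 or -1 per predictor, indicating whether a
    larger value should raise or lower the score. Each predictor is
    standardized and the signed values are simply averaged, so no
    historical outcome data are needed to fit weights.
    """
    X = np.asarray(X, dtype=float)
    z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # common scale for all predictors
    return (z * np.asarray(directions)).mean(axis=1)  # equal weights, signed by direction

# Toy illustration with made-up numbers: five loan applications scored
# by four underwriters, then re-scored by a two-variable reasoned rule.
scores = [[52, 60, 48, 55],
          [70, 85, 66, 78],
          [30, 41, 36, 33],
          [64, 58, 71, 62],
          [45, 50, 39, 47]]
print(f"average relative disagreement: {noise_audit(scores):.0%}")

applications = [[620, 0.45],  # hypothetical credit score, debt-to-income ratio
                [710, 0.30],
                [580, 0.55],
                [690, 0.25],
                [650, 0.40]]
print("rule scores:", np.round(reasoned_rule(applications, [+1, -1]), 2))
```

The choice of an equal-weight rule over fitted regression weights reflects the abstract's point that useful formulas can be built from a small number of cases and simple commonsense rules; any monotone combination of standardized predictors removes the occasion-to-occasion variability that constitutes noise.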