Abstract of Identifying experts in the crowd for evaluation of engineering designs

Alexander Burnap, Richard Gerth, Richard Gonzalez, Panos Y. Papalambros

Crowdsourcing offers the opportunity to gather evaluations of concept designs from evaluators who might otherwise not have been considered, thus leveraging additional expertise to improve decision making during the early stages of the design process. Previous research has shown that crowdsourcing may fail to evaluate even ‘simple’ engineering design concepts correctly, because non-expert evaluations overwhelm the aggregate crowd evaluation. This article proposes using expertise prediction heuristics to automatically identify experts and filter out non-experts prior to a crowdsourced evaluation. We conducted an experiment to test four common expertise prediction heuristics: (1) evaluator demographics, (2) evaluation reaction time, (3) mechanical reasoning aptitude, and (4) ‘easy and known’ versions of the actual ‘difficult and unknown’ design evaluation task. The results show statistically significant relationships for all four heuristics; however, most of the predictive power comes from performance on the easy versions of the difficult tasks. A combination of these heuristics offers a practical way to separate experts from non-experts, thus improving crowdsourced evaluation of early-stage design concepts.
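The filtering step the abstract describes can be pictured as scoring each evaluator on the four heuristic signals and aggregating ratings only from those who clear a threshold. The sketch below is a hypothetical illustration, not the authors' method: the signal names, weights, and threshold are invented for exposition, with the easy-task signal weighted highest to reflect the abstract's finding that it carries most of the predictive power.

```python
# A minimal sketch (not the authors' implementation) of filtering crowd
# evaluators with expertise prediction heuristics before aggregating their
# design evaluations. All field names, weights, and thresholds are
# hypothetical.

from statistics import mean

def expertise_score(evaluator: dict) -> float:
    """Combine four heuristic signals (each normalized to 0..1) into one score.

    Hypothetical signals, one per heuristic in the abstract:
      - 'demographic_match':  relevant education/occupation
      - 'reaction_time_ok':   plausible time spent on the evaluation
      - 'mech_reasoning':     mechanical-reasoning aptitude test score
      - 'easy_task_accuracy': accuracy on 'easy and known' versions of the
                              design evaluation task
    """
    weights = {
        "demographic_match": 0.1,
        "reaction_time_ok": 0.1,
        "mech_reasoning": 0.2,
        "easy_task_accuracy": 0.6,  # weighted highest: most predictive power
    }
    return sum(w * evaluator[k] for k, w in weights.items())

def filtered_crowd_estimate(evaluators, threshold=0.5):
    """Aggregate ratings only from evaluators predicted to be experts."""
    expert_ratings = [e["rating"] for e in evaluators
                      if expertise_score(e) >= threshold]
    return mean(expert_ratings) if expert_ratings else None

# Usage example with two hypothetical evaluators:
crowd = [
    {"demographic_match": 1.0, "reaction_time_ok": 1.0,
     "mech_reasoning": 0.9, "easy_task_accuracy": 0.95, "rating": 4.0},
    {"demographic_match": 0.0, "reaction_time_ok": 0.2,
     "mech_reasoning": 0.3, "easy_task_accuracy": 0.40, "rating": 1.5},
]
print(filtered_crowd_estimate(crowd))  # -> 4.0 (non-expert filtered out)
```

In the paper's setting the weights would presumably be estimated from the experimental data rather than fixed by hand; fitting a classifier such as a logistic regression over the same four signals would be a natural way to learn them.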

