How Reviewers Feel About Criteria

The “Factors” tab shows a table indicating how strongly the reviewers felt about each factor (review criterion). In this example, “Higher Value” and “Lower Cost” elicited much more passion than “Lower Risk”.

What this means is easiest to see with a picture of the sliders a review team used when casting their pairwise reviews. “Passion” is simply a measure of how widely they swung the sliders for each factor:



Let’s assume the blue arrows show the average “swing” for each factor: how far the reviewers moved the sliders when creating their reviews. In this case, many pairs of ideas generated strong opinions on “have more value for new customers”, while “have lower technical risk” drew a more tepid response. “Passion” is simply the standard deviation of all pairwise votes for each factor, normalized so the values for all factors add up to 100%. It measures the width of the swing, not good versus bad.
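As a small sketch of that definition (the slider encoding and the vote data here are hypothetical, not the tool’s actual format), passion can be computed as the standard deviation of each factor’s pairwise votes, normalized across factors:

```python
import statistics

# Hypothetical pairwise votes per factor: each vote is a slider position
# from -1.0 (left idea strongly preferred) to +1.0 (right idea preferred).
votes = {
    "Higher Value": [0.9, -0.8, 0.7, -0.9, 0.8],
    "Lower Cost":   [0.8, -0.7, 0.9, -0.6, 0.7],
    "Lower Risk":   [0.1, -0.2, 0.0, 0.1, -0.1],  # sliders left near center
}

# Passion = standard deviation of each factor's votes,
# normalized so the values for all factors sum to 100%.
stdevs = {factor: statistics.pstdev(v) for factor, v in votes.items()}
total = sum(stdevs.values())
passion = {factor: 100 * s / total for factor, s in stdevs.items()}
```

With these sample votes, “Lower Risk” gets a much smaller passion value than the other two factors, because its votes barely left the center of the slider.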
What might this mean? There are several possibilities:

  • A “low passion” result (like the “Lower Risk” factor in this example) might indicate that reviewers didn’t have strong opinions on this factor, that they felt unable to give a useful response, or even that they found the wording confusing so they tended to leave this slider in the center.
  • A “high passion” result says that reviewers found the ideas highly polarized on this factor. There might have been two general types of ideas, which when paired against each other generated very strong opinions.

The Apply to Sliders button jumps back to the table view and uses these “passion” values to set the factor weightings. Because the values represent the strength of the reviewers’ opinions, this is a reasonable starting point for exploration.
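To illustrate what passion-derived weightings do (the weights, ideas, and scores below are hypothetical, not from the tool), a weighted combined score might look like:

```python
# Passion values used as factor weights (they sum to 100%).
weights = {"Higher Value": 45.0, "Lower Cost": 40.0, "Lower Risk": 15.0}

# Hypothetical per-factor scores (0-100) for two ideas.
scores = {
    "Idea A": {"Higher Value": 80, "Lower Cost": 55, "Lower Risk": 90},
    "Idea B": {"Higher Value": 60, "Lower Cost": 85, "Lower Risk": 50},
}

def combined(idea):
    """Weighted average of an idea's factor scores."""
    return sum(weights[f] * scores[idea][f] for f in weights) / 100

# Idea A: (45*80 + 40*55 + 15*90) / 100 = 71.5
```

Because “Lower Risk” carries little weight here, Idea A’s strong risk score barely moves its combined result; the high-passion factors dominate.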

Correlations between factors (the review criteria) are also calculated. The three factors shown at the right are nicely independent: none is highly correlated with any other:



High between-factor correlations indicate that the reviewers didn’t distinguish between them. For example, a factor like “Technical Difficulty” is likely to be highly correlated with “Time to Implement”, so it wastes reviewers’ time to ask both. The best reviews have the fewest factors and ask distinct questions.
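As a sketch of how such overlap shows up (the factor names and per-idea scores below are invented for illustration), a Pearson correlation near 1 between two factors flags them as redundant:

```python
import statistics

# Hypothetical per-idea scores on two factors suspected of overlapping.
difficulty        = [2, 5, 8, 3, 9, 6]
time_to_implement = [3, 5, 9, 2, 8, 7]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

r = pearson(difficulty, time_to_implement)  # close to 1: factors overlap
```

A value near 0 would mean the factors capture independent information; a value near 1 suggests one of the two questions could be dropped from future reviews.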

The Factors analysis is most useful for those managing review processes: it offers an indication of a review’s effectiveness, and those lessons can be applied to future Challenges.