Introduction to Head to Head

Crafting a clear statement of a business need, sending it out to lots of people, and getting back lots of ideas and comments is the invigorating diverge process. But to be useful, that feedback must be evaluated, distilled, and used to make decisions and take actions. This converge process is usually either assigned to a formal review team with awkward definitions and poor tools, or tossed back to the crowd in the hope that they’ll refine the ideas with 5-star (or similarly simplistic) ratings.

Both approaches usually disappoint: formal reviews bog down in up-front definitions and complex tools, and 5-star votes are incomplete, clustered at 4-5, and seem more about popularity than the business goal.

Head to Head reviewing addresses this problem. It gives you a structured process that’s easy to set up, easy and fast for the reviewers, balanced, fair, and statistically validated, with rich reporting to support discussion and decision-making.




The concept is familiar from marketing tests for soft drinks. If you sipped Brand A and someone asked “How sweet is it?” youʼd ask “Compared to what? Sugar? Salt? Coffee? Chocolate?” and the conversation would digress. But if you compare two brands side-by-side, itʼs easy to answer “which is sweeter?”: itʼs Brand A, Brand B, or about-the-same.

You could add more criteria to the test: not only “which is sweeter” but also “which is fizzier”, “which has a richer color”, and “which do you prefer overall”. Once you’ve tasted both, it’s quick to answer all of these, and the information that comes back to the sponsor is far more useful: the sponsor learns not only “how are we doing” but also how to improve.

Head to Head works the same way in Idea Central. The event sponsor lists a few criteria on which the reviewers can offer informed opinions (things like “lower cost to proof-of-concept”, “more attractive in emerging markets”, “more technically feasible”), and the team sees ideas in pairs, with simple sliders to express a comparative opinion on each criterion. There’s guidance on how many reviews are needed for a statistically sound result, and plenty of tabular and graphic tools let the reviewers and sponsor explore the ideas and see how the criteria weighting may change the overall outcome.

Our early adopters tell us it’s fast, easy, fun, and gives them a really useful set of decision tools. It makes the converge half of the innovation process as easy and scalable as the earlier diverge half, which means you’re more likely to run the process, get useful results, and go forward to decisions and actions.
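
To make the mechanics concrete, here is a minimal sketch of how a single pairwise review could be represented as data. The field names, idea identifiers, and the -1 to +1 slider range are illustrative assumptions, not Idea Central’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class PairwiseReview:
    """One reviewer's comparison of two ideas (hypothetical structure)."""
    reviewer: str
    idea_a: str
    idea_b: str
    # One slider value per criterion: -1.0 strongly favors idea_a,
    # +1.0 strongly favors idea_b, 0.0 means "about the same".
    sliders: dict[str, float]

review = PairwiseReview(
    reviewer="alice",
    idea_a="IDEA-042",
    idea_b="IDEA-107",
    sliders={
        "lower cost to proof-of-concept": -0.6,      # favors IDEA-042
        "more attractive in emerging markets": 0.2,  # slightly favors IDEA-107
        "more technically feasible": 0.0,            # about the same
    },
)
```

Reviews in this shape are what a Condorcet-style tally (see the History section below) turns into an overall rank order.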

History

The idea behind Head to Head Reviews goes back to the late 18th century, when the Marquis de Condorcet, political scientist and mathematician, outlined a pairwise voting method intended as a fair and equitable way to run an election. Condorcet’s method returns a full rank-order of public preference for the candidates, suitable for electing a president (one winner) or an entire parliament (many winners).

A Condorcet vote is considered exceptionally fair and unbiased, but its application has been limited by the complexity of determining the final “right answer,” as the sketch illustrates:



Here we have three voters (or reviewers), six options to review (A,B,C,D,E,F), and pairwise reviews represented by arrows. Alice thinks that option A is better than C, A is better than D, and F is better than D. So far she hasn’t seen B or E. Similarly the arrows show Bob’s and Chris’ pairwise votes.

Two things are evident: there is probably a “best answer” composed of all of the reviewers’ input, and it’s not obvious how to arrive at it! Fortunately this problem has been thoroughly addressed since Condorcet’s time and we can handle the complexity by embedding the validated algorithms into Idea Central.
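
The product doesn’t document which specific algorithm it embeds, so as a purely illustrative sketch, here is one simple Condorcet-consistent aggregation, a Copeland-style count of pairwise wins, showing how a pile of pairwise votes becomes a rank order. Alice’s three votes come from the sketch above; the ones attributed to Bob and Chris are invented for the example.

```python
from collections import defaultdict
from itertools import combinations

# Each vote is (winner, loser) from one pairwise review.
votes = [
    ("A", "C"), ("A", "D"), ("F", "D"),   # Alice (from the sketch)
    ("B", "C"), ("A", "B"), ("E", "F"),   # Bob (invented)
    ("A", "E"), ("D", "C"), ("B", "F"),   # Chris (invented)
]

options = sorted({o for pair in votes for o in pair})

# Count how often each option beats each other option.
wins = defaultdict(int)
for winner, loser in votes:
    wins[(winner, loser)] += 1

# Copeland-style score: +1 for every rival an option beats on balance,
# -1 for every rival it loses to on balance.
score = {o: 0 for o in options}
for x, y in combinations(options, 2):
    if wins[(x, y)] > wins[(y, x)]:
        score[x] += 1; score[y] -= 1
    elif wins[(y, x)] > wins[(x, y)]:
        score[y] += 1; score[x] -= 1

ranking = sorted(options, key=lambda o: score[o], reverse=True)
print(ranking)  # ['A', 'B', 'E', 'D', 'F', 'C'] for these example votes
```

Production Condorcet methods add careful handling of ties and preference cycles; this sketch is only meant to show the shape of the computation.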

Our task is a little different from a political election. Firstly, because we may have any number of factors, it’s as if a political ballot asked you multiple questions (“Which candidate is smarter / more experienced / better in foreign policy / a better orator”). Secondly, because our coverage is usually quite different: in a national election you might have 3 candidates and 30 million voters with one vote each, which means 10 million-fold coverage. In an Idea Central event you might have 100 ideas and 5 reviewers. If each reviewer does 20 reviews, you just achieve 1-fold coverage. If each reviewer sees all ideas, you’d achieve 5-fold coverage. The more consistency between reviewers and the more coverage, the more the final rank-order approaches “absolute truth.”

We’ve evaluated several different Condorcet methods with real data and extensive simulation, and have implemented the one that performs by far the best at moderate reviewer agreement and coverage. For example, with reviewers whose votes are 50% aligned and 50% random, at just 2-fold coverage our method produces a rank-order which is 98% accurate.
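
Our evaluation harness isn’t published, but a toy simulation along these lines illustrates what “partly aligned reviewers at low coverage” means. It reuses the simple Copeland-style tally from the previous sketch rather than the method actually shipped in Idea Central, and it assumes that coverage means total reviews divided by the number of ideas; both are assumptions made for illustration only.

```python
import random
from itertools import combinations

def simulate(n_ideas=20, coverage=2, align=0.5, seed=1):
    """Toy check: how well does a simple pairwise tally recover a known
    'true' ranking from reviewers who follow it only part of the time?"""
    rng = random.Random(seed)
    n_reviews = coverage * n_ideas        # assumed definition of coverage
    ideas = list(range(n_ideas))          # idea 0 is truly best, 1 next, ...

    # Generate pairwise votes: 'align' of the time follow the true order,
    # otherwise pick a winner at random.
    votes = []
    for _ in range(n_reviews):
        a, b = rng.sample(ideas, 2)
        if rng.random() < align:
            winner, loser = (a, b) if a < b else (b, a)
        else:
            winner = rng.choice((a, b))
            loser = b if winner == a else a
        votes.append((winner, loser))

    # Copeland-style tally (same idea as the previous sketch).
    beats = {}
    for w, l in votes:
        beats[(w, l)] = beats.get((w, l), 0) + 1
    score = {i: 0 for i in ideas}
    for x, y in combinations(ideas, 2):
        if beats.get((x, y), 0) > beats.get((y, x), 0):
            score[x] += 1; score[y] -= 1
        elif beats.get((y, x), 0) > beats.get((x, y), 0):
            score[y] += 1; score[x] -= 1
    recovered = sorted(ideas, key=lambda i: score[i], reverse=True)

    # Accuracy = fraction of idea pairs that the recovered ranking orders
    # the same way as the true ranking.
    pos = {idea: rank for rank, idea in enumerate(recovered)}
    agree = sum(pos[x] < pos[y] for x, y in combinations(ideas, 2))
    return agree / (n_ideas * (n_ideas - 1) / 2)

print(f"pairwise accuracy: {simulate():.0%}")
```

The accuracy it prints will of course differ from the 98% figure quoted above, which refers to the production method and its much more extensive simulations.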

Innovation Central is designed by default for a minimum of 1-fold coverage by each reviewer (that is, each reviewer should see every idea at least once) and 3-fold coverage overall (that is, a review team of at least 3 people). The Condorcet math can do a very good job with even less information, but these minima help assure fairness across ideas and diversity of reviewer knowledge and opinion.