The upper table simply shows the progress at this point in time: how many pairwise reviews each reviewer has completed. “Coverage” expresses the same count relative to the number of ideas. In the example there were 361 ideas, so for Alvin Wolfe, 145 pairs gives 145/361 ≈ 0.4-fold coverage.
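As a minimal sketch of that arithmetic (the numbers are simply the ones from the example above):

```python
# Coverage = pairwise reviews completed, relative to the number of ideas.
pairs_reviewed = 145   # pairwise reviews completed by Alvin Wolfe
num_ideas = 361        # ideas in the Challenge

coverage = pairs_reviewed / num_ideas
print(f"{coverage:.1f}-fold coverage")   # -> 0.4-fold coverage
```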
The lower table shows how similar (or different) the reviewers are in their thinking. Behind the scenes, the entire analysis is run separately for each reviewer, and that reviewer’s top-to-bottom rank order of the ideas is calculated. The squares show the Pearson correlation coefficients between these idea ranks, color-coded from green (strong agreement) through yellow (moderate agreement) to red (no agreement, or disagreement). In this example, the gray “cap” on the columns shows that Antal and the others form two clear clusters. Reviewers with less than 50% coverage aren’t shown, because there isn’t enough information to put their ideas into a rank order.
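For readers who want to see the mechanics, here is a minimal sketch of a Pearson correlation between two reviewers’ rank orders, assuming each reviewer’s pairwise reviews have already been turned into a rank for every idea (1 = best). The reviewer names, ranks, and color thresholds are made up for illustration and are not the product’s actual cut-offs.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ranks that two reviewers assign to the same five ideas.
ranks_reviewer_a = [1, 2, 3, 4, 5]
ranks_reviewer_b = [2, 1, 3, 5, 4]

r = pearson(ranks_reviewer_a, ranks_reviewer_b)
# Illustrative color bands only; the real thresholds aren't documented here.
color = "green" if r > 0.7 else "yellow" if r > 0.3 else "red"
print(f"r = {r:.2f} -> {color}")   # -> r = 0.80 -> green
```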
In this example most of the reviewers see things differently. This is good: diversity of opinion in a review team is desirable because it promotes balance and critical thinking and, if the team can meet face-to-face, drives better discussions. Typically the differences reflect a different balance of value-vs-cost or value-vs-risk. A good Innovation Central Consultant and/or Challenge Sponsor facilitates this discussion and helps the review team learn from their differences.
Because the analysis uses the current slider settings, these correlations will change if you change the factor weights. For example, to see how well the reviewers agree on any single factor, go back to the Table page, swing that factor’s slider over to 100%, and come back to the Reviewers tab.
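To see why the slider position matters, here is a rough sketch of the principle only; the per-idea factor scores, the factor names, and the simple weighted-sum formula are assumptions for illustration, not the product’s actual scoring model.

```python
# Each idea has a score per factor; the slider weights blend those scores into one
# overall score, which drives the rank order the correlations are computed from.
factor_scores = {                       # hypothetical per-idea factor scores
    "Idea A": {"value": 0.9, "feasibility": 0.2},
    "Idea B": {"value": 0.5, "feasibility": 0.8},
}

def overall_score(scores, weights):
    """Weighted sum of factor scores."""
    return sum(scores[factor] * weight for factor, weight in weights.items())

balanced   = {"value": 0.5, "feasibility": 0.5}   # both sliders mid-way
value_only = {"value": 1.0, "feasibility": 0.0}   # "value" slider swung to 100%

for idea, scores in factor_scores.items():
    print(idea,
          round(overall_score(scores, balanced), 2),    # Idea B ranks ahead of Idea A
          round(overall_score(scores, value_only), 2))  # the order flips
```

With both sliders mid-way, Idea B ranks ahead of Idea A; with the value slider at 100%, the order flips. Because each reviewer’s rank order shifts like this as the weights move, the reviewer-to-reviewer correlations shift too.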