Data Consistency for List Grids

How we ensure data consistency

In order to understand and mitigate any potential risk of inconsistent data between the traditional and list grids, we asked a mixture of scalar and categorical grid questions in 7 different side-by-side studies. Comparing means and top-boxes (scalar grids) and individual response categories (categorical grids), we made 178 comparisons and found no significant differences in 96% of them.

This matches the consistency rate we would expect when running two identical studies, so we can confidently conclude that list grids have no impact on the data.
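
As a point of reference, and assuming the comparisons were run at the conventional 95% confidence level (the exact significance level is not stated here), roughly 5% of comparisons would be expected to flag a difference purely by chance, even between two identical studies. A minimal sketch of that back-of-envelope check, under that assumption:

```python
# Back-of-envelope check, assuming a conventional alpha = 0.05 per comparison
# (the study's exact significance level is an assumption, not stated above).
from scipy.stats import binomtest

n_comparisons = 178
n_flagged = round(n_comparisons * (1 - 0.96))   # ~7 comparisons showed a difference
alpha = 0.05                                     # chance rate of false positives per test

expected_by_chance = n_comparisons * alpha       # ~9 flags expected with no real effect
# If flags arise purely by chance, their count follows Binomial(178, 0.05);
# check whether observing ~7 flags is consistent with that.
p_value = binomtest(n_flagged, n_comparisons, alpha).pvalue

print(f"flagged: {n_flagged}, expected by chance: {expected_by_chance:.0f}, p = {p_value:.2f}")
```

With roughly 7 flagged differences against roughly 9 expected by chance alone, the observed rate sits comfortably within what two identical studies would produce.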

[Screenshots: Traditional Grid vs. List Grid]


Validation test details


The purpose of the test was to determine whether the Grid-list causes any significant changes in the data we collect, compared to the Traditional grid view.


Product: Concept Test

Market: USA

Sample Size: 150 per cell (600 total)

Category: Chocolate


We fielded 4 different cells of n=150 each:

  1. Traditional Desktop
  2. Grid-list Desktop
  3. Traditional Mobile
  4. Grid-list Mobile

Sample performance:

  • Grid-list Mobile has a lower dropout rate than Traditional Mobile (12.5% vs. 18%); on Desktop the two are on par.
  • Quality terminations are broadly identical.
  • Mean LOI (length of interview) is also remarkably similar: traditional and grid-list are virtually identical on Desktop, and grid-list is slightly shorter on Mobile.

In summary, grid-list performs no worse than traditional on all sample metrics, and slightly better on Dropouts.


| Metric | Traditional Desktop | Grid-list Desktop | Traditional Mobile | Grid-list Mobile |
| --- | --- | --- | --- | --- |
| Dropout Rate | 13% | 13.5% | 18% | 12.5% |
| Quality Terminations | 10 | 9 | 11 | 10 |
| IR (incidence rate) | 51.5% | 43.5% | 48% | 49% |
| Mean LOI | 6 min 37 sec | 6 min 37 sec | 7 min 07 sec | 7 min 03 sec |

Data consistency:


Across 7 scalar KPIs on an 11-pt scale, there were no significant differences between any of the cells.


| KPI | Traditional Desktop | Grid-list Desktop | Traditional Mobile | Grid-list Mobile |
| --- | --- | --- | --- | --- |
| Overall Appeal | 8.1 | 8.2 | 8.1 | 8.3 |
| Brand Linkage | 8.8 | 8.8 | 8.7 | 8.6 |
| Unique and Different | 7.1 | 6.9 | 7.3 | 7.2 |
| Relevance | 7.7 | 7.9 | 7.9 | 7.9 |
| Believability | 8.3 | 8.5 | 8.3 | 8.3 |
| Brand Feeling | 7.5 | 7.4 | 7.4 | 7.5 |
| Behavior Change | 7.4 | 7.5 | 7.7 | 7.9 |

The Emotions question appears to encourage more selections in the Grid-list format, possibly a consequence of the more compact, user-friendly layout.


| Question | Traditional Desktop | Grid-list Desktop | Traditional Mobile | Grid-list Mobile |
| --- | --- | --- | --- | --- |
| Average number of Emotions selected | 1.3 | 1.8 | 1.6 | 2.1 |
| Average number of Messages selected | 3.3 | 3.2 | 3.2 | 3.4 |

There were two sets of grid questions designed to test the consequences of the changed design; they returned no significant differences.
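
For background on the kind of cell-vs-cell checks these tables imply, the sketch below shows one plausible approach for two cells of n=150: a two-sample t-test for scalar KPI means and a two-proportion z-test for categorical selection rates. The actual tests used in the study are not documented here, so the function choices and the placeholder data are assumptions for illustration only.

```python
# Illustrative sketch only: the study's exact testing method is not specified.
# Assumes a two-sample t-test for scalar means and a two-proportion z-test for
# categorical selection rates, compared at alpha = 0.05, with placeholder data.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(0)

# Scalar KPI (e.g. Overall Appeal on an 11-pt scale, 0-10), n = 150 per cell.
traditional_scores = rng.integers(0, 11, size=150)   # placeholder responses
grid_list_scores = rng.integers(0, 11, size=150)     # placeholder responses
_, p_means = ttest_ind(traditional_scores, grid_list_scores)

# Categorical grid item: number of respondents selecting a given response per cell.
selected = np.array([52, 58])          # placeholder counts (Traditional, Grid-list)
n_per_cell = np.array([150, 150])
_, p_props = proportions_ztest(selected, n_per_cell)

print(f"scalar means: p = {p_means:.2f}   selection rates: p = {p_props:.2f}")
```

A p-value above 0.05 in either test would be read as "no significant difference" between the two layouts for that question.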


Verbatim quality:


Brand recall is virtually identical, at the 86-87% mark for all cells.

A skim of the Suggestions for Improvement and Likes questions found very few nonsensical verbatims (no more than 2-3 per cell per question).


Conclusion

Grid-lists perform on par with or better than Traditional grids on every performance indicator in Concept Test.