Flat test results

What does this mean?

If results in your project are unexpectedly flat or very similar to each other, the most likely explanation is that your chosen target audience perceived what you tested as very similar. In other words, whatever you are attempting to compare is either not noticeable or not meaningful to the people you surveyed. This, in and of itself, can be an insight.

For example, imagine you tested two iterations of the same package, one red and one blue. When you review your results, you find that both packs perform below the norm for behavior change and uniqueness. While the stimuli you tested may perform similarly against the norm (indicated in green, amber/gray, or red), there are ways for you to dig deeper into your results.

Finding the why behind your testing results

To dig deeper into your results and assess whether they are truly similar, you can take the following course of action. The exact charts to review will vary by solution, but the overall recommendation remains the same: interrogate the data to see how results vary between your stimuli:

  • Look at the percentile chart for an added level of granularity.
  • Focus on the key performance indicators that are most relevant for this particular test; this could be relevance in some instances and uniqueness in others.
  • Review how your stimuli perform against the key attributes of your brand.
  • Review the open-ended data for comments that might indicate any differences.
  • Filter results by key cohorts of your sample to look for preferences within target groups.
  • Leverage advanced analytics to determine which of your stimuli is truly performing better, for example by checking whether the difference between them is statistically significant (see the sketch after this list).
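If you can export the underlying counts for a metric, one way to judge whether two stimuli are truly similar, rather than just close, is a simple two-proportion significance test. The sketch below is an illustration of that statistical idea only, not a feature of any particular reporting tool; the metric ("behavior change" top-box score) and the counts and sample sizes are hypothetical placeholders.

```python
# Minimal sketch: is the gap between two stimuli's scores larger than sampling noise?
# All figures below are hypothetical, not platform output or norms.
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)          # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))      # standard error of the difference
    z = (p_a - p_b) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, p_value

# Example: red pack 123/300 top-box (41%) vs. blue pack 114/300 top-box (38%)
z, p = two_proportion_z_test(123, 300, 114, 300)
print(f"z = {z:.2f}, p = {p:.3f}")
```

In this hypothetical case the p-value comes out well above 0.05, which would suggest the two packs really are performing the same on that metric rather than one quietly outperforming the other; a low p-value would be a prompt to look more closely at which stimulus is winning and with whom.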