Prioritize It

Zappi has built the best early-stage product development tool in the market. Developed jointly with global brands in your industry, Prioritize It is already road-tested for your business. Learn from data organized into your categories from the start.

In this article:

Key Measures

Testing Process

Understanding the Quadrant chart

Understanding the Concept Summary chart

Understanding potential with Profiles

How to interpret the results

Configuration checklist 

Questionnaire flow and key metrics definitions

Deep dive into Prioritize It sequential analysis


Key Measures

  • Trial
  • Breakthrough (Advantage + Distinctiveness)
  • Believability
  • Premium-ness
  • Impressions (Open-end)

Testing Process

  • Upload up to 20 stimuli (images) per test
  • 600 respondents are exposed to each concept
  • Each respondent sees 4 concepts. The first concept is a monadic read from the General Population. Subsequent concepts are included in the sequential read for a robust Audience Profile analysis
  • To understand Concept Potential among Profiles, configure the All Concepts Profiles Summary chart. It combines Trial and Breakthrough to give a clearer understanding of the level of Potential your concepts have among each Audience
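The allocation described above can be sketched in a few lines. This is illustrative only: the function name and the plain random sampling are assumptions, and the real allocation presumably balances exposure more carefully than an unconstrained random draw. With 10 concepts, 4 per respondent, and 600 exposures per concept, a test needs 600 × 10 / 4 = 1,500 respondents.

```python
import random
from collections import Counter

def allocate_concepts(concepts, n_respondents, per_respondent=4):
    """Give each respondent a random subset of concepts in random order.

    Hypothetical sketch: the first concept in each respondent's list is
    their monadic exposure; all four count toward the sequential read.
    """
    return [random.sample(concepts, per_respondent) for _ in range(n_respondents)]

concepts = [f"Concept {c}" for c in "ABCDEFGHIJ"]  # e.g. 10 stimuli
allocations = allocate_concepts(concepts, n_respondents=1500)

# Exposure counts: first position only (monadic) vs. any position (sequential).
monadic = Counter(a[0] for a in allocations)
sequential = Counter(c for a in allocations for c in a)
```

In expectation each concept receives roughly 150 first-position (monadic) and 600 total (sequential) exposures, matching the sample sizes above; a production allocator would constrain the draw rather than rely on expectation alone.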

Understanding the Quadrant chart

All concepts are classified into five "buckets," based on their performance across two key measures, Trial Potential and Breakthrough Potential:

  • Scale & sustain: prioritize as big bets
  • Short term trial: leverage for competitive response or short-term opportunities
  • Seed & grow: consider moving forward with limited distribution to start
  • Emergent: consider monitoring and re-assess in the near future
  • Deprioritize: deprioritize or rework and retest

More details on how the quadrant chart works here.

Understanding the Concept Summary chart 

  • The Concept Summary chart shows the performance of an individual concept across the KPIs and diagnostics.
  • Key measures and diagnostics are summarized on one page for each concept.

You can quickly spot themes in consumer reactions and see the full verbatims by clicking on each word in the word cloud. Non-English word clouds can be translated back into English for analysis.

Understanding potential with Profiles

You can use Profiles to define your subgroups and create norms for them, then get a clear read on concept potential among them in your reports. With the All Concepts Profiles Summary, you can explore the performance of your concepts across different audience profiles.

  • You can customize this chart by selecting the audiences you want to explore. By default, you get a list of demographic profiles.
  • You can create new profiles based on any profiling/screener question used in the survey (e.g. brand buyers, shoppers for a specific retailer, heavy/lapsed buyers).
  • Norms will be available for any profile created (assuming at least 20 concepts available with a minimum base of n=30 per concept).
  • If your profile doesn’t exist, speak to your Customer Success rep to get it built.

How to interpret the results

  1. Rely on the monadic read, labeled as Total Population, for your overall performance evaluation.
  2. Then, explore relative potential among your sub-groups using Profiles, which includes data from the sequential read, for a robust Audience Profile analysis. 
  3. Note that in some tests, sub-group results from the sequential read may not align with the overall performance evaluation from the monadic read. For example, the Total Population result may be weak while both the Male and Female results are strong. In this case, treat the monadic read as the more accurate view of overall performance: in market research, a monadic read is generally accepted as more accurate because it eliminates bias from seeing other concepts.
  4. Consider the sub-group results on a relative basis for diagnostic understanding. For example, if the overall performance among the Total Population is weak while the Male and Female results are both strong, this means the concept has a similar likelihood of success among Males and Females; in the context of weak overall performance, it would be weak for both.

Configuration checklist

  • Stimuli: You can add up to 20 images per test and run up to 5 concurrent tests (a maximum of 100 stimuli).
  • Concept Name (Including the brand - key element in the decision-making process)
  • Key Benefit Statement (Max. 250 characters) - Introduction and one clear concise benefit statement without “copy” language 
  • Full Description (Max. 1000 characters) - A concise and conversational description, with a few statements describing what the product is, how it works, and how it satisfies a consumer need. Include price information (with appropriate pack format/size/quantity) if you are selecting the Priced option.
  • Custom questions (optional): at an additional cost per question. You will need to provide translations for non-English markets. Questions can take the following forms: single-choice, multiple-choice, scale, open-ended or grid.
    • Possibility to upload an image for each custom question (optional)
    • For the non-English markets, please have questions and answer options ready in English and foreign languages

Questionnaire flow and key metrics definitions

  • Purchase frequency. Respondents are asked what their purchase frequency is for the specified test category. They are also shown a list of retailers and brands and asked to indicate when/if they visited or purchased from them.
  • Early adopters. Respondents are prompted with a series of questions to determine their early adopter status.
  • Concept exposure. Respondents are shown each concept one by one and are asked to evaluate the image and corresponding text.
  • Contextual Understanding. Respondents are asked about a typical usage/consumption/demand situation for the specified test category and asked to evaluate the concept against that real-life contextual situation. This grounds the respondent's reaction in a real situation, framing substitution and suitability within it, to get to human truth.
  • First impression. Respondents are asked to type in what first comes to mind when they think of the concept, and then asked once more if they have any additional comments.
  • Purchase likelihood. Respondents are asked to think about all of the different options currently available in the concept category and answer how likely they would be to purchase given those options.
  • Breakthrough potential. Respondents rate the concept for distinctiveness and advantage over existing options in the category.
  • Diagnostics. Concepts are then evaluated on believability of claims and perceived premiumness within the category.

Best practices:

  • Highlight core ideas/innovations. The concept should exclude information that is not relevant to describe the innovation (i.e. multiple claims & benefits that don’t relate directly to the key benefits).
  • Keep consistency in writing and image style to be able to make comparisons – especially within an order.
  • Images should not be a concept board. In Prioritize It, concepts have independent components, such as a description and an image of the innovation concept.

Make configuration super easy. For the text: keep your concepts organized in an easy-to-copy format (like this template). For the imagery: keep the images in a folder with all the separate files.

Deep dive into Prioritize It sequential analysis

Prioritize It is a sequential product development tool, with each respondent being exposed to 4 concepts in a survey. The order of these concepts is randomized per respondent, such that we achieve a similar sample (+/- 5% variance) across each concept.

Sequential products allow us to run analysis in two ways:

  1. Monadically: A ‘clean’ read (i.e. the first concept respondents have seen) with zero biasing effect, among n=150 respondents, weighted to be representative of the total sample (for age/gender and income/SEC). This analysis is shown on the "Concept Classification Quadrant" chart.
  2. Sequentially: A combination of all n=600 respondents who saw the concept in any position in their survey, unweighted. This analysis is shown in the "All Concepts Profiles" chart.
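Conceptually, the two reads are different filters over the same response table: the monadic read keeps only first-position responses and applies demographic weights, while the sequential read pools every exposure unweighted. A toy sketch, with invented column names and scores rather than Zappi's actual schema:

```python
# Toy response table: (respondent_id, concept, position_seen, trial_score, weight).
# Weights stand in for the age/gender and income/SEC weighting described above.
responses = [
    ("r1", "A", 1, 5, 1.2),
    ("r1", "B", 2, 3, 1.2),
    ("r2", "A", 2, 4, 0.8),
    ("r2", "B", 1, 4, 0.8),
]

def monadic_read(rows, concept):
    """First-position responses only, weighted back to the total sample."""
    first = [(score, w) for _, c, pos, score, w in rows if c == concept and pos == 1]
    return sum(s * w for s, w in first) / sum(w for _, w in first)

def sequential_read(rows, concept):
    """All exposures to the concept, in any position, unweighted."""
    scores = [score for _, c, _, score, _ in rows if c == concept]
    return sum(scores) / len(scores)
```

Here Concept A scores 5.0 monadically (only r1 saw it first) but 4.5 sequentially (both exposures pooled), showing how the two reads can diverge.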

Things to consider with sequential analysis:


While we weigh our monadic analysis back to represent the overall sample, it’s not currently possible to apply weighting to our sequential analysis in the same way. The reason for this is that we would be creating weighting conflicts for individual respondents (e.g. respondent #123 may need to be up-weighted for their contribution to Concept A, but down-weighted for their contribution to Concept B).
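A toy numeric illustration of such a conflict (all numbers invented): suppose a 50% male weighting target, and a male respondent whose two concept bases happen to skew in opposite directions.

```python
MALE_TARGET = 0.50

# Invented bases: the respondent's gender is under-represented in Concept A's
# base but over-represented in Concept B's base.
male_share_in_A = 0.40
male_share_in_B = 0.60

# Post-stratification weight = target share / observed share.
weight_for_A = MALE_TARGET / male_share_in_A   # 1.25 -> up-weight
weight_for_B = MALE_TARGET / male_share_in_B   # ~0.83 -> down-weight

# One respondent, two incompatible weights: a single per-respondent weight
# cannot satisfy both targets, which is why the sequential read stays unweighted.
conflict = weight_for_A != weight_for_B
```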

We recommend that all sequential analysis (via Profiles) is conducted only within a single audience (e.g. comparing scores for Concept A vs Concept B among a Female Profile).

Order effect

In contrast to our ‘clean’ monadic read, sequential analysis is influenced somewhat by the order in which concepts were exposed to each respondent. The way we have designed the random concept allocation per respondent means that this effect is equalized across all concepts (i.e. no concept suffers worse than any other).

However, we do occasionally see the order effect appear as polarisation between stronger and weaker concepts. If one concept in a batch of otherwise similar concepts is much stronger or weaker than the rest, that contrast may not show in our monadic analysis.

Instead, as an example, it’s entirely possible that sequential analysis (Profiles) data may produce scores that are universally higher or lower than the monadic analysis for many or all subgroups. Conducting all sequential comparisons within a single audience (e.g. Concept A vs Concept B among a Female Profile) accepts an equal order effect across all concepts.


Can I compare existing concepts with new ones?

Testing existing concepts against new ones that haven't launched yet is not an apples-to-apples comparison. It's generally not advisable to test concepts for products that are already in market, as this affects the way consumers rate them.

Can I test small differences between concepts? 

Prioritize It is a tool for testing "core" concept ideas to get a steer on whether it's a worthwhile direction to go in. Therefore, we encourage you to make sure that what you are testing stands out in the concept and that the differences between concepts are significant enough. We don't recommend using this tool for evaluating small differences such as specific language or details.
