Zappi Amplify TV

Summary

Zappi has taken the very best of existing legacy approaches and worked with leading global brands to develop a game-changing insight tool for TV-advertising effectiveness measurement. Zappi Amplify TV predicts how well your ads will deliver ROI, both via short-term sales and long-term brand equity. 

Insights are framed and communicated to ensure they are easily understood by stakeholders at all levels of the business.

Consumer engagement is captured across a range of rational, emotional, and behavioral measures, and communicated in a way that is easily digested by non-insights professionals.

Stimuli: 1-5 video ads per test

Evaluation: Monadic

Sample default: 400 category consumers (with the option to set the sample size anywhere between 200 and 800 respondents).

Norms: Yes; market-wide norms activate once the total number of stimuli tested reaches N=20, and your own customer norm for a market activates once you've tested 20 stimuli in that market.

Contents

Getting started

Analysis topics

Key measures

Depending on which version of Amplify TV reporting you are using, the framework is made up of either 5 or 3 key measures. Our updated Amplify TV version has 3 key measures, based on customer feedback about what is most important to internal stakeholders and easiest to understand. The versions are otherwise quite similar: the Risk metrics were absorbed into Resonance, and the former Return chapter became the reporting output.

Amplify 1.0: the 5 Rs framework

The Zappi Amplify TV framework comprises measures grouped into five key ROI-driver areas:

  • Reach: cut through the clutter, link to the brand, and build distinct memory structures
  • Resonance: engage and trigger an emotional response, and communicate key messages and category drivers
  • Response: increase brand purchase and appeal
  • Risk: avoid insensitivity and damaging virality
  • Return: deliver sales uplift and build brand equity

The reporting output focuses on a blend of behavioral and survey data, “Metrics that matter”, which ladders into two important KPIs:

  1. Creative Sales Impact (CSI): how likely is this ad to deliver ROI via short-term sales uplift?
  2. Creative Brand Impact (CBI): how likely is this ad to deliver ROI via long-term brand equity building?

Amplify 1.0: Configuration checklist

  • Videos: you can upload up to 5 videos in a single order, within the same country and category; each video generates a separate survey.
  • Logos: we recommend uploading image files in a square format for consistent display.
  • Ensure you add the following information about your stimuli: 
    • Ad name
    • Brand information 
    • Stage of development
    • Presence of music and celebrities
    • Target Audience Profile
    • 2-20 brand/category attributes that people might associate with your stimuli
      • 255 character limit, per attribute
    • 1-4 key messages for your stimuli
    • Brand competitive set images
    • Tags
      • Tagging your stimuli allows you to categorize your content efficiently and unlocks additional analytic capabilities
        • Standardized tags - taxonomy defined by your organization
        • Custom and smart tags

Amplify 2.0: the 3 Rs framework

  • Reach: cut through the clutter, link to the brand, and build distinct memory structures
  • Resonance: engage and trigger an emotional response, and communicate key messages and category drivers (includes metrics from the previous Risk chapter)
  • Response: increase brand purchase and appeal

The reporting output (formerly the Return chapter) focuses on a blend of behavioral and survey data, which ladders into two comprehensive indicators for sales impact and brand impact: 

  1. Sales Impact Score: The sales impact score measures the potential of the creative to drive short-term sales. If you are familiar with the former Creative Effectiveness Score, this is simply that score renamed.
  2. Brand Impact Score: The brand impact score measures the potential of the creative to build the brand and drive sales into the future.

Amplify 2.0: Configuration checklist

  • Videos: you can upload up to 5 videos in a single order, within the same country and category; each video generates a separate survey.
  • Logos: we recommend uploading image files in a square format for consistent display.
  • Ensure you add the following information about your stimuli: 
    • Ad name
    • Brand information 
    • Stage of development
    • Presence of music and celebrities
    • Target Audience Profile
    • 2-15 brand/category attributes that people might associate with your stimuli
      • 255 character limit, per attribute
    • 1-4 key messages for your stimuli
    • Brand competitive set images
    • Tags
      • Tagging your stimuli allows you to categorize your content efficiently and unlocks additional analytic capabilities
        • Standardized tags - taxonomy defined by your organization
        • Custom and smart tags

Testing process

  1. Upload 1-5 video ads per test
  2. By default, 400 category-consumer respondents are exposed to each ad within the selected in-context environment, with the flexibility to set the sample size anywhere between 200 and 800 respondents depending on the specification of the project.
  3. Key questionnaire components:
    • Pre-exposure shopping exercise
    • In-context forced exposure
    • Unaided brand recall and brand cut-through
    • Post-exposure shopping exercise
    • Unaided message communication, question metrics, and second-by-second emotion capture (with emojis)
  4. Results are provided in the context of the available norm (e.g. country-level, category-level)

Testing animatics

Zappi Amplify TV supports animatic and finished film format ads in the same testing methodology. By testing animatics, you can assess an ad’s potential early on in the development process and guide improvement before commissioning a finished film. Read the guidelines to test animatic stimuli. 

Update, April 2023: In response to user feedback, animatics no longer have their scores calibrated upward. Previously, scores were adjusted upward to make them more directly comparable to final TV assets. Given that the focus at this stage is on early learnings for optimization, this change gives you a more differentiated view of performance between animatic assets. You may notice slight changes in scores for assets tested prior to April 2023, reflecting this update to our modeling. 

Understanding potential with Profiles

You can use Profiles to define and create norms for your subgroups, then get a clear read on advert potential among them in your reports.

  • You can customize charts in your report by selecting the audiences you want to explore. By default, you will see a list of demographic profiles.
  • You can create new profiles based on any profiling/screener question used in the survey (e.g. brand buyers, shoppers for a specific retailer, heavy/lapsed buyers).
  • NEW! We’ve expanded our profiling capabilities on Amplify TV to enable users to take advantage of the profiles functionality even if they have tested using non-standard sample sizes of n<400.
    • There are guardrails in place to maintain research quality and prevent incorrect decisions being made from a profile subset with a small base size. If you select a profile that has a subset sample size of n<20 for a project, stat testing is turned off (indicated by grey shading), “N/A” displays for CSI and CBI, and a warning appears in the notes section along with the corresponding subset sample size.
    • If you select a profile for which fewer than 20 studies have been run with n=20+ respondents in the profile subset, in the context of the selected norm, data is hidden and an alert message displays. In other words: if you select “weekly brand users” as the profile and “country” as the norm, data will not appear unless 20 studies have been run in that country that captured data from n=20+ respondents who are weekly brand users.
  • Norms will be available for any profile created (assuming at least 20 adverts available with a minimum base of n=30 respondents in the profile per advert).
  • If your profile doesn’t exist, speak to your Customer Success rep to get it built.
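The display guardrails and norm-activation thresholds above can be summarized in a short sketch. This is illustrative pseudologic, not Zappi's actual implementation; the function and field names are hypothetical, while the thresholds (subset of n<20, 20 qualifying studies for a norm read, and 20 adverts with n=30+ for a profile norm) come from the rules described above.

```python
# Illustrative sketch of the profile guardrails described above.
# Function and field names are hypothetical; only the thresholds
# come from the documented rules.

SUBSET_MIN = 20        # subset size below which stat testing and scores are withheld
NORM_MIN_STUDIES = 20  # qualifying studies needed before norm data displays
NORM_MIN_BASE = 30     # minimum per-advert base for a profile norm to activate

def report_state(subset_n: int, qualifying_studies: int) -> dict:
    """Which report elements display for a selected profile and norm."""
    small_subset = subset_n < SUBSET_MIN
    return {
        "stat_testing": not small_subset,              # off = grey shading
        "scores": "N/A" if small_subset else "shown",  # CSI / CBI
        "warning_in_notes": small_subset,              # shown with subset n
        "norm_data": "shown" if qualifying_studies >= NORM_MIN_STUDIES else "hidden",
    }

def profile_norm_available(adverts_with_min_base: int) -> bool:
    """A profile norm activates once 20+ adverts each have n>=NORM_MIN_BASE
    respondents in the profile."""
    return adverts_with_min_base >= 20
```

For example, selecting a profile with only 15 respondents in a project would show "N/A" for the scores with stat testing off, regardless of how many studies back the selected norm.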

Interpreting cultural sensitivity

Why measure it

It's crucial to approach the interpretation of sensitive or potentially offensive content with empathy and cultural awareness. This type of feedback is largely subjective, but by actively engaging with the feedback and striving to stay attuned to cultural nuances, you can refine your advertising to better align with the values and expectations of your audience, thus building trust and equity for your brand.

How to find it in your report

In Amplify, we use the Cultural Sensitivity prompt from the questionnaire to collect a numeric value for the portion of individuals who found something offensive, unpleasant, or disturbing about the ad. We then collect verbatim responses from those who reported something sensitive about the content.

Original Amplify:

  • You will find the “Cultural Sensitivity Check” numeric measure included on summary charts contained within the Return chapter.
  • You will find the “Cultural Sensitivity Verbatim” auto-coded chart within the Risk chapter.

Amplify 2.0:

  • You will find the “Cultural Sensitivity Check” numeric measure included on summary charts contained within the Creative Effectiveness Summary chapter.
  • You will find the “Cultural Sensitivity Verbatim” auto-coded chart within the Resonance chapter.

How to analyze the score

Zappi’s Cultural Sensitivity measure is designed to ensure that the content of your creative does not inadvertently offend any group of consumers. It does this by first capturing how many respondents feel there may be room for people to find the stimuli offensive, unpleasant, or disturbing. We then ask those respondents what they believe people may find offensive, unpleasant, or disturbing.

No norm or benchmark is associated with this measure. Instead, our advice is that any number greater than zero should be evaluated, and that all respondent comments for this measure should always be read. Ideally these comments should be read by a cross-section of the communities represented in the population for which the stimuli were tested. We advise this for a few reasons:

  • The measure is itself a projective exercise. We do not ask respondents what they find offensive; instead, we ask whether there is anything they think others may find offensive. The data collected for this measure can therefore be somewhat speculative. For example, if an advert uses dry or close-to-the-bone humor, it is possible that no respondents find the ad offensive themselves, yet they speculate that others will. Ads pursuing a certain creative strategy may therefore receive higher scores on this measure purely as a result of that strategy, which makes the measure unsuitable for benchmarking.
  • Groups that may find something offensive vary in size. Our experience with this measure tells us that different groups are likely to respond to it in different ways, depending on the specifics of the stimuli being tested. Zappi uses stratified samples via weights and quotas to represent the population of each market where fieldwork is conducted, and the specific communities that may find a reason to take offense at an ad can vary in size. As an example, roughly 6% of the US population is Asian and 14% is Black or African American. If a potential offense is only picked up on by a given community, the proportion of respondents reporting that an ad has potential for offense will vary in line with the relative size of that community.

Therefore, we recommend that any Cultural Sensitivity Check score greater than zero trigger a closer look at the verbatim.
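Because no benchmark applies, the triage rule is simple enough to state as a one-liner. This is a minimal sketch of the guidance above; the function name is ours, not a Zappi API.

```python
def needs_verbatim_review(sensitivity_pct: float) -> bool:
    """Any non-zero Cultural Sensitivity Check score warrants a manual
    read of the verbatim comments; there is no norm to compare against."""
    return sensitivity_pct > 0
```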

How to analyze the verbatim

👉 Follow the links to learn more about the functionality and how to navigate the chart interface.

Cultural sensitivity is of course subjective, so we recommend drawing on these best practices when sensitive content has been flagged by respondents:

  • Be sure to assess the severity of the perceived offense, as well as the breadth. Some feedback may highlight minor cultural misunderstandings or preferences, while others may point to genuinely harmful or disrespectful content. Some may be mentioned only once or twice, or some may be mentioned by most who responded. Take the time to understand the specific aspects of the ad that triggered the offense and whether they align with widely accepted cultural norms.
  • Mitigate the risk of unconscious bias or a lack of cultural context from a single reviewer. If you are analyzing on behalf of a market you do not personally live in, or an advertisement geared toward an audience to which you don't belong or with which you don't identify, we recommend supplementing your analysis with that of a colleague or consultant living within the local market or target audience.
  • Prioritize transparency and accountability in your decision-making process. Communicating openly with stakeholders about the feedback received can mean that an open and thorough review of the content in question leads to only minor adjustments to mitigate offense, without compromising the overall message.

If you are looking for any additional information, please check out the links below:
