Amplify and Iterative Research
We know that putting consumers at the heart of creative development pays back hugely. Creating an iterative consumer learning loop might include:
- Starting with a consumer insight which is relevant to your category
- Developing a rich campaign idea that brings to life the brand’s take on the consumer insight and can work across time, media and sometimes geographies
- Trying out storylines for video and audio content and understanding what resonates, what doesn’t, what should be exploited in finished ads and why. Iterating with rough versions of static or digital ads
- Finessing unfinished and finished ads which truly resonate and cue the brand positively across all the media being invested in
And we know that the earlier and more often you gather consumer insights to understand reactions and the reasons behind them, the better the final outcome in terms of campaign effectiveness.
So it’s great to see more and more brands putting consumers at the heart of their development process.
But there is a watch-out. And that’s over-using consumers to check small differences in video content which simply won’t impact how the ad works for people.
Let me explain.
In real life, people don’t attend fully to your ads. You can buy the media space, sure. But attention is earned. Only the best earn attention. And even the best ads that earn this attention won’t be remembered in every detail.
People will remember elements of the ad: certain characters, a rough story, maybe the music, hopefully the brand … and perhaps a feeling the ad evoked or a message they took out. They’ll (hopefully) remember the key things they need to in order for the ad to work for the brand.
So unless you make meaningful changes, ones that make your ad suddenly grab attention or lead people to take out a different, more positive or more relevant story, or a story that better heroes your brand, you won’t see meaningful differences in market research results, either in how well the ad is received or in why people like or dislike it.
Let’s take an example of an end card.
Usually end cards aren’t intimately connected to the story. They are added as a closing frame, coming up after the ad is ‘over’ for people, so they don’t take up much of people’s memory. People focus on the overall story and action and switch off once that is over; the end card is rarely part of the experience. So unless the end card is highly connected to your story (for example, taking a moment or character from the ad and integrating it), it won’t be noticed by many. The consequence is that, in most cases, we wouldn’t expect to see a difference in research results when you switch out the end card.
This is just one example. How can you decide whether a change is meaningful enough to merit running fresh research and truly benefit from further consumer insights (whether that means checking out a few options or researching an ad you have optimized based on previous research)? Firstly, think not about the size of the change you are making but rather the impact it might have on how consumers experience and remember the ad (particularly the role of the brand, comprehension, emotional connection and distinctiveness).
Have a think about the following questions. If the answer to any is ‘yes’, it could be worth getting insights on a revised edit or multiple edits:
Story meaning & structure:
- Are your edits likely to change the story people recall from the ad or improve their understanding of the story or meaning? Did your previous research pinpoint an issue in how people remember/understand the story that you have actively solved for with the revised edit?
- Especially for problem/resolution ads, have you made changes to the story to take people on a better emotional journey and create enough of a peak end?
- Have you changed the ad length significantly? This could be either extending the length to tell a clearer story or add moments of impact, or shortening the ad, meaning some elements of the story are no longer present.
Scene selection:
- Have you removed some scenes which were taking up space in the ad but either not resonating or not being noticed (no increase in moment-by-moment emotional response, not mentioned in story recall, not highlighted as a dislike)? If you don’t have results and want to start by researching a few edits, explore variations which are more obviously different from one another, rather than just adapting a small ‘filler’ scene that is less likely to be integral to the meaning of, and emotional reaction to, the ad.
- Have you added key scenes to improve understanding, enhance the story or entertainment value?
- Have you adapted key scenes in a meaningful way? If these adaptations change how people interact or the emotions they display, they may have some impact. If the scenes might change the role of the brand, or reframe how people see that role within the ad, again they could have an impact. If it is simply a slightly different camera angle or a small adjustment to font size within a longer video, this is less likely to affect overall response.
Small adjustments within a scene could be researched by looking at that frame alone to identify, for example, the optimum font size.
Audio:
- Have you significantly changed the voice over in a way that could impact the story or how it’s received? (changed the voices, cut the voice over or added clarity)
- Have you changed the music track to something which changes the emotion/mood or reinforces the story arc?
Role of brand:
- Do any of the edits change the role for the brand and make it more likely to be triggered during the experience?
- Have you integrated more distinctive assets?
- Has something been added to clarify the relevance of the story to the brand?
- Have you given the brand a role in the more memorable moments of the ad?
Characters:
- Have you changed any of the main characters in the story - their character or appearance?
Messaging and associations:
- For a very functional ad, did you adapt the delivery of key messages or reasons to believe (RTBs) in response to a lack of message take-out or credibility?
- Have you made changes to the way an association or feeling is conveyed? Or even a change in what it aims to convey?
In many cases, you can review your initial research results to understand how likely it is that these changes are substantial:
- Did you remove scenes which weren’t creating emotional reactions in the moment-by-moment responses?
- Did you add scenes intended to evoke emotion where it was lacking? Anything specific that might make people laugh or love? Did you change the story arc as a problem wasn’t resolved positively enough?
- Did the story playback or dislikes show that people didn’t understand the story, and have you adapted the ad to solve the misunderstanding?
- Did you identify a branding issue and restructure the story, distinctive assets or brand presence into the more memorable aspects of the ad?
If your changes are less substantial, bearing in mind that consumers pay less attention to advertising than we do, we would recommend either NOT doing further research or, in the case of changing an end frame, researching that frame in isolation to understand which of the options is more compelling and attractive for people. You could research multiple options for an end frame using ScreenIt to isolate that scene.
When you do have executions you are choosing between, how should they be compared?
- Use the stimuli-to-stimuli comparison view, NOT the ad-to-norm comparison view.
- First, look at significant differences in individual measures: focus on the key measures which feed sales and brand impact scores and understand where there are significant differences. If one ad is significantly better on a number of the key measures, this is the best ad to progress and to dive deeper on how to optimize yet further. If there are no significant differences and neither ad is performing better on the areas of focus for the advertising, the ads are performing very similarly, and the choice of which to progress will require further diagnosis using open responses and moment-by-moment emotional reaction.
- Diagnose the reasons behind any significant differences by looking at:
- Overall emotion - which edit is conveying the desired emotion more?
- Emotional moments - look at moment-by-moment emotional response to see which best represents the intended journey and has more or stronger peaks in key emotions. Check which has the most positive end emotion (leaving people feeling most positive)
- Story playback - is one ad more clearly understood than another?
- Unaided messaging - is one ad more clearly conveying intended associations?
- Open ended responses - are there particular moments, characters or themes that respondents frequently positively mention that are present in one edit but not the other? How about Dislikes? Does one version do a better job in driving home a memorable story?
- Don’t use the percentile scores to make the comparison - a change in percentile score doesn’t necessarily reflect a significant difference (percentiles are best used for an ‘at a glance’ sense of how the ad has done: great, good, OK or poor). On the stimuli comparison view you will see your percentile score, but the ad-to-ad sig testing is done using the absolute score, so you can validate whether there’s a meaningful difference on a metric or measure. A minimal sketch of this kind of test follows below.
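To make the last two points concrete, here is a minimal sketch of a two-proportion z-test, the kind of statistic that commonly sits behind ad-to-ad sig testing on absolute scores. The sample sizes, scores and the measure itself are hypothetical, and your research platform will apply its own (often more sophisticated) statistical model:

```python
# Illustrative only: a two-proportion z-test comparing two ad edits on a
# single key measure (here, % of respondents correctly playing back the
# story). All figures are hypothetical.
from math import sqrt

def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int) -> float:
    """Return the z statistic for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)      # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    return (p_a - p_b) / se

# Hypothetical monadic cells: 150 respondents saw each edit.
z = two_proportion_z(successes_a=96, n_a=150,  # Edit A: 64% played back the story
                     successes_b=78, n_b=150)  # Edit B: 52%
# |z| > 1.96 corresponds to significance at the 95% confidence level.
print(f"z = {z:.2f}; significant at 95%: {abs(z) > 1.96}")
```

In this made-up example the 12-point gap is significant, so Edit A would be the one to progress; had the gap been a few points on a similar base size, it would not clear the threshold, which is exactly why small edits so often show no measurable difference.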