The use of actionable attributes
Much research involves capturing, via survey, evaluations of brands, concepts, products, ads, or other stimuli. The evaluations, provided by respondents who represent a specific type of consumer, typically involve the perception of stimulus performance. For brands or products currently in the market, perceptions are generated by past experiences. For a concept, or specifically the product (or service) depicted by the concept, perceptions are driven both by past experiences with products that respondents think fit the concept and by expectations about how the product would perform. Often, these evaluations are recorded using ratings. Of primary interest to researchers and marketers are ratings of higher-order Key Performance Indicators (KPIs) such as Overall Appeal and Purchase Likelihood.
Ratings are also used to capture impressions of specific features and characteristics of the stimulus (including attributes that measure their presence or absence). These ratings serve at least two goals:
- Static description: Profiling the stimulus to provide a deeper sense of how it is perceived. Profiling is valuable, for example, for ensuring the stimulus communicates messages or perceptions consistent with consumer desires and marketing needs (e.g., that a new flanker product communicates the same strengths as the parent brand).
- Dynamic prescription for improvement: Predictive modeling (e.g., Driver Analysis) aimed at improving stimulus performance, increasing its KPIs, and ultimately achieving greater in-market performance. Modeling identifies the stimulus features and characteristics with the strongest statistical relationships to the KPIs (one common approach is sketched below).
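As a concrete illustration, a simple Driver Analysis can be run as a regression of a KPI on the attribute ratings. The sketch below is one possible minimal implementation in Python, assuming a hypothetical respondent-level file `concept_ratings.csv`; the file and all column names are invented for the example.

```python
# Minimal sketch of a regression-based driver analysis.
# The file and all column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("concept_ratings.csv")   # respondent-level ratings export

attributes = ["texture", "sweetness", "flavor", "appearance"]  # e.g., rated 1-5
X = sm.add_constant(df[attributes])       # add an intercept term
y = df["overall_appeal"]                  # higher-order KPI

model = sm.OLS(y, X).fit()

# Standardized coefficients are a common "importance" readout: each raw
# coefficient is rescaled by the attribute's spread relative to the KPI's.
betas = model.params[attributes] * df[attributes].std() / y.std()
print(betas.sort_values(ascending=False))
```

Attributes with the largest standardized coefficients are the candidate drivers; whether anything can actually be done about them is the actionability question taken up below.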
A difficulty afflicting modeling and prescription arises when features and characteristics devolve into benefits or messages expressed in “soft” terms. For example, when evaluating a concept, the benefit “Is a brand for someone like me” has no physical referent, regardless of the rating used to capture the perception.
A lack of specificity forces respondents to revert to a one-dimensional overall evaluation: a halo effect. When evaluating a concept, the respondent will rate any vague statement (e.g., “Is for someone like me” or “My family would like this”) the same way they rated the concept on higher-order KPIs such as Overall Appeal. The resulting model may be statistically strong (driven by the redundancy in the ratings) but of little practical value for directing substantive improvement in the KPIs. For these features and characteristics to be of greatest use when modeling improvements (and for profiling as well), they need to be “actionable”.
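This redundancy is detectable before any model is trusted. The sketch below (again with hypothetical file and column names) inspects inter-item correlations and variance inflation factors; vague statements that are near-duplicates of the KPI show up with high values of both, which is precisely the halo that inflates model fit without adding direction.

```python
# Sketch: screening attribute ratings for halo/redundancy before modeling.
# The file and column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("concept_ratings.csv")
attributes = ["for_someone_like_me", "family_would_like", "texture", "sweetness"]

# High correlations between vague statements and the KPI suggest halo.
print(df[attributes + ["overall_appeal"]].corr().round(2))

# Variance inflation factors flag redundancy among the predictors themselves;
# values well above ~5 are a common warning sign.
X = sm.add_constant(df[attributes])
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 1))
```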
“Actionable” means knowing what actions to take to alter the specific features or characteristics so as to achieve the improvement in the KPIs. This is most easily accomplished when features and characteristics relate to physical aspects of the stimulus’s performance. Traditionally, changing physical aspects has been used most frequently in product testing, via experimental designs (including conjoint/discrete choice applications), with the goal of identifying the combination of features and characteristics that maximizes the KPIs.
However, the notions of “actionable” and “physical aspects” encounter difficulty with a static stimulus that can’t be immediately altered (via experimental design or otherwise), for example, a single concept being tested. To succeed, the researcher must rely on respondents’ imagination* and their ability to evaluate “what if…” scenarios, which works only when the queries presented to respondents are actionable.
How to enable actionability
To make survey data actionable, the researcher can first create a list or catalog of physical features and characteristics of the product depicted by the stimulus and then understand how each can be altered.
An effective approach for researchers to follow, to fully enable actionability, is to ask themselves, for each feature or characteristic, what actions should be taken to alter (improve) perceptions, and ratings, of the product. Ask of each: “What does this feature mean?” and “How is this perceived by the user?”. And when contemplating poor ratings for a feature, ask: “What actions must be taken, whether physical alterations or changes to the language or imagery of the communication, to achieve the needed improvement?” (An illustrative way to organize such a catalog is sketched below.)
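One lightweight way to keep this discipline is to write the catalog down as a structured object, pairing each physical feature with the aspects respondents can rate and the actions available to alter it. The Python sketch below is purely illustrative; every feature, aspect, and action named is a hypothetical example for a food concept.

```python
# Illustrative feature catalog: each physical feature maps to the aspects
# respondents can rate and the actions available to alter it.
# All entries are hypothetical examples for a food concept.
catalog = {
    "sauce": {
        "aspects": ["color", "aroma", "thickness"],
        "actions": [
            "reformulate ingredient levels",
            "revise the photo to show the intended color",
        ],
    },
    "toppings": {
        "aspects": ["amount", "variety"],
        "actions": ["adjust portioning", "reword the concept description"],
    },
}

for feature, entry in catalog.items():
    for aspect in entry["aspects"]:
        # One statement per feature-aspect pair keeps each item actionable.
        print(f"Would have the right {aspect} of the {feature}")
```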
Best practices in writing feature and characteristic statements
When writing feature and characteristic statements for the survey, it is important to:
- Generally, be as specific as possible with each feature or characteristic and think about how each is directly related to the product depicted in the concept.
- Write the questions clearly and concisely, about one feature or characteristic at a time (e.g., an ingredient, such as the sauce in a food product)
- Be specific and ask about a single aspect of that feature or characteristic (e.g., the color or aroma of the sauce)
- Be specific about what is to be evaluated about that feature or characteristic (e.g., “How much do you agree or disagree that the color of the sauce is too dark?”)**
- Avoid any compounding or combinations of features or characteristics (no ifs, ands, or buts embedded in a survey question).
- Note: In the Zappi platform, there is a standard 255-character limit per attribute.
Specificity goes hand in hand with clarity, giving the respondent a better sense of what is being asked of them. The reward to the researcher is better-quality data.
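A draft list of statements can also be machine-checked against some of these practices. The Python sketch below is a rough lint, flagging statements that exceed the 255-character platform limit noted above and using a crude conjunction heuristic to surface possible compound statements; it is an illustration, not a substitute for editorial review.

```python
# Rough lint for draft attribute statements. The conjunction check is a
# crude heuristic and will over-flag (e.g., "cookie or candy pieces");
# treat its output as prompts for review, not verdicts.
MAX_LEN = 255
COMPOUND_MARKERS = (" and ", " or ", " but ", " if ")

statements = [
    "Would have the right level of sweetness",
    "Would be soft and fluffy and would taste great",  # compound: should flag
]

for s in statements:
    problems = []
    if len(s) > MAX_LEN:
        problems.append(f"too long ({len(s)} chars)")
    if any(m in s.lower() for m in COMPOUND_MARKERS):
        problems.append("possible compound statement")
    print(f"{s!r} -> {problems if problems else 'OK'}")
```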
Examples of actionable attributes
- Would have a good texture
- Would have the right level of sweetness
- Would be soft enough
- Would be fluffy enough
- Would have a good flavor
- Has an appearance that makes me think this will taste great
- Comes in a size with the amount I would want to eat
- Would be creamy enough
- Would be thick enough
- Would have the right amount of cookie or candy pieces swirled in
- Would have the right amount of toppings
- Would have good quality ingredients
Notes:
* The desire or intention to ask respondents to evaluate the physical characteristics of a product they have not actually seen (or have only experienced in a 2-dimensional photo) is often met with skepticism. However, the human cognitive process (the brain as a generative predictive processing model) is quite capable of reading the description, seeing the photo, and filling in the blanks, if any, to fully visualize the product depicted in the concept. The same neural process that allows humans to understand the 3-dimensional “real thing” is also responsible for translating both the 2-dimensional approximation and the mere thought generated by a suggestion into a mental representation. As such, respondents are cognitively capable of using their imagination to address survey questions regarding the product conveyed in a concept.
** “Just Right” scales are often useful when asking respondents to evaluate the quantity or strength of specific product characteristics. For each characteristic, respondents are asked whether the product depicted in the stimulus has “too little”, “too much”, or is “just right” with regard to that characteristic. Asking “Just Right” questions of respondents who have only been exposed to a description and/or photo of the product is an excellent example of calling on respondents to use their imagination based on past experiences. Their perceptions represent expectations, and the success of the product depicted in the stimulus depends on how well matched it is to those expectations.
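“Just Right” data are commonly read through a penalty analysis: compare the mean KPI among respondents answering “too little” or “too much” against those answering “just right”. The sketch below is one minimal way to do this, again under hypothetical file and column names.

```python
# Sketch of a penalty analysis for a "Just Right" (JAR) item.
# The file and column names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("concept_ratings.csv")

jar_col, kpi = "sweetness_jar", "overall_appeal"   # JAR item and KPI columns
just_right = df.loc[df[jar_col] == "just right", kpi].mean()

for level in ["too little", "too much"]:
    grp = df[df[jar_col] == level]
    share = len(grp) / len(df)                 # share of sample at this level
    penalty = just_right - grp[kpi].mean()     # mean KPI drop vs. "just right"
    print(f"{level}: {share:.0%} of sample, penalty {penalty:.2f}")
```

Large penalties on a heavily populated side of the scale point to the most consequential “what if” improvements.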