What sample size should I choose for my survey?

We recommend a default of n=150 respondents per cell or sample group, but the more detailed answer depends largely on how you want to analyze your data and how you want to trade off confidence against cost. Let's unpack those two things separately:

Who do you want to analyze?

If the people you want to respond to your survey can all be considered largely the same, or if you don't wish to distinguish between them in your report, then you have one analysis group.

To be confident in your results, you need a robust sample size per analysis group. Therefore, if you wish to look at sub-groups within an analysis group (a simple example might be females and males, but it could also be high spenders and low spenders, etc.), then each sub-group needs a sample size big enough to make decisions on.

So what's considered a robust sample size? 

1. Confidence vs. Cost

There is always a trade-off between confidence and cost. To take this to the extreme, if you wanted absolute confidence you would survey every single person in your target audience, but of course, that would be expensive and take a long time.

Survey sampling is based on the notion that a representative sample will be indicative of the total population. So you don't need to speak to all 1,000 people in a group to have confidence in what those 1,000 people will do, but the more people you do ask, the more confident you can be.

2. Margin of error

Once you have accepted that you are always making a cost vs. confidence trade-off, you must then decide how confident you want to be, or how much 'margin of error' is acceptable to you.

The first chart below shows the maximum margin of error for a few different sample sizes. This is the maximum margin of error because margins of error are largest when a proportion/percentage score is at 50%, and shrink as the score moves away from 50%.

The second chart shows what these margins of error look like when they are applied to a proportion of 50%. Here we can see the range in which the 'true value' lies; for example, the 'true value' for a proportion of 50% on a sample size of n=150 could be anywhere between 42% and 58%.
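The numbers behind these charts can be reproduced with the standard normal-approximation formula for the margin of error of a proportion, MoE = z x sqrt(p(1-p)/n), where z is about 1.96 at the 95% confidence level. Here is a minimal Python sketch (an illustration of the general formula, not part of any platform API):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion p at sample size n.

    z=1.96 corresponds to the 95% confidence level; p=0.5 gives
    the maximum (worst-case) margin of error.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Maximum margin of error at a few sample sizes
for n in (50, 100, 150, 500, 1000):
    print(f"n={n:>4}: max margin of error = +/-{margin_of_error(n):.1%}")

# Confidence interval for a 50% score at n=150
moe = margin_of_error(150)
low, high = 0.5 - moe, 0.5 + moe
print(f"A 50% score at n=150 lies between {low:.0%} and {high:.0%}")
```

Running this confirms the figure quoted above: at n=150, a 50% score carries a margin of error of roughly +/-8%, i.e. the 'true value' lies between 42% and 58%.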

The ideal sample size for agile testing

For the type of agile iterative testing we encourage, n=150 is a good, cost-efficient baseline and our platform default. However, if you want greater accuracy or want to analyze your data over multiple sub-groups, you should increase the sample size.

In the case of a sample size of n=150 at the 95% confidence level, we can be 95% sure that a percentage score of 50% is actually between 42% and 58% in the total population (i.e. +/- 8.0%). Again, the margin will frequently be lower than this, as proportions often lie quite far away from 50%. As your sample size increases, each additional respondent reduces the error by less and less, but it is important to note that some error always remains. The question is where to draw the line.
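The diminishing returns from larger samples follow from the square root in the margin-of-error formula: halving the error requires quadrupling the sample. A short Python sketch (again using the general normal-approximation formula, not any platform-specific calculation) makes this concrete:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Maximum margin of error at the 95% confidence level (z=1.96, p=0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Each doubling of the sample buys a smaller improvement than the last
prev = None
for n in (150, 300, 600, 1200, 2400):
    moe = margin_of_error(n)
    gain = f"  (improvement: {prev - moe:.1%})" if prev is not None else ""
    print(f"n={n:>4}: +/-{moe:.1%}{gain}")
    prev = moe
```

Going from n=150 to n=300 cuts the margin from about 8.0% to about 5.7%, but going from n=1200 to n=2400 only cuts it from about 2.8% to about 2.0%.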

Of course, you may also be happy to accept a larger margin of error and base sizes as low as n=50 for lower-profile customer groups, or where verbatim responses are of particular interest.
