MaxDiff FAQs

  • What happens if I close out a MaxDiff action early on Suzy? 
    • MaxDiff actions that do not reach the recommended sample size minimums should be treated directionally; the lower the sample size, the more directional the results. For example, if a study was originally set to 1,000 respondents, results closing at N=100 should be treated more directionally than results closing at N=900.
    • If you cannot reach the minimum response for a MaxDiff study, consider the following recommendations:
      • Broaden your audience so you can reach a larger sample size, increasing your chance of meeting the recommended minimums
      • Follow up with additional study metrics, such as a monadic test or rating questions on KPIs such as purchase intent or relevance, to add further insight to your MaxDiff results.
  • What if my MaxDiff study design is not balanced? 
    • An unbalanced study design leads to unreliable results. Balanced, equal exposure of all attributes is critically important to achieving sound MaxDiff results.
  • How much does it cost to run a MaxDiff on Suzy? 
    • Regardless of the number of sets in your MaxDiff action, each MaxDiff action costs 10 credits. The higher cost reflects the additional research analysis embedded within a MaxDiff action.
    • For example, if you create a MaxDiff survey with 3 MaxDiff actions (10 credits each) and 2 Multiple Choice actions (1 credit each), you will have used 32 credits total (10 + 10 + 10 + 1 + 1 = 32)
    • Any non-MaxDiff action added to a MaxDiff survey costs the same number of credits as it would in any other survey
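  • The credit arithmetic above can be sketched as a small helper. This is an illustration only, not a Suzy API; the action names and the 10/1 credit values are taken from the example bullets:

    ```python
    # Sketch of the credit arithmetic described above (not a Suzy API).
    # Assumed costs from the example: MaxDiff = 10 credits, Multiple Choice = 1 credit.
    CREDIT_COST = {"maxdiff": 10, "multiple_choice": 1}

    def survey_credits(actions):
        """Total credit cost for a list of action type names."""
        return sum(CREDIT_COST[a] for a in actions)

    # The survey from the example: 3 MaxDiff actions + 2 Multiple Choice actions.
    example = ["maxdiff"] * 3 + ["multiple_choice"] * 2
    print(survey_credits(example))  # 32
    ```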
  • When should I use a MaxDiff over a ranking question?
    • Use a MaxDiff when you have a long list of attributes and need to force a winner (or winners) from the list. MaxDiff uses an extensive experimental design to maintain balanced attribute exposure, allowing for reliable results when ranking a list of attributes. However, MaxDiff actions require a minimum number of respondents and can take longer to fill than ranking actions. Choosing between a MaxDiff action and a Ranking action boils down to: the number of attributes you are testing, the amount of time you have to run your research, and the number of potential respondents in your target audience.
  • What if my Attribute Performance Chart and Utility Score Chart are showing different results?
    • Attribute Performance shows only the number of best/worst selections, or the “raw” data, of your MaxDiff action. This is helpful for getting a sense of how each attribute performed, but it does not account for polarizing attributes when sorting attributes by the number of best/worst selections. The Utility Score formula takes polarization into account, neutralizing attributes that were selected best and worst an equal number of times and promoting only attributes that were selected best more often than they were selected worst.
    • For example, below is a set of MaxDiff results:

      [Table of MaxDiff results: Count Best, Count Worst, Total Exposures, and Utility Score for Chocolate, Vanilla, Cookies and Cream, and Mint Chip]
  • In this example, the Attribute Performance Chart would show Chocolate as the highest performer when sorted by ‘Count Best.’ However, the Utility Score Chart would show Vanilla as the attribute with the highest utility and preference among consumers.
  • When interpreting MaxDiff results, be sure to understand the mechanisms behind Attribute Performance and Utility Score: Attribute Performance shows the more “raw” information, while Utility Score uses a formula to remove potential polarization from your results.
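  • The Chocolate-vs-Vanilla pattern above can be illustrated with a common count-based approximation of utility, (count best − count worst) ÷ total exposures. This is a sketch only; Suzy's exact Utility Score formula is not given here, and the counts below are hypothetical:

    ```python
    # Illustrative count-based utility, NOT Suzy's exact Utility Score formula
    # (which is not given in this FAQ). A common approximation scores each
    # attribute as (count_best - count_worst) / total_exposures, so a
    # polarizing attribute picked best and worst equally often nets out near zero.

    def utility_score(count_best, count_worst, total_exposures):
        """Count-based utility approximation for one attribute."""
        return (count_best - count_worst) / total_exposures

    # Hypothetical counts: Chocolate leads on raw "Count Best" but is polarizing
    # (often picked worst too), while Vanilla is consistently liked.
    chocolate = utility_score(count_best=40, count_worst=35, total_exposures=100)
    vanilla = utility_score(count_best=30, count_worst=5, total_exposures=100)

    print(chocolate, vanilla)  # 0.05 0.25
    assert vanilla > chocolate  # Vanilla wins on utility despite fewer "best" picks
    ```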

Interested in learning more? Head over to our Best Practices Guide for more tips and tricks!