Some kinds of choices are always hard. Figuring out the best ideas for growing your business is often one of them: costs, time, and opportunities are on the line. Knowing prioritization methods in depth is therefore essential for anyone leading growth strategies and tactics. Even more critical is understanding how to use them on the business's behalf.
When optimizing your digital product or website, you need to prioritize specific actions. But which ones? It is essential to define effective methods for making the wisest choices.
A/B testing is an excellent method for determining the best approach to maximize conversions. However, it can cost you time and budget. Sometimes it is necessary to step back and ascertain which hypotheses have the best potential before testing them.
Bearing that in mind, we created this post with everything you need to know about the ICE Score, a functional method that will support your decision-making process when it's time to prioritize tasks, projects, and tests for your company.
The ICE Score was developed by GrowthHackers' CEO, Sean Ellis, to improve the classic impact/effort analysis. This framework is popular among growth professionals and revolves around three combined factors to calculate the priority level you should attribute to a given optimization idea.
This method was designed for group practice, so product, marketing, and technology professionals, among others, feel welcome to show their points of view. It facilitates the engagement of the whole team, enabling better collaboration between different areas of expertise. Therefore, the ICE Score encourages more interdisciplinary and responsible workflows when it's time to define priorities.
ICE stands for impact, confidence, and ease. The process consists of giving each of the three factors a score and multiplying them together. There is no explicit rule on the minimum and maximum scores, but using smaller ranges (such as 1 to 3 or 1 to 5) makes the process easier and reduces arbitrary variation.
After scoring and multiplying the points, the prioritization becomes clear and explicit: the higher the score an item gets, the more critical it is.
Here's what each factor is all about:
- The impact score determines the potential impact of a particular action or test if put into practice. Would it resonate directly with the primary business goals? How many areas or segments would benefit from this action?
- The confidence score determines the team's confidence in this specific action or idea. How likely is this option to succeed, based on the team's belief? According to your experience, will this idea be effective? When team members have no previous experience with a given idea, the confidence level tends to be lower.
- The ease score determines how easy a test is to implement. Would this action be easy to execute? Would it involve too many professionals? Would it be too complex or costly to implement? The easier it would be, the higher the score.
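The scoring process described above is just multiplication and sorting. Here is a minimal sketch in Python, where the idea names and the 1-5 scores are purely illustrative assumptions:

```python
# Each idea gets impact, confidence, and ease scores (here on a 1-5 scale).
# The ICE score is their product; higher score = higher priority.
# All idea names and score values below are made up for illustration.

ideas = {
    "Simplify signup form": {"impact": 4, "confidence": 5, "ease": 3},
    "Add live chat":        {"impact": 3, "confidence": 2, "ease": 2},
    "Rewrite landing copy": {"impact": 4, "confidence": 4, "ease": 5},
}

def ice_score(scores):
    # Multiply the three factors together
    return scores["impact"] * scores["confidence"] * scores["ease"]

# Rank ideas from highest to lowest ICE score
ranked = sorted(ideas.items(), key=lambda item: ice_score(item[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {ice_score(scores)}")
```

With these example numbers, "Rewrite landing copy" (4 × 4 × 5 = 80) comes out on top, showing how an idea that is easy and well-understood can beat a higher-impact but harder one.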
Instead of simply applying frameworks without first considering whether they really make sense in your business, keep in mind that there are always many methods for you to test. Deciding which one is the best for you will depend on your project's stage.
Some companies benefit from fast, collaborative, and democratic methods. Others rely on more visual, simpler scoring frameworks or on customized customer-centric approaches, such as B2B businesses that serve large accounts.
The tip here is to step back and stop overthinking. Instead, you should just:
- Identify which variables the method takes into account
- Ask yourself how much each of them affects the business's primary goals
- Take stock of the data and tools you have already gathered and can act on.
In some cases, the variable "reach" matters as much as or more than other factors. This happens when the number of people reached after implementing a feature or executing an idea is more important than other metrics. In other cases, when companies have customer surveys and quantified customer satisfaction rates, they can consider "value" the main prioritization criterion, thanks to their accurate understanding of the unique value they deliver to customers.
We listed other methods that may meet your needs depending on your available resources and your company's main goals right now.
RICE stands for reach, impact, confidence, and ease. It considers the same variables as the ICE Score, plus the reach the prioritized experiment would have in a given time frame. For example, if the goal is to get 150 new customers by the end of the first quarter, the maximum reach score would be 150. If you expect 1300 users to convert from stage X to stage Y in 3 weeks when implementing the new feature, 1300 would be your highest reach score. The equation is similar to the ICE Score's; the only difference is that the other variables are also multiplied by the reach score.
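Following the article's formulation (reach multiplied by the same three ICE factors), the calculation can be sketched like this. The experiment names and figures are illustrative assumptions; note that some RICE variants divide by effort instead of multiplying by ease:

```python
# RICE as described here: reach * impact * confidence * ease.
# Reach is the expected number of people affected in the time frame;
# the other factors use an illustrative 1-5 scale.

def rice_score(reach, impact, confidence, ease):
    return reach * impact * confidence * ease

# (name, reach, impact, confidence, ease) - all values are made up
experiments = [
    ("New onboarding flow", 1300, 4, 3, 2),
    ("Referral program",     150, 5, 2, 3),
]

for name, reach, impact, confidence, ease in experiments:
    print(name, rice_score(reach, impact, confidence, ease))
```

Because reach is an absolute count rather than a bounded scale, it tends to dominate the result, which is exactly the point of adding it: the onboarding flow above scores 31,200 against the referral program's 4,500.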
MoSCoW is an acronym for must-have, should have, could have, and will not have. The purpose here is to distribute the possible ideas into four groups:
- Must have: the ones that are essential for the product
- Should have: the ones that are important but not essential
- Could have: the ones that are nice to put into practice if possible
- Will not have: the ones that will not be implemented, at least this time.
It can be used on an entire project or just part of it, and it suits teams that want a simple approach and have a very clear due date for each task. Without well-defined time windows, you risk overloading the must-haves.
First, you define the time intervals, such as the next month, semester, or year. Then each team member receives three weighted voting dots (stamped with the numbers 1, 2, and 3) and assigns each one to an idea. The numbers are added up: the ideas with the highest totals are must-haves, those with the second-highest totals are should-haves, and so forth.
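The weighted dot-voting tally above is a simple summation. A minimal sketch, where the team members, ideas, and votes are all illustrative assumptions:

```python
# Each team member assigns three dots worth 1, 2, and 3 points to ideas;
# totals per idea determine the MoSCoW group. All votes below are made up.
from collections import Counter

# (idea, dot value) pairs, three per team member
votes = [
    ("Dark mode", 3), ("CSV export", 2), ("Onboarding tour", 1),  # member A
    ("CSV export", 3), ("Dark mode", 2), ("API webhooks", 1),     # member B
    ("CSV export", 3), ("Onboarding tour", 2), ("Dark mode", 1),  # member C
]

totals = Counter()
for idea, weight in votes:
    totals[idea] += weight

# Highest total -> must-have, next -> should-have, and so on
for rank, (idea, score) in enumerate(totals.most_common(), start=1):
    print(rank, idea, score)
```

With these votes, "CSV export" totals 8 points and becomes the must-have, while "API webhooks" trails with 1 point and lands in the will-not-have group.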
This model was created in 1984 by Dr. Noriaki Kano, and considers the possibility of implementation and customer satisfaction as the primary scoring criteria. In the process, you create a chart where the Y-axis indicates customer satisfaction while the X-axis indicates implementation difficulty. The closer an idea is to the upper right quadrant, the higher the score.
The method suits teams that have user browsing data, have already conducted customer surveys, and feel the need to adopt a more user-centric approach, a need that often arises in engineering-centric cultures.
Now that you know what prioritization methods are and can find all sorts of scoring methods out there, let's go back to the ICE Score and discuss how to get the best out of it.
Even the most popular methods among growth professionals are of minimal use when metrics are unreliable or when post-test activities do not match the results obtained in experiments. With the ICE Score, this wouldn't be any different.
Initially, the method was created to determine which AB tests to prioritize. However, many growth professionals also apply it to day-to-day product and marketing teams' backlog management due to its efficacy and functionality.
Ultimately, it is not just about deciding which actions to take. It can also contribute to planning deadlines and managing daily demands: the ICE methodology can help you calculate the focus each project or task needs. Identifying the level of opportunity when building a tactical work schedule for a new feature launch is essential for the company's growth.
As mentioned before, the ICE Score is efficient in many cases, but it is essential to be careful when applying it.
The method has instant appeal to leaders who want to be known for making data-driven decisions, but it's still a tactic, not a strategy. What does this mean in practice? In the dynamic universe of tech and digital products, things may change in the blink of an eye, so you need to make decisions quickly. In this typical startup routine, it is often not possible, and not advisable, to gather an entire team in a room for a scoring process.
If you have no time to waste, it is crucial to have a well-architected, cohesive strategy behind your prioritization methods, leaving no loose ends between different tactics. Tactics will never replace strategic thinking and must be tied together toward a greater goal: always keep an eye on a north star metric when deciding whether or not to use a tactic at any given time.
When the team defines the scores together, it is common to fall into traps such as a lack of alignment between stakeholders on what each variable means. Another problem is taking into account only the opinions that are convenient for a single person, since that person may already intend to put certain tasks into practice.
When there's good communication about what each score means, the prioritized idea is, in fact, the one that makes the most sense for the team as a whole. You and your team can gradually improve the use of the method and decide if it's wiser to move on to the following ones or if it is time to rethink your tactics.
No matter how good a tactical plan is, if it is not well communicated in a language that the entire team understands, the practical aspects will never turn out the best way.
The benefits of using the ICE Score are many. The prioritization of ideas and projects ensures greater alignment between teams and better use of resources. Bringing the ICE methodology to everyday business can increase productivity, reduce costs, and generate more accurate results.
As mentioned on the Ladder blog, just as Growth Hacking was invented to get more meetings with tech founders who aren't interested in marketing, the ICE Score is a solution marketers found to make engineers listen to their needs.
We at Croct recommend using this methodology to ensure the continuous optimization of your product or website. Still, we emphasize that this must be done with caution so that you do not make the wrong decisions.