The product manager’s work spans many different areas, but in the end it all comes down to taking all the information we have gathered and deciding what is most important to develop next: the features that will deliver the desired outcomes and drive our company another step towards fulfilling its vision.
I’m still surprised to see how many product managers make decisions based on gut feeling, higher management requests, the client who shouts the loudest, or whatever sounds like a cool feature to have. While luck may strike and we might accidentally build the right features here and there, there is a much better, more scientific way to make a prioritization decision. This is where priority scoring comes into play.
Over the years, an array of prioritization scoring frameworks have been developed. Which one you use depends on several variables, such as team size, time, and resources, as well as the type of company you are operating in.
Here are some of the most popular ones to familiarise yourself with.
While priority scoring methods are a great way to understand what to develop next and to give your company clarity on what the next product releases will contain and why, the scoring methods simply will not work if you do not have the following ingredients for the features you are evaluating:
Data - while each scoring method may require different sets of data and information, you will always need at least some qualitative and quantitative feature data to make sure you understand the value a feature will bring.
Effort Estimate - to understand what is feasible to develop, not only what is valuable, you will need at least T-shirt-size effort estimates from the relevant teams (Engineering, Product Design and others) to size the work needed to get a feature deployed and maintained.
An acronym standing for Reach, Impact, Confidence and Effort, RICE was designed to help product management teams determine which products and services to prioritize and which should be scrapped entirely if need be. It was initially developed by Intercom with the idea of combining four specific metrics into a single score.
If used correctly, this model can help teams evaluate the importance of several different projects simultaneously, with the results from the RICE model showing them which products and services will drive the company forward.
The R in the RICE model stands for Reach, as in how many people your project/service could potentially reach within a given time frame, usually six to twelve months. Ultimately, you decide what Reach means for you depending on your needs, e.g. sign-ups, customer transactions, or new members.
Reach is expressed as an estimated number per period. If you estimate that your site will receive around 500 new subscribers within the next month and that 10% of them will sign up to become Patreon members, then your Reach score is 50.
The Impact assessment combines quantitative and qualitative methodologies, with the former referring to how many new conversions your product/service will generate over a given period. Whereas Reach counts the overall number of customers touched, Impact tells us a little more about the results of those site visits. The qualitative side of Impact is gauging, and trying to increase, customer satisfaction through online surveys and similar tools.
The level of confidence you have in an idea or concept can be backed up by data-supported feedback; however, if a product is in its infancy, that data can be lacking or not yet fully understood. This means that you will at times need to rely on your gut feeling.
Combined with the Reach and Impact assessments, Confidence gives you a better sense of how likely the product in question is to produce the desired objectives and key results (OKRs).
If Reach, Impact and Confidence represent the numerator in RICE, then Effort represents the denominator. It takes into consideration the amount of resources required to complete the work over a given timeframe. These resources cover everything from the concept and design phase to engineering, implementation and testing.
It is calculated similarly to the Reach score, by estimating the total resources needed against a specific time frame. If a project will take three person-months, then the Effort score will be 3. Anything less than a month is marked as 0.5, according to Intercom.
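To make the mechanics concrete, here is a minimal sketch in Python of how a RICE score could be computed and used to rank a backlog. The feature names and numbers are hypothetical, and the Impact and Confidence tiers noted in the comments are the ones commonly attributed to Intercom, so treat them as assumptions rather than a fixed rule.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float       # people affected per period, e.g. new members per quarter
    impact: float      # commonly tiered: 3 massive, 2 high, 1 medium, 0.5 low, 0.25 minimal
    confidence: float  # commonly 1.0 high, 0.8 medium, 0.5 low
    effort: float      # person-months; 0.5 for anything under a month

    @property
    def rice(self) -> float:
        # Reach, Impact and Confidence form the numerator, Effort the denominator.
        return (self.reach * self.impact * self.confidence) / self.effort

# Hypothetical backlog, ranked by RICE score.
backlog = [
    Feature("Patreon sign-up flow", reach=50, impact=2, confidence=0.8, effort=3),
    Feature("Dark mode", reach=300, impact=0.5, confidence=1.0, effort=0.5),
]
for f in sorted(backlog, key=lambda f: f.rice, reverse=True):
    print(f"{f.name}: {f.rice:.1f}")
```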
Created by Dr. Noriaki Kano in 1984, the Kano model prioritizes features on a roadmap by weighing the level of customer satisfaction a feature could bring against its potential cost. It helps determine what will likely satisfy or delight customers, as well as what you should steer clear of. You have to ask yourself whether a feature is practical given your resources, and whether the expected customer satisfaction will outweigh the implementation effort.
It’s also a way of measuring the potential growth new features could bring. This is what separates the Kano model from other prioritization frameworks: it is built squarely on measuring customer satisfaction.
Using the Kano model, features, products and services can be divided into five categories based on how much they satisfy users: Basic, Excitement, Performance, Indifferent and Dissatisfaction. Occasionally you will find them under different names, but the principle remains the same.
As the name suggests, these features cover the bare minimum that your customers would expect you to have. As a result, they will not necessarily bring a lot of satisfaction to your user base. However, they are still vitally important to the business, as they form the building blocks that drive the company forward. Furthermore, if they are absent or not working properly, they can turn into a Dissatisfaction feature, so it’s essential to get this right.
The Performance category refers to a proportionate increase in customer satisfaction with increased investment. An example would be cutting the time it takes to complete your sign-up page from three minutes to just one, upping the overall performance of the service by making it more user-friendly and thus increasing customer delight.
Whereas Performance features see satisfaction rise in step with investment, Excitement features see a disproportionate rise in user delight when you invest in them. In terms of functionality they are not essential and won’t necessarily be missed if they are never implemented, but once shipped they can greatly increase your site’s traffic.
Whereas the first three are generally things your team should strive for, the last two are to be avoided at all costs. Indifferent simply means your customers won’t care about the feature, whereas Dissatisfaction features will anger or frustrate customers, making them less inclined to visit your site and use whatever product you have on show. Both are ultimately a waste of time and resources.
Well, you can ask them. What better way to get a feel for your customers and see where you can improve than by going directly to the source? This can be done using qualitative methods such as questionnaires, surveys or even genuine testimonials.
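In practice, a Kano survey typically asks a pair of questions per feature: how the user would feel if the feature were present (functional) and how they would feel if it were absent (dysfunctional). The Python sketch below maps one respondent’s answer pair to a category using a common version of the evaluation matrix, translated into this article’s category names; the exact table varies between practitioners, so treat it as an illustrative assumption.

```python
# Possible answers to each of the paired Kano questions.
ANSWERS = ["like", "expect", "neutral", "live_with", "dislike"]

# One common evaluation matrix, using this article's category names
# (Excitement = attractive, Performance = one-dimensional, Basic = must-be,
# Dissatisfaction = reverse). Rows: functional answer; columns: dysfunctional.
KANO_TABLE = {
    "like":      ["questionable", "excitement", "excitement", "excitement", "performance"],
    "expect":    ["dissatisfaction", "indifferent", "indifferent", "indifferent", "basic"],
    "neutral":   ["dissatisfaction", "indifferent", "indifferent", "indifferent", "basic"],
    "live_with": ["dissatisfaction", "indifferent", "indifferent", "indifferent", "basic"],
    "dislike":   ["dissatisfaction", "dissatisfaction", "dissatisfaction", "dissatisfaction", "questionable"],
}

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    return KANO_TABLE[functional][ANSWERS.index(dysfunctional)]

# Example: a user would love the feature, but could live without it.
print(classify("like", "live_with"))  # -> "excitement"
```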
There are many ways in which the Kano model can be utilized to its full potential, but it is most effective for small teams with limited resources available to them.
Spawned from the outcome-driven innovation (ODI) method created by business consultant Tony Ulwick in the 1990s, opportunity scoring can be seen as an importance-versus-satisfaction analysis. This system of prioritization allows product teams to learn which features customers regard as extremely important but also underserved.
As opposed to the Kano model, which takes a more general, overall assessment of features, opportunity scoring is a far more narrowly focused approach. This represents a fantastic opportunity for innovation and profit, as it allows product teams to realign and focus their efforts on existing features that customers regard as highly important.
Furthermore, perhaps inadvertently, this method can also save businesses a lot of money: features that customers regard as important but that remain underserved represent wasted resources until they are addressed.
You’ll need to survey your customers, asking them to grade features on a scale of 1 to 10 against two simple questions: (1) ‘How important is this feature to you?’ and (2) ‘How satisfied are you with it today?’
You should be on the lookout for features that score high on importance but low on satisfaction. These will be the areas your team needs to work on for optimization.
The equation used in this framework is: Opportunity = Importance + max(Importance − Satisfaction, 0).
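As a quick illustration, here is the formula in Python, applied to a couple of hypothetical features with made-up average survey ratings:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    # Opportunity = Importance + max(Importance - Satisfaction, 0).
    # Both inputs are average survey ratings on a 1-10 scale.
    return importance + max(importance - satisfaction, 0)

features = {
    "bulk export": (8.5, 3.0),  # important but underserved -> big opportunity
    "dark mode": (4.0, 7.5),    # already satisfying -> score equals importance
}

for name, (imp, sat) in sorted(features.items(),
                               key=lambda kv: opportunity_score(*kv[1]),
                               reverse=True):
    print(f"{name}: {opportunity_score(imp, sat):.1f}")
```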
Product teams can use opportunity scoring to suss out potential return on investment (ROI) by working on features that customers say they value but currently find unsatisfying or underwhelming.
This is perhaps one of the most versatile frameworks at your disposal on Prodeology, as it enables you to compare the value of the work to be done against any other metric or variable.
This means that X could stand for many things depending on your company’s needs. Such examples include Value vs Risk and Value vs Complexity.
Whatever X means for you, the methodology is usually the same. Your team will first need to build a graph with its Y-axis representing Value and its X-axis representing whatever metric you’re using (hence the name Value vs X).
The graph can then be divided into four quadrants: high and low value against high and low complexity (or whatever your X is). You’ll then be able to plot your initiatives on the graph and assess what to prioritize and what can be put to one side for the time being. However, before an initiative can be plotted, your team will need to answer two questions. The first: ‘How much value can the initiative potentially bring?’ The second depends on the X: if your X represents effort, you might ask yourself how much effort it will take to implement the feature.
You’re looking for those initiatives which yield the highest value and the lowest complexity or risk (depending on what your X represents). In other words: work smarter, not harder.
For whatever variable you use as your X, you’ll need to estimate accordingly. Whether you’re estimating risk factor or complexity, you’ll need to look at practicality/functionality, time limits and operational costs to name but a few. Value vs X represents an objective and quantifiable approach to informed decision-making.
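For those who prefer to see the quadrant logic spelled out, here is a minimal Python sketch assuming 1-to-10 scores and a midpoint of 5; the quadrant labels and example initiatives are illustrative, not prescriptive.

```python
from typing import NamedTuple

class Initiative(NamedTuple):
    name: str
    value: float  # 1-10, estimated business/user value
    x: float      # 1-10, whatever your X is: effort, risk, complexity...

def quadrant(item: Initiative, midpoint: float = 5.0) -> str:
    # Classify an initiative into one of the four quadrants.
    high_value = item.value >= midpoint
    high_x = item.x >= midpoint
    if high_value and not high_x:
        return "quick win - prioritize"
    if high_value and high_x:
        return "big bet - plan carefully"
    if not high_value and not high_x:
        return "maybe later"
    return "time sink - avoid"

for item in [Initiative("SSO login", 8, 3), Initiative("custom themes", 3, 8)]:
    print(f"{item.name}: {quadrant(item)}")
```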
This prioritization method was developed by Dai Clegg of Oracle in 1994 and is used extensively alongside agile practices. In a nutshell, MoSCoW places the most value on the items that drive the highest business and user value.
MoSCoW stands for four categories of features: Must Have, Should Have, Could Have, and Won’t Have (this time).
Features labeled as “Must Have” are absolutely critical to the project. A release missing even one “Must Have” feature will mean certain failure.
Features labeled as “Should Have” are important to the release, but missing a few doesn’t necessarily mean failure. We may deliberately keep a few high-effort “Should Have” features out of our next release.
Features labeled as “Could Have” would sometimes make a nice addition to our product, but we should only consider them if we get some extra time, or if we feel they could create long-term value and a base for future “Must Haves” (e.g. if having them in production gives us valuable market feedback, resulting in more evidence for features we suspect should be Must Haves).
Features labeled as “Won’t Have” are deemed a waste of resources and time if implemented and should be archived or removed from your backlog.
MoSCoW removes bias and lets team members and stakeholders decide together on what is important and what is not. Its simple concept makes it easy to bring all stakeholders together and get sign-off on a clear way forward.
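As a rough illustration of how the four categories might drive a release plan, here is a Python sketch that always schedules Must Haves and then fills the remaining capacity with Should Haves and Could Haves; the backlog items, effort figures and capacity are entirely hypothetical.

```python
def plan_release(backlog, capacity):
    """Greedy sketch: schedule every Must Have regardless of capacity,
    then fit Should Haves and Could Haves into the remaining capacity.
    Won't Haves are never scheduled."""
    order = {"must": 0, "should": 1, "could": 2}
    planned, used = [], 0.0
    for name, category, effort in sorted(
            (item for item in backlog if item[1] in order),
            key=lambda item: order[item[1]]):
        if category == "must" or used + effort <= capacity:
            planned.append(name)
            used += effort
    return planned

# Hypothetical backlog items: (name, MoSCoW category, effort in person-weeks).
backlog = [
    ("password reset", "must", 2),
    ("CSV export", "should", 3),
    ("animated onboarding", "could", 5),
    ("fax integration", "wont", 8),
]
print(plan_release(backlog, capacity=6))  # -> ['password reset', 'CSV export']
```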
You can get access to all these fantastic frameworks and more once you sign up for our ‘free forever’ account. Start prioritizing and making informed decisions with Prodeology today!