Tracks, and the criticality of measuring digital performance
With more and more NGOs bridging the gap between their programmes and the digital world, how do we make sure these digital products don’t end up in a graveyard of apps?
Monitoring and evaluation (M&E) is the cornerstone of development programmes. Often whole departments are dedicated to measuring and reporting on whether a programme is having the intended impact on the communities that it is operating in. NGOs publish annual evaluation reports to ensure they remain accountable to donors, agencies and people working with them.
The shift for the NGO sector into the digital world was a bold, important and exciting one. Around the world, mobile and internet access is rapidly increasing, meaning organisations can hugely expand their reach via digital channels compared with traditional programming.
However, despite this commitment to M&E within programming, when it comes to digital products, performance measurement is often severely lacking, if it is considered at all.
As a consequence of this lack of insight into digital performance, an unfortunate trend is emerging: a significant number of digital products from the development sector are piling up in a graveyard of apps. These are apps that are neither monitored nor invested in, and that ultimately become stagnant, unused tools. Without performance data, the tool's effectiveness and its development roadmap are alarmingly unclear, and the case for subsequent funding is eroded.
The effort, user involvement and funding required to launch a digital product into a development context are significant, so this trend towards stagnation is a tremendous waste. Worse than that, these products were introduced to a community to solve a problem, provide an opportunity, or surface new and important data. To allow them to fade into history demonstrates a lack of respect for the lives the product has become a part of.
It’s also not indicative of responsible product development.
Introducing Tracks
When designing and building products for marginalised communities, iterative development and piloting are vital to ensure meaningful participation from the people we are building for. So many decisions go into designing and building a digital product. User research, UX testing and UX best practice shape and inform our design strategy; however, some learnings simply do not emerge until the pilot is underway and the product is being used in its intended context.
It’s at this point that measurement is most critical, and powerful, for informing where the next phase of investment should be directed.
At Here I Am we recognise our responsibility to ensure our partners are educated about the importance of responsible product development: iterative development, backlog management, testing and piloting with real users, and investing in the right places. The problem is, without ongoing measurement of digital performance, there is no structured way to understand where that investment is needed.
So, we created Tracks.
Tracks is our proprietary framework for measuring the digital performance of the products and services that we create with our partners. It takes inspiration from well-established frameworks within the M&E space, but focuses specifically on the performance of the digital product or service. Tracks is not designed to replace any measurement of the programme that the digital product might form part of, but it is our way of ensuring that the products we build deliver value to their users and continue to receive the right investment in the right places.
Tracks allows Here I Am and our partners to:
- Gain an objective, data-driven understanding of how well digital products are performing against their goals and objectives;
- Accurately and confidently prioritise backlog features;
- Support MVP, iterative and human-centred development principles by understanding users' needs and experiences;
- Make clear decisions on future priorities for product development.
So, how does it work?
1. Designing the framework
The aim of the framework is to show clearly where investment is succeeding, and where the friction points are at which future investment would be best placed. We work with our partners to co-design a bespoke framework that will provide this information throughout the pilot and beyond.
As we finalise the first cycle of product development and prepare for the pilot with our partners, we host a workshop to build the Tracks framework. The pilot is the first point at which the product gets into users' hands, so it's imperative that we measure from this point onwards. During the workshop, we define the pillars, indicators and data sources that are important for the product's success.
- Pillar: a key theme or overarching objective for the digital product.
- Indicators: clear, objective and specific pieces of information that help us understand performance against each pillar. Multiple indicators sit within each pillar.
- Data sources: where the indicator information will be sourced, e.g. analytics data, user feedback surveys, interviews.
For each indicator, we agree where and from whom that data will be obtained, and how. This might require interaction with users, or the data could come from analytics.
A poor example of an indicator might be: 'users feel safe'. It's not clear how this would be measured or collected, and it's subjective: it could mean different things to different people. Instead, the indicator could be: a positive response to the statement "I felt comfortable and safe responding to questions using the app". This makes explicit how the information will be gathered, and what insights we will gain.
From this, we define a goal: what percentage of positive responses to the statement would we deem a success? (e.g. 95% of people agree with the statement.)
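To make the structure concrete, here is a minimal sketch, in Python, of how a Tracks-style framework could be represented: pillars holding indicators, each with a data source and a goal threshold. The class and field names are our own illustration for this post, not part of the framework itself.

```python
# A minimal, illustrative sketch of a Tracks-style framework structure.
# All names and thresholds here are hypothetical, invented for this example.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A clear, objective, specific piece of information measured against a goal."""
    description: str                 # e.g. a survey statement users respond to
    data_source: str                 # e.g. "user feedback survey", "analytics"
    goal_pct: float                  # target percentage deemed a success
    result_pct: float | None = None  # filled in once pilot data is collected

    def meets_goal(self) -> bool:
        """True once a result exists and it reaches the goal."""
        return self.result_pct is not None and self.result_pct >= self.goal_pct

@dataclass
class Pillar:
    """A key theme or overarching objective, holding several indicators."""
    name: str
    indicators: list[Indicator] = field(default_factory=list)

# Example: the vague 'users feel safe' idea, expressed as a measurable indicator.
safety = Pillar("Safety and trust", [
    Indicator(
        description=('Positive response to "I felt comfortable and safe '
                     'responding to questions using the app"'),
        data_source="user feedback survey",
        goal_pct=95.0,
    ),
])
```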
2. Implementing the framework
During the pilot, teams collect the data that will feed into the framework to highlight the tech’s strengths and weaknesses, enabling evidence-based investment into the features and ideas that will create the highest impact.
Practising the principles of lean, we encourage our partners to consider their existing monitoring channels, to see whether Tracks monitoring could be folded into them, so as not to burden staff and to be respectful of users.
Beyond the pilot, we repeat this process of data collection at further appropriate moments throughout the product life cycle, ensuring data is the cornerstone of our continuous improvement strategy.
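Continuing the illustrative sketch above, the snippet below shows how raw pilot data (here, invented survey responses) could be turned into a result for an indicator and compared against its goal.

```python
def percent_positive(responses: list[str]) -> float:
    """Share of survey responses that agree with the indicator's statement."""
    positive = sum(1 for r in responses if r in ("agree", "strongly agree"))
    return 100.0 * positive / len(responses) if responses else 0.0

# Invented pilot survey data, fed into the indicator defined earlier.
survey_responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]
indicator = safety.indicators[0]
indicator.result_pct = percent_positive(survey_responses)

print(f"{indicator.result_pct:.0f}% positive (goal {indicator.goal_pct:.0f}%) -> "
      f"{'on track' if indicator.meets_goal() else 'friction point'}")
```

Run against the example data, this reports 60% positive against a 95% goal, flagging the indicator as a friction point.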
3. Data-driven decision making
The primary goal of Tracks is to inform and guide ongoing optimisations. We conduct a Tracks Review Workshop with our partners after the pilot to collectively review the data. From this we are able to build a data-informed roadmap for iterative improvement.
With every phase of work, we continue to collect data and use it to inform the subsequent phase. In the development sector, where budgets are often tight, this provides us and our partners with confidence that the funding is always being used as effectively as possible.
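To illustrate how that prioritisation could work within the same hypothetical sketch, the snippet below ranks the indicators that missed their goals by the size of the shortfall, surfacing the biggest friction points first.

```python
def friction_points(pillars: list[Pillar]) -> list[tuple[str, Indicator]]:
    """Indicators that missed their goals, sorted by size of shortfall."""
    missed = [
        (pillar.name, ind)
        for pillar in pillars
        for ind in pillar.indicators
        if ind.result_pct is not None and not ind.meets_goal()
    ]
    return sorted(missed, key=lambda item: item[1].goal_pct - item[1].result_pct,
                  reverse=True)

for pillar_name, ind in friction_points([safety]):
    shortfall = ind.goal_pct - ind.result_pct
    print(f"[{pillar_name}] {shortfall:.0f} points below goal: {ind.description}")
```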
In cases where funding is capped, Tracks can still provide value. Aside from highlighting key frictions, it provides evidence and reassurance that the product is performing against its strategic objectives, which is valuable data even if no further investment is anticipated.
Furthermore, when funding is tight, frictions identified through Tracks may be overcome through changes to content, or additional training where relevant. So there is always value in understanding the product performance.
Tracks, on the ground
We began evolving the Tracks framework in September 2020. It is currently measuring digital product performance in 9 countries, across 3 programmes. From Malala Fund's co-design and investment in Fatima and CARE International's VoiceApp project, to NRC's Better Learning Programme project, we are proud to offer Tracks to our partners and to increase their confidence that their investments are delivering the greatest impact for their users.
Inspired by this post?
We love to share perspectives, thoughts and ideas on creating digital ways to include the excluded. If you have a problem you'd like to discuss, we'd love to hear from you.