
Return On Member Activity (ROMA)
Introducing a loyalty marketing metric for incremental sales revenue
Mark Sage - 12 min read - 06/06/2024
In the age of digital marketing, measuring marketing has become much easier and much more prominent. No self-respecting digital agency would do a client pitch now without the term ROAS taking centre stage as a means to measure (and justify) what they do.
ROAS — or to give it its full name, Return on Ad Spend — is a simple metric. It takes the revenue attributed to a given marketing campaign and divides it by the cost of the advertising that supported it. Unlike Cost per Acquisition (CPA), which only measures the marketing cost of getting a sale (or customer), the ROAS metric compares this same marketing cost to the revenue or GMV that it generated. It’s generally something the agency can track end to end and feels like something you can shout about —
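As a quick illustration, the calculation really is that simple (the figures below are hypothetical):

```python
def roas(attributed_revenue: float, ad_spend: float) -> float:
    """Return on Ad Spend: revenue attributed to a campaign / cost of the ads."""
    return attributed_revenue / ad_spend

# Hypothetical campaign: $50,000 of attributed sales on $10,000 of ads
print(roas(50_000, 10_000))  # 5.0 -> "$5 of sales for every $1 of ad spend"
```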
We generated $5 of sales for every $1 of ad spend!
The challenge with ROAS though is that it’s a high-level metric that only takes account of the ‘last click’ conversion. This means it doesn’t include any wider costs that helped to get to that ‘click’ such as brand building activity, sales promotion costs or the content strategy that underpinned it. Nor does it measure the actual profit that sale generated. If the product is highly discounted, the actual cost of advertising may have eaten up any remaining margin and so, while the metric may be positive, the end result may not.
That said, no single metric can answer all these questions and so ROAS is a good indicator in terms of the current (and changing) cost of marketing in relation to sales.
ROAS works well within digital advertising because it’s both simple to measure and simple to track. The ad budget is known and, with end-to-end tracking, it’s possible to attribute an individual online purchase conversion to a specific ad.
There are also more advanced ways to measure marketing, such as Marketing Mix Modelling (MMM), which looks to quantify the impact of various marketing activities on sales over time.
The key takeaway here though, is that marketing is increasingly measurable and hence there is an expectation (quite rightly), that it is measured.
And then there is loyalty marketing — where things get a little more complicated.
Firstly, most loyalty marketing programmes are not online specific. They cover in-store sales, online sales — sometimes channel-based sales. This means it’s hard to track individual marketing activity to specific purchase behaviour. Worse still, the nature of loyalty programmes is that they are long-term investments. The design of the loyalty programme is not so much to trigger that near-term next purchase, but to foster longer-term brand preference and continued, consistent purchases.
The marketing within loyalty marketing is not simply the specific content or offer, but the combination of features, mechanics, targeting, recognition and rewards which keep members engaged and excited.
But, contrary to popular belief — loyalty marketing can be measured.
On yuu Rewards, the digital coalition programme in Hong Kong, we implemented a set of simple measures which in combination, allowed us to track and report on the incremental sales lift that could be attributed to loyalty marketing activity. This is something I term ROMA or Return on Member Activity as it measures the impact that engaged loyalty members have on overall sales.
The calculation of ROMA is a little more complicated than a measure like ROAS as it looks across three different areas of influence and so requires a bit more data analysis. However, once implemented, it’s easy to track and easy to understand. More importantly, it provides the business (and finance) with a way to truly understand the impact and value that loyalty marketing brings.
Before breaking ROMA down however, it’s worth looking back at how we got to this measurement approach within yuu Rewards.
Loyalty value attribution
We spent a lot of time before the launch of yuu Rewards discussing how we would quantify the success of the loyalty programme for our participating banners and partners. This was important because ultimately, they were paying for the programme and yet they didn’t actually ‘own’ the programme.
This meant we couldn’t simply say ‘it’s a cost of doing business’. We had to be able to justify this cost and to show the attributed value to ensure we had continued buy-in at all levels of the organisation and especially finance!
It’s quite common when people talk about the value loyalty brings and how to measure this, to compare loyalty member spend to non-member spend and point out the difference.
Don’t.
There is a difference. There always will be a difference. It will always be higher. This though is a self-fulfilling prophecy as, by design, loyalty attracts and recruits a brand’s best and most loyal customers — its heavy buyers. Who it doesn’t attract (as much) is lighter brand buyers; those who spend occasionally.
In this respect, comparing member to non-member spend doesn’t indicate that the loyalty programme has created additional value — it hasn’t; it’s simply captured and identified those already providing it.
What we were looking for was true incrementality — where loyalty marketing could be shown to be creating new sales activity.
Knowing that the best place to look for value generated by loyalty members is within the member base itself, one of the initial suggestions was to divide the member base into segments and compare the highest performing with the lowest.
The theory here was that these lower engaged members would give us a good comparison with highly engaged members and the difference in spend and frequency would provide a comparison to justify uplift.
I remember sitting in a meeting when this was presented and feeling like we were essentially marking our own homework. The challenge was that this approach didn’t allow for a like for like comparison. You can’t simply take the worst performing, un-engaged members and compare them to the best to calculate uplift.
What we needed was a way to ensure that the comparative audience (the members) was equal in terms of current behaviours such that we could accurately compare the loyalty marketing we were doing and understand its impact on those targeted members.
To put it another way, we needed to measure Return on Member Activity (ROMA).
To do this we came up with an approach that looked at three core areas of activity which were independent of each other but could be combined to provide a view of the overall impact on member behaviour and the associated spend — and it’s these three areas that now make up the ROMA metric, namely:
1. Marketing Impact — A measure of the overall direct and targeted marketing activity. Are those targeted actually spending more?
2. Coalition Impact — A measure of the ability to change behaviours. Could we shift people into new banners / partners?
3. Redemption Impact — A measure of the impact of loyalty rewards. Could the currency and its use change short/long term behaviour?
So let’s break these down in a little more detail.
Marketing Impact — This is a wide measure that looks to cover all targeted and direct to member communications. If we believe there is value in customer data, then how we activate that data should deliver that value.
To actually measure this, we pulled out a group of members who would be excluded from all direct marketing for a 12-month period — we termed this a ‘fallow’ group as it’s similar to how a farmer leaves a field fallow. Members would roll into and out of this group, but once in it, they would stay a fallow member for those 12 months.
The members included were across all RFM segments and across all enrolment cohorts to ensure a true representation of the ‘average’ member.
A key watch out here is that those enrolling later (the late majority / the laggards) are likely to have very different levels of loyalty to the early adopters and so you don’t want to skew results either way by only comparing with certain enrolment cohorts.
This use of a fallow group allowed us to have a ‘hold out’ segment of members which would not receive any direct or targeted communications and so would provide a like for like comparison of how our regular marketing was impacting member engagement and programme usage. Any difference between normal members (those receiving our marketing) and fallow members (those excluded) could then be attributed to the loyalty programme.
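In code, the fallow comparison might be sketched like this. It’s a minimal illustration only — the function name, the per-member average spend figures and the monthly framing are all assumptions, not the actual yuu Rewards analysis:

```python
def marketing_impact(avg_spend_normal: float,
                     avg_spend_fallow: float,
                     n_normal_members: int) -> float:
    """Incremental sales attributed to direct/targeted marketing:
    the spend difference vs the fallow hold-out, scaled across the
    marketed member base."""
    uplift_per_member = avg_spend_normal - avg_spend_fallow
    return uplift_per_member * n_normal_members

# e.g. marketed members average $105/month vs $100 for fallow members
print(marketing_impact(105.0, 100.0, 200_000))  # 1000000.0 (illustrative)
```

The key point the comparison relies on is that fallow members are drawn from the same segments and cohorts as everyone else, so the only systematic difference between the two groups is the marketing itself.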
Another thing to be careful of here is trying to take the easy option and compare members based on a marketing opt-out. Whilst it means you wouldn’t intentionally exclude any members from marketing, it may skew the fallow analysis results as those opting out of marketing will likely be less engaged in the brand and the programme, and hence not a like for like comparable audience.
This fallow group approach is something we’d previously used on Nectar in the UK. Back then though, Nectar was largely a direct mail (physical post) programme; the challenge now was that with a digital programme like yuu Rewards, it would be much harder to turn off marketing.
In the days before mobile apps, you could just stop sending the member programme offers through the post; now members would all have the app and could see offers whenever they wanted — we couldn’t truly disconnect them from our programme marketing.
To help combat this within yuu Rewards, we decided to not only exclude them from direct marketing (email, push, etc.) but also from any personalisation. This meant their offers in the app would have no specific prioritisation such that no offer was made more relevant than another. In a sense, our offers for these fallow members would become no more relevant than a paper flyer and so we could then measure how well our data insights and targeting were actually performing.
Coalition Impact — People are generally creatures of comfort, and we get set in our ways and our preferences.
Within a retail context this means we tend to have the brands we like for specific categories or sectors and will typically buy a mix of our needs such as groceries, fresh or beauty from a mix of different retailers and suppliers.
The opportunity within a loyalty programme then is to identify these wants, needs, and desires — as well as the gaps — and to see how we can help to fulfil them from across our range of goods and partners.
For yuu Rewards as a coalition programme, this meant both encouraging members to shop wider categories within a banner as well as shopping more banners within the network — something we actively looked to promote through acquisition specific marketing.
Obviously, if your loyalty programme isn’t a multi-partner programme like a coalition, then this measure isn’t needed as all uplifts in spend, including those from a member who becomes ‘new to category’, would be captured within the over-arching Marketing Impact measure through fallow.
For example, if we can see a regular customer who purchases from us but isn’t buying fresh product, then we can assume it’s likely they are purchasing it elsewhere and hence we can look to incentivise them to buy it from us. Where this marketing is successful within a banner, then the resultant uplift would be captured automatically within the Marketing Impact measure.
The challenge though is that we wouldn’t see this for new to banner customers.
If the marketing activity was successful in encouraging a grocery only customer to start shopping within the pharmacy banner, then this would look like a new customer for the pharmacy. Their average spend would likely be, well average, and hence we wouldn’t see a lift within the Marketing Impact measure, even though the programme itself had managed to create new to coalition spend.
So, we needed a way to specifically recognise and measure new to banner spend which the loyalty programme itself generated. This spend could then be classified as incremental lift from loyalty.
To measure this, we looked to take the first spend within a new banner where this was 3 months after the member had enrolled within the wider coalition. The rationale here was that after 3 months of activity, the member had demonstrated their ‘normal’ pattern of behaviour across participating banners and so, that first spend within a new banner could be ‘claimed’ as being loyalty marketing generated.
We only took the first spend, as anything after that was down to the banner itself doing a great job of retaining them through product, pricing and promotion, or the wider loyalty programme continuing to motivate their spend, and hence would be captured within Marketing Impact.
From a measurement perspective, whilst we used 3 months, the actual time period will depend on your business and your purchase cycle and so you’d need to tune this to what you consider ‘coalition impact’. For fast moving consumer retail, 3 months is a good measure of ‘normal’ — for speciality retail this may be 12 months.
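The rule described above — claim only the first transaction in a banner the member hasn’t shopped before, and only once a settle-in window after enrolment has passed — might be sketched like this. The transaction structure, field names and 90-day window are illustrative assumptions, not the actual implementation:

```python
from datetime import date, timedelta

SETTLE_IN = timedelta(days=90)  # tune to your purchase cycle

def coalition_impact(enrol_date: date, transactions: list[dict]) -> float:
    """Sum the first spend in each new-to-member banner, where that first
    spend happens after the settle-in window. Earlier transactions simply
    establish the member's 'normal' pattern of banners."""
    seen_banners = set()
    incremental = 0.0
    for txn in sorted(transactions, key=lambda t: t["date"]):
        if txn["banner"] not in seen_banners:
            seen_banners.add(txn["banner"])
            if txn["date"] - enrol_date >= SETTLE_IN:
                incremental += txn["amount"]  # claimed as coalition impact
    return incremental

# Member enrols 1 Jan; grocery is 'normal', pharmacy is new-to-banner in May
txns = [
    {"date": date(2024, 1, 5), "banner": "grocery",  "amount": 80.0},
    {"date": date(2024, 5, 2), "banner": "pharmacy", "amount": 45.0},  # claimed
    {"date": date(2024, 6, 1), "banner": "pharmacy", "amount": 50.0},  # repeat
]
print(coalition_impact(date(2024, 1, 1), txns))  # 45.0
```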
Redemption Impact — Whilst the previous measures looked at how we could influence member behaviour through marketing, this measure looked at how the loyalty programme design was working — right where the rubber hits the road — on redemption.
Within points-based programme designs, redemption is the key metric as it provides a barometer in terms of programme health.
If redemption is low, it shows that engagement around the currency is low. There could be many reasons for this, such as the time taken to earn enough value to redeem, or the selection of rewards themselves not being motivating enough. It can also show how other metrics may be masking underlying problems.
At the other end of the spectrum, having super high levels of redemption can also be a sign of a troubled loyalty programme design — and of a prohibitively expensive programme, as lower levels of points breakage push up the cost.
Done well, redemption should be regular and considered. I’ll talk more about what is meant by considered redemption in a later post, but essentially, you’re looking for members to save points towards a redemption and then to actively choose to redeem.
This choice should translate into increased member engagement as they feel rewarded and should also translate into higher spend at the point of redemption as they look to treat themselves and to maximise the reward they earned.
From an incremental revenue perspective, this lift in redemption baskets is fully attributed to the success of the loyalty programme design and mechanics and so can be counted towards the overall incremental lift in sales the programme helps to generate.
Within most loyalty programmes there is an identifiable lift in spend both just before redemption as members look to maximise their earnings and just after redemption as members feel good about the programme and look to build their balance back up. This wider ‘halo lift’ can also be attributed to the loyalty programme and provide further incremental revenue within ROMA if required.
Return on Member Activity (ROMA)
To calculate ROMA then, we take these three different measures and the total incremental sales lift attributed to each, and then express this as a percentage of member sales.
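The combination described above can be sketched in code as follows. The three lift figures are outputs of the analyses covered earlier, and the numbers used here are purely hypothetical:

```python
def roma(marketing_lift: float,
         coalition_lift: float,
         redemption_lift: float,
         member_sales: float) -> float:
    """ROMA: total incremental sales lift across the three measures,
    expressed as a percentage of member (not total) sales."""
    total_lift = marketing_lift + coalition_lift + redemption_lift
    return 100 * total_lift / member_sales

# Hypothetical figures: $5m of combined lift on $100m of member sales
print(roma(3_000_000, 1_000_000, 1_000_000, 100_000_000))  # 5.0 (%)
```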

ROMA is shown as a percentage of member sales as that is the audience the loyalty programme is designed to influence. If we cannot see and capture member spend, then the programme can’t influence it.
Obviously, you can also express ROMA as a percentage of total sales, but this will dilute the impact based on the relative success of another important loyalty metric — sales penetration.
These two metrics work closely together, but it is sometimes harder for the loyalty programme manager to drive sales penetration if this is being delivered by a partner’s customer-facing staff. Having these as separate metrics allows you to show the value loyalty can bring when customer spend has been captured as member spend; helping to further incentivise partners and wider stakeholders to drive loyalty take-up and usage.
What ROMA doesn’t measure though is the impact of the existence of the loyalty programme itself — the difference in customer spend pre and post launch. As mentioned previously, this can be hard to measure accurately due to the lack of a comparable audience to measure against and ever-changing market conditions — but then that’s not the intention of ROMA.
ROMA is not a one-time measure of loyalty — it’s an ongoing measure of how actively you manage the programme.
I’ve used ROMA to measure a number of programmes now and the incremental sales impact of loyalty can be anywhere from 2% to 10%+, depending on market, marketing and maturity.
Given it can measure incrementality in sales, it can also be used to calculate an ROI for the loyalty marketing operation as a whole — from technology, to media, to people. So when finance start to ask about ‘loyalty cost’, it allows you to change the conversation to one of ‘loyalty value’.
In case you’re wondering what an ROI on loyalty marketing looks like — roughly 5–6x. So now you can say ‘we generated $5 of incremental sales for every $1 of loyalty marketing spend’.
Using ROMA to measure the sales impact month on month allows you to see how the levers you’re pulling are working and how the members you’re targeting are performing. In this way, it provides a level of accountability for the loyalty marketing team back to the business itself.
This can only be a good thing.
