How to measure economic impact of data and AI by Richard Benjamins

By Guest Contributor Richard Benjamins

Richard Benjamins, author of A Data-Driven Company, explains how to measure the economic impact of data and AI.

How do we put an economic value on big data initiatives in our organizations? How can we measure the impact of such projects in our businesses? How can we convince senior leadership to continue (and increase) their investment in this area?

Most of us who are familiar with the big data boom are also familiar with the big, bold promises made about its value for our economies and society. For example, McKinsey estimated in 2011 that big data would bring $300 billion in value for healthcare, €250 billion for the European public sector and $800 billion for global personal location data (Manyika et al., 2011). McKinsey subsequently published an estimate of what percentage of that anticipated value had become a reality as of December 2016 (Henke et al., 2016). It suggested that only up to 30% of the anticipated value had actually been captured, and 50-60% for location-based data.

These astronomic numbers are convincing many organizations to start their big data journey. Back in 2017, Forbes (Press, 2017) estimated that the market value for big data and analytics technology would grow from $130 billion in 2016 to $203 billion in 2020. As with many of these predictions, one has to wonder who’s checking whether they have come true.

Indeed, these sky-high numbers do not tell individual companies and institutions how to measure the value they generate with their big data initiatives. Many organizations are struggling to assign an economic value to their big data investments, which is one of the main reasons so many initiatives aren’t reaching their ambitious goals.

So, how can we put numbers on big data and analytics initiatives? From my experience, there are three main sources of economic value. Let’s take a look at these.



Savings on IT infrastructure

There are considerable savings to be made on IT infrastructure by moving from proprietary software to open source. The traditional business model of data warehouse providers is to charge a license fee for the software and to charge separately for the professional services needed. In addition, some solutions come bundled with specific hardware, as so-called appliances.

Before the age of big data, this model worked well. However, with the increasing amount of data, much of which is unstructured and real time, existing solutions have become prohibitively expensive. This, combined with so-called 'vendor lock-in' (investments and complexity make it costly and difficult to switch to another vendor's solution), has forced many organizations to look for more economical alternatives.

One of the original, popular alternatives is provided by the open source Hadoop (Hadoop, 2020) ecosystem of big data management tools. Open source software has no license cost and is therefore quite attractive. However, to be able to take advantage of open source solutions for big data, organizations need to have the appropriate skills and experience, either in-house or outsourced.

The Hadoop ecosystem tools run on commodity hardware and scale linearly, and are therefore much more cost effective.

For these reasons, many organizations have substituted part of their proprietary data infrastructure with open source, potentially saving up to millions of euros annually. While saving on IT doesn't deliver the greatest economic value, it is relatively easy to measure in the Total Cost of Ownership (TCO) of your data infrastructure, so it's a popular strategy to start with.
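A TCO comparison like this can be sketched in a few lines. The figures below are purely illustrative assumptions, not vendor quotes; the point is only that the saving is easy to compute once the annual cost components are known.

```python
# Hypothetical TCO comparison: proprietary appliance vs. open source stack.
# All cost figures are illustrative assumptions.
def tco(license_per_year, hardware_per_year, services_per_year, years):
    """Total cost of ownership over a number of years."""
    return years * (license_per_year + hardware_per_year + services_per_year)

proprietary = tco(license_per_year=2_000_000, hardware_per_year=1_500_000,
                  services_per_year=500_000, years=5)
# Open source: no license, cheaper commodity hardware, but more spend
# on in-house skills and professional services.
open_source = tco(license_per_year=0, hardware_per_year=600_000,
                  services_per_year=1_200_000, years=5)

print(f"5-year TCO proprietary: €{proprietary:,}")
print(f"5-year TCO open source: €{open_source:,}")
print(f"Potential saving:       €{proprietary - open_source:,}")
```

Under these assumed numbers the saving runs into the millions of euros, which matches the order of magnitude described above.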



Optimizing your core business

There is no question that big data and analytics can improve your core business. There are two ways to achieve such economic benefits: by generating additional revenues or by reducing costs.

Generating additional revenues means doing more with the same — in other words, using big data to drive revenue. The problem here is that it isn’t easy to decide where to start, and it can be hard to work out how to measure the ‘doing more.’

Reducing costs means doing the same with less — using big data to make business processes more efficient, while maintaining the same results.

As discussed in Chapter 6, a good strategy involves starting your big data journey with a use case opportunity-feasibility matrix, which plots the value (business impact) against how feasible it is to realize that value. We also saw in Chapter 6 that a good way to estimate the business value of a use case is to multiply business volume by estimated percentage of optimization.

As we saw, for a revenue generation use case like churn prediction, if the churn rate of a company is 1% (per month) and there are about 10 million customers, with average monthly revenue per user of €10, then the business volume amounts to €1 million per month, or €12 million a year. If big data could reduce the churn rate by 25% (from 1% to 0.75%), the estimated value would be €250,000 per month. As an example of a cost saving use case, consider procurement. Suppose an organization spends €100 million on procurement every year. Analytics might lead to a 0.5% optimization, which would amount to a potential value of €500,000 a year.
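The estimation rule above (business volume multiplied by the expected percentage of optimization) can be reproduced directly, using the churn and procurement figures from the text:

```python
# Use-case value = business volume x expected optimization percentage.
def use_case_value(business_volume, optimization_pct):
    return business_volume * optimization_pct

# Churn example: 1% monthly churn, 10 million customers, €10 ARPU.
customers, churn_rate, arpu = 10_000_000, 0.01, 10
monthly_churn_volume = customers * churn_rate * arpu      # ~€1 million/month at risk
churn_value = use_case_value(monthly_churn_volume, 0.25)  # 25% churn reduction

# Procurement example: €100 million annual spend, 0.5% optimization.
procurement_value = use_case_value(100_000_000, 0.005)

print(f"Churn use case:       €{churn_value:,.0f} per month")
print(f"Procurement use case: €{procurement_value:,.0f} per year")
```

Both results (roughly €250,000 per month and €500,000 per year) match the worked examples above; the function is trivial, but making the estimate explicit helps when ranking use cases in the opportunity-feasibility matrix.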

However, once the initial use cases have been selected, how should you measure the benefits? This is all about comparing the situation before and after, measuring the difference, and knowing how to extrapolate its value if it were applied as business as usual (BAU). Over the years, we've learned that there are two main issues that make it hard to measure and disseminate the economic impact of big data in an organization:

  1. Big data and AI are almost never the only contributors to an improvement. Other business areas will be involved, making it difficult to decide how much value to assign to big data and AI.
  2. There may be reluctance to tell top management, and the whole organization, about the results obtained. Giving exposure to the value of big data is fundamental in raising awareness and creating a data-driven culture in your company.

Regarding point 1, big data is almost never the sole driver of value creation. Let's again consider the churn use case, and assume you use analytics to better identify which customers are most likely to leave in the next month. Once these customers have been identified, other parts of the company need to define a retention campaign, and yet another department executes the campaign. For example, they might physically call the top 3,000 people at risk and pitch an attractive 'stay with us' offer. Once the campaign is done, and the results are there, it's hard to decide whether the results, or what part of them, are due to the analytics, due to the retention offer or due to the execution by the call centres.

There are two ways to deal with this issue:


Start with use cases that have never been done before. An example would be to use real-time, contextual campaigns. Such campaigns aren’t frequently used in many industries, as they require expensive big data technology. Imagine you’re a mobile customer with a data tariff, watching a video. The use case is to detect in real time that you are watching a video and that you have almost reached the limit of your data bundle.

In these instances, you’re typically either throttled or disconnected from the internet. Either situation results in a bad customer experience. In the new situation enabled by the use case, you would receive a message in real time telling you about your bundle ending and asking you whether you want to buy an extra 500MB, for perhaps €2. If you accepted this offer, the service would be provisioned in real time, and you would be able to continue watching your video.

The value of this use case is easy to calculate: simply take the number of customers that have accepted the offer and multiply it by the price charged. Since there is no previous experience with this use case, few people will dispute that the value is due to big data and analytics.
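The calculation for this kind of greenfield use case really is that simple. In the sketch below, the number of offers and the acceptance rate are assumptions for illustration; only the €2 top-up price comes from the example above:

```python
# Value of the real-time data top-up use case: acceptances x price.
# offers_sent and acceptance_rate are illustrative assumptions.
offers_sent = 100_000      # real-time 'bundle almost finished' messages per month
acceptance_rate = 0.08     # assumed share of customers who buy the extra 500MB
price_per_topup = 2.0      # €2 per top-up, as in the example

monthly_value = offers_sent * acceptance_rate * price_per_topup
print(f"Monthly top-up revenue attributable to analytics: €{monthly_value:,.0f}")
```

Because the use case did not exist before, the whole result can be credited to big data and analytics without needing a control group.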


Compare that with what would happen if you didn’t use analytics. The second solution is a bit more complex but applies more often than the previous case. Let’s go back to the churn example. It’s unlikely that an organization has never done anything about retention, either in a basic or more sophisticated way. So, when you undertake your analytics initiative to identify customers who are likely to leave the company, and you have a good result, you can’t just say that it’s all due to analytics. You need to compare it with what would have happened without analytics, all other things being equal. This requires using control groups.

When you select a target customer set for your campaign, you should reserve a small, random part of this group to treat the same as the target customers, but without the analytics part. Then, any statistically significant difference between the target set and the control group can be ascribed to the influence of analytics. For instance, if with analytics you retain 2% more customers than the control group, you can calculate how much revenue you would retain annually if the retention campaign were run every month.
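A minimal sketch of this control-group comparison, assuming a standard two-proportion z-test to check that the uplift is statistically significant (group sizes, retention counts and ARPU below are illustrative assumptions):

```python
# Control-group uplift measurement, sketched with a two-proportion z-test.
# All group sizes and retention counts are illustrative assumptions.
from math import sqrt

def uplift_and_significance(target_kept, target_n, control_kept, control_n,
                            z_crit=1.96):
    """Return (uplift, z-score, significant?) for target vs. control retention."""
    p1, p2 = target_kept / target_n, control_kept / control_n
    p_pool = (target_kept + control_kept) / (target_n + control_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / target_n + 1 / control_n))
    z = (p1 - p2) / se
    return p1 - p2, z, abs(z) > z_crit

# 95,000 customers targeted with analytics, 5,000 held out as a control group.
uplift, z, significant = uplift_and_significance(
    target_kept=92_150, target_n=95_000,   # 97% retained with analytics
    control_kept=4_750, control_n=5_000)   # 95% retained without

extra_retained = uplift * 95_000           # extra customers kept per campaign
annual_value = extra_retained * 10 * 12    # €10 ARPU, campaign run monthly

print(f"Uplift: {uplift:.1%}, significant: {significant}")
print(f"Annualized retained revenue: €{annual_value:,.0f}")
```

Only a statistically significant uplift should be extrapolated to an annual figure; with a 5,000-customer control group and a 2-point difference, the z-score is comfortably above the 1.96 threshold.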

Some companies can run control groups for every single campaign, are always able to calculate the 'uplift,' and continuously report the economic value that can be assigned to analytics. However, most companies will only use control groups in the beginning, to confirm the business case. Once confirmed, they consider it BAU, and a new baseline has been created.


Regarding point 2, sharing results attributable to big data within the organization — in the right way — is fundamental. It’s been our experience that while business owners love analytics for the additional revenues or cost reduction, they might initially be reluctant to tell the rest of the organization about it. In fact, evangelizing about the success of internal big data projects is key to getting top management on board and changing the culture.

Why would individual business owners hesitate to share? The reason is simple: they're human. Showing the wider organization that using big data and analytics creates additional revenue makes some business owners worry about being given higher targets, but without more resources (apart from big data). Similarly, business owners might not want to share a cost saving of 5%, since it might reduce their next budget accordingly. After all, haven't they shown that with big data they can achieve the same goals with less? This is an example of a cultural challenge. Luckily, these things tend not to stay in stealth mode for long, and in the end, all organizations get used to publishing the value. But any time spent doing this 'underground' can be a problem, especially at the beginning of the big data journey, when such economic results are most needed.



External data monetization

In Chapter 5, we introduced external data monetization to tap into new sources of revenue, instead of generating value through business optimization. As we explained, this opportunity is appropriate for organizations that have reached a certain level of data maturity (see Chapter 4). Once they have learned to exploit the benefits of big data to optimize their own business, they can start looking to create new business around data. This can be achieved either by creating new data value propositions (i.e. new products with data at their heart), or by creating insights from big data to help other organizations optimize their business. In this sense, measuring the economic value of data, analytics and AI is not all that different from launching new products in the market and managing their P&L.

We believe that in the coming five years, the lion's share of the value of big data will still come from business optimization — that is, from turning companies and institutions into data-driven organizations that take data-driven decisions. But with growing interest and activity in data sharing, as shown by the European Data Strategy launched in February 2020, business opportunities through external monetization are set to grow significantly.



Conclusion

As we've seen, when measuring the economic impact of data and AI, savings on IT are a good starting point, but will not scale with the business. Revenues from external data monetization and data sharing are also easy to measure but are still modest compared to the value that can be generated from internal use cases for business optimization.

If you don't ultimately succeed in measuring any concrete economic impact, don't worry. Experience teaches us that while organizations in the early phase of their journey are obsessed with measuring value, more mature organizations know that the value is there and don't feel the need to keep micro-measuring improvements. At this point, big data will have become fully integrated and be seen as BAU.


RICHARD BENJAMINS is Chief AI & Data Strategist at Telefonica. He was named one of the 100 most influential people in data-driven business (DataIQ 100, 2018). He is also co-founder and Vice President of the Spanish Observatory for Ethical and Social Impacts of AI (OdiselA). He was Group Chief Data Officer at AXA, and before that spent a decade in big data and analytics executive positions at Telefonica. He is an expert to the European Parliament's AI Observatory (EPAIO), a frequent speaker at AI events, and strategic advisor to several start-ups. He was also a member of the European Commission's B2G data-sharing Expert Group and founder of Telefonica's Big Data for Social Good department. He holds a PhD in Cognitive Science, has published over 100 scientific articles, and is author of the (Spanish) book, The Myth of the Algorithm: Tales and Truths of Artificial Intelligence.



Suggested Reading

Are you planning to start working with big data, analytics or AI, but don’t know where to start or what to expect? Have you started your data journey and are wondering how to get to the next level? Want to know how to fund your data journey, how to organize your data team, how to measure the results, how to scale? Don’t worry, you are not alone. Many organizations are struggling with the same questions.

This book discusses 21 key decisions that any organization faces on its journey towards becoming a data-driven and AI company. It is surprising how similar the challenges are across different sectors. This is a book for business leaders who must learn to adapt to the world of data and AI and reap its benefits. It is about how to progress on the digital transformation journey, of which data is a key ingredient.
