Regulating Digital Businesses – Like Chasing Trains


I don’t know if you’ve ever had the experience of running for a train that’s just started to move. I’ve had to do it a few times – yes, I was younger and more foolish then – but it was usually within seconds of the train moving that I was on it. It’s only in old movies that you see the protagonists dashing down the platform as the train picks up speed. In reality, you just have the platform length, and the problem is that the train is accelerating: there is a finite window of opportunity, after which you’re simply left on the platform. This is my long-winded analogy for regulators and technology. As technology accelerates, it’s getting harder for regulators to keep pace; in many areas they are just like the proverbial train chasers, running desperately after an accelerating train, often in a futile bid to control a business or industry that is on the verge of leaving the station of regulatory comfort. You can pick from a range of visual metaphors – a man trying to control seven unruly horses, or grabbing a tiger by the tail – but you get the idea. Regulators are in a fix.

The sight (and sound) of Mark Zuckerberg’s congressional hearing did not bode well for regulators. They should have had Zuckerberg dead to rights over Facebook’s (willing or unwilling) culpability in the Cambridge Analytica imbroglio. Yet he came out with barely a scar to show for two days of grilling. Many of the people asking him questions came across as the stereotypical grandparent trying to figure out the internet from their grandchild, even if these are exaggerated caricatures. There is arguably a 40-year age gap between the average lawmaker and the average entrepreneur. But the age challenge is just a minor problem. Here are some bigger ones.

Technology businesses are shape-shifting enterprises that invariably redefine industries. Platforms cannot be regulated like their industrial counterparts. Uber is not a taxi company. Facebook is not a media business. Airbnb is not a hotel. No matter how convenient it might be to classify and govern them that way, or how often someone points out that the world’s biggest taxi company owns no taxis. These are data and services platforms, and they need an entirely new definition. You could argue that Facebook’s trouble came about because it was being treated like a media organisation rather than a data platform. And let’s not forget that the only reason Facebook was in the dock was the success of Cambridge Analytica in actually influencing an election – not the routine misuse of customer data, which may have gone on for months and years, by Cambridge Analytica and other similar firms. While governments’ focus on Uber stems largely from incumbent and licensed taxi services, nobody seems worried that Uber knows the names, credit card details, and home and office addresses of a majority of its users.

Tech businesses, even startups, are globally amorphous from a very early age. A 20-person startup barely out of its garage can be founded in California, have its key customers in Britain, its servers in Russia and its developers in Estonia, and pay taxes in Ireland. Laws and governments are intrinsically country-bound and struggle to keep up with this jurisdictional spread. Just think of the number of torrent services that have survived by staying beyond the reach of regulation.

These are known problems and have existed for a while. Here’s the next challenge, which is more fundamental and even existential for lawmakers. With the emergence of machine learning and AI, the speed of technology change is increasing. Metaphorically speaking, the train is about to leave the station. If regulators struggle with the speed and agility of technology companies today, imagine their challenge in dealing with the fast-evolving and non-deterministic outcomes engendered by AI! And as technology accelerates, so do business models, and this impacts people, taxes, assets, and infrastructure. Imagine that a gig-economy firm that delivers food to homes builds an AI engine to route its drivers, and the engine finds a routing mechanism that is faster but established as being riskier for the driver. Is there a framework under which this company would make that decision? How transparent would it need to be about the guidance it provides to its algorithms?

I read somewhere this wonderful and pithy expression of the challenge of regulation: a law is made only when it’s being broken. You make a law to officially outlaw a specific act or behaviour; therefore the law can only follow the behaviour. Moreover, for most countries with a democratic process, a new law involves initial discussion with the public and with experts, crafting of terms, due debate across a number of forums, and ultimately a voting process. This means we’re talking months, not days or weeks. If technology is to be effectively regulated and governed, a key challenge to address is the speed of law-making. Is it possible to create an ‘agile’ regulatory process? How much of the delay in regulation is because the key people are also involved in hundreds of other discussions? Would lawmaking work if a small group of people were tasked to focus on just one area and empowered to move the process faster in an ‘agile’ manner? We are not talking about bypassing democratic processes, just moving through the steps as quickly as possible. A number of options are outlined in this piece from the Nesta website – including anticipatory regulation (in direct contravention of the starting point of this paragraph) and iterative rather than definitive regulation. All of these have unintended consequences, so we need to tread cautiously. But as with most businesses, carrying on as at present is not an option.

Then there’s the data challenge. The big technology platforms have endless access to data, which they can analyse to make smarter decisions. Why isn’t the same true of regulators and governments? What would true data-driven regulation look like? We currently have a commitment to evidence-driven policymaking in the UK (which has sometimes been unkindly called policy-driven evidence-making!), but it involves a manual hunt for supporting or contradicting data, which is again time-consuming. What if a government could analyse data at the speed of Facebook, and then present it to experts, the public, and legislators in a transparent manner? The airline industry shares the data about every incident, accident, and near miss across its ecosystem, competitors, and regulators, and this is a significant contributor to overall airline safety (as outlined in the book Black Box Thinking, by Matthew Syed). Why isn’t the same true for cybersecurity? Why isn’t there a common repository of all significant cyber attacks, accessible to regulators armed with data science tools and skills, so that they can spot trends, model the impact of initiatives, and move faster to counter attacks? If attacks seem to originate from a specific territory or exploit a specific vulnerability of a product, pressure could be brought to bear on the relevant authorities to address those.

These are non-trivial challenges, and we need to be aware of the risks and unintended consequences. But there is no doubt that the time has come to think of regulation that can keep up with the accelerating pace of change, or governments and regulators will start to feel like the protagonists of those movies where people run after trains.


Seven in 7 – Agile @ Scale, Maturing AI, The Ring of Success, Defending Democracy and More…

As we head into the TCS UK Innovation Forum this week, I’m preparing myself to discuss big ideas and disruptive changes. With that in mind, this week’s Seven in 7 looks at scaling AI, a startup that was bought for a billion dollars, and hacking democracy. And since we’re committed to becoming agile as an organisation, where better to start than with the great article in the new HBR on how to drive agile at scale!

(1) Doing Agile at Scale

This is a very timely look at Agile adoption at scale in the enterprise. It starts with enshrining Agile values in leadership roles, which requires a continuous approach to strategy. The next key thing is a clear taxonomy of initiatives, which may be classified into three categories: customer experiences, business processes, and IT systems. Then comes sequencing the initiatives, with a clear understanding of timelines – it can take five to seven years for real business impact, but there should be immediate customer value. Enterprise systems such as SAP can be delivered using agile as well, but this needs the organisation to create and move with a common rhythm. There are businesses working in agile that use hundreds of teams, solve large problems, and build sophisticated products. This is made easier with modular products and operating architectures – essentially, the plug-and-play capability of individual components. It’s important to have shared priorities and financial empowerment of teams. Talent acquisition and management need to be reshaped to meet the new needs. And funding of projects and initiatives needs to be seen as ‘options for further discovery’. After all, at the heart of agile is the ability to proceed with a clear vision but without necessarily knowing all the steps to get there.

(2) Artificial Intelligence At Scale – for non-technology firms

It’s clear that tech firms, from Google to Amazon and Twitter, have all been able to deploy AI at scale – in recommendations, analysis, and predictive behaviour. For non-tech firms too, the time may have come for delivering scaled AI. One of the key areas where AI seems ready to scale is computer vision (image and video analysis) – relevant to insurance, security, and agribusinesses. The article below from the Economist also quotes TCS’s Gautam Shroff, who runs the NADIA chatbot project. A critical assertion the article makes is that implementing AI is not the same as installing a Microsoft program. This might be obvious, but what is less so is that AI programs by design get better with age, and may be quite rudimentary at launch. Businesses looking to implement AI may need to play across multiple time horizons. And while the short-term opportunity and temptation is to focus on costs, the role of AI in creating new value is clearly much bigger.


(3) The Ring of Success:

What makes a new product successful? I met Jamie, the founder of Ring, a couple of years ago in London and was struck by his directness and commitment. He even appears in his company’s ads. Ring.com was recently acquired by Amazon for $1bn. Here, one of the backers of Ring talks about the factors that made Ring a success. In a nutshell, the list includes (1) the qualities of the founder, (2) execution focus and excellence, (3) continuous improvement, (4) having a single purpose, (5) pricing and customer value, (6) integration of hardware and software, and (7) clarity about the role of the brand.


(4) Blockchain and ICO redux:

Do you know your Ethereum from your EOS, or your MIOTA from your Monero? This piece from the MIT Tech Review will sort you out. And for those of you who are still struggling to understand what exactly blockchain is, here’s a good primer. Of course, you could always go look at my earlier blog post on everything blockchain.



(5) X and Z – The Millennial Sandwich

X & Z, or the millennial sandwich. All the talk in digital revolves around millennials, but there is a generation on either side. Generation X followed the baby boomers, and it turns out they have a better handle on traditional leadership values than millennials. This article talks about Generation X at work.

On the other side, there’s a generation after the millennials – Generation Z. They’re the ones who don’t have TVs, don’t do Facebook, and live their lives on mobile phones. This article talks about how financial services are being shaped by Gen Z.


(6) Big tech validates Industry 4.0

This week, the large tech players reported strong earnings, beating expectations and seeing share prices surge. In a way, it’s a validation of the Industry 4.0 model – the abundance of capital, data, and infrastructure will enable businesses to create exponential value, despite the challenges of regulation, data stewardship, and other problems. Amazon still has headroom because, when push comes to shove, Amazon Prime, which includes all-you-can-consume music and movies, can probably raise prices still further.


(7) Defending Democracy

The US elections meet the technology arms race – this article presents experiences from a hacking bootcamp run for the teams who manage elections. While the details are interesting, there is a larger story here: more than influencing elections either way, the greater harm this kind of election hacking wreaks is in its ability to shake people’s faith in democracy. As always, there’s no answer other than being prepared, but that’s easier said than done!

Reading List: 7 for 7 – April 23: Palantir > Facebook, Generative Design, Alexa With Eyes, and More…

The 7 most interesting things I’ve read over the past 7 days.

(1) The One Thing You Should Read about AI this week: 
In March, we ran a TCS DEX event where we asked our partners and clients whether every company should have an AI strategy. While there was general agreement about the need for an AI strategy, there was no clear starting point. This may be the challenge for most companies. And perhaps the first steps towards a strategy are gathering information and running experiments.
If you read one thing this week, read this AI paper by the McKinsey Global Institute – they publish results from a comprehensive survey and analysis of AI across industries, functions, and use cases, and by the relevance of the different techniques, such as transfer learning, reinforcement learning, and deep neural networks. If some of that sounds obscure to you, I suggest reading up a little, as these will become common business parlance in the not-too-distant future, and clients will be asking about them. In any case, succinct explanations are provided in the paper. It will probably take you a couple of hours to read (not skim) the 40-odd pages. Here are some of the very high-level takeaways:
  1. Industries with the highest number of use cases include (1) Insurance, (2) Banking, (3) Retail, and (4) Automotive & Assembly.
  2. Functions with the highest number of use cases include (1) Supply Chain Management and Manufacturing, and (2) Marketing and Sales.
  3. Specific domains where the impact might be highest include (1) customer service & management, (2) risk modelling, (3) predictive service/intervention, (4) workforce productivity and efficiency, (5) analytics-driven hiring and retention, and (6) yield optimisation.
Some other takeaways:
  • The highest absolute impact of AI is to be found in Retail, but Travel, Transport & Logistics can extract the highest incremental value over other analytics techniques.
  • Image data offers the highest value after structured and time-series data, and ahead of text.
  • Challenges and limitations: (1) labelling training data, (2) obtaining large enough data sets, (3) explaining outcomes and decisions in clear enough terms – e.g. for product classification or regulation, (4) transferring findings to adjacent use cases, and (5) the risk of bias in data/algorithms.

(2) Data: Facebook is a misguided amateur compared to Palantir 
Palantir is much more dangerous than Facebook. Why? (1) Because Peter Thiel, the founder, holds quixotic views of the world – such as ‘freedom is not compatible with democracy’; (2) because Palantir is a much more shadowy and secretive organisation, built specifically for next-generation analytics for powerful clients; (3) because this kind of analytics power can be destructive if individuals go rogue – the article talks about Peter Cavicchia, who ended up running his own spying operation within JPMorgan, in what the article describes as Wall Street meets Apocalypse Now; and (4) because tools like this are being used by police forces such as the LAPD to predict crime – but also, in doing so, to build deep and intricate views of a lot of individuals and their lives. The article also provides a very good visual model of Peter Thiel’s incredible original PayPal team and network, which includes Elon Musk, Reid Hoffman (LinkedIn), Steve Chen (YouTube), and many others.

(3) Design: Welcome to Generative 3D Design 
What do you do when you need to design and build a spinal implant that must be appropriately strong, light, and pliant? You use an algorithm-driven design process called generative design, combined with 3D printing. Generative design takes in your specifications or requirements and generates a number of options, developed faster than humans could manage and enabling far more personalisation of complex materials. In future, these will probably be custom-built to spec in a way that humans simply can’t match. It also uses the least amount of material possible (it’s one of the constraints/objectives). This story in Wired magazine talks about how NuVasive does this using AI and 3D printing.

(4) eCommerce and Retail – change of guard, and disruption for the economy
This week we had a direct comparison between M&S and ASOS. M&S is a struggling brand – losing share in apparel and under pressure in food. Other brick-and-mortar retailers, like New Look, are also in trouble. ASOS sales, on the other hand, hit £1.9bn in 2017, a 33% increase. It’s also instructive to note that eCommerce contributes some 25% of British clothes retail sales. In fact, the UK has the highest share of online commerce (almost 18% of overall retail), but retail also accounts for 10% of employment and 10% of the economy – so significant disruptions lie ahead.

(5) Asset-light business models 
We’ve seen them in telecoms (MVNOs), in retail, and also in utilities: lightweight, direct-to-consumer competitors who don’t carry the baggage of their larger rivals. They have no legacy IT and are built from the ground up on digital platforms, for a start, and also have a much more nimble operating model. Companies like ASOS and OVO Energy are successful because they attract a particular consumer niche, operate in an agile way, and are not weighed down by the legacy business and IT challenges of their larger peers (zero inventory, for example). This trend goes all the way down to micro-brands in the consumer goods space. Many of these businesses will die or stay micro, but once in a generation, one of them will become the next Facebook or Amazon.

(6) Alexa Fashion – a glimpse of the future 
What’s Alexa’s next trick? How about a camera that can give you fashion feedback? Amazon’s Echo Look (not yet launched to the public, but available on an invitation-only basis) has a camera that lets you take selfies and gives you feedback on what you’re wearing. For those worried about whether Amazon was listening to all your conversations, this will definitely be a step too far! This piece is a good take on the social and psychological implications of a tool like this. Of course, if you want algorithmic advice but don’t want something that invasive, you can always turn to Miquela.

(7) Battery Wars 
We all know by now that the move to electric cars is a ‘when’ and not an ‘if’ question. What that means, however, is a near-insatiable demand for batteries and a huge spotlight on battery technology. Minerals that go into batteries, such as lithium and cobalt, are seeing a huge spurt in demand – it turns out that DR Congo is the world’s dominant source of cobalt. In all of this, the UK is seeking to play a leadership role in battery technology. But is that either feasible or desirable? Meanwhile, Williams has been working on safer batteries, which are tough-tested in the Formula E competition – where electric-only cars race, collide, and crash.

When Technology Talks

Conversational Systems aka chatbots are starting to become mainstream – here’s why you should stay ahead of the game:


The shape-shifting of the yin-yang between humans and technology is one of the hallmarks of digital technologies, but it is perhaps most pronounced and most exploited in the area of Conversational Systems. But to truly appreciate conversational systems, we need to go back a few steps.

For the longest part of the evolution of information technology, the technology has been the unwieldy and intransigent partner, requiring humans to contort in order to fit. Mainframe and ERP systems were largely built to defend the single version of truth and cared little for the experience. Cue hours of training, anti-intuitive interfaces, clunky experiences, and flows designed by analysts, not designers. Most of us who have lived through the many ages of this type of IT will have experienced this first-hand. If these systems were buildings, they would be warehouses and fortresses, not homes or palaces. Too bad if you didn’t like it. What’s ‘like’ got to do with it? (As Tina Turner might have sung!)

Digital technology started to change this model. Because of its roots in consumer rather than enterprise technology, design and adoption were very much the problem of the providers. This story weaves its way through the emergence of the web and social media, and culminates with the launch of the iPhone. There is no doubt: the iPhone made technology sexy. To extend the oft-quoted NASA analogy, it was the rocket in your pocket! The app environment and broadband internet, which was key to Web 2.0, suddenly introduced a whole new ingredient into the technology cookbook – emotion! Steve Jobs didn’t just want technology to be likable, he wanted it to be lickable.

The balance between humans and technology has since been redressed significantly – apps and websites focus on intuitiveness and on moulding the process around the user. To deal with a bank, you no longer have to follow the bank’s convenience on time and place, or follow its processes and fill in a lifetime’s worth of forms. Instead, banks work hard to make it work for you. And you want it 24/7 – on the train, at bus stops, in the elevator, and before you get out from under your blanket in the morning. The banks have to make that happen. The mouse has given way to the finger. Humans and technology are ever closer. This was almost a meeting of equals.

But now the pendulum is swinging the other way. Technology wants to make it even easier for humans. Why should you have to learn to use an iPhone or figure out how to install and manage an app? You should just ask, the way you would in any other situation, and technology should do your bidding. Instead of downloading, installing, and launching an app, you should simply ask the question in plain English (or a language of your choice) and the bank should respond. Welcome to the world of Conversational Systems. Ask Siri, ask Alexa, or Cortana, or Google, or Bixby. But wait, we’ve gotten ahead of ourselves again.

The starting point for conversational systems is a chatbot. And a chatbot is an intelligent tool – yes, we’re talking about AI and machine learning. Conversational systems are one of the early and universal applications of artificial intelligence. But it’s not as simple as just calling it AI; there are actually multiple points of intelligence in a conversational system. How does a chatbot work? For a user, you just type as though you were chatting with a human, and you get human-like responses back in natural language. Your experience is no different from talking to another person on WhatsApp or Facebook Messenger, for example. The point is that you are able to ‘speak’ in the way you are used to, and the technology bends itself around you – your words, expressions, context, dialect, questions, and even your mistakes.

Let’s look at that in a little more detail. This picture from Gartner does an excellent job of describing what goes into a chatbot:

The user interface is supported by a language processing and response generation engine. This means the system needs to understand the user’s language, and it needs to generate responses that linguistically match the language of the user – and often be cognizant of the mood. There are language engines for this, such as Microsoft’s LUIS or Google’s natural language processing tools.

Behind this, the system needs to understand the user’s intent. Is this person trying to pay a bill? Change a password? Make a complaint? Ask a question? It must be able to qualify the question or issue, understand the urgency, and so on. The third key area of intelligence is contextual awareness. A customer talking to an insurance company in a flood-hit area has a fundamentally different context from a new prospect, though they may be asking the same question: ‘does this policy cover xxx?’. And of course, the context needs to be maintained through the conversation – an area Amazon is only just fixing in Alexa. When you say ‘Alexa, who was the last president of the US’ and Alexa says ‘Barack Obama’, and you then ask ‘how tall is he?’, Alexa doesn’t understand who ‘he’ is, because it hasn’t retained the context of the conversation.

And finally, the system needs to connect to a load of other systems to extract or enter data. Needless to say, when something goes wrong, it needs to ‘fail gracefully’: “Hmm… I don’t seem to know the answer to that. Let me check…” rather than “incorrect command” or “error, file not found”. These components are the building blocks of any conversational system. Just as with any AI application, we also need data to train the chatbot, or to let it learn ‘on the job’. One of the challenges with the latter approach is that the chatbot is prone to the biases of the data, and real-time data may well be biased, as Microsoft discovered with its Twitter-based chatbot.
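To make these building blocks concrete, here is a toy sketch in Python of a chatbot turn: keyword-based intent detection, simple context retention across turns, and graceful failure. The intents, keywords, and replies are all invented for illustration – a production system would use an NLU engine such as LUIS rather than keyword rules.

```python
# A toy conversational pipeline: intent detection, context retention,
# and graceful failure. The keyword rules below stand in for a real
# NLU engine and are purely illustrative.

INTENT_KEYWORDS = {
    "pay_bill": ["pay", "bill"],
    "change_password": ["password"],
    "complaint": ["complain", "unhappy"],
}

REPLIES = {
    "pay_bill": "Sure - which account would you like to pay from?",
    "change_password": "I can help with that. First, let's verify your identity.",
    "complaint": "I'm sorry to hear that. Could you tell me what happened?",
}

def detect_intent(utterance):
    """Return the first intent whose keywords appear in the utterance."""
    words = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in words for k in keywords):
            return intent
    return None

class ChatSession:
    """Retains context across turns, so follow-up questions can be resolved."""

    def __init__(self):
        self.context = {}

    def respond(self, utterance):
        intent = detect_intent(utterance)
        if intent is None:
            # Fail gracefully rather than 'error, file not found'.
            return "Hmm, I don't seem to know the answer to that. Let me check..."
        self.context["last_intent"] = intent  # retained for later turns
        return REPLIES[intent]

session = ChatSession()
print(session.respond("I want to pay my bill"))
print(session.respond("What's the meaning of life?"))
```

Even at this toy scale, the shape is visible: the language layer (here, keywords) sits in front of intent resolution, and the session object is what lets a later "how tall is he?" style follow-up be interpreted at all.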

We believe that chatbots are individually modular and very narrow in scope. You need to think of a network of chatbots, each doing a very small and focused task. One chatbot may just verify the customer’s information and authenticate her; another may just do password changes. As far as the user is concerned, they may not know they’re communicating with many bots – the network of bots acts as a single entity. We can even have humans and bots working in the same network, with customers moving seamlessly between bot and human interactions depending on the state of the conversation. In fact, triaging the initial conversation and deciding whether a human or a bot needs to address the issue is also something a bot can be trained to do. My colleagues have built demos of bots that can walk a utility customer through a meter reading submission, for example, and also generate a bill for the customer.

Bots are by themselves individual micro-apps trained to perform certain tasks. You can have a meeting room bot that just helps you find and book the best available meeting room for your next meeting, or a personal assistant bot that just manages your calendar, such as x.ai. We are building a number of these for our clients. Bots are excellent at handling multi-modal complexity – for example, when the source of complexity is that there are many sources of information. The classic case is five people trying to figure out the best time to meet, based on their calendars. As you well know, this is a repetitive, cyclical, time-consuming, and often frustrating exercise, with dozens of emails and messages exchanged. It’s the kind of thing a bot can do very well: identify, say, the three best slots that fit everybody’s calendars, keeping in mind travel and distances. Chatbots are just a special kind of bot that can also accept commands and generate responses in natural language. Another kind is a mailbot, which can read an inbound email, contextualise it, and generate a response while capturing the relevant information in a data store. In our labs we have examples of mailbots that can respond to customers looking to change their address, for example.
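The scheduling task above can be sketched in a few lines. This is a toy version, assuming each calendar is a list of busy (start, end) hours on a single working day; the names and times are invented, and a real assistant would also weigh travel time and preferences.

```python
# Toy version of the scheduling task an assistant bot automates:
# given each person's busy intervals (in whole hours, within a 9-17
# working day), find the slots where everyone is free.

def free_slots(busy_by_person, day_start=9, day_end=17, length=1):
    """Return all (start, end) slots of the given length free for everyone."""
    slots = []
    for start in range(day_start, day_end - length + 1):
        end = start + length
        if all(
            not (start < b_end and end > b_start)  # interval overlap test
            for busy in busy_by_person.values()
            for b_start, b_end in busy
        ):
            slots.append((start, end))
    return slots

calendars = {
    "ana":   [(9, 11), (13, 14)],
    "ben":   [(10, 12)],
    "chloe": [(9, 10), (15, 17)],
}
print(free_slots(calendars))  # [(12, 13), (14, 15)]
```

The brute-force scan is fine at this scale; the point is that a bot can run this check continuously and in seconds, where humans exchange dozens of emails.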

Coming back to chatbots: if you also add a voice interface – i.e. a speech-to-text engine – you get an Alexa or Siri kind of experience. Note that we’re now adding yet more intelligence, which needs to recognise spoken words, often against background noise, and across a range of accents (yes, including Scottish ones). When it’s on the phone, there are many additional cues to the user’s context. The golden mean lies in the space between recognising context and making appropriate suggestions, without making users feel their privacy is being compromised. Quite apart from the intelligence, one of the real benefits for users is often the design of a guided interface that walks them step by step through what might be a daunting set of instructions or forms, or a complex transaction – such as an insurance claim or a mortgage quote.

Gartner suggests that organisations will spend more on conversational systems over the next three years than they do on mobile applications. This would suggest a shift to a ‘conversation first’ interface model. There are already some excellent examples of early movers. Babylon offers a conversational interface for providing initial medical inputs and is approved by the NHS. Quartz delivers news through a conversational model. You can also build conversational applications on Facebook to connect with customers and users. Chatbots are even being used to target online human trafficking. Needless to say, all those clunky corporate systems could well do with more conversational interfaces. Imagine just typing: “TravelBot – I need a ticket to Glasgow on Friday the 9th of February. Get me the first flight out from Heathrow and the last flight back to either Heathrow or Gatwick. The project code is 100153.” Then sit back while the bot pulls up options for you, and even asks whether you need to book onward transport.

Conversational systems will certainly make technology friendlier. They will humanise it in ways we have never experienced before. I often find myself saying please and thank you to Alexa, and we will increasingly anthropomorphise technology via the nicknames we give these assistants. You may already have seen the movie “Her”. We should expect that this will bring many great new ideas and brilliant solutions, and equally pose new social and psychological questions. Consider, for example, the chatbot designed just for conversation – somebody to talk to when we need it. We often talk about how AI may take over the world and destroy us. But what if AI just wants to be our best friend?

My thanks to my colleagues and all the discussions which have sharpened my thinking about this – especially Anantha Sekar – who is my go-to person for all things Chatbots.

My book: Doing Digital – Connect, Quantify, Optimise – is available here, for the price of a coffee!

As with all my posts, all opinions here are my own – and not reflective of or necessarily shared by my employers.

The Unbearable Bigness Of Data

(And What We Should Be Doing About It)



Welcome to the Data Deluge.  

By now you’ve probably gotten sick of hearing about big data, little data, fat data, thin data, and all manner of data. You’ve gotten your head around terabytes, exabytes, and zettabytes. You’ve noted that the price of data storage has crashed by 90% on a per-unit basis over the past few years. Your CIO has mastered Hadoop and MongoDB, and you understand the benefits of data lakes over, say, data puddles. The scary part is that we’re still in the early days of the data deluge. We are hurtling into a quantified universe fed by smart cities, homes, and cars; platform-driven models; and clickstream-driven relationships. In fact, I was having coffee this morning with the well-travelled, well-informed, and always insightful John McCarthy from Forrester, and we were positing that a few years from now, data will take over from ‘Digital’ as the centrepiece of organisational transformation and focus across the world.

Right now, though, we’re caught in a deluge with no real clarity about how we’re going to actually use all the data that’s floating around. And here are three key challenges we’re going to have to deal with:

What, not Why – A New Mindset

A question I often ask my colleagues who are experts in data science is as follows: let’s suppose that when it rains, people drink more cappuccinos. Now, if Starbucks knew this, it could advertise or promote cappuccinos every time it rained. It could even launch branded umbrellas. But how would it discover this? Historically, the story would be one of a smart store manager who one day realises that rainy days increase his cappuccino sales, and having defined the premise, starts to collect the data to validate his hypothesis. Or even more traditionally, Costa Coffee runs focus groups, and the link between weather and coffee preferences is established. Critically, a qualitative hypothesis would be at the front of the process and data collection would follow. Because, how else would we know if it’s the rainfall or the pollen count or indeed, the volume of traffic on the roads that we should be correlating coffee sales with?

In the new world of data, or ‘big data’, this works the other way around. A brand like Caffe Nero could take all its sales data across the world and run hundreds or thousands of analyses, searching for correlations with any number of external and easily accessible data sources. These include the obvious ones such as weather or transport, but also, for example, days of the week or month, time of day, train and bus schedules, sales in other retail stores, and so on. The list is limited only by your creativity and the data available.

But most fundamentally, this is a shift from why, to what. As Cukier and Mayer-Schönberger highlight in their book on Big Data, in this new world we find the correlation first and the hypothesis second. And we actually don’t care why. Let’s suppose we discovered that coffee consumption actually varied with the tides. We would need to verify that this was not simply a spurious correlation, but from there on, we could go straight to prediction and dispense with causality, the ‘why’ question. This is a mind shift for those of us used to a ‘scientific’ mentality, which requires us to establish causality in order for any approach to rise beyond heuristics into a scaled and logical argument.
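The correlation-first approach described above can be sketched in a few lines of code. This is a minimal illustration with synthetic numbers – the sales figures, the rainfall effect, and the candidate signals are all invented for the example; in practice you would join real sales data with external feeds such as weather, transport or calendar data.

```python
# A minimal sketch of correlation-first analysis: rank candidate
# external signals by how strongly they correlate with sales.
# All data here is synthetic and for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
days = 365

# Hypothetical daily cappuccino sales, partly driven by rainfall.
rainfall = rng.gamma(shape=2.0, scale=3.0, size=days)
sales = pd.Series(200 + 8 * rainfall + rng.normal(0, 15, size=days))

# Candidate external signals -- only rainfall actually matters here.
candidates = pd.DataFrame({
    "rainfall_mm": rainfall,
    "pollen_count": rng.normal(50, 10, size=days),
    "traffic_index": rng.normal(100, 20, size=days),
    "tide_height_m": rng.normal(2.0, 0.5, size=days),
})

# Correlate sales against every candidate and rank by strength.
correlations = candidates.corrwith(sales).abs().sort_values(ascending=False)
print(correlations)

best_signal = correlations.index[0]
print("Strongest correlate:", best_signal)
```

Note that the ranking only surfaces a candidate; as the text says, you would still want to check that a top-ranked signal isn’t spurious, for example by confirming it holds on a fresh period of data, before acting on it.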

The Crown Jewels?

If you haven’t read Adrian Slywotzky’s great book on Value Migration, this would be a great time to start. The book talks through how value migrates from older to newer business models, from one segment to another, or even from one firm to another.

We are going to see significant value moving to those companies in each industry that get the value of the data. Be it healthcare, or education, or automobiles, or even heavy industry. It might be an incumbent, such as GE, with its smart engines and its Predix platform, or a challenger such as Amazon, in retail, or an upstart such as 23andme.

The question you want to be asking yourself is: in your industry and in your firm, what are the areas of opportunity where you can create new platforms to data-enable processes or add value to customers? How can you converge the primary and ancillary meaning in your data onto areas of your competitive strategy? You may also want to audit what data you might be giving away, perhaps because you feel it’s not core to your business, or because a player in your industry has historically been collecting this data – Experian and credit scores, for example. Ask yourself: are you merely giving away data you don’t use, or are you handing over the source of competitive differentiation in your industry? Remember the story about IBM, Microsoft and Intel? I argued this point in my post about Uber and taxi companies, too.

To underscore the earlier point, I believe that value will increasingly migrate, in each industry, to those who best manage their data and build strategic and competitive alignment with their data strategies and/or new offerings based on the data and its meaning.

Adding Love To Data

A couple of years ago, at the annual FT Innovate conference, a lively round-table discussion followed a presentation by a well-known retail CEO about data and analysis. The presentation covered examples of analysing customers to great, and occasionally worrying, insight. From knowing a woman is pregnant before she knows it herself, to spotting people having affairs, to stacking beer and nappies together at the front of the store – all of this can today be deduced from data alone. The debate spilled over into lunch and led to the insight that for all the talk about analysing customers, it misses the point of empathy.

Let’s remind ourselves though – the customer does not want to be analysed. As with any relationship, he or she wants to be loved, cherished, understood and served better. At the end of the day, for most businesses, this translates to a mind-shift again: adding a layer of human understanding to data, to creatively and emotionally assess the customer’s needs, and to allow the analytics to feed off the empathy and emotional connection, rather than be driven purely by the algorithm.

In Sum:

You will hear a whole lot more about data in the coming weeks and months. However, for starters, you could keep these 3 guidelines in mind:

  • Look for correlations, not causality. You want to throw tons of data together and find patterns that aren’t born of some logical causal hypothesis but are simply observed correlations at the data level.
  • Be aware that the future of your industry, like every industry, will involve the value of data. So try to identify and own the areas of data which help you drive competitive advantage and/or new products and services, and start building proofs of concept.
  • Add love to data. Don’t just analyse your customers. Bring observation and empathy to the table as well, and marry the analytics with the empathy for best results.

What are your lessons from working with big, small and tiny data so far?

Welcome to the 1980s

Antique Telephone

Data Antiquity Award

A fortnight ago, I lost my debit card. I say lost, but my 4-year-old daughter discovered it under the car seat the next day. Of course, by then I had cancelled the card and my bank assured me that a new one was on its way. We could do that on the website – it was easier than calling on the phone and listening to ‘music’ for hours. As it turns out, this was within a day of my wife’s card expiring, so she was also talking with the same bank for the same purpose – a new debit card.

Cards ordered, we could relax and get on with our lives, although we had to rely on our credit cards at ATMs to withdraw cash. But a week passed and no cards showed up. So we got on the phone and spoke to an advisor at the call centre. Imagine our surprise when we were told that the card had been dispatched – but to our previous address, which we had left exactly 13 months earlier. The bank didn’t know that we had moved. How was this possible? Even worse, they had my old mobile number, which I had not used in six months.

We moved house at the end of May 2014. Having done this a few times, we have a fairly comprehensive checklist for all the various updates. From utility providers to the post office, banks and employers, it’s all there, and we’re pretty sure we did it all. In fact, our credit cards, with the same bank, have all the right information. We get the statements, and our online purchases go through with the new address confirmation. Absurdly, this information had not filtered through to the savings account side of my bank.

Let’s assume for a moment that we made a ‘mistake’ in informing the credit card issuer and not the retail bank. Is this really a mistake, though? As consumers, do we need to inform each part of the bank individually? How bizarre that in the 13 intervening months the bank has not noticed that our address for the credit cards it issued is different from the address for the debit cards it issued. This, by the way, is a major high street bank in the UK. I’m not naming them because that’s not the point of this story.

And consider this: quite apart from the inconvenience and the confusion, the bank has effectively posted my card AND my PIN to the wrong person. It’ll get sent by registered post – but as we know, anybody can really sign for it. The gentleman who now lives in our house is a very nice man, who hails from China and works in the City, in London. But what if he were a villainous man, easily tempted into transgression? How ironic that after all the effort of sending the card and PIN in separate packs and taking all the precautions of masking the PIN, it gets sent to the wrong person! A reminder that you’re only as secure as your weakest link!

And there were so many opportunities to get it right! Even a simple pop-up while ordering the new card – saying thanks, we will be sending your card to this address, and showing the last 3 digits of the postcode – would be an easy way to trap this error. To be fair, Samir, the guy at the other end of the phone at the call centre, did what he could to rectify the errors, and the bank has offered us a £60 payment as an apology. Assuming that the new cards get here by Monday, I’m inclined to get over this and move on. Of course, if the cards don’t arrive as expected, and we have to go on holiday on Wednesday without them, I will have to tell them what to do with the £60, and it will not be polite.
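The pop-up suggested above is trivial to build. Here is a toy sketch of the confirmation step: before dispatching a card, show the customer a masked version of the address on record so an out-of-date record is caught early. The function name, the wording, and the masking rule (reveal only the last 3 characters of the postcode) are assumptions for the sake of the example.

```python
# Toy sketch: confirm the delivery address without revealing it in full,
# by masking all but the last 3 characters of the postcode on record.
def masked_confirmation(postcode: str) -> str:
    """Return a confirmation prompt showing only the postcode's last 3 characters."""
    hidden = "*" * (len(postcode) - 3) + postcode[-3:]
    return f"We will send your new card to the address on file (postcode {hidden}). Confirm?"

print(masked_confirmation("SW1A 1AA"))
```

A customer who moved 13 months ago would spot a stale postcode immediately, while a shoulder-surfer learns almost nothing from the masked value.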

Fresh from this brush with data antiquity, we ran into another one, this time with a Harley Street clinic, which called my wife for an appointment. “Can you come tomorrow?” they said. We made a herculean effort to get there the next day, beating tube strikes and insane traffic. The appointment was actually on the following day, and the doctor was unavailable. I asked them, why didn’t you send a confirmation email, which would have sorted out the error or misunderstanding? “Mumble mumble” was the only answer I got.

It seems to me that despite all the hype about connected worlds, smart products and big, gargantuan data, we’re still at the starting block in so many ways. I’ve been in London for 12 years now. I’ve never tweeted a single word about Bollywood, yet Twitter regularly asks me if I want to follow the latest Bollywood stars or Indian TV personalities. And I’m sure many of you, like me, have been unwilling recipients of re-targeting ads – being told about great new folding cycles a month after you searched for, and bought, one. In all these ways, we’re still in the trough of digital disillusionment, to borrow a phrase from Gartner.

I guess the question left in my mind is, how many businesses, big and small are discussing big data and digital transformation projects before getting the basics right? How many are trying to leap into the 21st century, with one foot still stuck in the 1980s?  A good digital strategy should ensure an appropriate choice architecture which allows you to focus on getting the basics right while simultaneously creating a roadmap towards a bigger vision.

So the next time you encounter data antiquity, as a customer or in your business, remember that in the digital era there is no excuse for getting the basics wrong, and that a good data foundation is at the heart of any digital transformation roadmap.